Hello everyone,
In this article, I will introduce the Auth Service that Huawei AppGallery Connect offers to developers and show how to use it in cross-platform applications developed with Flutter.
What is Auth Service?
Many mobile applications require membership systems and authentication methods. Setting up such a system from scratch in a mobile application can be difficult and time-consuming. Huawei AGC Auth Service enables you to quickly and securely integrate this authentication process into your mobile application. Moreover, Auth Service offers many authentication methods. It can be used in Android native, iOS native, and cross-platform (Flutter, React Native, Cordova) projects.
Highly secure, fast, and easy to use, Auth Service supports the following account types and authentication methods.
Mobile Number (Android, iOS, Web)
Email Address (Android, iOS, Web)
HUAWEI ID (Android)
HUAWEI Game Center account (Android)
WeChat account (Android, iOS, Web)
QQ account (Android, iOS, Web)
Weibo account (Android, iOS)
Apple ID (iOS)
Google account* (Android, iOS)
Google Play Games account* (Android)
Facebook account* (Android, iOS)
Twitter account* (Android, iOS)
Anonymous account (Android, iOS, Web)
Self-owned account (Android, iOS)
Development Steps
1. Integration
After creating your application on the AGC Console and completing all of the necessary steps, the agconnect-services configuration file should be added to the project first.
The agconnect-services.json configuration file should be added under the android/app directory in the Flutter project.
For iOS, the agconnect-services.plist configuration file should be added under the ios/Runner directory in the Flutter project.
Next, the following dependencies for HMS usage need to be added to the build.gradle file under the android directory.
Code:
buildscript {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
dependencies {
classpath 'com.android.tools.build:gradle:3.5.0'
classpath 'com.huawei.agconnect:agcp:1.4.2.301'
}
}
allprojects {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
}
Then add the following line of code to the build.gradle file under the android/app directory.
Code:
apply plugin: 'com.huawei.agconnect'
Finally, the Auth Service SDK should be added to the pubspec.yaml file. To do this, open the pubspec.yaml file and add the required dependency as follows.
Code:
dependencies:
flutter:
sdk: flutter
# The following adds the Cupertino Icons font to your application.
# Use with the CupertinoIcons class for iOS style icons.
agconnect_auth: ^1.1.0
agconnect_core: ^1.1.0
Then, by clicking "pub get" in Android Studio (or running flutter pub get), the dependencies are downloaded. After all these steps are completed, your app is ready for coding.
2. Register with Email
Create a new Dart file named AuthManager that contains all the operations we will perform with Auth Service. In this class, we will write the methods needed for all operations such as sending a verification code, registration, and login, so the UI classes stay free of Auth Service code.
When registering with the user's email address, a verification code must be sent to the entered email address. This way, it can be verified that the user is a real person, which is a basic security measure. For this, create a method called sendRegisterVerificationCode that takes the email address entered by the user as a parameter and sends a verification code to it. Inside the method, create a VerifyCodeSettings object and set the VerifyCodeAction value to "registerLogin" to specify what the verification code will be used for. Finally, EmailAuthProvider.requestVerifyCode sends the verification code to the mail address. You can find the whole method below.
Code:
void sendRegisterVerificationCode(String email) async{
VerifyCodeSettings settings = VerifyCodeSettings(VerifyCodeAction.registerLogin, sendInterval: 30);
EmailAuthProvider.requestVerifyCode(email, settings)
.then((result){
print("sendRegisterVerificationCode : " + result.validityPeriod);
});
}
After the user receives the verification code, the user is registered with the e-mail address, password, and verification code. Each user must set a password that is at least 8 characters long and different from the e-mail address. In addition, the password must contain at least two of the following: lowercase letters, uppercase letters, digits, spaces, or special characters. For the registration process, create a method named registerWithEmail that takes the mail address, verification code, and password as parameters. Then create an EmailUser object and set these values. Finally, a new user is created with the AGCAuth.instance.createEmailUser(user) call. You can find the registerWithEmail method below.
Code:
void registerWithEmail(String email, String verifyCode, String password, BuildContext context) async{
EmailUser user = EmailUser(email, verifyCode, password: password);
AGCAuth.instance.createEmailUser(user)
.then((signInResult) {
print("registerWithEmail : " + signInResult.user.email);
})
.catchError((error) {
print("Register Error " + error.toString());
_showMyDialog(context, error.toString());
});
}
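The _showMyDialog helper called in the catchError block above is not part of the Auth Service SDK and its implementation is not shown here; a minimal sketch of such an error dialog, assuming it only needs to display the error message, could look like this:
Code:
Future<void> _showMyDialog(BuildContext context, String message) async {
  // Simple error dialog shown when registration fails.
  return showDialog<void>(
    context: context,
    builder: (BuildContext context) {
      return AlertDialog(
        title: Text('Error'),
        content: Text(message),
        actions: <Widget>[
          FlatButton(
            child: Text('OK'),
            onPressed: () => Navigator.of(context).pop(),
          ),
        ],
      );
    },
  );
}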
3. Sign In with Email
In order for users to log in to your mobile app after they have registered, a verification code should be sent again. To send the verification code while logging in, create a method similar to the one used during registration that sends a verification code to the e-mail address.
After the verification code is sent, the user can log in to the app with their e-mail address, password, and verification code. You can test whether the operation is successful by adding .then and .catchError to the login method. You can find all the code for the sign-in methods below.
Code:
void sendSigninVerificationCode(String email) async{
VerifyCodeSettings settings = VerifyCodeSettings(VerifyCodeAction.registerLogin, sendInterval: 30);
EmailAuthProvider.requestVerifyCode(email, settings)
.then((result){
print("sendSigninVerificationCode : " + result.validityPeriod);
});
}
void loginWithEmail(String email, String verifyCode, String password) async{
AGCAuthCredential credential = EmailAuthProvider.credentialWithVerifyCode(email, verifyCode, password: password);
AGCAuth.instance.signIn(credential)
.then((signInResult){
AGCUser user = signInResult.user;
print("loginWithEmail : " + user.displayName);
})
.catchError((error){
print("Login Error " + error.toString());
});
}
4. Reset Password
If users forget or want to change their password, the password reset method provided by Auth Service should be used. Otherwise, they cannot change their password or log in to their account.
As in every auth flow, a verification code is also required when resetting the password. This verification code should be sent to the user's mail address, similar to the register and sign-in flows. Unlike the register and sign-in operations, the VerifyCodeSettings parameter must be VerifyCodeAction.resetPassword. After sending the verification code to the user's e-mail address, the password can be reset as follows.
Code:
void sendResetPasswordVerificationCode(String email) async{
VerifyCodeSettings settings = VerifyCodeSettings(VerifyCodeAction.resetPassword, sendInterval: 30);
EmailAuthProvider.requestVerifyCode(email, settings)
.then((result){
print(result.validityPeriod);
});
}
void resetPassword(String email, String newPassword, String verifyCode) async{
AGCAuth.instance.resetPasswordWithEmail(email, newPassword, verifyCode)
.then((value) {
print("Password Reset Successful");
})
.catchError((error) {
print("Password Reset Error " + error.toString());
});
}
5. Logout
To end the user's current session, get the AGCAuth instance and call its signOut() method. You can find this code block below.
Code:
void signOut() async{
AGCAuth.instance.signOut().then((value) {
print("SignOut Success");
}).catchError((error) => print("SignOut Error : " + error.toString()));
}
6. User Information
Auth Service provides a lot of data about the signed-in user. To obtain this data, get the AGCAuth instance and list all of the user's information through its currentUser property.
Code:
void getCurrentUser() async {
AGCAuth.instance.currentUser.then((value) {
print('current user = ${value?.uid} , ${value?.email} , ${value?.displayName} , ${value?.phone} , ${value?.photoUrl} ');
});
}
The AuthManager class must contain these operations. Thanks to the methods above, you can register and log in with an email address in your app. You can create an object from the AuthManager class and call the method you need wherever you need it. Now that the AuthManager class is complete, a registration page can be designed and the necessary methods can be called.
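For example, a hypothetical button callback in any widget could delegate the sign-in flow to AuthManager like this (the method and parameter names below are only illustrative, not part of the original project):
Code:
final AuthManager authManager = new AuthManager();

void onLoginPressed(String email, String verifyCode, String password) {
  // All Auth Service work is delegated to the AuthManager class.
  authManager.loginWithEmail(email, verifyCode, password);
}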
7. Create Register Page
I will share an example to give you an idea about the design. I added an animation so that the elements on the page appear one after another at 0.5-second intervals. In addition, I prepared a design that periodically draws glowing circles around the icon you add, to highlight your application's logo.
I used the avatar_glow library for this. The Avatar Glow library allows us to make a simple and stylish design. To add this library, add the "avatar_glow: ^1.1.0" line to the pubspec.yaml file and integrate it into your project with "pub get".
After the library is added, we create a Dart file named DelayedAnimation to run the animations. In this class, we define all the properties of the animation. You can find the full code of the class below.
Code:
import 'dart:async';
import 'package:flutter/material.dart';
class DelayedAnimation extends StatefulWidget {
final Widget child;
final int delay;
DelayedAnimation({@required this.child, this.delay});
@override
_DelayedAnimationState createState() => _DelayedAnimationState();
}
class _DelayedAnimationState extends State<DelayedAnimation>
with TickerProviderStateMixin {
AnimationController _controller;
Animation<Offset> _animOffset;
@override
void initState() {
super.initState();
_controller =
AnimationController(vsync: this, duration: Duration(milliseconds: 800));
final curve =
CurvedAnimation(curve: Curves.decelerate, parent: _controller);
_animOffset =
Tween<Offset>(begin: const Offset(0.0, 0.35), end: Offset.zero)
.animate(curve);
if (widget.delay == null) {
_controller.forward();
} else {
Timer(Duration(milliseconds: widget.delay), () {
_controller.forward();
});
}
}
@override
void dispose() {
_controller.dispose();
super.dispose();
}
@override
Widget build(BuildContext context) {
return FadeTransition(
child: SlideTransition(
position: _animOffset,
child: widget.child,
),
opacity: _controller,
);
}
}
Then we can create a Dart file called RegisterPage and continue coding.
In this class, we first set a fixed delay time. I set it to 500 ms. Then I increased it by 500 ms for each element so that they load one after another.
Then, TextEditingController objects should be created to read the email, password, and verification code values entered into the TextFormFields.
Finally, I defined a bool visibility value that, when the send-verification-code button is clicked, changes the name of the button and shows the field where the verification code will be entered.
Code:
final int delayedAmount = 500;
double _scale;
AnimationController _controller;
bool _visible = false;
String buttonText = "Send Verify Code";
TextEditingController emailController = new TextEditingController();
TextEditingController passwordController = new TextEditingController();
TextEditingController verifyCodeController = new TextEditingController();
Now, the AnimationController values must be set in the initState method.
Code:
@override
void initState() {
_controller = AnimationController(
vsync: this,
duration: Duration(
milliseconds: 200,
),
lowerBound: 0.0,
upperBound: 0.1,
)..addListener(() {
setState(() {});
});
super.initState();
}
Then a method should be created for the send-verification-code button and for the register button, and these methods should be called in the Widget build method where needed. In both methods, the visibility value and button text are updated first, and then the related AuthManager method is called on a new AuthManager object.
Code:
void _toggleVerifyCode() {
setState(() {
_visible = true;
buttonText = "Send Again";
final AuthManager authManager = new AuthManager();
authManager.sendRegisterVerificationCode(emailController.text);
});
}
void _toggleRegister() {
setState(() {
_visible = true;
buttonText = "Send Again";
final AuthManager authManager = new AuthManager();
authManager.registerWithEmail(emailController.text, verifyCodeController.text, passwordController.text, this.context);
});
}
Finally, in the Widget build method, the design of each element should be prepared separately and returned at the end. If all the code is written directly under return, it will look too complex and debugging or modification will be difficult. As seen below, I prepared an AvatarGlow widget at the top. Then, two TextFormFields are created for the user to enter their mail address and password. Under these two TextFormFields, there is a button for sending the verification code. When this button is clicked, a verification code is sent to the mail address, and a new TextFormField for entering this verification code plus a register button become visible. You can find screenshots and all of the code below.
Code:
@override
Widget build(BuildContext context) {
final color = Color(0xFFF4EADE);
_scale = 1 - _controller.value;
final logo = AvatarGlow(
endRadius: 90,
duration: Duration(seconds: 2),
glowColor: Color(0xFF2F496E),
repeat: true,
repeatPauseDuration: Duration(seconds: 2),
startDelay: Duration(seconds: 1),
child: Material(
elevation: 8.0,
shape: CircleBorder(),
child: CircleAvatar(
backgroundColor: Color(0xFFF4EADE),
backgroundImage: AssetImage('assets/huawei_logo.png'),
radius: 50.0,
)
),
);
final title = DelayedAnimation(
child: Text(
"Register",
style: TextStyle(
fontWeight: FontWeight.bold,
fontSize: 35.0,
color: Color(0xFF2F496E)),
),
delay: delayedAmount + 500,
);
final email = DelayedAnimation(
delay: delayedAmount + 500,
child: TextFormField(
controller: emailController,
keyboardType: TextInputType.emailAddress,
autofocus: false,
decoration: InputDecoration(
hintText: '* Email',
contentPadding: EdgeInsets.fromLTRB(20.0, 10.0, 20.0, 10.0),
border: OutlineInputBorder(borderRadius: BorderRadius.circular(100.0)),
focusedBorder: OutlineInputBorder(
borderRadius: BorderRadius.circular(100.0),
borderSide: BorderSide(
color: Color(0xFF2F496E),
),
),
),
),
);
final password = DelayedAnimation(
delay: delayedAmount + 1000,
child: TextFormField(
controller: passwordController,
autofocus: false,
obscureText: true,
decoration: InputDecoration(
hintText: '* Password',
contentPadding: EdgeInsets.fromLTRB(20.0, 10.0, 20.0, 10.0),
border: OutlineInputBorder(borderRadius: BorderRadius.circular(100.0)),
focusedBorder: OutlineInputBorder(
borderRadius: BorderRadius.circular(100.0),
borderSide: BorderSide(
color: Color(0xFF2F496E),
),
),
),
),
);
final sendVerifyCodeButton = RaisedButton(
color: Color(0xFF2F496E),
highlightColor: Color(0xFF2F496E),
shape: RoundedRectangleBorder(
borderRadius: BorderRadius.circular(100.0),
),
onPressed: _toggleVerifyCode,
child: Text(
buttonText,
style: TextStyle(
fontSize: 15.0,
fontWeight: FontWeight.normal,
color: color,
),
),
);
final verifyCode = DelayedAnimation(
delay: 500,
child: TextFormField(
controller: verifyCodeController,
keyboardType: TextInputType.emailAddress,
autofocus: false,
decoration: InputDecoration(
hintText: '* Verify Code',
contentPadding: EdgeInsets.fromLTRB(20.0, 10.0, 20.0, 10.0),
border: OutlineInputBorder(borderRadius: BorderRadius.circular(100.0)),
focusedBorder: OutlineInputBorder(
borderRadius: BorderRadius.circular(100.0),
borderSide: BorderSide(
color: Color(0xFF2F496E),
),
),
),
),
);
final registerButton = RaisedButton(
color: Color(0xFF2F496E),
highlightColor: Color(0xFF2F496E),
shape: RoundedRectangleBorder(
borderRadius: BorderRadius.circular(100.0),
),
onPressed: _toggleRegister,
child: Text(
'Register',
style: TextStyle(
fontSize: 15.0,
fontWeight: FontWeight.normal,
color: color,
),
),
);
return MaterialApp(
debugShowCheckedModeBanner: false,
home: Scaffold(
backgroundColor: Color(0xFFF4EADE),
body: Center(
child: SingleChildScrollView(
child: Column(
children: <Widget>[
new Container(
margin: const EdgeInsets.all(20.0),
child: new Container()
),
title,
logo,
SizedBox(
height: 50,
width: 300,
child: email,
),
SizedBox(height: 15.0),
SizedBox(
height: 50,
width: 300,
child: password,
),
SizedBox(height: 15.0),
SizedBox(
height: 40,
width: 300,
child: DelayedAnimation(
delay: delayedAmount + 1500,
child: sendVerifyCodeButton
),
),
SizedBox(height: 15.0),
SizedBox(
height: 50,
width: 300,
child: Visibility(
maintainSize: true,
maintainAnimation: true,
maintainState: true,
visible: _visible,
child: DelayedAnimation(
delay: delayedAmount + 1500,
child: verifyCode
),
)
),
SizedBox(height: 15.0),
SizedBox(
height: 50,
width: 300,
child: Visibility(
maintainSize: true,
maintainAnimation: true,
maintainState: true,
visible: _visible,
child: DelayedAnimation(
delay: delayedAmount + 1500,
child: registerButton
),
)
),
SizedBox(height: 50.0,),
],
),
),
),
),
);
}
8. Create Login Page
We coded all of the requirements for login in the AuthManager class above. By using the same design as the Register page and changing the buttons' onPressed methods, the Login page can be created easily. Since the rest of the code is the same, I will not share the code for this class again. As I mentioned, this is just a design example; you can adapt your login and registration pages to your application's needs.
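As a rough sketch of what changes, the two button callbacks on the Login page would call the sign-in methods of AuthManager instead of the register ones (the names below mirror the Register page and are assumptions, not code from the original project):
Code:
void _toggleVerifyCode() {
  setState(() {
    _visible = true;
    buttonText = "Send Again";
    final AuthManager authManager = new AuthManager();
    // Sends the sign-in verification code instead of the register one.
    authManager.sendSigninVerificationCode(emailController.text);
  });
}

void _toggleLogin() {
  final AuthManager authManager = new AuthManager();
  authManager.loginWithEmail(emailController.text,
      verifyCodeController.text, passwordController.text);
}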
Very useful post
Neha J said:
Very useful post
Thank you.
Useful if you have memory problems.
Related
In this article, I am going to use 3 Huawei kits in one project:
· Map Kit, for personalizing how your map displays and interacts with your users, and making location-based services work better for them.
· Location Kit, for getting the user's current location with the fused location function, and for creating geofences.
· Site Kit, for searching and exploring nearby places along with their addresses.
What is a Geo-fence?
Geofence literally means a virtual border around a geographic area. Geofencing is the technology used to trigger an automatic alert when an active device enters a defined geographic area (geofence).
As technology developed, brands gained new ways to reach customers. With these digital developments, new marketing terms started to emerge, and geofencing quickly entered the vocabulary of marketers.
Project Setup
HMS Integration
Firstly, you need a Huawei Developer account and to add an app under Projects in the AppGallery Connect console, so that you can activate the Map, Location, and Site kits and use them in your app. If you don't have a Huawei Developer account and don't know the steps, please follow the links below.
· Register Huawei developer website
· Configuring app information in AppGallery Connect
· Integrating Map Kit Flutter Plugin
· Integrating Location Kit Flutter Plugin
· Integrating Site Kit Flutter Plugin
Important: While adding the app, the package name you enter should be the same as your Flutter project's package name.
Note: Before you add the agconnect-services.json file, make sure the required kits are enabled.
Permissions
In order for the kits to work properly, you need to add the permissions below to the AndroidManifest.xml file.
Code:
<uses-permission android:name="android.permission.INTERNET"/>
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION"/>
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>
<uses-permission android:name="android.permission.ACCESS_BACKGROUND_LOCATION" />
Creating Flutter Application
Add Dependencies to ‘pubspec.yaml’
After completing all the steps above, you need to add the required kits' Flutter plugins as dependencies to the pubspec.yaml file. You can find all the plugins on pub.dev with their latest versions. You can follow the steps in the Installing section of the following links.
· Map Kit Plugin for Flutter
· Location Kit Plugin for Flutter
· Site Kit Plugin for Flutter
Code:
dependencies:
flutter:
sdk: flutter
huawei_location: ^5.0.0+301
huawei_site: ^5.0.1+300
huawei_map: ^4.0.4+300
After adding them, run the flutter pub get command.
All the plugins are ready to use!
Request Location Permission and Get Current Location
Create a PermissionHandler instance and initialize it in initState to ask for permission. Do the same for FusedLocationProviderClient. With the locationService object, we can get the user's current location by calling the getLastLocation() method.
Code:
LatLng center;
PermissionHandler permissionHandler;
FusedLocationProviderClient locationService;
@override
void initState() {
permissionHandler = PermissionHandler();
locationService = FusedLocationProviderClient();
getCurrentLatLng();
super.initState();
}
getCurrentLatLng() async {
await requestPermission();
Location currentLocation = await locationService.getLastLocation();
LatLng latLng = LatLng(currentLocation.latitude, currentLocation.longitude);
setState(() {
center = latLng;
});
}
In the requestPermission() method, you can find both the location permission and the background location permission requests.
Code:
requestPermission() async {
bool hasPermission = await permissionHandler.hasLocationPermission();
if (!hasPermission) {
try {
bool status = await permissionHandler.requestLocationPermission();
print("Is permission granted $status");
} catch (e) {
print(e.toString());
}
}
bool backgroundPermission =
await permissionHandler.hasBackgroundLocationPermission();
if (!backgroundPermission) {
try {
bool backStatus =
await permissionHandler.requestBackgroundLocationPermission();
print("Is background permission granted $backStatus");
} catch (e) {
print(e.toString());
}
}
}
When you launch the app for the first time, the location permission screen will appear.
Add HuaweiMap
Huawei Map is the main layout of this project. It will cover the whole screen, and we will also add some buttons on top of it, so we should put HuaweiMap and the other widgets into a Stack widget. Do not forget to create a HuaweiMap controller.
Code:
static const double _zoom = 16;
Set<Marker> _markers = {};
int _markerId = 1;
Set<Circle> _circles = {};
int _circleId = 1;
_onMapCreated(HuaweiMapController controller) {
mapController = controller;
}
Stack(
fit: StackFit.expand,
children: <Widget>[
HuaweiMap(
onMapCreated: _onMapCreated,
initialCameraPosition:
CameraPosition(target: center, zoom: _zoom),
mapType: MapType.normal,
onClick: (LatLng latLng) {
placeSearch(latLng);
selectedCoordinates = latLng;
_getScreenCoordinates(latLng);
setState(() {
clicked = true;
addMarker(latLng);
addCircle(latLng);
});
},
markers: _markers,
circles: _circles,
tiltGesturesEnabled: true,
buildingsEnabled: true,
compassEnabled: true,
zoomControlsEnabled: true,
rotateGesturesEnabled: true,
myLocationButtonEnabled: true,
myLocationEnabled: true,
trafficEnabled: false,
),
],
)
We got the current location with the Location service's getLastLocation() method and assigned it to the center variable as latitude and longitude. While creating the HuaweiMap widget, assign that center variable to the target property of the initialCameraPosition, so that the app opens with a map showing the user's current location.
Code:
placeSearch(LatLng latLng) async {
NearbySearchRequest request = NearbySearchRequest();
request.location = Coordinate(lat: latLng.lat, lng: latLng.lng);
request.language = "en";
request.poiType = LocationType.ADDRESS;
request.pageIndex = 1;
request.pageSize = 1;
request.radius = 100;
NearbySearchResponse response = await searchService.nearbySearch(request);
try {
print(response.sites);
site = response.sites[0];
} catch (e) {
print(e.toString());
}
}
When the onClick method of HuaweiMap is triggered, placeSearch is called, which uses the Site Kit's nearbySearch method. This way, you get a Site object to assign to the new geofence you will add.
Create Geofence
When the user touches somewhere on the map, a marker, a circle around the marker, a Slider widget to adjust the radius of the circle, and a button named "Add Geofence" will show up on the screen. So we will use a boolean variable called clicked, and if it is true, the widgets mentioned above will be shown.
Code:
addMarker(LatLng latLng) {
if (marker != null) marker = null;
marker = Marker(
markerId: MarkerId(_markerId.toString()), //_markerId is set to 1
position: latLng,
clickable: true,
icon: BitmapDescriptor.defaultMarker,
);
setState(() {
_markers.add(marker);
});
selectedCoordinates = latLng;
_markerId++; //after a new marker is added, increase _markerId for the next marker
}
_drawCircle(Geofence geofence) {
this.geofence = geofence;
if (circle != null) circle = null;
circle = Circle(
circleId: CircleId(_circleId.toString()),
fillColor: Colors.grey[400],
strokeColor: Colors.red,
center: selectedCoordinates,
clickable: false,
radius: radius,
);
setState(() {
_circles.add(circle);
});
_circleId++;
}
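The home-screen snippets in this article rely on several state fields that are never shown in one place. A minimal sketch of how they might be declared in the home screen's State class (the names follow the code above; the initial values are assumptions):
Code:
// Assumed state fields of the home screen's State class.
HuaweiMapController mapController;
SearchService searchService = SearchService();

bool clicked = false;        // true after the user taps the map
double radius = 50;          // circle/geofence radius controlled by the Slider
int _fenceId = 1;            // unique id for each new geofence

LatLng selectedCoordinates;  // last tapped coordinate
Marker marker;
Circle circle;
Site site;                   // nearby place returned by placeSearch
Geofence geofence;           // geofence being built; instantiate it before the map is tapped
List<Geofence> geofenceList = [];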
Create a Slider widget wrapped with a Positioned widget and put them into Stack widget as shown below.
Code:
if (clicked)
Positioned(
bottom: 10,
right: 10,
left: 10,
child: Slider(
min: 50,
max: 200,
value: radius,
onChanged: (newValue) {
setState(() {
radius = newValue;
_drawCircle(geofence);
});
},
),
),
After implementing the addMarker and _drawCircle methods and adding the Slider widget, we will now create the AddGeofence screen, which will appear as a ModalBottomSheet when the Add Geofence button is clicked.
Code:
RaisedButton(
child: Text("Add Geofence"),
onPressed: () async {
geofence.uniqueId = _fenceId.toString();
geofence.radius = radius;
geofence.latitude = selectedCoordinates.lat;
geofence.longitude = selectedCoordinates.lng;
_fenceId++;
final clickValue = await showModalBottomSheet(
context: context,
isScrollControlled: true,
builder: (context) => SingleChildScrollView(
child: Container(
padding: EdgeInsets.only(
bottom: MediaQuery.of(context).viewInsets.bottom),
child: AddGeofenceScreen(
geofence: geofence,
site: site,
),
),
),
);
updateClicked(clickValue);
//When ModalBottomSheet is closed, pass a bool value in Navigator
//like Navigator.pop(context, false) so that clicked variable will be
//updated in home screen with updateClicked method.
},
),
void updateClicked(bool newValue) {
setState(() {
clicked = newValue;
});
}
In the state class of the new stateful AddGeofenceScreen widget, create GeofenceService and SearchService instances and initialize them in initState.
Code:
GeofenceService geofenceService;
int selectedConType = Geofence.GEOFENCE_NEVER_EXPIRE;
SearchService searchService;
@override
void initState() {
geofenceService = GeofenceService();
searchService = SearchService();
super.initState();
}
To display the address and radius, and to select the conversion type of the geofence, we will show a ModalBottomSheet with the widgets shown below.
Code:
Column(
crossAxisAlignment: CrossAxisAlignment.stretch,
mainAxisAlignment: MainAxisAlignment.spaceEvenly,
children: <Widget>[
Text(
"Address",
style: boldStyle,
),
Text(site.formatAddress),
Text(
"\nRadius",
style: boldStyle,
),
Text(geofence.radius.toInt().toString()),
Text(
"\nSelect Conversion Type",
style: boldStyle,
),
Column(
mainAxisAlignment: MainAxisAlignment.start,
children: <Widget>[
RadioListTile<int>(
dense: true,
title: Text(
"Enter",
style: TextStyle(fontSize: 14),
),
value: Geofence.ENTER_GEOFENCE_CONVERSION,
groupValue: selectedConType,
onChanged: (int value) {
setState(() {
selectedConType = value;
});
},
),
RadioListTile<int>(
dense: true,
title: Text("Exit"),
value: Geofence.EXIT_GEOFENCE_CONVERSION,
groupValue: selectedConType,
onChanged: (int value) {
setState(() {
selectedConType = value;
});
},
),
RadioListTile<int>(
dense: true,
title: Text("Stay"),
value: Geofence.DWELL_GEOFENCE_CONVERSION,
groupValue: selectedConType,
onChanged: (int value) {
setState(() {
selectedConType = value;
});
},
),
RadioListTile<int>(
dense: true,
title: Text("Never Expire"),
value: Geofence.GEOFENCE_NEVER_EXPIRE,
groupValue: selectedConType,
onChanged: (int value) {
setState(() {
selectedConType = value;
});
},
),
],
),
Align(
alignment: Alignment.bottomRight,
child: FlatButton(
child: Text(
"SAVE",
style: TextStyle(
color: Colors.blue, fontWeight: FontWeight.bold),
),
onPressed: () {
geofence.conversions = selectedConType;
addGeofence(geofence);
Navigator.pop(context, false);
},
),
)
],
),
For each conversion type, add a RadioListTile widget.
When you click the SAVE button, the addGeofence method is called to add the new geofence to the list of geofences, and then we return to the home screen with a false value to update the clicked variable.
In addGeofence, do not forget to call the createGeofenceList method with the list you have just added the geofence to.
Code:
void addGeofence(Geofence geofence) async {
geofence.dwellDelayTime = 10000;
geofence.notificationInterval = 100;
geofenceList.add(geofence);
GeofenceRequest geofenceRequest = GeofenceRequest(geofenceList: geofenceList);
try {
int requestCode = await geofenceService.createGeofenceList(geofenceRequest);
print(requestCode);
} catch (e) {
print(e.toString());
}
}
To listen to geofence events, you need to subscribe to the onGeofenceData stream in your code.
Code:
GeofenceService geofenceService;
StreamSubscription<GeofenceData> geofenceStreamSub;
@override
void initState() {
geofenceService = GeofenceService();
geofenceStreamSub = geofenceService.onGeofenceData.listen((data) {
infoText = data.toString(); //you can use this infoText to show a toast message to the user.
print(data.toString());
});
super.initState();
}
Search Nearby Places
On the home screen, place a button onto the map to search nearby places with a keyword; when it is clicked, an AlertDialog will show up.
Code:
void _showAlertDialog() {
showDialog(
context: context,
builder: (BuildContext context) {
return AlertDialog(
title: Text("Search Location"),
content: Container(
height: 150,
child: Column(
mainAxisAlignment: MainAxisAlignment.spaceAround,
children: <Widget>[
TextField(
controller: searchQueryController,
),
MaterialButton(
color: Colors.blue,
child: Text(
"Search",
style: TextStyle(color: Colors.white),
),
onPressed: () async {
Navigator.pop(context);
_markers =
await nearbySearch(center, searchQueryController.text);
setState(() {});
},
)
],
),
),
actions: [
FlatButton(
child: Text("Close"),
onPressed: () {
Navigator.pop(context);
},
),
],
);
},
);
}
After you enter the keyword and click the Search button, markers related to the keyword will appear on the map.
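The nearbySearch helper used by the Search button above is not shown in the article. A minimal sketch of what it could look like, assuming the keyword is passed through the request's query field and every returned Site is converted into a Marker:
Code:
Future<Set<Marker>> nearbySearch(LatLng center, String keyword) async {
  Set<Marker> keywordMarkers = {};
  NearbySearchRequest request = NearbySearchRequest();
  request.query = keyword; // keyword to search for (assumed field)
  request.location = Coordinate(lat: center.lat, lng: center.lng);
  request.language = "en";
  request.pageIndex = 1;
  request.pageSize = 20;
  request.radius = 5000;
  try {
    NearbySearchResponse response = await searchService.nearbySearch(request);
    // Turn each returned place into a map marker.
    for (Site site in response.sites) {
      keywordMarkers.add(Marker(
        markerId: MarkerId(site.siteId),
        position: LatLng(site.location.lat, site.location.lng),
        infoWindow: InfoWindow(title: site.name, snippet: site.formatAddress),
      ));
    }
  } catch (e) {
    print(e.toString());
  }
  return keywordMarkers;
}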
Conclusion
In this article you have learnt how to use some of the features of the Huawei Map, Location, and Site kits in your projects. You have also learnt the geofencing concept. Now you can add geofences to your app, and with geofencing you can define an audience based on a customer's behavior in a specific location. With location information, you can show suitable ads to the right people in real time, wherever they are.
Thank you for reading this article, I hope it was useful and you enjoyed it!
Huawei makes the best Android smartphone devices. I don't know why Android creates so many issues. I feel bad.
Can we show a GIF image on the Huawei map at a predefined location?
HMS Account Kit with Provider Pattern in Flutter
Hello everyone,
In this article, we will develop a login screen with Huawei's Account Kit. We will be using the provider pattern, which is one of the most preferred patterns in Flutter. In the end, our demo application will look like the one below.
Demo Application
HMS Account Kit
Apps with HUAWEI Account Kit allow users to sign in using their HUAWEI IDs with just a tap. By integrating it, you can attract new users by leveraging the enormous HUAWEI ID user base. Account Kit complies with international standards and protocols such as OAuth 2.0 and OpenID Connect. It supports two-factor authentication (password authentication and mobile number authentication) to ensure high security. For a detailed explanation, please refer to the documentation.
Provider Pattern
The provider pattern is a simple approach to app state management. The idea behind it is that you have a central store or data container in the app, called a provider. Once you add your provider (this data container) to a widget, all child widgets of that widget can listen to it. It contains some data and notifies observers when a change occurs.
The provider pattern gives us an easy, low-boilerplate way to separate business logic from our widgets. In the demo application, we are going to have a provider called LoginProvider that contains all the required methods. From the other widgets in the app, we will be able to reach its methods and data.
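The article does not show how the provider is attached to the widget tree; a minimal sketch, assuming the provider package's ChangeNotifierProvider wraps the app in main.dart (MyApp is a placeholder name, while LoginProvider and LoginScreen are defined later in this article):
Code:
import 'package:flutter/material.dart';
import 'package:provider/provider.dart';

void main() {
  runApp(
    // LoginProvider extends ChangeNotifier, so every widget below MyApp can
    // read it with Provider.of<LoginProvider>(context) or a Consumer.
    ChangeNotifierProvider(
      create: (context) => LoginProvider(),
      child: MyApp(),
    ),
  );
}

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(home: LoginScreen());
  }
}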
Integration Preparations
First of all, you need to register as a HUAWEI developer and verify your identity. Please refer to the link for details. After that, you need to integrate the HUAWEI HMS Core into your application.
Software Requirements
Android Studio 3.X or later
JDK 1.8 or later
SDK Platform 19 or later
Gradle 4.6 or later
The integration flow will be like this:
For a detailed HMS core integration process, you can refer to Preparations for Integrating HUAWEI HMS Core.
Please make sure that you have enabled the Account Kit in Manage APIs section on AppGallery Connect.
Implementation
In your Flutter project directory, open the pubspec.yaml file and add the dependencies for the Account Kit and Provider packages. In order to show toast messages on user login and logout actions, I have also added the fluttertoast package.
Code:
dependencies:
flutter:
sdk: flutter
huawei_account: ^5.0.0+300
provider: ^4.3.2+2
fluttertoast: ^7.1.1
Login Provider
In the LoginProvider, we have all the required methods to manage account actions such as sign in, sign out, silent sign in, and revoking authorization. This gives us the flexibility to use any of these methods wherever we need them in the application.
Code:
class LoginProvider with ChangeNotifier {
User _user = new User();
User get getUser {
return _user;
}
void signIn() async {
AuthParamHelper authParamHelper = new AuthParamHelper();
authParamHelper
..setIdToken()
..setAuthorizationCode()
..setAccessToken()
..setProfile()
..setEmail()
..setId()
..addToScopeList([Scope.openId])
..setRequestCode(8888);
try {
final AuthHuaweiId accountInfo = await HmsAccount.signIn(authParamHelper);
_user.id = accountInfo.unionId;
_user.displayName = accountInfo.displayName;
_user.email = accountInfo.email;
_user.profilePhotoUrl = accountInfo.avatarUriString;
notifyListeners();
showToast('Welcome ${_user.displayName}');
} on Exception catch (exception) {
print(exception.toString());
}
}
Future signOut() async {
final signOutResult = await HmsAccount.signOut();
if (signOutResult) {
_user.id = null;
notifyListeners();
showToast('Signed out');
} else {
print('Login_provider:signOut failed');
}
}
void silentSignIn() async {
AuthParamHelper authParamHelper = new AuthParamHelper();
try {
final AuthHuaweiId accountInfo =
await HmsAccount.silentSignIn(authParamHelper);
if (accountInfo.unionId != null) {
_user.id = accountInfo.unionId;
_user.displayName = accountInfo.displayName;
_user.profilePhotoUrl = accountInfo.avatarUriString;
_user.email = accountInfo.email;
notifyListeners();
showToast('Welcome ${_user.displayName}');
}
} on Exception catch (exception) {
print(exception.toString());
print('Login_provider:Can not SignIn silently');
}
}
Future revokeAuthorization() async {
final bool revokeResult = await HmsAccount.revokeAuthorization();
if (revokeResult) {
print('Login_provider:Revoked Auth Successfully');
} else {
print('Login_provider:Failed to Revoked Auth');
}
}
void showToast(String message) {
Fluttertoast.showToast(
msg: message,
toastLength: Toast.LENGTH_SHORT,
gravity: ToastGravity.BOTTOM,
timeInSecForIosWeb: 1,
backgroundColor: Colors.grey,
textColor: Colors.black,
fontSize: 16.0);
}
}
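The User object held by the provider is a plain data class that is not shown in the article. A minimal sketch, based only on the fields assigned above:
Code:
// Simple data holder for the signed-in user; the field names follow the
// assignments made in LoginProvider.
class User {
  String id;
  String displayName;
  String email;
  String profilePhotoUrl;
}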
Login Screen
On the login screen page, we first try to sign in silently. If the revoke authorization method has not been called before, the app will sign in silently and will not ask for user permissions, so the application's login screen will be skipped and the profile page will appear. If the user clicks the sign-out button, we call both the sign-out and revoke authorization methods of the Account Kit in our use case here. As a result, the user will be redirected to the login screen.
Code:
class LoginScreen extends StatelessWidget {
static const routeName = '/login-screen';
@override
Widget build(BuildContext context) {
final loginProvider = Provider.of<LoginProvider>(context, listen: false);
loginProvider.silentSignIn();
return Consumer<LoginProvider>(
builder: (context, data, _) {
return data.getUser.id != null
? ProfileScreen()
: LoginWidget(loginProvider: loginProvider);
},
);
}
}
class LoginWidget extends StatelessWidget {
const LoginWidget({
Key key,
@required this.loginProvider,
}) : super(key: key);
final LoginProvider loginProvider;
@override
Widget build(BuildContext context) {
var screenSize = MediaQuery.of(context).size;
return Scaffold(
body: Stack(
children: [
Image.asset(
'assets/images/welcome.png',
fit: BoxFit.cover,
height: double.infinity,
width: double.infinity,
alignment: Alignment.center,
),
Container(
alignment: Alignment.bottomCenter,
padding: EdgeInsets.only(bottom: screenSize.height / 6),
child: HuaweiIdAuthButton(
onPressed: () {
loginProvider.signIn();
},
buttonColor: AuthButtonBackground.BLACK,
borderRadius: AuthButtonRadius.MEDIUM,
),
)
],
),
);
}
}
Profile Screen
On the profile screen page, we take advantage of the provider pattern. We reach the user's data and the methods required to sign out through the LoginProvider.
Code:
class ProfileScreen extends StatelessWidget {
static const routeName = '/profile-screen';
@override
Widget build(BuildContext context) {
final loginProvider = Provider.of<LoginProvider>(context, listen: false);
final _user = Provider.of<LoginProvider>(context).getUser;
return Scaffold(
appBar: AppBar(
title: Text('Profile'),
backgroundColor: Colors.black45,
),
body: Column(
children: [
Column(
mainAxisAlignment: MainAxisAlignment.start,
crossAxisAlignment: CrossAxisAlignment.start,
children: [
SafeArea(
child: Row(
children: [
_buildCircleAvatar(_user.profilePhotoUrl),
_userInformationText(_user.displayName, _user.email),
],
),
),
Divider(
color: Colors.black26,
height: 50,
),
],
),
OutlineButton.icon(
textColor: Colors.black54,
onPressed: () {
loginProvider.signOut().then((value) {
loginProvider.revokeAuthorization().then((value) =>
Navigator.of(context)
.pushReplacementNamed(LoginScreen.routeName));
});
},
icon: Icon(Icons.exit_to_app_sharp, color: Colors.black54),
label: Text("Log out"),
)
],
),
);
}
}
Widget _buildCircleAvatar(String photoUrl) {
return Padding(
padding: const EdgeInsets.only(
left: 10,
top: 30,
),
child: Container(
width: 100,
height: 100,
decoration: BoxDecoration(
border: Border.all(color: Colors.white, width: 3),
shape: BoxShape.circle,
color: Colors.white,
image: DecorationImage(
fit: BoxFit.cover,
image: photoUrl == null
? AssetImage('assets/images/profile_circle_avatar.png')
: NetworkImage(photoUrl),
),
),
),
);
}
Widget _userInformationText(String name, String email) {
return Padding(
padding: const EdgeInsets.only(left: 15.0, top: 15),
child: Column(
crossAxisAlignment: CrossAxisAlignment.start,
children: [
Text(
name,
style: TextStyle(
fontSize: 15.0,
letterSpacing: 1,
fontWeight: FontWeight.w600,
),
),
SizedBox(
height: 3,
),
email == null
? Text('')
: Text(
email,
style: TextStyle(
color: Colors.grey,
fontSize: 12.0,
letterSpacing: 1,
fontWeight: FontWeight.w600,
),
),
],
),
);
}
You can find the source code of the demo app here.
In this article, we have developed a sample application of the HUAWEI Account Kit with Provider pattern in Flutter. I hope this article makes it easier for you to integrate Account Kit into your Flutter projects.
RESOURCES
Account Kit Service
Hi, well explained. The user profile image URL is not there in the plugin. Do you have any idea?
sujith.e said:
Hi, well explained. The user profile image URL is not there in the plugin. Do you have any idea?
Hi, there are two options for the profile photo. The first is that, if the user has signed in, the Account Kit gives us the avatarUriString and we can use it to show the profile photo. For the second situation, where we don't have the user info, we can use a default photo, which in this case is under the assets folder. We need to modify pubspec.yaml for that purpose. You can examine the source code for a better understanding of it.
Regards.
Hi, does the silent sign-in ask for a security code for the first-time login?
Hello everyone,
In this article, I will talk about how to use HMS Map Kit in Flutter applications and share sample code for all of the features of Map Kit.
Today, maps are the basis of many mobile applications. Unfortunately, finding resources for integrating maps into applications developed with Flutter is harder than for native applications. I hope this post will be a good resource for seamlessly integrating HMS Map Kit into your Flutter applications.
What is Map Kit?
HMS Map Kit currently includes map data for more than 200 countries and regions and supports more than 100 languages.
HMS Map Kit is a Huawei service that is easy to integrate, has a wide range of uses, and offers a variety of features. Moreover, Map Kit is constantly updated to enrich its data and reflect changes on the map even at small scales.
To customize your maps, you can add markers, circles, and lines on the map. Map Kit offers a wide range of options to include everything you need on the map. You can see your location live on the map, and you can zoom and change the direction of the map. You can also see live traffic on the map. I think this is one of the most important features a map should have, and Huawei has done a very successful job of reflecting traffic data on the map instantly. Finally, you can see the world's most important locations in 3D thanks to Huawei Maps. I am sure this feature will add excitement to the map experience in your mobile application.
Note: HMS Map Kit works with EMUI 5.0 and later on Huawei devices and Android 7.0 and later on non-Huawei devices.
Development Steps
1. Create Your App in AppGallery Connect
First, you should create a developer account in AppGallery Connect. After creating your developer account, you can create a new project and a new app. You can find the details of these steps below.
2. Add Flutter Map Kit to Your Project
After creating your application on the AGC Console and activating Map Kit, the agconnect-services configuration file should be added to the project first.
The agconnect-services.json configuration file should be added under the android/app directory in the Flutter project.
Next, the following dependencies for HMS usage need to be added to the build.gradle file under the android directory.
Code:
buildscript {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
dependencies {
classpath 'com.android.tools.build:gradle:3.5.0'
classpath 'com.huawei.agconnect:agcp:1.4.2.301'
}
}
allprojects {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
}
Then add the following line of code to the build.gradle file under the android/app directory.
Code:
apply plugin: 'com.huawei.agconnect'
Add the following permissions to use the map to the AndroidManifest.xml file.
Code:
<uses-permission android:name="android.permission.INTERNET"/>
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>
<uses-permission android:name="com.huawei.appmarket.service.commondata.permission.GET_COMMON_DATA"/>
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION"/>
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>
Finally, the Map Kit SDK should be added to the pubspec.yaml file. To do this, open the pubspec.yaml file and add the required dependency as follows.
Code:
dependencies:
flutter:
sdk: flutter
# The following adds the Cupertino Icons font to your application.
# Use with the CupertinoIcons class for iOS style icons.
huawei_map: ^5.0.3+302
Then, by clicking "pub get" in Android Studio (or running flutter pub get), the dependencies are downloaded. After all these steps are completed, your app is ready for coding.
3. Create a Map
Firstly, create a HuaweiMapController object for your map. Create a method called onMapCreated and assign the controller there so the map is loaded when the application is opened.
Next, define a center coordinate and a zoom value for that coordinate. These values will be used while the map is opening.
Finally, after adding your map to the layout, you will end up with a class like the following. For now, the screenshot of your application will also be as follows.
Code:
class MapPage extends StatefulWidget {
@override
_MapPageState createState() => _MapPageState();
}
class _MapPageState extends State<MapPage> {
HuaweiMapController _huaweiMapController;
static const LatLng _centerPoint = const LatLng(41.043982, 29.014333);
static const double _zoom = 12;
bool _cameraPosChanged = false;
bool _trafficEnabled = false;
@override
void initState() {
super.initState();
}
void onMapCreated(HuaweiMapController controller) {
_huaweiMapController = controller;
}
@override
Widget build(BuildContext context) {
final huaweiMap = HuaweiMap(
onMapCreated: onMapCreated,
mapType: MapType.normal,
tiltGesturesEnabled: true,
buildingsEnabled: true,
compassEnabled: true,
zoomControlsEnabled: true,
rotateGesturesEnabled: true,
myLocationButtonEnabled: true,
myLocationEnabled: true,
trafficEnabled: _trafficEnabled,
markers: _markers,
polylines: _polylines,
polygons: _polygons,
circles: _circles,
onClick: (LatLng latLng) {
log("Map Clicked at $latLng");
},
onLongPress: (LatLng latlng) {
log("Map LongClicked at $latlng");
},
initialCameraPosition: CameraPosition(
target: _centerPoint,
zoom: _zoom,
),
);
return MaterialApp(
home: Scaffold(
appBar: AppBar(
title: const Text('Map Kit', style: TextStyle(
color: Colors.black
)),
backgroundColor: Color(0xFFF9C335),
),
body: Stack(
children: <Widget>[
huaweiMap
],
),
),
);
}
}
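Note that the build method above references marker, polyline, polygon, and circle collections that are only filled in the following sections. A minimal sketch of how these fields might be declared in _MapPageState (an assumption, since the article introduces them piece by piece):
Code:
// Collections rendered by the HuaweiMap widget; they are populated by the
// button callbacks described in the sections below.
Set<Marker> _markers = {};
Set<Polyline> _polylines = {};
Set<Polygon> _polygons = {};
Set<Circle> _circles = {};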
As you can see in the code above, we need some parameters while creating the map. The explanation and intended use of some of the most important and most used parameters are as follows.
mapType : It represents the type of map loaded. Currently, only two map types are supported for Flutter: "normal" and "none". If mapType is none, the map will not be loaded. The normal map type is as seen in the image above.
zoomControlsEnabled : It represents the visibility of the zoom buttons on the right of the map. If you set this value to "true", the buttons are automatically added to the map as above. If you set it to "false", you cannot zoom the map with these buttons.
myLocationEnabled : It represents whether you can see your own live location on the map. If you set it to "true", your location will appear as a blue point on the map. If you set it to "false", the user's location will not be shown on the map.
myLocationButtonEnabled : It represents the button just below the zoom buttons at the bottom right of the map. If you have set myLocationEnabled to true, clicking this button will automatically zoom the map to your location.
onClick : Here you can define the events you want to be triggered when the map is tapped. As seen in the example above, when I click on the map, I print the latitude and longitude of the relevant point.
onLongPress : Events that will be triggered by a long tap on the map should be defined here. As seen in the example, when I long-press the map, I print the latitude and longitude of the relevant point.
initialCameraPosition : The starting position and zoom value to be displayed when the map is loaded must be defined here.
4. Show Traffic Data on the Map
When I was talking about the features of Map Kit, I mentioned that this is the feature I like the most. It is both functional and easy to use.
To display live traffic data with a single touch, set the "trafficEnabled" value that we defined while creating the map to "true".
To do this, design a small round button on the left side of the map and prepare a method called trafficButtonOnClick. This method toggles the trafficEnabled value between true and false each time the button is pressed.
Code:
void trafficButtonOnClick() {
if (_trafficEnabled) {
setState(() {
_trafficEnabled = false;
});
} else {
setState(() {
_trafficEnabled = true;
});
}
}
You can design the button as follows: create a Column under the returned MaterialApp and place all of the buttons we will create one after another inside it. I am sharing the button design and overall layout below. Each button created from now on will be located under the trafficButton that we are adding now.
Code:
@override
Widget build(BuildContext context) {
final huaweiMap = HuaweiMap(
onMapCreated: onMapCreated,
mapType: MapType.normal,
tiltGesturesEnabled: true,
buildingsEnabled: true,
compassEnabled: true,
zoomControlsEnabled: true,
rotateGesturesEnabled: true,
myLocationButtonEnabled: true,
myLocationEnabled: true,
trafficEnabled: _trafficEnabled,
markers: _markers,
polylines: _polylines,
polygons: _polygons,
circles: _circles,
onClick: (LatLng latLng) {
log("Map Clicked at $latLng");
},
onLongPress: (LatLng latlng) {
log("Map LongClicked at $latlng");
},
initialCameraPosition: CameraPosition(
target: _centerPoint,
zoom: _zoom,
),
);
final trafficButton = Padding(
padding: EdgeInsets.all(8.0),
child: FloatingActionButton(
onPressed: () => trafficButtonOnClick(),
materialTapTargetSize: MaterialTapTargetSize.padded,
backgroundColor: Color(0xFFF9C335),
tooltip: "Traffic",
child: const Icon(Icons.traffic, size: 36.0, color: Colors.black),
),
);
return MaterialApp(
home: Scaffold(
appBar: AppBar(
title: const Text('Map Kit', style: TextStyle(
color: Colors.black
)),
backgroundColor: Color(0xFFF9C335),
),
body: Stack(
children: <Widget>[
huaweiMap,
Padding(
padding: const EdgeInsets.all(16.0),
child: Align(
alignment: Alignment.topLeft,
child: Column(
children: <Widget>[
trafficButton
//other buttons here
],
),
),
),
],
),
),
);
}
After the traffic button is added, the screen of the map will be as follows.
5. Create 3D Map
This is another of my favorite features. However, Map Kit doesn't support 3D maps for areas in Turkey, so I entered the latitude and longitude of the Colosseum and let the camera move to this point and show it to me in 3D.
Likewise, each time the button is clicked, this feature should be activated and deactivated in turn. When it is active, we will see the Colosseum, and when we deactivate it, we must return to the center position we defined at the beginning. For this, we create a method named moveCameraButtonOnClick as follows.
Code:
void moveCameraButtonOnClick() {
if (!_cameraPosChanged) {
_huaweiMapController.animateCamera(
CameraUpdate.newCameraPosition(
const CameraPosition(
bearing: 270.0,
target: LatLng(41.889228, 12.491780),
tilt: 45.0,
zoom: 17.0,
),
),
);
_cameraPosChanged = !_cameraPosChanged;
} else {
_huaweiMapController.animateCamera(
CameraUpdate.newCameraPosition(
const CameraPosition(
bearing: 0.0,
target: _centerPoint,
tilt: 0.0,
zoom: 12.0,
),
),
);
_cameraPosChanged = !_cameraPosChanged;
}
}
The button should be placed on the left side of the map, under the previous one. Make the button design as follows and add it under the trafficButton with the name moveCamreButton, as I mentioned in the fourth section. After adding the relevant code, the screenshot will be as follows.
Code:
final moveCamreButton = Padding(
padding: EdgeInsets.all(8.0),
child: FloatingActionButton(
onPressed: () => moveCameraButtonOnClick(),
materialTapTargetSize: MaterialTapTargetSize.padded,
backgroundColor: Color(0xFFF9C335),
tooltip: "CameraMove",
child:
const Icon(Icons.airplanemode_active, size: 36.0, color: Colors.black),
),
);
6. Add Markers to Your Map
Markers are indispensable for map services. Thanks to this feature, you can add markers in different colors and designs on the map according to your needs. With these markers, you can name a special address and highlight it on the map.
You need some data to add a marker: the markerId, position, title, snippet, icon, draggable, and rotation values that you will specify when creating the marker.
The code below contains the values and sample code required to add a normal marker. With this code, you can add a classic marker like the ones you see on every map.
The second marker is draggable. You can move the marker anywhere you want by holding down on it. For this, you must set the draggable value to true.
The third marker is placed on the map at an angle. If you want the marker to sit at an angle such as 45° or 60° rather than perpendicular, it is enough to give the desired angle to the rotation value.
The fourth and last marker will look different and colorful, unlike the others.
You can create markers in any style you want using these four examples. The code required to create the markers is as follows.
Code:
void markersButtonOnClick() {
if (_markers.length > 0) {
setState(() {
_markers.clear();
});
} else {
setState(() {
_markers.add(Marker(
markerId: MarkerId('normal_marker'),
position: LatLng(40.997802, 28.994978),
infoWindow: InfoWindow(
title: 'Normal Marker Title',
snippet: 'Description Here!',
onClick: () {
log("Normal Marker InfoWindow Clicked");
}),
onClick: () {
log('Normal Marker Clicked!');
},
icon: BitmapDescriptor.defaultMarker,
));
_markers.add(Marker(
markerId: MarkerId('draggable_marker'),
position: LatLng(41.027335, 29.002359),
draggable: true,
flat: true,
rotation: 0.0,
infoWindow: InfoWindow(
title: 'Draggable Marker Title',
snippet: 'Hi! Description Here!',
),
clickable: true,
onClick: () {
log('Draggable Marker Clicked!');
},
onDragEnd: (pos) {
log("Draggable onDragEnd position : ${pos.lat}:${pos.lng}");
},
icon: BitmapDescriptor.defaultMarker,
));
_markers.add(Marker(
markerId: MarkerId('angular_marker'),
rotation: 45,
position: LatLng(41.043974, 29.028881),
infoWindow: InfoWindow(
title: 'Angular Marker Title',
snippet: 'Hey! Why can not I stand up straight?',
onClick: () {
log("Angular marker infoWindow clicked");
}),
icon: BitmapDescriptor.defaultMarker,
));
_markers.add(Marker(
markerId: MarkerId('colorful_marker'),
position: LatLng(41.076009, 29.054630),
infoWindow: InfoWindow(
title: 'Colorful Marker Title',
snippet: 'Yeap, as you know, description here!',
onClick: () {
log("Colorful marker infoWindow clicked");
}),
onClick: () {
log('Colorful Marker Clicked');
},
icon: BitmapDescriptor.defaultMarkerWithHue(BitmapDescriptor.hueMagenta),
));
});
}
}
Again, you can create a new button located on the left side of the map and add it to the relevant place in the code. Don’t forget to call the markersButtonOnClick method above in the onPressed of the button you created. You can find the necessary code and a screenshot of the button design below.
Code:
final markerButton = Padding(
padding: EdgeInsets.all(8.0),
child: FloatingActionButton(
onPressed: markersButtonOnClick,
materialTapTargetSize: MaterialTapTargetSize.padded,
backgroundColor: Color(0xFFF9C335),
child: const Icon(Icons.add_location, size: 36.0, color: Colors.black),
),
);
7. Add Circle to Your Map
To add a circle, create a method called circlesButtonOnClick and define circleId, center, radius, fillColor, strokeColor, strokeWidth, zIndex, clickable values for the circle that will be created within this method.
All of these values determine where on the map the circle will be placed, and what size and color it will be.
As an example, below you can find the circlesButtonOnClick method, which adds two circles when the button is pressed, the circlesButton design that calls this method, and the resulting screenshot.
Code:
void circlesButtonOnClick() {
if (_circles.length > 0) {
setState(() {
_circles.clear();
});
} else {
LatLng point1 = LatLng(40.986595, 29.025362);
LatLng point2 = LatLng(41.023644, 29.014032);
setState(() {
_circles.add(Circle(
circleId: CircleId('firstCircle'),
center: point1,
radius: 1000,
fillColor: Color.fromARGB(100, 249, 195, 53),
strokeColor: Color(0xFFF9C335),
strokeWidth: 3,
zIndex: 2,
clickable: true,
onClick: () {
log("First Circle clicked");
}));
_circles.add(Circle(
circleId: CircleId('secondCircle'),
center: point2,
zIndex: 1,
clickable: true,
onClick: () {
log("Second Circle Clicked");
},
radius: 2000,
fillColor: Color.fromARGB(50, 230, 20, 50),
strokeColor: Color.fromARGB(50, 230, 20, 50),
));
});
}
}
Code:
final circlesButton = Padding(
padding: EdgeInsets.all(8.0),
child: FloatingActionButton(
onPressed: circlesButtonOnClick,
materialTapTargetSize: MaterialTapTargetSize.padded,
backgroundColor: Color(0xFFF9C335),
child: const Icon(Icons.adjust, size: 36.0, color: Colors.black),
),
);
8. Add Polylines to Your Map
A polyline draws a straight line between two coordinates.
The parameters we need to draw a polyline are the polylineId, points, color, zIndex, endCap, startCap, and clickable values. You can set the cap style of the line's start and end with the startCap and endCap values. For the points value, you need to define two LatLng values as a list.
To create a polyline, create a method called polylinesButtonOnClick and set the above values according to your needs. For the button design, create a button named polylinesButton and call the polylinesButtonOnClick method in its onPressed. The screenshot after adding all the code and the polylines is as follows.
Code:
void polylinesButtonOnClick() {
if (_polylines.length > 0) {
setState(() {
_polylines.clear();
});
} else {
List<LatLng> line1 = [
LatLng(41.068698, 29.030855),
LatLng(41.045916, 29.059351),
];
List<LatLng> line2 = [
LatLng(40.999551, 29.062441),
LatLng(41.025975, 29.069651),
];
setState(() {
_polylines.add(Polyline(
polylineId: PolylineId('firstLine'),
points: line1,
color: Colors.pink,
zIndex: 2,
endCap: Cap.roundCap,
startCap: Cap.squareCap,
clickable: true,
onClick: () {
log("First Line Clicked");
}));
_polylines.add(Polyline(
polylineId: PolylineId('secondLine'),
points: line2,
width: 2,
patterns: [PatternItem.dash(20)],
jointType: JointType.bevel,
endCap: Cap.roundCap,
startCap: Cap.roundCap,
color: Color(0x900072FF),
zIndex: 1,
clickable: true,
onClick: () {
log("Second Line Clicked");
}));
});
}
}
Code:
final polylinesButton = Padding(
padding: EdgeInsets.all(8.0),
child: FloatingActionButton(
onPressed: polylinesButtonOnClick,
materialTapTargetSize: MaterialTapTargetSize.padded,
backgroundColor: Color(0xFFF9C335),
child: const Icon(Icons.waterfall_chart, size: 36.0, color: Colors.black),
),
);
9. Add Polygon to Your Map
A polygon is very similar to a polyline. The difference is that when adding polygons, you can draw closed shapes such as triangles and pentagons by specifying more than two points.
The parameters we need to draw a polygon are the polygonId, points, fillColor, strokeColor, strokeWidth, zIndex, and clickable values. For the points value, you need to define more than two LatLng values as a list.
To add polygons, create a method called polygonsButtonOnClick and set the above values according to your needs. For the button design, create a button named polygonsButton and call the polygonsButtonOnClick method in its onPressed. After adding all the code and the polygons, the screenshot is as follows.
Code:
void polygonsButtonOnClick() {
if (_polygons.length > 0) {
setState(() {
_polygons.clear();
});
} else {
List<LatLng> points1 = [
LatLng(40.989306, 29.021242),
LatLng(40.980753, 29.024590),
LatLng(40.982632, 29.031885),
LatLng(40.991273, 29.024676)
];
List<LatLng> points2 = [
LatLng(41.090321, 29.025598),
LatLng(41.085146, 29.018045),
LatLng(41.077124, 29.016844),
LatLng(41.075441, 29.026285),
LatLng(41.079582, 29.036928),
LatLng(41.086828, 29.031435)
];
setState(() {
_polygons.add(Polygon(
polygonId: PolygonId('polygon1'),
points: points1,
fillColor: Color.fromARGB(100, 129, 95, 53),
strokeColor: Colors.brown[900],
strokeWidth: 1,
zIndex: 2,
clickable: true,
onClick: () {
log("Polygon 1 Clicked");
}));
_polygons.add(Polygon(
polygonId: PolygonId('polygon2'),
points: points2,
fillColor: Color.fromARGB(190, 242, 195, 99),
strokeColor: Colors.yellow[900],
strokeWidth: 1,
zIndex: 1,
clickable: true,
onClick: () {
log("Polygon 2 Clicked");
}));
});
}
}
Code:
final polygonsButton = Padding(
padding: EdgeInsets.all(8.0),
child: FloatingActionButton(
onPressed: polygonsButtonOnClick,
materialTapTargetSize: MaterialTapTargetSize.padded,
backgroundColor: Color(0xFFF9C335),
tooltip: "Polygons",
child: const Icon(Icons.crop_square, size: 36.0, color: Colors.black),
),
);
10. Clear Your Map
You can use all of these features on your map at the same time, combining them according to the needs of your application to improve the user experience. After adding all of these features, the final view of your map will be as follows.
To delete all the elements you added on the map with a single button, you can create a method called clearMap and clear the map in this method.
Code:
void clearMap() {
setState(() {
_markers.clear();
_polylines.clear();
_polygons.clear();
_circles.clear();
});
}
Code:
final clearButton = Padding(
padding: EdgeInsets.all(8.0),
child: FloatingActionButton(
onPressed: () => clearMap(),
materialTapTargetSize: MaterialTapTargetSize.padded,
backgroundColor: Color(0xFFF9C335),
tooltip: "Clear",
child: const Icon(Icons.refresh, size: 36.0, color: Colors.black),
),
);
After all the methods are added, your code structure should be as follows.
Code:
class MapPage extends StatefulWidget {
@override
_MapPageState createState() => _MapPageState();
}
class _MapPageState extends State<MapPage> {
HuaweiMapController _huaweiMapController;
static const LatLng _centerPoint = const LatLng(41.043982, 29.014333);
static const double _zoom = 12;
final Set<Marker> _markers = {};
final Set<Polyline> _polylines = {};
final Set<Polygon> _polygons = {};
final Set<Circle> _circles = {};
bool _cameraPosChanged = false;
bool _trafficEnabled = false;
@override
void initState() {
super.initState();
}
void onMapCreated(HuaweiMapController controller) {
_huaweiMapController = controller;
}
void clearMap() {
setState(() {
_markers.clear();
_polylines.clear();
_polygons.clear();
_circles.clear();
});
}
void log(msg) {
print(msg);
}
void markersButtonOnClick() {
if (_markers.length > 0) {
setState(() {
_markers.clear();
});
} else {
setState(() {
_markers.add(Marker(
markerId: MarkerId('normal_marker'),
position: LatLng(40.997802, 28.994978),
infoWindow: InfoWindow(
title: 'Normal Marker Title',
snippet: 'Description Here!',
onClick: () {
log("Normal Marker InfoWindow Clicked");
}),
onClick: () {
log('Normal Marker Clicked!');
},
icon: BitmapDescriptor.defaultMarker,
));
_markers.add(Marker(
markerId: MarkerId('draggable_marker'),
position: LatLng(41.027335, 29.002359),
draggable: true,
flat: true,
rotation: 0.0,
infoWindow: InfoWindow(
title: 'Draggable Marker Title',
snippet: 'Hi! Description Here!',
),
clickable: true,
onClick: () {
log('Draggable Marker Clicked!');
},
onDragEnd: (pos) {
log("Draggable onDragEnd position : ${pos.lat}:${pos.lng}");
},
icon: BitmapDescriptor.defaultMarker,
));
_markers.add(Marker(
markerId: MarkerId('angular_marker'),
rotation: 45,
position: LatLng(41.043974, 29.028881),
infoWindow: InfoWindow(
title: 'Angular Marker Title',
snippet: 'Hey! Why can not I stand up straight?',
onClick: () {
log("Angular marker infoWindow clicked");
}),
icon: BitmapDescriptor.defaultMarker,
));
_markers.add(Marker(
markerId: MarkerId('colorful_marker'),
position: LatLng(41.076009, 29.054630),
infoWindow: InfoWindow(
title: 'Colorful Marker Title',
snippet: 'Yeap, as you know, description here!',
onClick: () {
log("Colorful marker infoWindow clicked");
}),
onClick: () {
log('Colorful Marker Clicked');
},
icon: BitmapDescriptor.defaultMarkerWithHue(BitmapDescriptor.hueMagenta),
));
});
}
}
void polygonsButtonOnClick() {
if (_polygons.length > 0) {
setState(() {
_polygons.clear();
});
} else {
List<LatLng> points1 = [
LatLng(40.989306, 29.021242),
LatLng(40.980753, 29.024590),
LatLng(40.982632, 29.031885),
LatLng(40.991273, 29.024676)
];
List<LatLng> points2 = [
LatLng(41.090321, 29.025598),
LatLng(41.085146, 29.018045),
LatLng(41.077124, 29.016844),
LatLng(41.075441, 29.026285),
LatLng(41.079582, 29.036928),
LatLng(41.086828, 29.031435)
];
setState(() {
_polygons.add(Polygon(
polygonId: PolygonId('polygon1'),
points: points1,
fillColor: Color.fromARGB(100, 129, 95, 53),
strokeColor: Colors.brown[900],
strokeWidth: 1,
zIndex: 2,
clickable: true,
onClick: () {
log("Polygon 1 Clicked");
}));
_polygons.add(Polygon(
polygonId: PolygonId('polygon2'),
points: points2,
fillColor: Color.fromARGB(190, 242, 195, 99),
strokeColor: Colors.yellow[900],
strokeWidth: 1,
zIndex: 1,
clickable: true,
onClick: () {
log("Polygon 2 Clicked");
}));
});
}
}
void polylinesButtonOnClick() {
if (_polylines.length > 0) {
setState(() {
_polylines.clear();
});
} else {
List<LatLng> line1 = [
LatLng(41.068698, 29.030855),
LatLng(41.045916, 29.059351),
];
List<LatLng> line2 = [
LatLng(40.999551, 29.062441),
LatLng(41.025975, 29.069651),
];
setState(() {
_polylines.add(Polyline(
polylineId: PolylineId('firstLine'),
points: line1,
color: Colors.pink,
zIndex: 2,
endCap: Cap.roundCap,
startCap: Cap.squareCap,
clickable: true,
onClick: () {
log("First Line Clicked");
}));
_polylines.add(Polyline(
polylineId: PolylineId('secondLine'),
points: line2,
width: 2,
patterns: [PatternItem.dash(20)],
jointType: JointType.bevel,
endCap: Cap.roundCap,
startCap: Cap.roundCap,
color: Color(0x900072FF),
zIndex: 1,
clickable: true,
onClick: () {
log("Second Line Clicked");
}));
});
}
}
void circlesButtonOnClick() {
if (_circles.length > 0) {
setState(() {
_circles.clear();
});
} else {
LatLng point1 = LatLng(40.986595, 29.025362);
LatLng point2 = LatLng(41.023644, 29.014032);
setState(() {
_circles.add(Circle(
circleId: CircleId('firstCircle'),
center: point1,
radius: 1000,
fillColor: Color.fromARGB(100, 249, 195, 53),
strokeColor: Color(0xFFF9C335),
strokeWidth: 3,
zIndex: 2,
clickable: true,
onClick: () {
log("First Circle clicked");
}));
_circles.add(Circle(
circleId: CircleId('secondCircle'),
center: point2,
zIndex: 1,
clickable: true,
onClick: () {
log("Second Circle Clicked");
},
radius: 2000,
fillColor: Color.fromARGB(50, 230, 20, 50),
strokeColor: Color.fromARGB(50, 230, 20, 50),
));
});
}
}
void moveCameraButtonOnClick() {
if (!_cameraPosChanged) {
_huaweiMapController.animateCamera(
CameraUpdate.newCameraPosition(
const CameraPosition(
bearing: 270.0,
target: LatLng(41.889228, 12.491780),
tilt: 45.0,
zoom: 17.0,
),
),
);
_cameraPosChanged = !_cameraPosChanged;
} else {
_huaweiMapController.animateCamera(
CameraUpdate.newCameraPosition(
const CameraPosition(
bearing: 0.0,
target: _centerPoint,
tilt: 0.0,
zoom: 12.0,
),
),
);
_cameraPosChanged = !_cameraPosChanged;
}
}
void trafficButtonOnClick() {
if (_trafficEnabled) {
setState(() {
_trafficEnabled = false;
});
} else {
setState(() {
_trafficEnabled = true;
});
}
}
@override
Widget build(BuildContext context) {
final huaweiMap = HuaweiMap(
onMapCreated: onMapCreated,
mapType: MapType.normal,
tiltGesturesEnabled: true,
buildingsEnabled: true,
compassEnabled: true,
zoomControlsEnabled: true,
rotateGesturesEnabled: true,
myLocationButtonEnabled: true,
myLocationEnabled: true,
trafficEnabled: _trafficEnabled,
markers: _markers,
polylines: _polylines,
polygons: _polygons,
circles: _circles,
onClick: (LatLng latLng) {
log("Map Clicked at $latLng");
},
onLongPress: (LatLng latlng) {
log("Map LongClicked at $latlng");
},
initialCameraPosition: CameraPosition(
target: _centerPoint,
zoom: _zoom,
),
);
final markerButton = Padding(
padding: EdgeInsets.all(8.0),
child: FloatingActionButton(
onPressed: markersButtonOnClick,
materialTapTargetSize: MaterialTapTargetSize.padded,
backgroundColor: Color(0xFFF9C335),
child: const Icon(Icons.add_location, size: 36.0, color: Colors.black),
),
);
final circlesButton = Padding(
padding: EdgeInsets.all(8.0),
child: FloatingActionButton(
onPressed: circlesButtonOnClick,
materialTapTargetSize: MaterialTapTargetSize.padded,
backgroundColor: Color(0xFFF9C335),
child: const Icon(Icons.adjust, size: 36.0, color: Colors.black),
),
);
final polylinesButton = Padding(
padding: EdgeInsets.all(8.0),
child: FloatingActionButton(
onPressed: polylinesButtonOnClick,
materialTapTargetSize: MaterialTapTargetSize.padded,
backgroundColor: Color(0xFFF9C335),
child: const Icon(Icons.waterfall_chart, size: 36.0, color: Colors.black),
),
);
final polygonsButton = Padding(
padding: EdgeInsets.all(8.0),
child: FloatingActionButton(
onPressed: polygonsButtonOnClick,
materialTapTargetSize: MaterialTapTargetSize.padded,
backgroundColor: Color(0xFFF9C335),
tooltip: "Polygons",
child: const Icon(Icons.crop_square, size: 36.0, color: Colors.black),
),
);
final clearButton = Padding(
padding: EdgeInsets.all(8.0),
child: FloatingActionButton(
onPressed: () => clearMap(),
materialTapTargetSize: MaterialTapTargetSize.padded,
backgroundColor: Color(0xFFF9C335),
tooltip: "Clear",
child: const Icon(Icons.refresh, size: 36.0, color: Colors.black),
),
);
final moveCameraButton = Padding(
padding: EdgeInsets.all(8.0),
child: FloatingActionButton(
onPressed: () => moveCameraButtonOnClick(),
materialTapTargetSize: MaterialTapTargetSize.padded,
backgroundColor: Color(0xFFF9C335),
tooltip: "CameraMove",
child:
const Icon(Icons.airplanemode_active, size: 36.0, color: Colors.black),
),
);
final trafficButton = Padding(
padding: EdgeInsets.all(8.0),
child: FloatingActionButton(
onPressed: () => trafficButtonOnClick(),
materialTapTargetSize: MaterialTapTargetSize.padded,
backgroundColor: Color(0xFFF9C335),
tooltip: "Traffic",
child: const Icon(Icons.traffic, size: 36.0, color: Colors.black),
),
);
return MaterialApp(
home: Scaffold(
appBar: AppBar(
title: const Text('Map Kit', style: TextStyle(
color: Colors.black
)),
backgroundColor: Color(0xFFF9C335),
),
body: Stack(
children: <Widget>[
huaweiMap,
Padding(
padding: const EdgeInsets.all(16.0),
child: Align(
alignment: Alignment.topLeft,
child: Column(
children: <Widget>[
clearButton,
trafficButton,
moveCameraButton,
markerButton,
circlesButton,
polylinesButton,
polygonsButton
],
),
),
),
],
),
),
);
}
}
Introduction
In this article, we will learn how to implement the Image Segmentation feature in a Flutter application. Using it, we can segment elements such as the human body, plants, and the sky from an image. It can be used in different scenarios; for example, photography apps can use it to replace the background.
About Image Segmentation
Image Segmentation offers developers two types of segmentation: human body segmentation and multiclass segmentation. Human body segmentation can be applied to both static images and video streams, while multiclass segmentation can only be applied to static images.
Huawei ML Kit’s Image Segmentation service separates elements (such as the human body, plants, and the sky) from an image. The supported elements include human body, sky, plant, food, cat, dog, flower, water, sand, building, mountain, and others. Huawei ML Kit works on all Android phones with ARM architecture, and its device-side capability is free.
The result of human body segmentation includes the coordinate array of the human body, a human body image with a transparent background, and a gray-scale image with a white human body on a black background.
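For multiclass segmentation, the only change is the analyzer type. The sketch below is a minimal example built from the same analyzer calls used in the Code Integration section later in this article; the IMAGE_SEG constant name is an assumption mirroring the Android SDK, so verify it against the plugin version you use.
Code:
// Minimal multiclass segmentation sketch.
// Assumption: the plugin exposes an IMAGE_SEG constant for multiclass mode,
// mirroring the Android SDK; the remaining calls match _startRecognition below.
Future<void> segmentAllElements(String imagePath) async {
  final analyzer = MLImageSegmentationAnalyzer();
  final setting = MLImageSegmentationAnalyzerSetting();
  setting.path = imagePath;
  setting.analyzerType = MLImageSegmentationAnalyzerSetting.IMAGE_SEG; // assumed constant
  setting.scene = MLImageSegmentationAnalyzerSetting.ALL;
  final result = await analyzer.analyzeFrame(setting);
  // Each result exposes URIs such as foregroundUri, as used later in this article.
  print(result.first.foregroundUri);
}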
Requirements
1. Any operating system (MacOS, Linux and Windows etc.)
2. Any IDE with Flutter SDK installed (IntelliJ, Android Studio and VsCode etc.)
3. Minimum API Level 19 is required.
4. Devices running EMUI 5.0 or later are required.
Setting up the ML kit
1. First, create a developer account in AppGallery Connect. After creating your developer account, you can create a new project and a new app. For more information, click here.
2. Enable the ML kit in the Manage API section and add the plugin.
3. Add the required dependencies to the build.gradle file under root folder.
Code:
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
4. Add the required permissions to the AndroidManifest.xml file under app/src/main folder.
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
5. After completing all the above steps, you need to add the required kits’ Flutter plugins as dependencies to the pubspec.yaml file. Refer to this URL to download the latest versions of the cross-platform plugins.
Code:
huawei_ml:
path: ../huawei_ml/
6. Do not forget to add the following meta-data tags in your AndroidManifest.xml. This is for automatic update of the machine learning model.
Code:
<application
... >
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value= "imgseg"/>
</application>
After adding them, run flutter pub get command. Now all the plugins are ready to use.
Note: Set multiDexEnabled to true in the app-level build.gradle file under the android/app directory, so that the app will not crash.
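For reference, here is a minimal sketch of that change in the android/app/build.gradle file; only the multiDexEnabled line is the relevant addition, and the surrounding block is just the place where it lives in a typical Flutter project.
Code:
android {
    defaultConfig {
        // ...existing applicationId, minSdkVersion and targetSdkVersion settings...
        multiDexEnabled true
    }
}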
Code Integration
We need to initialize the analyzer with some settings. If we want to identify only the human body, then we need to use the MLImageSegmentationAnalyzerSetting.BODY_SEG constant.
Code:
class ImageSegmentation extends StatefulWidget {
@override
ImageSegmentationState createState() => ImageSegmentationState();
}
class ImageSegmentationState extends State<ImageSegmentation> {
MLImageSegmentationAnalyzer analyzer;
MLImageSegmentationAnalyzerSetting setting;
List<MLImageSegmentation> result;
PickedFile _pickedFile;
File _imageFile;
File _imageFile1;
String _imagePath;
String _imagePath1;
String _foregroundUri = "Foreground Uri";
String _grayscaleUri = "Grayscale Uri";
String _originalUri = "Original Uri";
@override
void initState() {
analyzer = new MLImageSegmentationAnalyzer();
setting = new MLImageSegmentationAnalyzerSetting();
_checkCameraPermissions();
super.initState();
}
_checkCameraPermissions() async {
if (await MLPermissionClient().checkCameraPermission()) {
Scaffold.of(context).showSnackBar(SnackBar(
content: Text("Permission Granted"),
));
} else {
await MLPermissionClient().requestCameraPermission();
}
}
@override
Widget build(BuildContext context) {
return Scaffold(
body: Column(
children: [
SizedBox(height: 15),
Container(
padding: EdgeInsets.all(16.0),
child: Column(
children: [
_setImageView(_imageFile),
SizedBox(width: 15),
_setImageView(_imageFile1),
SizedBox(width: 15),
],
)),
// SizedBox(height: 15),
// _setText(),
SizedBox(height: 15),
_showImagePickingOptions(),
],
));
}
Widget _showImagePickingOptions() {
return Expanded(
child: Align(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: [
Container(
margin: EdgeInsets.only(left: 20.0, right: 20.0),
width: MediaQuery.of(context).size.width,
child: MaterialButton(
color: Colors.amber,
textColor: Colors.white,
child: Text("TAKE PICTURE"),
onPressed: () async {
final String path = await getImage(ImageSource.camera);
_startRecognition(path);
})),
Container(
width: MediaQuery.of(context).size.width,
margin: EdgeInsets.only(left: 20.0, right: 20.0),
child: MaterialButton(
color: Colors.amber,
textColor: Colors.white,
child: Text("PICK FROM GALLERY"),
onPressed: () async {
final String path = await getImage(ImageSource.gallery);
_startRecognition(path);
})),
],
),
),
);
}
Widget _setImageView(File imageFile) {
if (imageFile != null) {
return Image.file(imageFile, width: 200, height: 200);
} else {
return Text(" ");
}
}
_startRecognition(String path) async {
setting.path = path;
setting.analyzerType = MLImageSegmentationAnalyzerSetting.BODY_SEG;
setting.scene = MLImageSegmentationAnalyzerSetting.ALL;
try {
result = await analyzer.analyzeFrame(setting);
_foregroundUri = result.first.foregroundUri;
_grayscaleUri = result.first.grayscaleUri;
_originalUri = result.first.originalUri;
_imagePath = await FlutterAbsolutePath.getAbsolutePath(_grayscaleUri);
_imagePath1 = await FlutterAbsolutePath.getAbsolutePath(_originalUri);
setState(() {
_imageFile = File(_imagePath);
_imageFile1 = File(_imagePath1);
});
} on Exception catch (e) {
print(e.toString());
}
}
Future<String> getImage(ImageSource imageSource) async {
final picker = ImagePicker();
_pickedFile = await picker.getImage(source: imageSource);
return _pickedFile.path;
}
}
Demo
Tips and Tricks
1. Download latest HMS Flutter plugin.
2. Set minimum SDK version to 23 or later.
3. Do not forget to add Camera permission in Manifest file.
4. Latest HMS Core APK is required.
Conclusion
That’s it!
In this article, we have learnt how to use image segmentation. We can extract the human-body pixels from an image and change the background. Here we obtained a human body image with a transparent background and a gray-scale image with a white human body on a black background.
Thanks for reading! If you enjoyed this story, please click the Like button and Follow. Feel free to leave a Comment below.
Reference
ML kit URL
Original Source
Do we need to define any parameters for each kind of segment?
Introduction
In this article, we will integrate Huawei ML Kit into a Flutter StoryApp so that users can listen to stories using ML Kit Text to Speech (TTS). ML Kit provides diversified, easy-to-use machine learning capabilities that help you develop various AI apps, and it allows your apps to leverage Huawei's long-term, proven expertise in machine learning to support diverse artificial intelligence (AI) applications across a wide range of industries.
In this Flutter sample application, we are using the following language/voice-related services.
Real-time translation: Translates text from the source language into the target language through the server on the cloud.
On-device translation: Translates text from the source language into the target language with the support of an on-device model, even when no Internet service is available.
Real-time language detection: Detects the language of text online. Both single-language text and multi-language text are supported.
On-device language detection: Detects the language of text without Internet connection. Both single-language text and multi-language text are supported.
Automatic speech recognition: Converts speech (no longer than 60 seconds) into text in real time.
Text to speech: Converts text information into audio output online in real time. Rich timbres, and volume and speed options are supported to produce more natural sounds (a minimal sketch of this flow follows this list).
On-device text to speech: Converts text information into speech with the support of an on-device model, even when there is no Internet connection.
Audio file transcription: Converts an audio file (no longer than 5 hours) into text. The generated text contains punctuation and timestamps. Currently, the service supports Chinese and English.
Real-time transcription: Converts speech (no longer than 5 hours) into text in real time. The generated text contains punctuation and timestamps.
Sound detection: Detects sound events in online (real-time recording) mode. The detected sound events can help you perform subsequent actions.
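Since this article focuses on Text to Speech, here is a minimal sketch of the online TTS flow, built only from the calls that appear later in storyDetails.dart (MLTtsConfig, MLTtsEngine, setTtsCallback, speak); the function name speakSample and the placeholder callback bodies are illustrative only.
Code:
// Minimal online TTS sketch; the same calls are used in playStory() later in this article.
// MLLanguageApp().setApiKey(...) must already have been called (see initML() below).
Future<void> speakSample(String text) async {
  // Configure the speech: language, online mode, and the text to read
  // (TTS accepts up to 500 characters per request).
  final config = MLTtsConfig(
    language: MLTtsConstants.TTS_EN_US,
    synthesizeMode: MLTtsConstants.TTS_ONLINE_MODE,
    text: text,
  );
  final engine = MLTtsEngine();
  // Track TTS events; these handlers are placeholders.
  engine.setTtsCallback(MLTtsCallback(
    onError: (taskId, err) => print(err.errorMsg),
    onEvent: (taskId, eventId) {},
    onAudioAvailable: (taskId, audioFragment, offset) {},
    onRangeStart: (taskId, start, end) {},
    onWarn: (taskId, warn) {},
  ));
  // Start the speech.
  await engine.speak(config);
}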
Supported Devices
Development Overview
You need to install the Flutter and Dart plugins in your IDE, and I assume that you have prior knowledge of Flutter and Dart.
Hardware Requirements
A computer (desktop or laptop) running Windows 10.
Android phone (with the USB cable), which is used for debugging.
Software Requirements
Java JDK 1.7 or later.
Android Studio or Visual Studio Code installed.
HMS Core (APK) 4.X or later.
Integration process
Step 1: Create Flutter project.
Step 2: Add the App level gradle dependencies.
Open the project's android > app > build.gradle file.
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Root level gradle dependencies
Code:
maven {url 'https://developer.huawei.com/repo/'}
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
Step 3: Add the below permissions in Android Manifest file.
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
Step 4: Download flutter plugins
Step 5: Add the downloaded plugin files into the parent directory of the project. Declare the plugin paths in the pubspec.yaml file under dependencies, and add the path location for the asset images, as sketched below.
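As a rough sketch, the relevant pubspec.yaml sections could look like the following. The plugin folder names other than huawei_ml, and the images/ asset folder, are assumptions for illustration only; adjust them to the plugins and assets you actually downloaded.
Code:
dependencies:
  flutter:
    sdk: flutter
  # Locally downloaded HMS plugins; the paths and names below are placeholders.
  huawei_ml:
    path: ../huawei_ml/
  huawei_account:
    path: ../huawei_account/
  huawei_ads:
    path: ../huawei_ads/

flutter:
  assets:
    - images/   # folder referenced by Image.asset(...) in the code below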
Let's start coding
loginScreen.dart
Code:
class LoginScreen extends StatelessWidget {
const LoginScreen({Key? key}) : super(key: key);
@override
Widget build(BuildContext context) {
return MaterialApp(
debugShowCheckedModeBanner: false,
home: LoginDemo(),
);
}
}
class LoginDemo extends StatefulWidget {
@override
_LoginDemoState createState() => _LoginDemoState();
}
class _LoginDemoState extends State<LoginDemo> {
final HMSAnalytics _hmsAnalytics = new HMSAnalytics();
@override
void initState() {
HwAds.init();
showSplashAd();
super.initState();
}
@override
Widget build(BuildContext context) {
return Scaffold(
backgroundColor: Colors.white,
appBar: AppBar(
title: Text("Login Page"),
backgroundColor: Colors.grey[850],
),
body: RefreshIndicator(
onRefresh: showToast,
child: SingleChildScrollView(
child: Column(
children: <Widget>[
Padding(
padding: const EdgeInsets.only(top: 60.0),
child: Center(
child: Container(
width: 200,
height: 150,
decoration: BoxDecoration(
color: Colors.red,
borderRadius: BorderRadius.circular(60.0)),
child: Image.asset('images/logo_huawei.png')),
),
),
Padding(
padding: EdgeInsets.symmetric(horizontal: 15),
child: TextField(
decoration: InputDecoration(
border: OutlineInputBorder(),
labelText: 'Email',
hintText: 'Enter valid email id '),
),
),
Padding(
padding: const EdgeInsets.only(
left: 15.0, right: 15.0, top: 15, bottom: 0),
child: TextField(
obscureText: true,
decoration: InputDecoration(
border: OutlineInputBorder(),
labelText: 'Password',
hintText: 'Enter password'),
),
),
FlatButton(
onPressed: () {
//TODO FORGOT PASSWORD SCREEN GOES HERE
},
child: Text(
'Forgot Password',
style: TextStyle(color: Colors.blue, fontSize: 15),
),
),
Container(
height: 50,
width: 270,
decoration: BoxDecoration(
color: Colors.red, borderRadius: BorderRadius.circular(20)),
child: FlatButton(
onPressed: () async {
try {
try {
final bool result = await AccountAuthService.signOut();
if (result) {
final bool response =
await AccountAuthService.cancelAuthorization();
}
} on Exception catch (e) {
print(e.toString());
}
} on Exception catch (e) {
print(e.toString());
}
},
child: GestureDetector(
onTap: () async {
try {
final bool response =
await AccountAuthService.cancelAuthorization();
} on Exception catch (e) {
print(e.toString());
}
},
child: Text(
'Login',
style: TextStyle(color: Colors.white, fontSize: 25),
),
),
),
),
SizedBox(
height: 5,
),
Container(
height: 50,
width: 270,
decoration: BoxDecoration(
color: Colors.red, borderRadius: BorderRadius.circular(20)),
child: HuaweiIdAuthButton(
theme: AuthButtonTheme.FULL_TITLE,
buttonColor: AuthButtonBackground.RED,
borderRadius: AuthButtonRadius.MEDIUM,
onPressed: () {
signInWithHuaweiAccount();
}),
),
SizedBox(
height: 30,
),
GestureDetector(
onTap: () {
//showBannerAd();
},
child: Text('New User? Create Account'),
),
],
),
),
),
);
}
void signInWithHuaweiAccount() async {
AccountAuthParamsHelper helper = new AccountAuthParamsHelper();
helper.setAuthorizationCode();
try {
// The sign-in is successful, and the user's ID information and authorization code are obtained.
Future<AuthAccount> account = AccountAuthService.signIn(helper);
account.then((value) => Fluttertoast.showToast(
msg: "Welcome " + value.displayName.toString(),
toastLength: Toast.LENGTH_SHORT,
gravity: ToastGravity.CENTER,
timeInSecForIosWeb: 1,
backgroundColor: Colors.red,
textColor: Colors.white,
fontSize: 16.0));
Navigator.push(
context, MaterialPageRoute(builder: (_) => StoryListScreen()));
} on Exception catch (e) {
print(e.toString());
}
}
Future<void> showToast() async {
Fluttertoast.showToast(
msg: "Refreshing.. ",
toastLength: Toast.LENGTH_SHORT,
gravity: ToastGravity.CENTER,
timeInSecForIosWeb: 1,
backgroundColor: Colors.lightBlue,
textColor: Colors.white,
fontSize: 16.0);
}
//Show Splash Ad
void showSplashAd() {
SplashAd _splashAd = createSplashAd();
_splashAd
..loadAd(
adSlotId: "testq6zq98hecj",
orientation: SplashAdOrientation.portrait,
adParam: AdParam(),
topMargin: 20);
Future.delayed(Duration(seconds: 10), () {
_splashAd.destroy();
});
}
SplashAd createSplashAd() {
SplashAd _splashAd = new SplashAd(
adType: SplashAdType.above,
ownerText: ' Huawei SplashAd',
footerText: 'Test SplashAd',
); // Splash Ad
return _splashAd;
}
}
storyListScreen.dart
Code:
class StoryListScreen extends StatefulWidget {
@override
_StoryListScreenState createState() => _StoryListScreenState();
}
class _StoryListScreenState extends State<StoryListScreen> {
final _itemExtent = 56.0;
final generatedList = List.generate(22, (index) => 'Item $index');
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text('Stories'),
),
backgroundColor: Colors.white,
body: CustomScrollView(
controller: ScrollController(initialScrollOffset: _itemExtent * 401),
slivers: [
SliverFixedExtentList(
itemExtent: _itemExtent,
delegate: SliverChildBuilderDelegate(
(context, index) => Card(
margin: EdgeInsets.only(left: 12, right: 12, top: 5, bottom: 5),
child: Center(
child: GestureDetector(
onTap: () {
showStory(index);
},
child: ListTile(
title: Text(
storyTitles[index],
style: TextStyle(
fontSize: 22.0, fontWeight: FontWeight.bold),
),
),
),
),
),
childCount: storyTitles.length,
),
),
],
),
);
}
void showStory(int index) {
print(storyTitles[index] + " Index :" + index.toString());
Navigator.push(
context, MaterialPageRoute(builder: (_) => StoryDetails(index)));
}
}
storyDetails.dart
Code:
class StoryDetails extends StatefulWidget {
int index;
StoryDetails(this.index);
@override
_StoryDetailsState createState() => new _StoryDetailsState(index);
}
class _StoryDetailsState extends State<StoryDetails> {
int index = 0;
MLTtsEngine? engine = null;
bool toggle = false;
BannerAd? _bannerAd = null;
bool isPaused = false;
_StoryDetailsState(this.index);
@override
void initState() {
// TODO: implement initState
initML();
showBannerAd();
super.initState();
}
@override
Widget build(BuildContext context) {
return WillPopScope(
onWillPop: _onBackPressed,
child: Scaffold(
appBar: AppBar(
title: Text(storyTitles[index]),
actions: <Widget>[
IconButton(
icon: toggle
? Icon(Icons.pause_circle_filled_outlined)
: Icon(
Icons.play_circle_fill_outlined,
),
onPressed: () {
setState(() {
// Here we changing the icon.
toggle = !toggle;
if (toggle) {
if (!isPaused) {
// do something
print("......Play.....");
final stream = storyDetails[index].splitStream(
chunkSize: 499,
splitters: [','],
delimiters: [r'\'],
);
play(stream);
} else {
MLTtsEngine().resume();
isPaused = false;
}
} else {
isPaused = true;
MLTtsEngine().pause();
}
});
}),
],
),
backgroundColor: Colors.white,
body: SafeArea(
child: SingleChildScrollView(
child: Padding(
padding: EdgeInsets.only(left: 5, right: 5, top: 3, bottom: 50),
child: Column(children: <Widget>[
Card(
child: Image.asset(
"images/image_0" + index.toString() + ".png"),
),
Card(
child: Text(
storyDetails[index],
style: TextStyle(
color: Colors.black,
fontWeight: FontWeight.normal,
fontSize: 20),
)),
Center(
child: Image.asset(
"images/greeting.gif",
height: 320.0,
width: 620.0,
),
),
]),
),
),
)),
);
}
void showBannerAd() {
_bannerAd = createBannerAd();
_bannerAd!
..loadAd()
..show(gravity: Gravity.bottom, offset: 1);
}
//Create BannerAd
static BannerAd createBannerAd() {
BannerAd banner = BannerAd(
adSlotId: "testw6vs28auh3",
size: BannerAdSize.sSmart,
adParam: AdParam());
banner.setAdListener = (AdEvent event, {int? errorCode}) {
print("Banner Ad event : $event " + banner.id.toString());
};
return banner;
}
Future<bool> _onBackPressed() async {
if (_bannerAd != null) {
_bannerAd?.destroy();
}
if (engine != null) {
engine!.stop();
}
return true;
}
Future<void> playStory(String parts) async {
// Create MLTtsConfig to configure the speech.
final config = MLTtsConfig(
language: MLTtsConstants.TTS_EN_US,
synthesizeMode: MLTtsConstants.TTS_ONLINE_MODE,
text: parts,
);
// Create an MLTtsEngine object.
engine = new MLTtsEngine();
// Set a listener to track tts events.
engine?.setTtsCallback(MLTtsCallback(
onError: _onError,
onEvent: _onEvent,
onAudioAvailable: _onAudioAvailable,
onRangeStart: _onRangeStart,
onWarn: _onWarn,
));
// Start the speech.
await engine?.speak(config);
}
void _onError(String taskId, MLTtsError err) {
print(err.errorMsg);
}
void _onEvent(String taskId, int eventId) {
}
void _onAudioAvailable(
String taskId, MLTtsAudioFragment audioFragment, int offset) {
}
void _onRangeStart(String taskId, int start, int end) {
}
void _onWarn(String taskId, MLTtsWarn warn) {
}
Future<void> initML() async {
MLLanguageApp().setApiKey(
"DAED8900[p0-tu7au4ZHZuWDrR7oKps/WybCAJ0IOi7UdLfIlsIu9C4pEw0OSNA==");
}
Future<void> play(Stream<List<String>> stream) async {
int i = 0;
await for (List<String> parts in stream) {
// print(parts);
if (i == 0) {
playStory(parts.toString());
}
i++;
}
}
}
Result
Tips and Tricks
Make sure that the downloaded plugin is unzipped in the parent directory of the project.
Make sure that the agconnect-services.json file is added.
Make sure the dependencies are added to the pubspec.yaml file.
Run flutter pub get after adding the dependencies.
Make sure that the service is enabled in AppGallery Connect (AGC).
Make sure the images are defined in the pubspec.yaml file.
Make sure that the permissions are added in the manifest file.
Conclusion
In this article, we have learnt how to integrate Huawei ML Kit Text to Speech in a Flutter StoryApp. TTS supports a maximum of 500 characters per request. Once Account Kit is integrated, users can quickly and conveniently sign in to the app with their Huawei IDs after granting initial access permission. Banner and Splash ads help you monetize your StoryApp.
Thank you so much for reading, and thanks to the original author for the write-up. I hope this article helps you understand the integration of Huawei ML Kit, Banner and Splash ads in a Flutter StoryApp.
Reference
ML Kit Text To Speech
StoryAuthors : https://momlovesbest.com/short-moral-stories-kids
Account Kit – Training Video
ML Kit – Training Video
Check out in forum