Introduction
Flutter is a mobile application development kit for crafting high-quality native experiences on iOS and Android in record time. Although Flutter alone is enough to build great mobile apps, interactive features such as map integration are often needed to enrich the user experience.
Huawei Map Kit
Huawei Map Kit is a development kit and map service developed by Huawei that lets you easily integrate map-based functions into your apps. The kit currently covers map data for more than 200 countries and regions, supports 40+ languages, provides UI elements such as markers, shapes, and layers to customize your map, and enables users to interact with the map in your app through gestures and buttons in different scenarios.
With the recently released Huawei Map Kit Flutter Plugin, developers can now use these features and integrate map-based functions into their Flutter projects. Hence, in this article, to explore the kit and Huawei services, we will build a mobile app featuring a Huawei map using the plugin and the Flutter SDK.
HMS Core Github: https://github.com/HMS-Core/hms-flutter-plugin/tree/master/flutter-hms-map
Required Configurations
Before we get started, to use Huawei Map Kit and other Huawei Mobile Services, you need a Huawei Developer account. For more detailed information on developer accounts and how to apply for one, please refer to this link.
Creating an App
· Sign in to AppGallery Connect with your Huawei ID and create a new project to work with by clicking My Projects > Add Project.
· Click the Add App button to add a new application to your project by filling in the required fields such as name, category, and default language.
· The Map Kit API for your app is enabled by default, but just to be sure, you can check it on the Manage APIs tab of your project in AppGallery Connect. You can also refer to the Enabling Services article if you need any help.
· Open Android Studio and create a Flutter application. The package name of your Flutter application should be the same as the package name of the app you created on AppGallery Connect.
· Android requires a Signing Certificate to verify the authenticity of apps, so you need to generate one for your app. If you don’t know how to generate a Signing Certificate, please click here for the related article. Copy the generated keystore file to the android/app directory of your project.
A side note: a Flutter project contains platform-specific folders, such as ios and android, yet Android Studio treats them as plain Flutter project folders and may show errors in their files. For this reason, before changing anything related to the Android platform, right-click the android folder in your project directory and select Flutter > Open Android module in Android Studio. You can then easily modify the files, and also generate Signing Certificates, from the Android Studio window that opens.
· After generating your Signing Certificate (keystore), you should extract its SHA-256 fingerprint using keytool, which is provided by the JDK, and add it to AppGallery Connect by navigating to the App Information section of your app. You may also refer to the Generating Fingerprint from a Keystore and Add Fingerprint Certificate to AppGallery Connect articles for further help.
Integrating HMS and Map Plugin to Your Flutter Project
You also need to configure your Flutter application so that it can communicate with Huawei services and use Map Kit.
· Add the Huawei Map Kit Flutter Plugin as a dependency to your project’s pubspec.yaml file and run the flutter pub get command to integrate the Map plugin into your project.
Code:
dependencies:
  flutter:
    sdk: flutter
  huawei_map: ^4.0.4+300
· Download the agconnect-services.json file from the App Information section of AppGallery Connect and copy it to the android/app directory of your project.
Your project’s android directory should look like this after adding both the agconnect-services.json and keystore files.
· Add the Maven repository address and AppGallery Connect plugin to the project level build.gradle (android/build.gradle) file.
Code:
buildscript {
repositories {
//other repositories
maven { url 'https://developer.huawei.com/repo/' }
}
dependencies {
//other dependencies
classpath 'com.huawei.agconnect:agcp:1.2.1.301'
}
}
allprojects {
repositories {
//other repositories
maven { url 'https://developer.huawei.com/repo/' }
}
}
· Open your app level build.gradle (android/app/build.gradle) file and add the AppGallery Connect plugin.
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect' //Added Line
apply plugin: 'kotlin-android'
· In the same file (android/app/build.gradle), add the signing configurations, and change the minSdkVersion of your project as shown below.
Code:
android {
/*
* Other configurations
*/
defaultConfig {
applicationId "<package_name>" //Your unique package name
minSdkVersion 19 //Change minSdkVersion to 19
targetSdkVersion 28
versionCode flutterVersionCode.toInteger()
versionName flutterVersionName
}
signingConfigs {
config{
storeFile file('<keystore_file>')
storePassword '<keystore_password>'
keyAlias '<key_alias>'
keyPassword '<key_password>'
}
}
buildTypes {
debug {
signingConfig signingConfigs.config
}
release {
signingConfig signingConfigs.config
}
}
}
· Finally, to call the capabilities of Huawei Map Kit, declare the following permissions for your app in your AndroidManifest.xml file.
Code:
<uses-permission android:name="android.permission.INTERNET"/>
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>
— To obtain the current device location, the following permissions also need to be declared in your AndroidManifest.xml file (on Android 6.0 and later versions, you need to request these permissions dynamically at runtime). However, since we won’t use the current location in our app, this step is optional.
Code:
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION"/>
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>
Using Huawei Map Kit Flutter Plugin
Creating a Map
Now that we are ready to use Huawei’s Map features, let’s implement a simple map.
Huawei Map Kit Flutter Plugin provides a single widget, called HuaweiMap, for developers to easily create and manage map fragments. By using this widget, developers can enable or disable attributes or gestures of a map, set initial markers, circles, or other shapes, decide the map type, and also set an initial camera position to focus an area when the map is ready.
Let’s choose a random location from İstanbul as the initial camera position. While declaring the initial camera position, the zoom level, which indicates the magnification, and the target, which indicates the latitude and longitude of the location, are required. You can find the target coordinates and zoom level that we will use while creating our map below.
Code:
static const LatLng _center = const LatLng(41.027470, 28.999339);
static const double _zoom = 12;
Now that we have an initial position, we can implement the map itself. We will first create a simple Scaffold with a simple AppBar, then create a HuaweiMap object as the Scaffold’s body. As mentioned before, the HuaweiMap object has different attributes, which you can see below. The following code will create a HuaweiMap object that is full-screen, scrollable, and tiltable, and that also shows buildings and traffic.
Code:
class HomeScreen extends StatefulWidget {
@override
_HomeScreenState createState() => _HomeScreenState();
}
class _HomeScreenState extends State<HomeScreen> {
static const LatLng _center = const LatLng(41.027470, 28.999339);
static const double _zoom = 12;
@override
Widget build(BuildContext context) {
return new Scaffold(
appBar: AppBar(
title: Text("Map Demo"),
centerTitle: true,
backgroundColor: Colors.red,
),
body: HuaweiMap(
initialCameraPosition: CameraPosition(
target: _center,
zoom: _zoom,
),
mapType: MapType.normal,
tiltGesturesEnabled: true,
buildingsEnabled: true,
compassEnabled: true,
zoomControlsEnabled: false,
rotateGesturesEnabled: true,
myLocationButtonEnabled: false,
myLocationEnabled: false,
trafficEnabled: true,
),
);
}
}
To keep the map ‘clean’, I disabled the myLocationEnabled, myLocationButtonEnabled, and zoomControlsEnabled attributes of the map, but do not forget to explore these attributes yourself since they can greatly boost the user experience of your app.
Huawei Map
Resizing the Map
A full-screen map is not always useful. Since HuaweiMap is a standalone widget, we can resize the map by wrapping it in a Container or ConstrainedBox widget.
For this project, I will create a layout in the Scaffold by using Expanded, Column, and Container widgets. The following code shows a HuaweiMap widget that fills only one-third of the screen.
Code:
class HomeScreen extends StatefulWidget {
@override
_HomeScreenState createState() => _HomeScreenState();
}
class _HomeScreenState extends State<HomeScreen> {
static const LatLng _center = const LatLng(41.027470, 28.999339);
static const double _zoom = 12;
@override
Widget build(BuildContext context) {
return new Scaffold(
appBar: AppBar(
title: Text("Locations"),
centerTitle: true,
backgroundColor: Colors.red,
),
body: Column(
children: [
Expanded(
child: Padding(
padding: EdgeInsets.all(8),
child: Container(
decoration: BoxDecoration(
border: Border.all(color: Colors.green, width: 2)),
child: HuaweiMap(
initialCameraPosition: CameraPosition(
target: _center,
zoom: _zoom,
),
mapType: MapType.normal,
tiltGesturesEnabled: true,
buildingsEnabled: true,
compassEnabled: true,
zoomControlsEnabled: false,
rotateGesturesEnabled: true,
myLocationButtonEnabled: false,
myLocationEnabled: false,
trafficEnabled: true,
),
),
),
),
Expanded(flex: 2, child: Container()),
],
),
);
}
}
Adding Interactivity by Using Markers and CameraUpdate
Let’s assume that we are building an app that shows different restaurant locations as markers on our HuaweiMap object. To do this, we will set some initial markers on our map using the HuaweiMap widget’s markers field.
The markers field of the widget takes a Set of markers, so we should first create a set of Marker objects and then use it as the initial markers of our HuaweiMap widget.
As you know, two-thirds of our screen is empty. To fill the space, we will create some Card widgets that hold each location’s name, motto, and address as a String. To reduce redundant code blocks, I created a separate widget, called LocationCard, which returns a styled custom Card widget. To keep this article in scope, I will not cover the steps of creating a custom card widget, but you can find its code at the project’s GitHub link.
Code:
class HomeScreen extends StatefulWidget {
@override
_HomeScreenState createState() => _HomeScreenState();
}
class _HomeScreenState extends State<HomeScreen> {
static const LatLng _center = const LatLng(41.027470, 28.999339);
static const double _zoom = 12;
//Marker locations
static const LatLng _location1 = const LatLng(41.0329109, 28.9840904);
static const LatLng _location2 = const LatLng(41.0155957, 28.9827176);
static const LatLng _location3 = const LatLng(41.0217315, 29.0111898);
//Set of markers
Set<Marker> _markers = {
Marker(markerId: MarkerId("Location1"), position: _location1),
Marker(markerId: MarkerId("Location2"), position: _location2),
Marker(markerId: MarkerId("Location3"), position: _location3),
};
@override
Widget build(BuildContext context) {
return new Scaffold(
appBar: AppBar(
title: Text("Locations"),
centerTitle: true,
backgroundColor: Colors.red,
),
body: Column(
children: [
Expanded(
child: Padding(
padding: EdgeInsets.all(8),
child: Container(
decoration: BoxDecoration(
border: Border.all(color: Colors.green, width: 2)),
child: HuaweiMap(
initialCameraPosition: CameraPosition(
target: _center,
zoom: _zoom,
),
mapType: MapType.normal,
tiltGesturesEnabled: true,
buildingsEnabled: true,
compassEnabled: true,
zoomControlsEnabled: false,
rotateGesturesEnabled: true,
myLocationButtonEnabled: false,
myLocationEnabled: false,
trafficEnabled: true,
markers: _markers, //Using the set
),
),
),
),
//Styled Card widgets
Expanded(flex: 2, child: Padding(
padding: EdgeInsets.all(8),
child: SingleChildScrollView(
child: Column(
children: [
LocationCard(
title: "Location 1",
motto: "A Fine Dining Restaurant",
address:
"Avrupa Yakası, Cihangir, 34433 Beyoğlu/İstanbul Türkiye",
),
LocationCard(
title: "Location 2",
motto: "A Restaurant with an Extraordinary View",
address:
"Avrupa Yakası, Hoca Paşa, 34110 Fatih/İstanbul Türkiye",
),
LocationCard(
title: "Location 3",
motto: "A Casual Dining Restaurant",
address:
"Anadolu Yakası, Aziz Mahmut Hüdayi, 34672 Üsküdar/İstanbul Türkiye",
)
],
),
),),
)],
),
);
}
}
Now we have some custom cards beneath the map object, which also has some initial markers. We will use these custom cards as buttons that zoom in on the desired marker with a smooth camera animation. This way, users can simply tap a card to see the zoomed-in location on the Huawei map and explore the surroundings without leaving the page.
Resized Map with Location Cards
Before turning the cards into buttons, we should first declare a HuaweiMapController object to act as the controller of the HuaweiMap, and then assign it in the HuaweiMap widget’s onMapCreated field to pair the map with its controller. Below, I declared a controller and, with the help of a simple function, used it in our HuaweiMap object.
Code:
HuaweiMapController mapController;
void _onMapCreated(HuaweiMapController controller) {
mapController = controller;
}
/*
This section only shows the added line. Remaining code is not changed.
*/
child: HuaweiMap(
initialCameraPosition: CameraPosition(
target: _center,
zoom: _zoom,
),
onMapCreated: _onMapCreated, // Added Line
mapType: MapType.normal,
tiltGesturesEnabled: true,
/*
Rest of the code
*/
We now have a controller for programmatic camera moves, so let’s use it. I wrapped the LocationCards with InkWell widgets to provide onTap functionality. The plugin’s CameraUpdate class has several useful methods that enable us to zoom in, zoom out, or change the camera position. We will use the newLatLngZoom method to zoom in on the stated location and then, by using the controller’s animateCamera method, animate the camera move to the new camera position. You can find the wrapped LocationCard with the CameraUpdate and the controller below.
Code:
InkWell(
onTap: () {
CameraUpdate cameraUpdate =
CameraUpdate.newLatLngZoom(
_location1, _zoomMarker);
mapController.animateCamera(cameraUpdate);
},
child: LocationCard(
title: "Location 1",
motto: "A Fine Dining Restaurant",
address:
"Avrupa Yakası, Cihangir, 34433 Beyoğlu/İstanbul Türkiye",
)),
The _zoomMarker variable used here is a constant double with a value of 18, and _location1 is the same variable we set while creating our markers.
After implementing these steps, tap a card and you will see a smooth camera animation with a change of zoom level in your HuaweiMap widget. Voila!
As mentioned before, you can also add circles, polylines, or polygons, similar to the markers. Furthermore, you can add on-click actions both to your map and to the shapes or markers you set. Do not forget to explore the other functionalities that the Huawei Map Kit Flutter Plugin offers.
I am leaving the project’s GitHub link below in case you want to check or try this example yourself. You can also find LocationCard’s code and the other minor adjustments that I made at the link.
https://github.com/SerdarCanDev/FlutterHuaweiMapTutorial
Conclusion
Since Huawei created its own services, demand for support for cross-platform frameworks such as Flutter, React Native, Cordova, and Xamarin has increased. To meet this demand, Huawei continuously releases plugins and updates to support its developers. We have now learned how to use Huawei Map Kit in our Flutter projects, yet there are several more official plugins for Huawei services to explore.
For further reading, I will provide some links below, including an article that showcases another service. You may also ask any questions related to this article in the comments section.
https://developer.huawei.com/consumer/en/hms/huawei-MapKit
https://pub.dev/publishers/developer.huawei.com/packages
Sending Push Notifications on Flutter with Huawei Push Kit Plugin:
https://medium.com/huawei-developers/sending-push-notifications-on-flutter-with-huawei-push-kit-plugin-534787862b4d
Introduction
In the previous post, we looked at how to use HUAWEI ML Kit's skeleton detection capability to detect points such as the head, neck, shoulders, knees and ankles. But as well as skeleton detection, ML Kit also provides a hand keypoint detection capability, which can locate 21 hand keypoints, such as fingertips, joints, and wrists.
Application Scenarios
Hand keypoint detection is useful in a huge range of situations. For example, short video apps can generate some cute and funny special effects based on hand keypoints, to add more fun to short videos.
Or, if smart home devices are integrated with hand keypoint detection, users could control them from a remote distance using customized gestures, so they could do things like activate a robot vacuum cleaner while they’re out.
Hand Keypoint Detection Development
Now, we’re going to see how to quickly integrate ML Kit's hand keypoint detection feature. Let’s take video stream detection as an example.
1. Preparations
You can find detailed information about the preparations you need to make on the HUAWEI Developers-Development Process.
Here, we'll just look at the most important procedures.
1.1 Configure the Maven Repository Address in the Project-Level build.gradle File
Code:
buildscript {
    repositories {
        ...
        maven {url 'https://developer.huawei.com/repo/'}
    }
    dependencies {
        ...
        classpath 'com.huawei.agconnect:agcp:1.3.1.300'
    }
}
allprojects {
    repositories {
        ...
        maven {url 'https://developer.huawei.com/repo/'}
    }
}
1.2 Add SDK Dependencies to the App-Level build.gradle File
Code:
dependencies{
// Import the base SDK.
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint:2.0.2.300'
// Import the hand keypoint detection model package.
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint-model:2.0.2.300'
}
1.3 Add Configurations to the File Header
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
1.4 Add these Statements to the AndroidManifest.xml File so the Machine Learning Model can Automatically Update
Code:
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value= "handkeypoint"/>
1.5 Apply for Camera Permission and Local File Reading Permission
Code:
<!--Camera permission-->
<uses-permission android:name="android.permission.CAMERA" />
<!--Read permission-->
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
2. Code Development
2.1 Create a Hand Keypoint Analyzer
Code:
MLHandKeypointAnalyzerSetting setting = new MLHandKeypointAnalyzerSetting.Factory()
        // MLHandKeypointAnalyzerSetting.TYPE_ALL indicates that all results are returned.
        // MLHandKeypointAnalyzerSetting.TYPE_KEYPOINT_ONLY indicates that only hand keypoint information is returned.
        // MLHandKeypointAnalyzerSetting.TYPE_RECT_ONLY indicates that only palm information is returned.
        .setSceneType(MLHandKeypointAnalyzerSetting.TYPE_ALL)
        // Set the maximum number of hand regions that can be detected within an image. A maximum of 10 hand regions can be detected by default.
        .setMaxHandResults(1)
        .create();
MLHandKeypointAnalyzer analyzer = MLHandKeypointAnalyzerFactory.getInstance().getHandKeypointAnalyzer(setting);
2.2 Create the HandKeypointTransactor Class for Processing Detection Results
This class implements the MLAnalyzer.MLTransactor<T> API and uses the transactResult method to obtain the detection results and implement specific services. In addition to the coordinates of each hand keypoint, the detection results include a confidence value for the palm and for each keypoint. Palms and hand keypoints that are incorrectly detected can be filtered out based on these confidence values, and you can set a threshold according to your misrecognition tolerance, as shown in the sketch after the class below.
Code:
public class HandKeypointTransactor implements MLAnalyzer.MLTransactor<List<MLHandKeypoints>> {
@Override
public void transactResult(MLAnalyzer.Result<List<MLHandKeypoints>> results) {
SparseArray<List<MLHandKeypoints>> analyseList = results.getAnalyseList();
// Determine detection result processing as required. Note that only the detection results are processed.
// Other detection-related APIs provided by ML Kit cannot be called.
}
@Override
public void destroy() {
// Callback method used to release resources when the detection ends.
}
}
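For example, the confidence-based filtering mentioned above could look like the following minimal sketch inside transactResult. The CONFIDENCE_THRESHOLD value is purely illustrative, and the getScore() and getHandKeypoints() accessors on MLHandKeypoints and MLHandKeypoint are assumed to match the SDK version you use.
Code:
// Illustrative threshold; tune it to your own misrecognition tolerance.
private static final float CONFIDENCE_THRESHOLD = 0.8f;

@Override
public void transactResult(MLAnalyzer.Result<List<MLHandKeypoints>> results) {
    SparseArray<List<MLHandKeypoints>> analyseList = results.getAnalyseList();
    for (int i = 0; i < analyseList.size(); i++) {
        for (MLHandKeypoints hand : analyseList.valueAt(i)) {
            if (hand.getScore() < CONFIDENCE_THRESHOLD) {
                continue; // Skip palms detected with low confidence.
            }
            for (MLHandKeypoint keypoint : hand.getHandKeypoints()) {
                if (keypoint.getScore() >= CONFIDENCE_THRESHOLD) {
                    float x = keypoint.getPointX();
                    float y = keypoint.getPointY();
                    // Draw or otherwise use only the trusted keypoints here.
                }
            }
        }
    }
}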
2.3 Set the Detection Result Processor to Bind the Analyzer to the Result Processor
Code:
analyzer.setTransactor(new HandKeypointTransactor());
2.4 Create an Instance of the LensEngine Class
The LensEngine Class is provided by the HMS Core ML SDK to capture dynamic camera streams and pass these streams to the analyzer. The camera display size should be set to a value between 320 x 320 px and 1920 x 1920 px.
Code:
LensEngine lensEngine = new LensEngine.Creator(getApplicationContext(), analyzer)
        .setLensType(LensEngine.BACK_LENS)
        .applyDisplayDimension(1280, 720)
        .applyFps(20.0f)
        .enableAutomaticFocus(true)
        .create();
2.5 Call the run Method to Start the Camera and Read Camera Streams for Detection
Code:
// Implement other logic of the SurfaceView control by yourself.
SurfaceView mSurfaceView = findViewById(R.id.surface_view);
try {
lensEngine.run(mSurfaceView.getHolder());
} catch (IOException e) {
// Exception handling logic.
}
2.6 Stop the Analyzer to Release Detection Resources Once the Detection is Complete
Code:
if (analyzer != null) {
analyzer.stop();
}
if (lensEngine != null) {
lensEngine.release();
}
Demo Effect
And that's it! We can now see hand keypoints appear when making different gestures. Remember that you can expand this capability if you need to.
Introduction
Sometimes, we can't help taking photos to keep our unforgettable moments in life. But most of us are not professional photographers or models, so our photographs can end up falling short of our expectations. So, how can we produce more impressive snaps? If you have an image processing app on your phone, it can automatically detect faces in a photo, and you can then adjust the image until you're happy with it. So, after looking around online, I found HUAWEI ML Kit's face detection capability. By integrating this capability, you can add beautification functions to your apps. Have a try!
Application Scenarios
ML Kit's face detection capability detects up to 855 facial keypoints and returns the coordinates for the face's contour, eyebrows, eyes, nose, mouth, and ears, as well as the angle of the face. Once you've integrated this capability, you can quickly create beauty apps and enable users to add fun facial effects and features to their images.
Face detection also detects whether the subject's eyes are open, whether they're wearing glasses or a hat, whether they have a beard, and even their gender and age. This is useful if you want to add a parental control function to your app which prevents children from getting too close to their phone, or staring at the screen for too long.
In addition, face detection can detect up to seven facial expressions, including smiling, neutral, angry, disgusted, frightened, sad, and surprised faces. This is great if you want to create apps such as smile-cameras.
You can integrate any of these capabilities as needed. At the same time, face detection supports image and video stream detection, cross-frame face tracking, and multi-face detection. It really is powerful! Now, let's see how to integrate this capability.
Face Detection Development
1. Preparations
You can find detailed information about the preparations you need to make on the HUAWEI Developers-Development Process. Here, we'll just look at the most important procedures.
1.1 Configure the Maven Repository Address in the Project-Level build.gradle File
Code:
buildscript {
    repositories {
        ...
        maven {url 'https://developer.huawei.com/repo/'}
    }
    dependencies {
        ...
        classpath 'com.huawei.agconnect:agcp:1.3.1.300'
    }
}
allprojects {
    repositories {
        ...
        maven {url 'https://developer.huawei.com/repo/'}
    }
}
1.2 Add Configurations to the File Header
After integrating the SDK, add the following configuration to the file header:
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
1.3 Configure SDK Dependencies in the App-Level build.gradle File
Code:
dependencies{
// Import the base SDK.
implementation 'com.huawei.hms:ml-computer-vision-face:2.0.1.300'
// Import the contour and keypoint detection model package.
implementation 'com.huawei.hms:ml-computer-vision-face-shape-point-model:2.0.1.300'
// Import the facial expression detection model package.
implementation 'com.huawei.hms:ml-computer-vision-face-emotion-model:2.0.1.300'
// Import the facial feature detection model package.
implementation 'com.huawei.hms:ml-computer-vision-face-feature-model:2.0.1.300'
}
1.4 Add these Statements to the AndroidManifest.xml File so the Machine Learning Model can Update Automatically
Code:
<manifest
...
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value= "face"/>
...
</manifest>
1.5 Apply for Camera Permission
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" />
2. Code Development
2.1 Create a Face Analyzer by Using the Default Parameter Configurations
Code:
analyzer = MLAnalyzerFactory.getInstance().getFaceAnalyzer();
2.2 Create an MLFrame Object by Using the android.graphics.Bitmap for the Analyzer to Detect Images
Code:
MLFrame frame = MLFrame.fromBitmap(bitmap);
2.3 Call the asyncAnalyseFrame Method to Perform Face Detection
Code:
Task<List<MLFace>> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<List<MLFace>>() {
@Override
public void onSuccess(List<MLFace> faces) {
// Detection success. Obtain the face keypoints.
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
// Detection failure.
}
});
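Inside onSuccess, you can then read whatever attributes your app needs from each MLFace. The following is a minimal, hedged sketch, assuming the analyzer settings return emotions, features, and keypoints, and that getEmotions(), getFeatures(), and getAllPoints() are the accessors available in your SDK version; the logging tag is just illustrative.
Code:
@Override
public void onSuccess(List<MLFace> faces) {
    for (MLFace face : faces) {
        // Facial expression probabilities, for example how likely the subject is smiling.
        float smiling = face.getEmotions().getSmilingProbability();
        // Facial features, for example the estimated age of the subject.
        int age = face.getFeatures().getAge();
        // All detected face keypoints (contour, eyebrows, eyes, nose, mouth, ears).
        List<MLPosition> points = face.getAllPoints();
        Log.d("FaceDemo", "smiling=" + smiling + ", age=" + age + ", keypoints=" + points.size());
    }
}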
2.4 Use the Progress Bar to Process the Face in the Image
Call the magnifyEye and smallFaceMesh methods to implement the eye-enlarging algorithm and face-shaping algorithm.
Code:
private SeekBar.OnSeekBarChangeListener onSeekBarChangeListener = new SeekBar.OnSeekBarChangeListener() {
    @Override
    public void onProgressChanged(SeekBar seekBar, int progress, boolean fromUser) {
        switch (seekBar.getId()) {
            case R.id.seekbareye: // When the progress bar of the eye enlarging changes, call magnifyEye here.
                break;
            case R.id.seekbarface: // When the progress bar of the face shaping changes, call smallFaceMesh here.
                break;
        }
    }
    @Override
    public void onStartTrackingTouch(SeekBar seekBar) {}
    @Override
    public void onStopTrackingTouch(SeekBar seekBar) {}
};
2.5 Release the Analyzer After the Detection is Complete
Code:
try {
if (analyzer != null) {
analyzer.stop();
}
} catch (IOException e) {
Log.e(TAG, "e=" + e.getMessage());
}
Demo
Now, let's see what it can do. Pretty cool, right?
Introduction
There are so many online games these days that are addictive, easy to play, and suitable for a wide age range. I've long dreamed of creating a hit game of my own, but doing so is harder than it seems. While researching on the Internet, I was fortunate to learn about HUAWEI ML Kit's face detection and hand keypoint detection capabilities, which can make games much more engaging.
Application Scenarios
ML Kit's face detection capability detects up to 855 facial keypoints and returns the coordinates for the face contours, eyebrows, eyes, nose, mouth, and ears, as well as the facial angles. Integrating the face detection capability makes it easy to create a beauty app, or to let users add special effects to facial images to make them more intriguing.
The hand keypoint detection capability can be applied across a wide range of scenarios. For example, after integrating this capability, short video apps can provide diverse special effects that users apply to their videos, providing new sources of fun and whimsy.
Crazy Rockets is a game that integrates both capabilities. It provides two playing modes for players, allowing them to control rockets through hand and face movements. Both modes work flawlessly by detecting the motions. Let's take a look at what the game effects look like in practice.
Pretty exhilarating, wouldn't you say? Now, I'll show you how to create a game like Crazy Rockets by using ML Kit's face detection and hand keypoint detection capabilities.
Development Practice
Preparations
To find detailed information about the preparations you need to make, please refer to Development Process.
Here, we'll just take a look at the most important procedures.
1. Face Detection
1.1 Configure the Maven Repository
Go to allprojects > repositories and configure the Maven repository address for the HMS Core SDK.
Code:
allprojects {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
}
Go to buildscript > repositories and configure the Maven repository address for the HMS Core SDK.
Code:
buildscript {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
}
Go to buildscript > dependencies and add AppGallery Connect plug-in configurations.
Code:
dependencies {
    ...
    classpath 'com.huawei.agconnect:agcp:1.3.1.300'
}
1.2 Integrate the SDK
Code:
implementation 'com.huawei.hms:ml-computer-vision-face:2.0.1.300'
1.3 Create a Face Analyzer
Code:
MLFaceAnalyzer analyzer = MLAnalyzerFactory.getInstance().getFaceAnalyzer();
1.4 Create a Processing Class
Code:
public class FaceAnalyzerTransactor implements MLAnalyzer.MLTransactor<MLFace> {
@Override
public void transactResult(MLAnalyzer.Result<MLFace> results) {
SparseArray<MLFace> items = results.getAnalyseList();
// Process detection results as required. Note that only the detection results are processed.
// Other detection-related APIs provided by ML Kit cannot be called.
}
@Override
public void destroy() {
// Callback method used to release resources when the detection ends.
}
}
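ML Kit only reports where the face is; moving the rocket is up to the game layer. Below is a hedged sketch, not the actual Crazy Rockets source, of how transactResult could map the vertical position of the first detected face to the rocket's position. PREVIEW_HEIGHT and rocket.moveToVerticalRatio() are hypothetical names belonging to the game code, and getAllPoints()/MLPosition are assumed SDK accessors.
Code:
@Override
public void transactResult(MLAnalyzer.Result<MLFace> results) {
    SparseArray<MLFace> items = results.getAnalyseList();
    if (items.size() == 0) {
        return; // No face in this frame; leave the rocket where it is.
    }
    MLFace face = items.valueAt(0);
    // Estimate the vertical center of the face by averaging its keypoint Y coordinates.
    List<MLPosition> points = face.getAllPoints();
    float sumY = 0f;
    for (MLPosition point : points) {
        sumY += point.getY();
    }
    float faceCenterY = sumY / points.size();
    // Normalize against the preview height set via applyDisplayDimension(), then move the rocket.
    float ratio = faceCenterY / PREVIEW_HEIGHT;  // PREVIEW_HEIGHT: hypothetical constant (e.g. 1080).
    rocket.moveToVerticalRatio(ratio);           // rocket: hypothetical game object, not part of ML Kit.
}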
1.5 Create a LensEngine to Capture Dynamic Camera Streams, and Pass them to the Analyzer
Code:
LensEngine lensEngine = new LensEngine.Creator(getApplicationContext(), analyzer)
.setLensType(LensEngine.BACK_LENS)
.applyDisplayDimension(1440, 1080)
.applyFps(30.0f)
.enableAutomaticFocus(true)
.create();
1.6 Call the run Method to Start the Camera, and Read Camera Streams for Detection
Code:
// Implement other logic of the SurfaceView control by yourself.
SurfaceView mSurfaceView = findViewById(R.id.surface_view);
try {
lensEngine.run(mSurfaceView.getHolder());
} catch (IOException e) {
// Exception handling logic.
}
1.7 Release Detection Resources
Code:
if (analyzer != null) {
try {
analyzer.stop();
} catch (IOException e) {
// Exception handling.
}
}
if (lensEngine != null) {
lensEngine.release();
}
2. Hand Keypoint Detection
2.1 Configure the Maven Repository
Go to allprojects > repositories and configure the Maven repository address for the HMS Core SDK.
Code:
allprojects {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
}
Go to buildscript > repositories and configure the Maven repository address for the HMS Core SDK.
Code:
buildscript {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
}
Go to buildscript > dependencies and add AppGallery Connect plug-in configurations.
Code:
dependencies {
    ...
    classpath 'com.huawei.agconnect:agcp:1.3.1.300'
}
2.2 Integrate the SDK
Code:
// Import the base SDK.
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint:2.0.4.300'
// Import the hand keypoint detection model package.
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint-model:2.0.4.300'
2.3 Create a Default Hand Keypoint Analyzer
Code:
MLHandKeypointAnalyzer analyzer = MLHandKeypointAnalyzerFactory.getInstance().getHandKeypointAnalyzer();
2.4 Create a Processing Class
Code:
public class HandKeypointTransactor implements MLAnalyzer.MLTransactor<List<MLHandKeypoints>> {
@Override
public void transactResult(MLAnalyzer.Result<List<MLHandKeypoints>> results) {
SparseArray<List<MLHandKeypoints>> analyseList = results.getAnalyseList();
// Process detection results as required. Note that only the detection results are processed.
// Other detection-related APIs provided by ML Kit cannot be called.
}
@Override
public void destroy() {
// Callback method used to release resources when the detection ends.
}
}
2.5 Set the Processing Class
Code:
analyzer.setTransactor(new HandKeypointTransactor());
2.6 Create a LensEngine
Code:
LensEngine lensEngine = new LensEngine.Creator(getApplicationContext(), analyzer)
.setLensType(LensEngine.BACK_LENS)
.applyDisplayDimension(1280, 720)
.applyFps(20.0f)
.enableAutomaticFocus(true)
.create();
2.7 Call the run Method to Start the Camera, and Read Camera Streams for Detection
Code:
// Implement other logic of the SurfaceView control by yourself.
SurfaceView mSurfaceView = findViewById(R.id.surface_view);
try {
lensEngine.run(mSurfaceView.getHolder());
} catch (IOException e) {
// Exception handling logic.
}
2.8 Release Detection Resources
Code:
if (analyzer != null) {
analyzer.stop();
}
if (lensEngine != null) {
lensEngine.release();
}
Learn More
For more information, please visit HUAWEI Developers.
For detailed instructions, please visit Development Guide.
You can join the HMS Core developer discussion on Reddit.
You can download the demo and sample code from GitHub.
To solve integration problems, please go to Stack Overflow.
Overview
"Hey, I just took some pictures of that gorgeous view. Take a look."
"Yes, let me see."
... (a few minutes later)
"Where are they?"
"Wait a minute. There are too many pictures."
Have you experienced this type of frustration before? Finding one specific image in a gallery packed with thousands of images can be a daunting task. Wouldn't it be nice if there were a way to search for images by category, rather than having to browse through your entire album to find what you want?
Our thoughts exactly! That's why we created HUAWEI ML Kit's scene detection service, which empowers your app to build a smart album, bolstered by intelligent image classification, the result of detecting and labeling elements within images. With this service, you'll be able to locate any image in little time, and with zero hassle.
Features
ML Kit's scene detection service is able to classify and annotate images with food, flowers, plants, cats, dogs, kitchens, mountains, and washers, among a multitude of other items, as well as provide for an enhanced user experience based on the detected information.
The service contains the following features:
Multi-scenario detection
Detects 102 scenarios, with more scenarios continually added.
High detection accuracy
Detects a wide range of objects with a high degree of accuracy.
Fast detection response
Responds in milliseconds, and continually optimizes performance.
Simple and efficient integration
Facilitates simple and cost-effective integration, with APIs and SDK packages.
Applicable Scenarios
In addition to creating smart albums and retrieving and classifying images, the scene detection service can also automatically select corresponding filters and camera parameters to help users take better images by detecting where they are located.
Development Practice
1. Preparations
1.1 Configure app information in AppGallery Connect.
Before you start developing an app, configure the app information in AppGallery Connect. For details, please refer to Development Guide.
1.2 Configure the Maven repository address for the HMS Core SDK, and integrate the SDK for the service.
(1) Open the build.gradle file in the root directory of your Android Studio project.
(2) Add the AppGallery Connect plug-in and the Maven repository.
Go to allprojects > repositories and configure the Maven repository address for the HMS Core SDK.
Go to buildscript > repositories and configure the Maven repository address for the HMS Core SDK.
If the agconnect-services.json file has been added to the app, go to buildscript > dependencies and add the AppGallery Connect plug-in configuration.
Code:
buildscript {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
dependencies {
...
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
}
}
allprojects {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
}
2. Code Development
Static Image Detection
2.1 Create a scene detection analyzer instance.
Code:
// Method 1: Use default parameter settings.
MLSceneDetectionAnalyzer analyzer = MLSceneDetectionAnalyzerFactory.getInstance().getSceneDetectionAnalyzer();
// Method 2: Create a scene detection analyzer instance based on the customized configuration.
MLSceneDetectionAnalyzerSetting setting = new MLSceneDetectionAnalyzerSetting.Factory()
// Set confidence for scene detection.
.setConfidence(confidence)
.create();
MLSceneDetectionAnalyzer analyzer = MLSceneDetectionAnalyzerFactory.getInstance().getSceneDetectionAnalyzer(setting);
2.2 Create an MLFrame object by using the android.graphics.Bitmap. JPG, JPEG, PNG, and BMP images are all supported.
Code:
MLFrame frame = new MLFrame.Creator().setBitmap(bitmap).create();
2.3 Implement scene detection.
Code:
// Method 1: Detect in synchronous mode.
SparseArray<MLSceneDetection> results = analyzer.analyseFrame(frame);
// Method 2: Detect in asynchronous mode.
Task<List<MLSceneDetection>> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<List<MLSceneDetection>>() {
public void onSuccess(List<MLSceneDetection> result) {
// Processing logic for scene detection success.
}})
.addOnFailureListener(new OnFailureListener() {
public void onFailure(Exception e) {
// Processing logic for scene detection failure.
if (e instanceof MLException) {
MLException mlException = (MLException)e;
// Obtain the error code. You can process the error code and customize respective messages displayed to users.
int errorCode = mlException.getErrCode();
// Obtain the error information. You can quickly locate the fault based on the error code.
String errorMessage = mlException.getMessage();
} else {
// Other exceptions.
}
}
});
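Each MLSceneDetection result carries the detected scene label and its confidence. As a minimal sketch, assuming the getResult() and getConfidence() accessors of MLSceneDetection in your SDK version, the success callback could simply log them:
Code:
public void onSuccess(List<MLSceneDetection> result) {
    for (MLSceneDetection scene : result) {
        // Scene label (for example "Food" or "Mountain") and how confident the model is about it.
        Log.d("SceneDemo", "scene=" + scene.getResult() + ", confidence=" + scene.getConfidence());
    }
}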
2.4 Stop the analyzer and release the detection resources when the detection ends.
Code:
if (analyzer != null) {
analyzer.stop();
}
Camera Stream Detection
You can process camera streams, convert them into an MLFrame object, and detect scenarios using the static image detection method.
If the synchronous detection API is called, you can also use the LensEngine class built into the SDK to detect scenarios in camera streams. The following is the sample code:
3.1 Create a scene detection analyzer, which can only be created on the device.
Code:
MLSceneDetectionAnalyzer analyzer = MLSceneDetectionAnalyzerFactory.getInstance().getSceneDetectionAnalyzer();
3.2 Create the SceneDetectionAnalyzerTransactor class for processing detection results. This class implements the MLAnalyzer.MLTransactor<T> API and uses the transactResult method in this API to obtain the detection results and implement specific services.
Code:
public class SceneDetectionAnalyzerTransactor implements MLAnalyzer.MLTransactor<MLSceneDetection> {
@Override
public void transactResult(MLAnalyzer.Result<MLSceneDetection> results) {
SparseArray<MLSceneDetection> items = results.getAnalyseList();
// Determine detection result processing as required. Note that only the detection results are processed.
// Other detection-related APIs provided by ML Kit cannot be called.
}
@Override
public void destroy() {
// Callback method used to release resources when the detection ends.
}
}
3.3 Set the detection result processor to bind the analyzer to the result processor.
Code:
analyzer.setTransactor(new SceneDetectionAnalyzerTransactor());
// Create an instance of the LensEngine class provided by the HMS Core ML SDK to capture dynamic camera streams and pass the streams to the analyzer.
Context context = this.getApplicationContext();
LensEngine lensEngine = new LensEngine.Creator(context, this.analyzer)
.setLensType(LensEngine.BACK_LENS)
.applyDisplayDimension(1440, 1080)
.applyFps(30.0f)
.enableAutomaticFocus(true)
.create();
3.4 Call the run method to start the camera and read camera streams for detection.
Code:
// Implement other logic of the SurfaceView control by yourself.
SurfaceView mSurfaceView = findViewById(R.id.surface_view);
try {
lensEngine.run(mSurfaceView.getHolder());
} catch (IOException e) {
// Exception handling logic.
}
3.5 Stop the analyzer and release the detection resources when the detection ends.
Code:
if (analyzer != null) {
analyzer.stop();
}
if (lensEngine != null) {
lensEngine.release();
}
You'll find that the sky, plants, and mountains in all of your images will be identified in an instant. Pretty exciting stuff, wouldn't you say? Feel free to try it out yourself!
GitHub Source Code
Reference
Official website of Huawei Developers
Development Guide
HMS Core official community on Reddit
Demo and sample code
Discussions on Stack Overflow