How ML Kit's Face Detection and Hand Keypoint Detection Capabilities Can Help You Build a Game Like Crazy Rockets

Introduction
There are so many online games these days that are addictive, easy to play, and suitable for a wide age range. I've long dreamed of creating a hit game of my own, but doing so is harder than it seems. While researching online, I was fortunate to come across HUAWEI ML Kit's face detection and hand keypoint detection capabilities, which can make games much more engaging.
Application Scenarios
ML Kit's face detection capability detects up to 855 facial keypoints and returns the coordinates of the face contour, eyebrows, eyes, nose, mouth, and ears, as well as the facial pose angles. By integrating this capability, you can easily create a beauty app or let users add fun special effects to their facial images.
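To give a sense of what this data looks like in code, here is a minimal sketch of reading the contour points from a detected MLFace (obtained in the detection steps described below). The getFaceShapeList(), getPoints(), getX(), and getY() accessors are assumptions based on the face SDK and should be verified against the current API reference.
Code:
// Hedged sketch: iterate the contour keypoints of a detected face.
// The accessor names are assumptions; check the ML Kit face API reference.
void readFaceContours(MLFace face) {
    for (MLFaceShape shape : face.getFaceShapeList()) {
        for (MLPosition point : shape.getPoints()) {
            float x = point.getX(); // X coordinate of the keypoint.
            float y = point.getY(); // Y coordinate of the keypoint.
            // Use the coordinates, for example to drive a game object or draw an overlay.
        }
    }
}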
The hand keypoint detection capability can be applied in a wide range of scenarios. For example, after integrating it, a short video app can offer diverse special effects for users to apply to their videos, providing new sources of fun and whimsy.
Crazy Rockets is a game that integrates both capabilities. It provides two playing modes, allowing players to control rockets through hand or face movements; in both modes, the rocket is steered by tracking the detected motion. Let's take a look at what the game effects look like in practice.
Pretty exhilarating, wouldn't you say? Now, I'll show you how to create a game like Crazy Rockets using ML Kit's face detection and hand keypoint detection capabilities.
Development Practice
Preparations
To find detailed information about the preparations you need to make, please refer to Development Process.
Here, we'll just take a look at the most important procedures.
1. Face Detection
1.1 Configure the Maven Repository
Go to allprojects > repositories and configure the Maven repository address for the HMS Core SDK.
Code:
allprojects {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
}
Go to buildscript > repositories and configure the Maven repository address for the HMS Core SDK.
Code:
buildscript {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
}
Go to buildscript > dependencies and add AppGallery Connect plug-in configurations.
Code:
buildscript {
    dependencies {
        ...
        classpath 'com.huawei.agconnect:agcp:1.3.1.300'
    }
}
1.2 Integrate the SDK
Code:
implementation 'com.huawei.hms:ml-computer-vision-face:2.0.1.300'
1.3 Create a Face Analyzer
Code:
MLFaceAnalyzer analyzer = MLAnalyzerFactory.getInstance().getFaceAnalyzer();
1.4 Create a Processing Class
Code:
public class FaceAnalyzerTransactor implements MLAnalyzer.MLTransactor<MLFace> {
@Override
public void transactResult(MLAnalyzer.Result<MLFace> results) {
SparseArray<MLFace> items = results.getAnalyseList();
// Process detection results as required. Note that only the detection results are processed.
// Other detection-related APIs provided by ML Kit cannot be called.
}
@Override
public void destroy() {
// Callback method used to release resources when the detection ends.
}
}
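Note that, as in step 2.5 of the hand keypoint section below, the processing class must be bound to the analyzer before the LensEngine is created. The article does not show this line here, so the call below simply mirrors the pattern used elsewhere in this post.
Code:
analyzer.setTransactor(new FaceAnalyzerTransactor());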
1.5 Create a LensEngine to Capture Dynamic Camera Streams, and Pass them to the Analyzer
Code:
LensEngine lensEngine = new LensEngine.Creator(getApplicationContext(), analyzer)
.setLensType(LensEngine.BACK_LENS)
.applyDisplayDimension(1440, 1080)
.applyFps(30.0f)
.enableAutomaticFocus(true)
.create();
1.6 Call the run Method to Start the Camera, and Read Camera Streams for Detection
Code:
// Implement other logic of the SurfaceView control by yourself.
SurfaceView mSurfaceView = findViewById(R.id.surface_view);
try {
lensEngine.run(mSurfaceView.getHolder());
} catch (IOException e) {
// Exception handling logic.
}
1.7 Release Detection Resources
Code:
if (analyzer != null) {
try {
analyzer.stop();
} catch (IOException e) {
// Exception handling.
}
}
if (lensEngine != null) {
lensEngine.release();
}
2. Hand Keypoint Detection
2.1 Configure the Maven Repository
Go to allprojects > repositories and configure the Maven repository address for the HMS Core SDK.
Code:
allprojects {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
}
Go to buildscript > repositories and configure the Maven repository address for the HMS Core SDK.
Code:
buildscript {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
}
Go to buildscript > dependencies and add AppGallery Connect plug-in configurations.
Code:
buildscript {
    dependencies {
        ...
        classpath 'com.huawei.agconnect:agcp:1.3.1.300'
    }
}
2.2 Integrate the SDK
Code:
// Import the base SDK.
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint:2.0.4.300'
// Import the hand keypoint detection model package.
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint-model:2.0.4.300'
2.3 Create a Default Hand Keypoint Analyzer
Code:
MLHandKeypointAnalyzer analyzer = MLHandKeypointAnalyzerFactory.getInstance().getHandKeypointAnalyzer();
2.4 Create a Processing Class
Code:
public class HandKeypointTransactor implements MLAnalyzer.MLTransactor<List<MLHandKeypoints>> {
@Override
public void transactResult(MLAnalyzer.Result<List<MLHandKeypoints>> results) {
SparseArray<List<MLHandKeypoints>> analyseList = results.getAnalyseList();
// Process detection results as required. Note that only the detection results are processed.
// Other detection-related APIs provided by ML Kit cannot be called.
}
@Override
public void destroy() {
// Callback method used to release resources when the detection ends.
}
}
2.5 Set the Processing Class
Code:
analyzer.setTransactor(new HandKeypointTransactor());
2.6 Create a LensEngine
Code:
LensEngine lensEngine = new LensEngine.Creator(getApplicationContext(), analyzer)
.setLensType(LensEngine.BACK_LENS)
.applyDisplayDimension(1280, 720)
.applyFps(20.0f)
.enableAutomaticFocus(true)
.create();
2.7 Call the run Method to Start the Camera, and Read Camera Streams for Detection
Code:
// Implement other logic of the SurfaceView control by yourself.
SurfaceView mSurfaceView = findViewById(R.id.surface_view);
try {
lensEngine.run(mSurfaceView.getHolder());
} catch (IOException e) {
// Exception handling logic.
}
2.8 Release Detection Resources
Code:
if (analyzer != null) {
analyzer.stop();
}
if (lensEngine != null) {
lensEngine.release();
}
Learn More
For more information, please visit HUAWEI Developers.
For detailed instructions, please visit Development Guide.
You can join the HMS Core developer discussion on Reddit.
You can download the demo and sample code from GitHub.
To solve integration problems, please go to Stack Overflow.

Related

How to Integrate HUAWEI ML Kit's Hand Keypoint Detection Capability

Introduction
In the previous post, we looked at how to use HUAWEI ML Kit's skeleton detection capability to detect points such as the head, neck, shoulders, knees and ankles. But as well as skeleton detection, ML Kit also provides a hand keypoint detection capability, which can locate 21 hand keypoints, such as fingertips, joints, and wrists.
Application Scenarios
Hand keypoint detection is useful in a huge range of situations. For example, short video apps can generate some cute and funny special effects based on hand keypoints, to add more fun to short videos.
Or, if smart home devices are integrated with hand keypoint detection, users could control them remotely with customized gestures, doing things like activating a robot vacuum cleaner while they're out.
Hand Keypoint Detection Development
Now, we’re going to see how to quickly integrate ML Kit's hand keypoint detection feature. Let’s take video stream detection as an example.
1. Preparations
You can find detailed information about the preparations you need to make on the Development Process page on HUAWEI Developers.
Here, we'll just look at the most important procedures.
1.1 Configure the Maven Repository Address in the Project-Level build.gradle File
Code:
buildscript {
repositories {
...
maven {url 'https://developer.huawei.com/repo/'}
}
}
dependencies {
...
classpath 'com.huawei.agconnect:agcp:1.3.1.300'
}
allprojects {
repositories {
...
maven {url 'https://developer.huawei.com/repo/'}
}
}
1.2 Add SDK Dependencies to the App-Level build.gradle File
Code:
dependencies{
// Import the base SDK.
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint:2.0.2.300'
// Import the hand keypoint detection model package.
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint-model:2.0.2.300'
}
1.3 Add Configurations to the File Header
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
1.4 Add these Statements to the AndroidManifest.xml File so the Machine Learning Model can Automatically Update
Code:
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value= "handkeypoint"/>
1.5 Apply for Camera Permission and Local File Reading Permission
Code:
<!--Camera permission-->
<uses-permission android:name="android.permission.CAMERA" />
<!--Read permission-->
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
2. Code Development
2.1 Create a Hand Keypoint Analyzer
Code:
MLHandKeypointAnalyzerSetting setting = new MLHandKeypointAnalyzerSetting.Factory()
        // MLHandKeypointAnalyzerSetting.TYPE_ALL indicates that all results are returned.
        // MLHandKeypointAnalyzerSetting.TYPE_KEYPOINT_ONLY indicates that only hand keypoint information is returned.
        // MLHandKeypointAnalyzerSetting.TYPE_RECT_ONLY indicates that only palm information is returned.
        .setSceneType(MLHandKeypointAnalyzerSetting.TYPE_ALL)
        // Set the maximum number of hand regions that can be detected in an image. A maximum of 10 hand regions can be detected by default.
        .setMaxHandResults(1)
        .create();
MLHandKeypointAnalyzer analyzer = MLHandKeypointAnalyzerFactory.getInstance().getHandKeypointAnalyzer(setting);
2.2 Create the HandKeypointTransactor Class for Processing Detection Results
This class implements the MLAnalyzer.MLTransactor<T> API and uses the transactResult method to obtain the detection results and implement specific services. In addition to coordinate information for each hand keypoint, the detection results include a confidence value for the palm and for each keypoint. Incorrectly detected palms and keypoints can be filtered out based on these confidence values; you can set a threshold according to your misrecognition tolerance (see the filtering sketch after the code below).
Code:
public class HandKeypointTransactor implements MLAnalyzer.MLTransactor<List<MLHandKeypoints>> {
@Override
public void transactResult(MLAnalyzer.Result<List<MLHandKeypoints>> results) {
SparseArray<List<MLHandKeypoints>> analyseList = results.getAnalyseList();
// Determine detection result processing as required. Note that only the detection results are processed.
// Other detection-related APIs provided by ML Kit cannot be called.
}
@Override
public void destroy() {
// Callback method used to release resources when the detection ends.
}
}
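As mentioned above, misdetected palms and keypoints can be filtered out by their confidence values. The sketch below shows what such filtering inside transactResult could look like; the getHandKeypoints() and getScore() accessors are assumptions to verify against the SDK reference, and the 0.5f threshold is purely illustrative.
Code:
// Hedged sketch: keep only hand keypoints whose confidence exceeds a chosen threshold.
float minScore = 0.5f; // Illustrative value; tune it to your misrecognition tolerance.
for (int i = 0; i < analyseList.size(); i++) {
    for (MLHandKeypoints hand : analyseList.valueAt(i)) {
        for (MLHandKeypoint point : hand.getHandKeypoints()) {
            if (point.getScore() >= minScore) {
                // Treat this keypoint as reliable and use its coordinates.
            }
        }
    }
}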
2.3 Set the Detection Result Processor to Bind the Analyzer to the Result Processor
Code:
analyzer.setTransactor(new HandKeypointTransactor());
2.4 Create an Instance of the LensEngine Class
The LensEngine class is provided by the HMS Core ML SDK to capture dynamic camera streams and pass these streams to the analyzer. The camera display size should be set to a value between 320 x 320 px and 1920 x 1920 px.
Code:
LensEngine lensEngine = new LensEngine.Creator(getApplicationContext(), analyzer)
        .setLensType(LensEngine.BACK_LENS)
        .applyDisplayDimension(1280, 720)
        .applyFps(20.0f)
        .enableAutomaticFocus(true)
        .create();
2.5 Call the run Method to Start the Camera and Read Camera Streams for Detection
Code:
// Implement other logic of the SurfaceView control by yourself.
SurfaceView mSurfaceView = findViewById(R.id.surface_view);
try {
lensEngine.run(mSurfaceView.getHolder());
} catch (IOException e) {
// Exception handling logic.
}
2.6 Stop the Analyzer to Release Detection Resources Once the Detection is Complete
Code:
if (analyzer != null) {
analyzer.stop();
}
if (lensEngine != null) {
lensEngine.release();
}
Demo Effect
And that's it! We can now see hand keypoints appear when making different gestures. Remember that you can expand this capability if you need to.

Implement Eye-Enlarging and Face-Shaping Functions with ML Kit's Detection Capability

Introduction
Sometimes, we can't help taking photos to capture life's unforgettable moments. But most of us are not professional photographers or models, so our photographs can end up falling short of our expectations. So, how can we produce more impressive snaps? If you have an image processing app on your phone, it can automatically detect faces in a photo, and you can then adjust the image until you're happy with it. After looking around online, I found HUAWEI ML Kit's face detection capability. By integrating this capability, you can add beautification functions to your apps. Have a try!
Application Scenarios
ML Kit's face detection capability detects up to 855 facial keypoints and returns the coordinates for the face's contour, eyebrows, eyes, nose, mouth, and ears, as well as the angle of the face. Once you've integrated this capability, you can quickly create beauty apps and enable users to add fun facial effects and features to their images.
Face detection also detects whether the subject's eyes are open, whether they're wearing glasses or a hat, whether they have a beard, and even their gender and age. This is useful if you want to add a parental control function to your app which prevents children from getting too close to their phone, or staring at the screen for too long.
In addition, face detection can detect up to seven facial expressions, including smiling, neutral, angry, disgusted, frightened, sad, and surprised faces. This is great if you want to create apps such as smile-cameras.
You can integrate any of these capabilities as needed. At the same time, face detection supports image and video stream detection, cross-frame face tracking, and multi-face detection. It really is powerful! Now, let's see how to integrate this capability.
Face Detection Development
1. Preparations
You can find detailed information about the preparations you need to make on the Development Process page on HUAWEI Developers. Here, we'll just look at the most important procedures.
1.1 Configure the Maven Repository Address in the Project-Level build.gradle File
Code:
buildscript {
repositories {
...
maven {url 'https://developer.huawei.com/repo/'}
}
}
dependencies {
...
classpath 'com.huawei.agconnect:agcp:1.3.1.300'
}
allprojects {
repositories {
...
maven {url 'https://developer.huawei.com/repo/'}
}
}
1.2 Add Configurations to the File Header
After integrating the SDK, add the following configuration to the file header:
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
1.3 Configure SDK Dependencies in the App-Level build.gradle File
Code:
dependencies{
// Import the base SDK.
implementation 'com.huawei.hms:ml-computer-vision-face:2.0.1.300'
// Import the contour and keypoint detection model package.
implementation 'com.huawei.hms:ml-computer-vision-face-shape-point-model:2.0.1.300'
// Import the facial expression detection model package.
implementation 'com.huawei.hms:ml-computer-vision-face-emotion-model:2.0.1.300'
// Import the facial feature detection model package.
implementation 'com.huawei.hms:ml-computer-vision-face-feature-model:2.0.1.300'
}
1.4 Add these Statements to the AndroidManifest.xml File so the Machine Learning Model can Update Automatically
Code:
<manifest
...
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value= "face"/>
...
</manifest>
1.5 Apply for Camera Permission
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" />
2. Code Development
2.1 Create a Face Analyzer by Using the Default Parameter Configurations
Code:
analyzer = MLAnalyzerFactory.getInstance().getFaceAnalyzer();
2.2 Create an MLFrame Object by Using the android.graphics.Bitmap for the Analyzer to Detect Images
Code:
MLFrame frame = MLFrame.fromBitmap(bitmap);
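The bitmap passed to MLFrame.fromBitmap is assumed to be the photo being edited. A minimal sketch of producing one is shown below; the resource ID is purely illustrative, and in practice the image would typically come from the system picker or MediaStore.
Code:
// Hedged sketch: decode the photo to be processed into a Bitmap first.
// R.drawable.sample_face is an illustrative resource; a file path or content URI works equally well.
Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.sample_face);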
2.3 Call the asyncAnalyseFrame Method to Perform Face Detection
Code:
Task<List<MLFace>> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<List<MLFace>>() {
@Override
public void onSuccess(List<MLFace> faces) {
// Detection success. Obtain the face keypoints.
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
// Detection failure.
}
});
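Inside onSuccess, you can then read whatever information your app needs from each MLFace. The sketch below checks the smiling probability; the getEmotions() and getSmilingProbability() accessors are assumptions based on the facial expression model package imported in step 1.3, so verify them against the current SDK reference.
Code:
// Hedged sketch: inspect the expression of each detected face inside onSuccess().
for (MLFace face : faces) {
    MLFaceEmotion emotion = face.getEmotions(); // Assumed accessor; verify in the SDK docs.
    if (emotion != null && emotion.getSmilingProbability() > 0.8f) {
        // The subject is probably smiling, e.g. trigger a smile-camera shutter.
    }
}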
2.4 Use the Progress Bar to Process the Face in the Image
Call the magnifyEye and smallFaceMesh methods to implement the eye-enlarging algorithm and face-shaping algorithm.
Code:
private SeekBar.OnSeekBarChangeListener onSeekBarChangeListener = new SeekBar.OnSeekBarChangeListener() {
    @Override
    public void onProgressChanged(SeekBar seekBar, int progress, boolean fromUser) {
        switch (seekBar.getId()) {
            case R.id.seekbareye:
                // When the eye-enlarging progress bar changes, call your magnifyEye method here.
                break;
            case R.id.seekbarface:
                // When the face-shaping progress bar changes, call your smallFaceMesh method here.
                break;
        }
    }
    @Override public void onStartTrackingTouch(SeekBar seekBar) {}
    @Override public void onStopTrackingTouch(SeekBar seekBar) {}
};
2.5 Release the Analyzer After the Detection is Complete
Code:
try {
if (analyzer != null) {
analyzer.stop();
}
} catch (IOException e) {
Log.e(TAG, "e=" + e.getMessage());
}
Demo
Now, let's see what it can do. Pretty cool, right?

Easily Capture Body Motion with HUAWEI ML Kit’s Skeleton Detection

Are you the kind of person who tends to go stiff and awkward when there’s a camera on you, and ends up looking unnatural in photos? If so, posture snapshots can help. All you need to do is select a posture template, and the camera will automatically take snapshots when it detects your body is in that position. This means photographs are only taken when you’re at your most natural. In this post, I'm going to show you how to integrate HUAWEI ML Kit's skeleton detection function into your apps. This function locates 14 skeleton points, and easily captures images of specific postures.
Skeleton Detection Function Development
1. Preparations
Before you get started, you need to make the necessary preparations. Also, ensure that the Maven repository address for the HMS Core SDK has been configured in your project, and the skeleton detection SDK has been integrated.
1.1 Configure the Maven Repository Address in the Project-Level build.gradle File
Code:
buildscript {
repositories {
google()
jcenter()
maven { url 'https://developer.huawei.com/repo/' }
}
dependencies {
classpath "com.android.tools.build:gradle:3.3.2"
}
}
Code:
allprojects {
repositories {
google()
jcenter()
maven { url 'https://developer.huawei.com/repo/' }
}
}
1.2 Add SDK Dependencies to the App-Level build.gradle File
Code:
dependencies {
implementation 'com.huawei.hms:ml-computer-vision-skeleton-model:2.0.1.300'
implementation 'com.huawei.hms:ml-computer-vision-skeleton:2.0.1.300'
implementation 'com.huawei.hms:ml-computer-vision-base:2.0.1.300'
}
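The other guides in this post also add a DEPENDENCY meta-data entry to AndroidManifest.xml so that the model can be downloaded and updated automatically. A sketch of the equivalent entry for skeleton detection is shown below; the value string "skeleton" is an assumption that should be confirmed in the official development guide.
Code:
<!-- Hedged sketch: the "skeleton" value is an assumption; confirm it in the official guide. -->
<meta-data
    android:name="com.huawei.hms.ml.DEPENDENCY"
    android:value="skeleton" />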
2 Code Development
2.1 Static Image Detection
2.1.1 Create a Skeleton Analyzer
Code:
MLSkeletonAnalyzer analyzer = MLSkeletonAnalyzerFactory.getInstance().getSkeletonAnalyzer();
2.1.2 Create an MLFrame Using a Bitmap
The image resolution should be not less than 320 x 320 pixels and not greater than 1920 x 1920 pixels.
Code:
// Create an MLFrame using the bitmap.
MLFrame frame = MLFrame.fromBitmap(bitmap);
2.1.3 Call the asyncAnalyseFrame Method to Perform Skeleton Detection
Code:
Task<List<MLSkeleton>> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<List<MLSkeleton>>() {
public void onSuccess(List<MLSkeleton> skeletons) {
// Process the detection result.
}
}).addOnFailureListener(new OnFailureListener() {
public void onFailure(Exception e) {
// Detection failure.
}
});
2.1.4 Stop the Analyzer and Release Resources When the Detection Ends
Code:
try {
if (analyzer != null) {
analyzer.stop();
}
} catch (IOException e) {
// Exception handling.
}
2.2 Dynamic Video Detection
2.2.1 Create a Skeleton Analyzer
Code:
MLSkeletonAnalyzer analyzer = MLSkeletonAnalyzerFactory.getInstance().getSkeletonAnalyzer();
2.2.2 Create the SkeletonAnalyzerTransactor Class to Process the Detection Result
This class implements the MLAnalyzer.MLTransactor<T> API. You can use the transactResult method to obtain the detection result and implement specific services.
Code:
public class SkeletonAnalyzerTransactor implements MLAnalyzer.MLTransactor<MLSkeleton> {
@Override
public void transactResult(MLAnalyzer.Result<MLSkeleton> results) {
SparseArray<MLSkeleton> items = results.getAnalyseList();
// You can process the detection result as required. For example, calculate the similarity in this method to perform an operation, such as taking a photo when a specific posture has been detected.
// Only the detection result is processed. Other detection APIs provided by ML Kit cannot be called.
// Convert the result encapsulated using SparseArray to an ArrayList for similarity calculation.
List<MLSkeleton> resultsList = new ArrayList<>();
for (int i = 0; i < items.size(); i++) {
resultsList.add(items.valueAt(i));
}
// Calculate the similarity between the detection result and template.
// templateList is a list of skeleton templates. Templates can be generated by detecting static images. The skeleton detection service supports single-person and multi-person template matching.
float result = analyzer.caluteSimilarity(resultsList, templateList);
}
@Override
public void destroy() {
// Callback method used to release resources when the detection ends.
}
}
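The templateList used above is a list of skeleton templates. Since templates can be generated by detecting static images (see section 2.1), a minimal sketch of building such a list is shown below; the templateList field and the buildTemplate method are illustrative names, not part of the SDK.
Code:
// Hedged sketch: build the template list once by running static-image detection
// (section 2.1) on a posture template photo. templateList and buildTemplate are illustrative names.
private List<MLSkeleton> templateList = new ArrayList<>();

private void buildTemplate(Bitmap templateBitmap) {
    MLFrame templateFrame = MLFrame.fromBitmap(templateBitmap);
    analyzer.asyncAnalyseFrame(templateFrame)
            .addOnSuccessListener(new OnSuccessListener<List<MLSkeleton>>() {
                @Override
                public void onSuccess(List<MLSkeleton> skeletons) {
                    // Store the detected skeletons as the matching template.
                    templateList = skeletons;
                }
            });
}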
2.2.3 Set the Detection Result Processor to Bind the Analyzer
Code:
analyzer.setTransactor(new SkeletonAnalyzerTransactor());
2.2.4 Create the LensEngine Class
This class is provided by the HMS Core ML SDK. It captures dynamic video streams from the camera, and sends them to the analyzer. The camera display size should be not less than 320 x 320 pixels and not greater than 1920 x 1920 pixels.
Code:
// Create LensEngine.
LensEngine lensEngine = new LensEngine.Creator(getApplicationContext(), analyzer)
.setLensType(LensEngine.BACK_LENS)
.applyDisplayDimension(1280, 720)
.applyFps(20.0f)
.enableAutomaticFocus(true)
.create();
2.2.5 Open the Camera
You can obtain and detect the video streams, then stop the analyzer and release resources when the detection ends.
Code:
// Implement other logic of the SurfaceView control by yourself.
SurfaceView mSurfaceView = findViewById(R.id.surface_view);
try {
    lensEngine.run(mSurfaceView.getHolder());
} catch (IOException e) {
    // Exception handling logic.
}
// Stop the analyzer and release resources when the detection ends.
if (analyzer != null) {
    try {
        analyzer.stop();
    } catch (IOException e) {
        // Exception handling.
    }
}
if (lensEngine != null) {
    lensEngine.release();
}
Let's take a look at the dynamic video.
You can do all sorts of things with HUAWEI ML Kit's skeleton detection function:
Create virtual images to simulate live action in motion sensing games.
Provide posture guidance to enhance workouts and rehabilitation training.
Detect unusual behavior in video surveillance footage.

Create a New Paradigm for Photo Gallery Management with ML Kit

Overview
"Hey, I just took some pictures of that gorgeous view. Take a look."
"Yes, let me see."
... (a few minutes later)
"Where are they?"
"Wait a minute. There are too many pictures."
Have you experienced this type of frustration before? Finding one specific image in a gallery packed with thousands of images can be a daunting task. Wouldn't it be nice if there were a way to search for images by category, rather than having to browse through your entire album to find what you want?
Our thoughts exactly! That's why we created HUAWEI ML Kit's scene detection service, which empowers your app to build a smart album, bolstered by intelligent image classification, the result of detecting and labeling elements within images. With this service, you'll be able to locate any image in little time, and with zero hassle.
Features
ML Kit's scene detection service is able to classify and annotate images with food, flowers, plants, cats, dogs, kitchens, mountains, and washers, among a multitude of other items, as well as provide an enhanced user experience based on the detected information.
The service contains the following features:
Multi-scenario detection
Detects 102 scenarios, with more scenarios continually added.
High detection accuracy
Detects a wide range of objects with a high degree of accuracy.
Fast detection response
Responds in milliseconds, and continually optimizes performance.
Simple and efficient integration
Facilitates simple and cost-effective integration, with APIs and SDK packages.
Applicable Scenarios
In addition to creating smart albums and retrieving and classifying images, the scene detection service can automatically select corresponding filters and camera parameters based on the detected scene, helping users take better images.
Development Practice
1. Preparations
1.1 Configure app information in AppGallery Connect.
Before you start developing an app, configure the app information in AppGallery Connect. For details, please refer to Development Guide.
1.2 Configure the Maven repository address for the HMS Core SDK, and integrate the SDK for the service.
(1) Open the build.gradle file in the root directory of your Android Studio project.
(2) Add the AppGallery Connect plug-in and the Maven repository.
Go to allprojects > repositories and configure the Maven repository address for the HMS Core SDK.
Go to buildscript > repositories and configure the Maven repository address for the HMS Core SDK.
If the agconnect-services.json file has been added to the app, go to buildscript > dependencies and add the AppGallery Connect plug-in configuration.
Code:
buildscript {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
dependencies {
...
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
}
}
allprojects {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
}
2. Code Development
Static Image Detection
2.1 Create a scene detection analyzer instance.
Code:
// Method 1: Use default parameter settings.
MLSceneDetectionAnalyzer analyzer = MLSceneDetectionAnalyzerFactory.getInstance().getSceneDetectionAnalyzer();
// Method 2: Create a scene detection analyzer instance based on the customized configuration.
MLSceneDetectionAnalyzerSetting setting = new MLSceneDetectionAnalyzerSetting.Factory()
// Set confidence for scene detection.
.setConfidence(confidence)
.create();
MLSceneDetectionAnalyzer analyzer = MLSceneDetectionAnalyzerFactory.getInstance().getSceneDetectionAnalyzer(setting);
2.2 Create an MLFrame object by using the android.graphics.Bitmap. JPG, JPEG, PNG, and BMP images are all supported.
Code:
MLFrame frame = new MLFrame.Creator().setBitmap(bitmap).create();
2.3 Implement scene detection.
Code:
// Method 1: Detect in synchronous mode.
SparseArray<MLSceneDetection> results = analyzer.analyseFrame(frame);
// Method 2: Detect in asynchronous mode.
Task<List<MLSceneDetection>> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<List<MLSceneDetection>>() {
public void onSuccess(List<MLSceneDetection> result) {
// Processing logic for scene detection success.
}})
.addOnFailureListener(new OnFailureListener() {
public void onFailure(Exception e) {
// Processing logic for scene detection failure.
if (e instanceof MLException) {
MLException mlException = (MLException)e;
// Obtain the error code. You can process the error code and customize respective messages displayed to users.
int errorCode = mlException.getErrCode();
// Obtain the error information. You can quickly locate the fault based on the error code.
String errorMessage = mlException.getMessage();
} else {
// Other exceptions.
}
}
});
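Inside the success callback (or from the SparseArray returned by the synchronous call), each MLSceneDetection entry carries the detected scene and its confidence. A minimal sketch of reading them is shown below; the getResult() and getConfidence() accessors are assumptions to verify against the SDK reference.
Code:
// Hedged sketch: log each detected scene and its confidence inside onSuccess().
for (MLSceneDetection scene : result) {
    // getResult() is assumed to return the scene label, getConfidence() its score.
    Log.d("SceneDetection", "scene=" + scene.getResult() + ", confidence=" + scene.getConfidence());
}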
2.4 Stop the analyzer and release the detection resources when the detection ends.
Code:
if (analyzer != null) {
analyzer.stop();
}
Camera Stream Detection
You can process camera streams, convert them into an MLFrame object, and detect scenarios using the static image detection method.
If the synchronous detection API is called, you can also use the LensEngine class built into the SDK to detect scenarios in camera streams. The following is the sample code:
3.1 Create a scene detection analyzer, which can only be created on the device.
Code:
MLSceneDetectionAnalyzer analyzer = MLSceneDetectionAnalyzerFactory.getInstance().getSceneDetectionAnalyzer();
3.2 Create the SceneDetectionAnalyzerTransactor class for processing detection results. This class implements the MLAnalyzer.MLTransactor<T> API and uses the transactResult method in this API to obtain the detection results and implement specific services.
Code:
public class SceneDetectionAnalyzerTransactor implements MLAnalyzer.MLTransactor<MLSceneDetection> {
@Override
public void transactResult(MLAnalyzer.Result<MLSceneDetection> results) {
SparseArray<MLSceneDetection> items = results.getAnalyseList();
// Determine detection result processing as required. Note that only the detection results are processed.
// Other detection-related APIs provided by ML Kit cannot be called.
}
@Override
public void destroy() {
// Callback method used to release resources when the detection ends.
}
}
3.3 Set the detection result processor to bind the analyzer to the result processor.
Code:
analyzer.setTransactor(new SceneDetectionAnalyzerTransactor());
// Create an instance of the LensEngine class provided by the HMS Core ML SDK to capture dynamic camera streams and pass the streams to the analyzer.
Context context = this.getApplicationContext();
LensEngine lensEngine = new LensEngine.Creator(context, this.analyzer)
.setLensType(LensEngine.BACK_LENS)
.applyDisplayDimension(1440, 1080)
.applyFps(30.0f)
.enableAutomaticFocus(true)
.create();
3.4 Call the run method to start the camera and read camera streams for detection.
Code:
// Implement other logic of the SurfaceView control by yourself.
SurfaceView mSurfaceView = findViewById(R.id.surface_view);
try {
lensEngine.run(mSurfaceView.getHolder());
} catch (IOException e) {
// Exception handling logic.
}
3.5 Stop the analyzer and release the detection resources when the detection ends.
Code:
if (analyzer != null) {
analyzer.stop();
}
if (lensEngine != null) {
lensEngine.release();
}
You'll find that the sky, plants, and mountains in all of your images will be identified in an instant. Pretty exciting stuff, wouldn't you say? Feel free to try it out yourself!
GitHub Source Code
Reference
Official website of Huawei Developers
Development Guide
HMS Core official community on Reddit
Demo and sample code
Discussions on Stack Overflow

How a Programmer Developed a Live-Streaming App with Gesture-Controlled Virtual Backgrounds

What's it like to date a programmer?
John is a Huawei programmer. His girlfriend Jenny, a teacher, has an interesting answer to that question: "Thanks to my programmer boyfriend, my course ranked among the most popular online courses at my school".
Let's go over how this came to be. Due to COVID-19, the school where Jenny taught went entirely online. Jenny, who was new to live streaming, wanted her students to experience the full immersion of traveling to Tokyo, New York, Paris, the Forbidden City, Catherine Palace, and the Louvre Museum, so that they could absorb all of the relevant geographic and historical knowledge related to those places. But how to do so?
Jenny was stuck on this issue, but John quickly came to her rescue.
After analyzing her requirements in detail, John developed a tailored online course app that brings its users an uncannily immersive experience. It enables users to change the background while live streaming. The video imagery within the app looks true-to-life, as each pixel is labeled, and the entire body image — down to a single strand of hair — is completely cut out.
Actual Effects
How to Implement
Changing live-streaming backgrounds by gesture can be realized by using image segmentation and hand gesture recognition in HUAWEI ML Kit.
The image segmentation service segments specific elements from static images or dynamic video streams, with 11 types of image elements supported: human bodies, sky scenes, plants, foods, cats and dogs, flowers, water, sand, buildings, mountains, and others.
The hand gesture recognition service offers two capabilities: hand keypoint detection and hand gesture recognition. Hand keypoint detection is capable of detecting 21 hand keypoints (including fingertips, knuckles, and wrists) and returning positions of the keypoints. The hand gesture recognition capability detects and returns the positions of all rectangular areas of the hand from images and videos, as well as the type and confidence of a gesture. This capability can recognize 14 different gestures, including the thumbs-up/down, OK sign, fist, finger heart, and number gestures from 1 to 9. Both capabilities support detection from static images and real-time video streams.
Development Process
1. Add the AppGallery Connect plugin and the Maven repository.
Code:
buildscript {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
dependencies {
...
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
}
}
allprojects {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
}
2. Integrate required services in the full SDK mode.
Code:
dependencies{
// Import the basic SDK of image segmentation.
implementation 'com.huawei.hms:ml-computer-vision-segmentation:2.0.4.300'
// Import the multiclass segmentation model package.
implementation 'com.huawei.hms:ml-computer-vision-image-segmentation-multiclass-model:2.0.4.300'
// Import the human body segmentation model package.
implementation 'com.huawei.hms:ml-computer-vision-image-segmentation-body-model:2.0.4.300'
// Import the basic SDK of hand gesture recognition.
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint:2.0.4.300'
// Import the model package of hand keypoint detection.
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint-model:2.0.4.300'
}
3. Add configurations in the file header.
Add apply plugin: 'com.huawei.agconnect' after apply plugin: 'com.android.application'.
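For reference, the file header then looks like this (the same configuration shown in the earlier articles):
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'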
4. Automatically update the machine learning model.
Add the following statements to the AndroidManifest.xml file:
Code:
<manifest
...
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value="imgseg,handkeypoint" />
...
</manifest>
5. Create an image segmentation analyzer.
Code:
MLImageSegmentationAnalyzer imageSegmentationAnalyzer = MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer();// Image segmentation analyzer.
MLHandKeypointAnalyzer handKeypointAnalyzer = MLHandKeypointAnalyzerFactory.getInstance().getHandKeypointAnalyzer();// Hand gesture recognition analyzer.
MLCompositeAnalyzer analyzer = new MLCompositeAnalyzer.Creator()
.add(imageSegmentationAnalyzer)
.add(handKeypointAnalyzer)
.create();
6. Create a class for processing the recognition result.
Code:
public class ImageSegmentAnalyzerTransactor implements MLAnalyzer.MLTransactor<MLImageSegmentation> {
@Override
public void transactResult(MLAnalyzer.Result<MLImageSegmentation> results) {
SparseArray<MLImageSegmentation> items = results.getAnalyseList();
// Process the recognition result as required. Note that only the detection results are processed.
// Other detection-related APIs provided by ML Kit cannot be called.
}
@Override
public void destroy() {
// Callback method used to release resources when the detection ends.
}
}
public class HandKeypointTransactor implements MLAnalyzer.MLTransactor<List<MLHandKeypoints>> {
@Override
public void transactResult(MLAnalyzer.Result<List<MLHandKeypoints>> results) {
SparseArray<List<MLHandKeypoints>> analyseList = results.getAnalyseList();
// Process the recognition result as required. Note that only the detection results are processed.
// Other detection-related APIs provided by ML Kit cannot be called.
}
@Override
public void destroy() {
// Callback method used to release resources when the detection ends.
}
}
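To actually swap the background, the segmentation result received in ImageSegmentAnalyzerTransactor is typically composited over the chosen background image. A hedged sketch of what this could look like inside transactResult is shown below; getForeground() is an assumed accessor of MLImageSegmentation (verify it against the current SDK), and drawSceneBackground() and showFrame() are hypothetical app-side helpers.
Code:
// Hedged sketch: compose the segmented person over a virtual background inside transactResult.
// getForeground() is an assumed accessor; drawSceneBackground() and showFrame() are hypothetical helpers.
for (int i = 0; i < items.size(); i++) {
    MLImageSegmentation segmentation = items.valueAt(i);
    Bitmap person = segmentation.getForeground();  // Person cut out of the camera frame.
    Bitmap composed = drawSceneBackground(person); // Draw the person over the selected scene image.
    showFrame(composed);                           // Render the composed frame to the preview/stream.
}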
7. Set the detection result processor to bind the analyzer to the result processor.
Code:
imageSegmentationAnalyzer.setTransactor(new ImageSegmentAnalyzerTransactor());
handKeypointAnalyzer.setTransactor(new HandKeypointTransactor());
8. Create a LensEngine object.
Code:
Context context = this.getApplicationContext();
LensEngine lensEngine = new LensEngine.Creator(context,analyzer)
// Set the front or rear camera mode. LensEngine.BACK_LENS indicates the rear camera, and LensEngine.FRONT_LENS indicates the front camera.
.setLensType(LensEngine.FRONT_LENS)
.applyDisplayDimension(1280, 720)
.applyFps(20.0f)
.enableAutomaticFocus(true)
.create();
9. Start the camera, read video streams, and start recognition.
Code:
// Implement other logic of the SurfaceView control by yourself.
SurfaceView mSurfaceView = new SurfaceView(this);
try {
lensEngine.run(mSurfaceView.getHolder());
} catch (IOException e) {
// Exception handling logic.
}
10. Stop the analyzer and release the recognition resources when recognition ends.
Code:
if (analyzer != null) {
try {
analyzer.stop();
} catch (IOException e) {
// Exception handling.
}
}
if (lensEngine != null) {
lensEngine.release();
}
For more details, check out the following resources:
Our official website
Our Development Documentation page, to find the documents you need
Experience the easy-integration process on Codelabs
GitHub to download demos and sample codes
Stack Overflow to solve any integration problem
Original Source
