Practice on Developing a Face Verification Function

Background
How great it is to be able to reset bank details from the comfort of home and avoid all the hassle of going to the bank, queuing up, and proving you are who you say you are.
All of this has become possible thanks to a bit of tech magic known as face verification, which is perfect for verifying a user's identity remotely. I was curious about how the tech works, so I decided to integrate the face verification service from HMS Core ML Kit into a demo app. Below is how I did it.
Development Process
Preparations
1. Make the necessary configurations as detailed here.
2. Configure the Maven repository address for the face verification service.
Open the project-level build.gradle file of the Android Studio project.
Add the Maven repository address and the AppGallery Connect plugin. Go to allprojects > repositories and configure the Maven repository address for the face verification service.
Code:
allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
Go to buildscript > repositories to configure the Maven repository address.
Code:
buildscript {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
Go to buildscript > dependencies to add the plugin configuration.
Code:
buildscript {
    dependencies {
        classpath 'com.huawei.agconnect:agcp:1.3.1.300'
    }
}
Function Building
1. Create an instance of the face verification analyzer.
Code:
MLFaceVerificationAnalyzer analyzer = MLFaceVerificationAnalyzerFactory.getInstance().getFaceVerificationAnalyzer();
2. Create an MLFrame object via android.graphics.Bitmap. This object is used to set the face verification template image whose format can be JPG, JPEG, PNG, or BMP.
Code:
// Create an MLFrame object.
MLFrame templateFrame = MLFrame.fromBitmap(bitmap);
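The bitmap passed in above can come from the camera, the gallery, or app resources. As a small illustration using standard Android APIs (R.drawable.template_face is a placeholder resource name, not part of the SDK):
Code:
// Decode a packaged image into the Bitmap that is passed to MLFrame.fromBitmap() above.
// R.drawable.template_face is a hypothetical resource; use whatever image source your app has.
Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.template_face);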
3. Set the template image. The setting will fail if the template does not contain a face, and the face verification service will use the template set last time.
Code:
List<MLFaceTemplateResult> results = analyzer.setTemplateFace(templateFrame);
for (int i = 0; i < results.size(); i++) {
    // Process the result of face detection in the template.
}
4. Use android.graphics.Bitmap to create an MLFrame object that is used to set the image for comparison. The image format can be JPG, JPEG, PNG, or BMP.
Code:
// Create an MLFrame object.
MLFrame compareFrame = MLFrame.fromBitmap(bitmap);
5. Perform face verification by calling the asynchronous or synchronous method. The returned verification result (MLFaceVerificationResult) contains the facial information obtained from the comparison image and a confidence value indicating how likely it is that the faces in the comparison image and the template image belong to the same person.
Asynchronous method:
Code:
Task<List<MLFaceVerificationResult>> task = analyzer.asyncAnalyseFrame(compareFrame);
task.addOnSuccessListener(new OnSuccessListener<List<MLFaceVerificationResult>>() {
    @Override
    public void onSuccess(List<MLFaceVerificationResult> results) {
        // Callback when the verification is successful.
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Callback when the verification fails.
    }
});
Synchronous method:
Code:
SparseArray<MLFaceVerificationResult> results = analyzer.analyseFrame(compareFrame);
for (int i = 0; i < results.size(); i++) {
    // Process the verification result.
}
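Either way, each MLFaceVerificationResult carries the confidence described in step 5. Below is a minimal sketch of acting on it, assuming the value is exposed through a getSimilarity() accessor (verify the name against the MLFaceVerificationResult API reference); the 0.7 threshold is purely illustrative.
Code:
MLFaceVerificationResult result = results.get(0);
// Assumed accessor for the confidence that both faces belong to the same person.
float similarity = result.getSimilarity();
if (similarity > 0.7f) { // Hypothetical threshold; tune it for your own scenario.
    // Treat the comparison image and the template as the same person.
} else {
    // Treat them as different people.
}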
6. When verification is complete, stop the analyzer to release the resources that it occupies.
Code:
if (analyzer != null) {
    analyzer.stop();
}
And that is how the face verification function is built. This kind of tech not only saves users hassle, but is also great for honing developer skills.
References
Face Verification from HMS Core ML Kit
Why Facial Verification is the Biometric Technology for Financial Services in 2022

Related

Integrate ML Kit’s Document Skew Correction Capability

Introduction
In the previous post, we looked at how to use HUAWEI ML Kit's text recognition capability. With this capability, users just need to provide an image, and your app will automatically recognize all of the key information. But what if the image is skewed? Can we still get all the same key information? Yes, we can! ML Kit's document skew correction capability automatically recognizes the position of the document, corrects the shooting angle, and lets users customize the edge points. So even if the document is skewed, your users can still get all of the information they need.
Application Scenarios
Document skew correction is useful in a huge range of situations. For example, if your camera is at an angle when you shoot a paper document, the resulting image may be difficult to read. You can use document skew correction to adjust the document to the correct angle, which makes it easier to read.
Document skew correction also works for cards.
And if you’re traveling, you can use it to straighten out pictures of road signs that you’ve taken from an angle.
Isn’t it convenient? Now, I will show you how to quickly integrate this capability.
Document Correction Development
1. Preparations
You can find detailed information about the preparations you need to make in the development process documentation on HUAWEI Developers.
Here, we’ll just look at the most important procedures.
1.1 Configure the Maven Repository Address in the Project-Level build.gradle File
Code:
buildscript {
    repositories {
        ...
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        ...
        classpath 'com.huawei.agconnect:agcp:1.3.1.300'
    }
}
allprojects {
    repositories {
        ...
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
1.2 Configure SDK Dependencies in the App-Level build.gradle File
Code:
dependencies {
    // Import the base SDK.
    implementation 'com.huawei.hms:ml-computer-vision-documentskew:2.0.2.300'
    // Import the document detection/correction model package.
    implementation 'com.huawei.hms:ml-computer-vision-documentskew-model:2.0.2.300'
}
1.3 Add Configurations to the File Header
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
1.4 Add these Statements to the AndroidManifest.xml File so the Machine Learning Model can Update Automatically
Code:
<meta-data
    android:name="com.huawei.hms.ml.DEPENDENCY"
    android:value="dsc" />
1.5 Apply for Camera Permission and Local Image Reading Permission
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
2. Code Development
2.1 Create a Text Box Detection/Correction Analyzer
Code:
MLDocumentSkewCorrectionAnalyzerSetting setting = new MLDocumentSkewCorrectionAnalyzerSetting.Factory().create();
MLDocumentSkewCorrectionAnalyzer analyzer = MLDocumentSkewCorrectionAnalyzerFactory.getInstance().getDocumentSkewCorrectionAnalyzer(setting);
2.2 Create an MLFrame Object by Using android.graphics.Bitmap for the Analyzer to Detect Images
JPG, JPEG, and PNG images are supported. It is recommended that you limit the image size to between 320 x 320 px and 1920 x 1920 px.
Code:
MLFrame mlFrame = MLFrame.fromBitmap(bitmap);
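The bitmap can be any supported image decoded into memory. To stay within the recommended size range, you can downsample large images while decoding them; here is a minimal sketch using the standard BitmapFactory API (the file path is a placeholder):
Code:
// Decode a local image, downsampling it so the longer edge stays near the 1920 px recommendation.
BitmapFactory.Options options = new BitmapFactory.Options();
options.inJustDecodeBounds = true;               // First pass: read the dimensions only.
BitmapFactory.decodeFile("/path/to/document.jpg", options);
int longerEdge = Math.max(options.outWidth, options.outHeight);
options.inSampleSize = Math.max(1, longerEdge / 1920);  // The decoder rounds this down to a power of 2.
options.inJustDecodeBounds = false;              // Second pass: decode the pixels.
Bitmap bitmap = BitmapFactory.decodeFile("/path/to/document.jpg", options);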
2.3 Call the asyncDocumentSkewDetect Asynchronous Method or analyseFrame Synchronous Method to Detect the Text Box.
When the return code is MLDocumentSkewCorrectionConstant.SUCCESS, the coordinates of the four vertices of the text box are returned. These coordinates are relative to the coordinates of the input image. If the coordinates are inconsistent with those of the device, you need to convert them. Otherwise, the returned data will be meaningless.
Code:
// Call the asyncDocumentSkewDetect asynchronous method.
Task<MLDocumentSkewDetectResult> detectTask = analyzer.asyncDocumentSkewDetect(mlFrame);
detectTask.addOnSuccessListener(new OnSuccessListener<MLDocumentSkewDetectResult>() {
    @Override
    public void onSuccess(MLDocumentSkewDetectResult detectResult) {
        // Detection success.
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Detection failure.
    }
});

// Call the analyseFrame synchronous method.
SparseArray<MLDocumentSkewDetectResult> detect = analyzer.analyseFrame(mlFrame);
if (detect != null && detect.get(0).getResultCode() == MLDocumentSkewCorrectionConstant.SUCCESS) {
    // Detection success.
} else {
    // Detection failure.
}
2.4 Obtain the Coordinate Data of the Four Vertices in the Text Box Once the Detection is Successful
Use the upper left vertex as the starting point, and add the upper left vertex, upper right vertex, lower right vertex, and lower left vertex to the list (List<Point>). Finally, create an MLDocumentSkewCorrectionCoordinateInput object.
If the synchronous method analyseFrame is called, the detection result needs to be obtained first, as shown in the code below. (If the asynchronous method asyncDocumentSkewDetect is called, skip this step.)
Code:
MLDocumentSkewDetectResult detectResult = detect.get(0);
Obtain the coordinate data for the four vertices of the text box and create an MLDocumentSkewCorrectionCoordinateInput object.
Code:
Point leftTop = detectResult.getLeftTopPosition();
Point rightTop = detectResult.getRightTopPosition();
Point leftBottom = detectResult.getLeftBottomPosition();
Point rightBottom = detectResult.getRightBottomPosition();
List<Point> coordinates = new ArrayList<>();
coordinates.add(leftTop);
coordinates.add(rightTop);
coordinates.add(rightBottom);
coordinates.add(leftBottom);
MLDocumentSkewCorrectionCoordinateInput coordinateData = new MLDocumentSkewCorrectionCoordinateInput(coordinates);
2.5 Call the asyncDocumentSkewCorrect Asynchronous Method or syncDocumentSkewCorrect Synchronous Method to Correct the Text Box
Code:
// Call the asyncDocumentSkewCorrect asynchronous method.
Task<MLDocumentSkewCorrectionResult> correctionTask = analyzer.asyncDocumentSkewCorrect(mlFrame, coordinateData);
correctionTask.addOnSuccessListener(new OnSuccessListener<MLDocumentSkewCorrectionResult>() {
    @Override
    public void onSuccess(MLDocumentSkewCorrectionResult refineResult) {
        // Correction success.
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Correction failure.
    }
});

// Call the syncDocumentSkewCorrect synchronous method.
SparseArray<MLDocumentSkewCorrectionResult> correct = analyzer.syncDocumentSkewCorrect(mlFrame, coordinateData);
if (correct != null && correct.get(0).getResultCode() == MLDocumentSkewCorrectionConstant.SUCCESS) {
    // Correction success.
} else {
    // Correction failure.
}
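After a successful correction, the result object contains the straightened image, which you can display directly. A minimal sketch, assuming the corrected bitmap is exposed through getCorrected() (verify the accessor name in the MLDocumentSkewCorrectionResult API reference) and that imageView is an ImageView in your layout:
Code:
MLDocumentSkewCorrectionResult correctionResult = correct.get(0);
// Assumed accessor for the corrected image; check the API reference.
Bitmap correctedBitmap = correctionResult.getCorrected();
if (correctedBitmap != null) {
    imageView.setImageBitmap(correctedBitmap);  // imageView is a placeholder view name.
}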
2.6 Stop the Analyzer to Release Detection Resources Once the Detection is Complete
Code:
if (analyzer != null) {
    analyzer.stop();
}
Can we use ML Kit to translate the document into other languages?

How to Improve Text Resolution Using HUAWEI ML Kit's Text Super-Resolution Capability

Introduction
Often, when you take screenshots of text and images you find online and share them with your friends, the apps you use to send them will automatically compress the screenshots. This means the people receiving them get low definition images which are hard to read.
Is there any way to solve this problem? Of course! ML Kit's text super-resolution capability improves text resolution in images. It enables users to magnify images containing text until they're 9 times bigger while greatly enhancing the text’s definition, making it easier to read.
Application Scenarios
The text super-resolution capability is useful in a huge range of situations. For example, when screenshots are compressed, the text super-resolution capability will restore them to their original definition.
When you take a photograph of a document from far away, or can't properly adjust the focus, the text may not be clear. In this situation, you can use the text super-resolution capability to improve the definition and readability of the document.
Pretty useful, right? Below, I'll show you how to integrate this capability.
Text Super-Resolution Development
1. Configure the Maven Repository Address
1.1 Open the build.gradle File in the Root Directory of Your Android Studio Project
1.2 Add the AppGallery Connect Plug-in and the Maven Repository
Go to allprojects > repositories and configure the Maven repository address for the HMS Core SDK.
Code:
allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
Go to buildscript > repositories and configure the Maven repository address for the HMS Core SDK.
Code:
buildscript {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
2. Integrate the Text Image Super-Resolution SDK
2.1 Full SDK Integration (Recommended)
Code:
dependencies {
    // Import the base SDK.
    implementation 'com.huawei.hms:ml-computer-vision-textimagesuperresolution:2.0.3.300'
    // Import the text image super-resolution model package.
    implementation 'com.huawei.hms:ml-computer-vision-textimagesuperresolution-model:2.0.3.300'
}
2.2 Add Configurations to the File Header
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
2.3 Update the Machine Learning Model
Code:
<meta-data
    android:name="com.huawei.hms.ml.DEPENDENCY"
    android:value="tisr" />
3. Code Development
3.1 Create a Text Image Super-Resolution Analyzer
Code:
MLTextImageSuperResolutionAnalyzer analyzer = MLTextImageSuperResolutionAnalyzerFactory.getInstance().getTextImageSuperResolutionAnalyzer();
3.2 Create an MLFrame Object by Using android.graphics.Bitmap
The bitmap type must be ARGB8888. If it isn’t, you’ll need to convert it.
Code:
// Create an MLFrame object using the bitmap. The bitmap parameter indicates the input image.
MLFrame frame = new MLFrame.Creator().setBitmap(bitmap).create();
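If your source bitmap is not already in ARGB_8888 format (for example, RGB_565 from some decoders), you can convert it with the standard Bitmap API before creating the MLFrame; a minimal sketch:
Code:
// Ensure the bitmap uses the ARGB_8888 configuration required by the analyzer.
if (bitmap.getConfig() != Bitmap.Config.ARGB_8888) {
    bitmap = bitmap.copy(Bitmap.Config.ARGB_8888, true);
}
MLFrame frame = new MLFrame.Creator().setBitmap(bitmap).create();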
3.3 Process Super-Resolution for Images That Contain Text
Code:
Task<MLTextImageSuperResolutionResult> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<MLTextImageSuperResolutionResult>() {
    @Override
    public void onSuccess(MLTextImageSuperResolutionResult result) {
        // Processing logic for super-resolution success.
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Processing logic for super-resolution failure.
        if (e instanceof MLException) {
            MLException mlException = (MLException) e;
            // Obtain the result code. You can process the result code and customize the messages displayed to users.
            int errorCode = mlException.getErrCode();
            // Obtain the error information. You can quickly locate the fault based on the result code.
            String errorMessage = mlException.getMessage();
        } else {
            // Other errors.
        }
    }
});
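In the success callback you can take the enhanced image and display it. A minimal sketch, assuming the result exposes the upscaled image through getBitmap() (verify against the MLTextImageSuperResolutionResult API reference); resultImageView is a placeholder ImageView:
Code:
// Inside onSuccess(MLTextImageSuperResolutionResult result):
Bitmap superResolvedBitmap = result.getBitmap();  // Assumed accessor; check the API reference.
if (superResolvedBitmap != null) {
    resultImageView.setImageBitmap(superResolvedBitmap);
}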
3.4 Stop the Analyzer to Release Detection Resources Once the Super-Resolution is Complete
Code:
if (analyzer != null) {
    analyzer.stop();
}
GitHub Source Code
How far away can it detect text using the camera?

Integrate HUAWEI ML Kit with Ease, for High-Level Business Card Recognition

Introduction
In an age of digital living, maintaining interpersonal connections has never been more important. It's a common ritual for professionals to exchange business cards, but this can come with certain hassles – having to input the contact information for each new contact, including names, companies, and addresses. That's where business card recognition comes into the picture.
How It Works
The business card recognition service utilizes optical character recognition (OCR) technology to digitize the text on a card, converting it into an editable format and classifying the recognized information by category. Once you have integrated the service into your app, your users will be able to collect information from a business card simply by snapping a picture, scanning the QR code on the card, or importing the card image.
Preparations
Before API development, there are a few things that you'll need to do in preparation, for example, configuring the Maven repository address of the HMS Core SDK in your project, and integrating the SDK for the business card recognition service.
Install Android Studio
Official download website
Installation guide
Add Huawei Maven Repository to the Project-Level build.gradle File
Open the build.gradle File in the Root Directory of Your Android Studio Project
Add the Maven Repository Address
Go to buildscript > repositories and configure the Maven repository address for the HMS Core SDK.
Code:
buildscript {
    repositories {
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
Go to allprojects > repositories and configure the Maven repository address for the HMS Core SDK.
Code:
allprojects {
    repositories {
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
Import the SDK
Code:
dependencies {
    // Text recognition SDK.
    implementation 'com.huawei.hms:ml-computer-vision-ocr:2.0.1.300'
    // Text recognition model.
    implementation 'com.huawei.hms:ml-computer-vision-ocr-cn-model:2.0.1.300'
    implementation 'com.huawei.hms:ml-computer-vision-ocr-jk-model:2.0.1.300'
    implementation 'com.huawei.hms:ml-computer-vision-ocr-latin-model:2.0.1.300'
}
Add the Manifest File
Code:
<manifest
    ...
    <meta-data
        android:name="com.huawei.hms.ml.DEPENDENCY"
        android:value="ocr" />
    ...
</manifest>
Add Permissions
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
Submit a Dynamic Permission Application
Code:
if (!(ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED)) {
    requestCameraPermission();
}
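requestCameraPermission() above is a helper from the demo rather than an SDK method; a minimal sketch of what it might look like with the standard ActivityCompat API (the request code is arbitrary):
Code:
private static final int CAMERA_PERMISSION_CODE = 1;  // Hypothetical request code.

private void requestCameraPermission() {
    // Ask the user to grant the camera permission at runtime (required on Android 6.0 and later).
    ActivityCompat.requestPermissions(this,
            new String[]{Manifest.permission.CAMERA}, CAMERA_PERMISSION_CODE);
}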
Development
1. Create the text analyzer MLTextAnalyzer to recognize text within images. Use the custom parameter MLLocalTextSetting to configure the on-device text analyzer.
Code:
MLLocalTextSetting setting = new MLLocalTextSetting.Factory()
        .setOCRMode(MLLocalTextSetting.OCR_DETECT_MODE)
        .setLanguage("zh")
        .create();
MLTextAnalyzer analyzer = MLAnalyzerFactory.getInstance()
        .getLocalTextAnalyzer(setting);
2. Use android.graphics.Bitmap to create an MLFrame. Supported image formats include JPG, JPEG, PNG, and BMP. It is recommended that the aspect ratio for landscape business cards be 2:1 and that for portrait business cards be 1:2.
Code:
MLFrame frame = MLFrame.fromBitmap(bitmap);
3. Pass the MLFrame object to the asyncAnalyseFrame method for text recognition.
Code:
Task<MLText> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<MLText>() {
@Override
public void onSuccess(MLText text) {
// Recognition success.
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
// Recognition failure.
}
});
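In the success callback, the recognized text can be read block by block and then roughly classified into business card fields. A minimal sketch, assuming MLText exposes its content through getBlocks() and getStringValue() (check the text recognition API reference); the phone and email patterns are simple illustrative regular expressions, not part of the SDK:
Code:
// Inside onSuccess(MLText text): collect the recognized lines and roughly classify them.
StringBuilder phones = new StringBuilder();
StringBuilder emails = new StringBuilder();
for (MLText.Block block : text.getBlocks()) {
    String value = block.getStringValue();
    if (value == null) {
        continue;
    }
    if (value.matches(".*\\b[\\w.%+-]+@[\\w.-]+\\.[A-Za-z]{2,}\\b.*")) {
        emails.append(value).append("\n");   // Looks like an email address.
    } else if (value.matches(".*\\+?[\\d()\\s-]{7,}.*")) {
        phones.append(value).append("\n");   // Looks like a phone number.
    }
    // Other lines (name, company, address) can be classified with similar rules.
}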
4. After recognition is complete, stop the analyzer to release recognition resources.
Code:
try {
    if (analyzer != null) {
        analyzer.stop();
    }
} catch (IOException e) {
    // IOException
} catch (Exception e) {
    // Exception
}
More Information
We have developed a demo app that demonstrates the effects of the business card recognition service.
Reference
Official website of Huawei Developers
Development Guide
HMS Core official community on Reddit
Demo and sample code
Discussions on Stack Overflow

Use Site Kit to Return Parent and Child Node Information for a Searched Place

When users search for places, they may not specify exactly what aspect of that place they are interested in. Search results should include information about both parent nodes (the place itself) and child nodes (related information), because it makes it easier for users to find the information they're looking for. For example, if a user searches for an airport, your app can also return information about child nodes, such as terminals, parking lots, and entrances and exits. This enables your app to provide more scenario-specific results, making it easier for users to explore their surroundings.
This post shows you how you can integrate Site Kit into your app and return information about both parent and child nodes for the places your users search for.
1. Preparations
Before you get started, there are a few preparations you'll need to make. First, make sure that you have configured the Maven repository address of the Site SDK in your project and integrated the Site SDK.
1.1 Configure the Maven repository address in the project-level build.gradle file.
Code:
buildscript {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
    // Add build dependencies.
    dependencies {
        classpath "com.android.tools.build:gradle:3.3.2"
    }
}
Code:
allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
1.2 Add a dependency on the Site SDK in the build.gradle file in the app directory.
Code:
dependencies {
    implementation 'com.huawei.hms:site:4.0.0.202'
}
2. Development Procedure
2.1 Create a SearchService object.
Code:
SearchService searchService = SearchServiceFactory.create(this, Utils.getApiKey());
2.2 Create the SearchResultListener class so your app can process the search result.
The listener implements the SearchResultListener<TextSearchResponse> interface. Its onSearchResult(TextSearchResponse results) method is used to obtain the search result and implement the specific service.
Code:
SearchResultListener<TextSearchResponse> resultListener = new SearchResultListener<TextSearchResponse>() {
    @Override
    public void onSearchResult(TextSearchResponse results) {
        Log.d(TAG, "onTextSearchResult: " + results.toString());
        List<Site> siteList;
        if (results == null || results.getTotalCount() <= 0 || (siteList = results.getSites()) == null || siteList.size() <= 0) {
            resultTextView.setText("Result is Empty!");
            return;
        }
        for (Site site : siteList) {
            // Handle the search result as needed.
            ....
            // Obtain information about child nodes.
            if (site.getPoi() != null) {
                ChildrenNode[] childrenNodes = site.getPoi().getChildrenNodes();
                // Handle the information as needed.
                ....
            }
        }
    }

    @Override
    public void onSearchError(SearchStatus status) {
        resultTextView.setText("Error : " + status.getErrorCode() + " " + status.getErrorMessage());
    }
};
2.3 Create the TextSearchRequest class and set the request parameters.
Code:
TextSearchRequest request = new TextSearchRequest();
String query = "Josep Tarradellas Airport";
request.setQuery(query);
Double lat = 41.300621;
Double lng = 2.0797638;
request.setLocation(new Coordinate(lat, lng));
// Set to obtain child node information.
request.setChildren(true);
2.4 Set a request result handler and bind it with the request.
Code:
searchService.textSearch(request, resultListener);
Once you have completed the steps above, your app will be able to return information about both the parent node and its child nodes. This attachment shows how the search results will look:
You can find the source code on GitHub.
For more details, you can go to:
Our official website
Our development guide
Reddit to join our developer discussion
GitHub to download demos and sample codes
Stack Overflow to solve any integration problems
Nice write up.

Introduction to AI-Empowered Image Segmentation

Image segmentation technology is gathering steam thanks to developments in multiple fields. Take autonomous vehicles as an example: they have been developing rapidly since last year and have become a showpiece for both well-established companies and start-ups. Most of them use computer vision, which includes image segmentation, as the technical basis for self-driving, and it is image segmentation that allows a car to understand the situation on the road and to tell the road apart from the people on it.
Image segmentation is not only applied to autonomous vehicles, but is also used in a number of other fields, including:
Medical imaging, where it helps doctors make diagnoses and perform tests
Satellite image analysis, where it helps analyze tons of data
Media apps, where it cuts people out of videos to prevent bullet comments from obstructing them
It is a widely applicable technology, and I myself am a fan of it. Recently, I tried the image segmentation service from HMS Core ML Kit, which I found outstanding. This service has an original framework for semantic segmentation, which labels each and every pixel in an image, so the service can clearly and completely cut out something as delicate as a hair. The service also excels at processing images of different qualities and dimensions. It uses structured learning algorithms to prevent white borders (a common headache with segmentation algorithms), so that the edges of the segmented image appear more natural.
I'm delighted to be able to share my experience of implementing this service here.
Preparations
First, configure the Maven repository and integrate the SDK of the service. I followed the instructions here to complete all of this.
1. Configure the Maven repository address
Java:
buildscript {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        ...
        classpath 'com.huawei.agconnect:agcp:1.4.1.300'
    }
}
allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
2. Add build dependencies
Java:
dependencies {
    // Import the base SDK.
    implementation 'com.huawei.hms:ml-computer-vision-segmentation:2.1.0.301'
    // Import the package of the human body segmentation model.
    implementation 'com.huawei.hms:ml-computer-vision-image-segmentation-body-model:2.1.0.303'
}
3. Add the permission in the AndroidManifest.xml file.
Java:
<!-- Permission to write to external storage. -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
Development Procedure
1. Dynamically request the necessary permissions
Java:
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    if (!allPermissionsGranted()) {
        getRuntimePermissions();
    }
}

private boolean allPermissionsGranted() {
    for (String permission : getRequiredPermissions()) {
        if (!isPermissionGranted(this, permission)) {
            return false;
        }
    }
    return true;
}

private void getRuntimePermissions() {
    List<String> allNeededPermissions = new ArrayList<>();
    for (String permission : getRequiredPermissions()) {
        if (!isPermissionGranted(this, permission)) {
            allNeededPermissions.add(permission);
        }
    }
    if (!allNeededPermissions.isEmpty()) {
        ActivityCompat.requestPermissions(
                this, allNeededPermissions.toArray(new String[0]), PERMISSION_REQUESTS);
    }
}

private static boolean isPermissionGranted(Context context, String permission) {
    if (ContextCompat.checkSelfPermission(context, permission) == PackageManager.PERMISSION_GRANTED) {
        return true;
    }
    return false;
}

private String[] getRequiredPermissions() {
    try {
        PackageInfo info =
                this.getPackageManager()
                        .getPackageInfo(this.getPackageName(), PackageManager.GET_PERMISSIONS);
        String[] ps = info.requestedPermissions;
        if (ps != null && ps.length > 0) {
            return ps;
        } else {
            return new String[0];
        }
    } catch (RuntimeException e) {
        throw e;
    } catch (Exception e) {
        return new String[0];
    }
}
2. Create an image segmentation analyzer
Java:
MLImageSegmentationSetting setting = new MLImageSegmentationSetting.Factory()
        // Set the segmentation mode to human body segmentation.
        .setAnalyzerType(MLImageSegmentationSetting.BODY_SEG)
        .create();
this.analyzer = MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer(setting);
3. Use android.graphics.Bitmap to create an MLFrame object for the analyzer to detect images
Java:
MLFrame mlFrame = new MLFrame.Creator().setBitmap(this.originBitmap).create();
4. Call asyncAnalyseFrame for image segmentation
Java:
// Create a task to process the result returned by the analyzer.
Task<MLImageSegmentation> task = this.analyzer.asyncAnalyseFrame(mlFrame);
// Asynchronously process the result returned by the analyzer.
task.addOnSuccessListener(new OnSuccessListener<MLImageSegmentation>() {
    @Override
    public void onSuccess(MLImageSegmentation mlImageSegmentationResults) {
        if (mlImageSegmentationResults != null) {
            // Obtain the human body segment cut out from the image.
            foreground = mlImageSegmentationResults.getForeground();
            preview.setImageBitmap(MainActivity.this.foreground);
        }
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        return;
    }
});
5. Change the image background
Java:
// Obtain an image from the album.
backgroundBitmap = Utils.loadFromPath(this, id, targetedSize.first, targetedSize.second);
BitmapDrawable drawable = new BitmapDrawable(backgroundBitmap);
preview.setBackground(drawable);
preview.setImageBitmap(this.foreground);
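If you would rather produce a single composited bitmap instead of layering the ImageView's background and foreground, you can draw both onto a Canvas using standard Android APIs; a minimal sketch under that assumption:
Java:
// Combine the new background and the segmented foreground into one bitmap.
Bitmap composed = Bitmap.createBitmap(backgroundBitmap.getWidth(),
        backgroundBitmap.getHeight(), Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(composed);
canvas.drawBitmap(backgroundBitmap, 0f, 0f, null);
// Scale the foreground to the background size before drawing it on top.
Bitmap scaledForeground = Bitmap.createScaledBitmap(foreground,
        backgroundBitmap.getWidth(), backgroundBitmap.getHeight(), true);
canvas.drawBitmap(scaledForeground, 0f, 0f, null);
preview.setImageBitmap(composed);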
Result
To learn more, please visit:
>> HUAWEI Developers official website
>> Development Guide
>> Reddit to join developer discussions
>> GitHub to download the sample code
>> Stack Overflow to solve integration problems
Follow our official account for the latest HMS Core-related news and updates.
