Image segmentation technology is gathering steam thanks to developments in multiple fields. Take the autonomous vehicle as an example: it has been developing rapidly since last year and has become a showpiece for both well-established companies and start-ups. Most of them use computer vision, which includes image segmentation, as the technical basis for self-driving cars, and it is image segmentation that allows a car to understand the situation on the road and to distinguish the road from pedestrians.
Image segmentation is not only applied to autonomous vehicles, but is also used in a number of different fields, including:
Medical imaging, where it helps doctors make diagnoses and perform tests
Satellite image analysis, where it helps analyze tons of data
Media apps, where it cuts people out of videos so that bullet comments don't obstruct them
It is a widely applied technology, and I myself am a fan of it. Recently, I tried an image segmentation service from HMS Core ML Kit, which I found outstanding. This service has an original framework for semantic segmentation, which labels each and every pixel in an image, so it can clearly and completely cut out something as delicate as a strand of hair. The service also excels at processing images of different qualities and dimensions. It uses structured-learning algorithms to prevent white borders (a common headache of segmentation algorithms), so that the edges of the segmented image appear more natural.
I'm delighted to be able to share my experience of implementing this service here.
Preparations
First, configure the Maven repository and integrate the SDK of the service. I followed the instructions here to complete these steps.
1. Configure the Maven repository address
Java:
buildscript {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        ...
        classpath 'com.huawei.agconnect:agcp:1.4.1.300'
    }
}

allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
2. Add build dependencies
Java:
dependencies {
    // Import the base SDK.
    implementation 'com.huawei.hms:ml-computer-vision-segmentation:2.1.0.301'
    // Import the package of the human body segmentation model.
    implementation 'com.huawei.hms:ml-computer-vision-image-segmentation-body-model:2.1.0.303'
}
3. Add the permission in the AndroidManifest.xml file.
XML:
<!-- Permission to write to external storage. -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
Development Procedure
1. Dynamically request the necessary permissions
Java:
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    if (!allPermissionsGranted()) {
        getRuntimePermissions();
    }
}

private boolean allPermissionsGranted() {
    for (String permission : getRequiredPermissions()) {
        if (!isPermissionGranted(this, permission)) {
            return false;
        }
    }
    return true;
}

private void getRuntimePermissions() {
    List<String> allNeededPermissions = new ArrayList<>();
    for (String permission : getRequiredPermissions()) {
        if (!isPermissionGranted(this, permission)) {
            allNeededPermissions.add(permission);
        }
    }
    if (!allNeededPermissions.isEmpty()) {
        ActivityCompat.requestPermissions(
                this, allNeededPermissions.toArray(new String[0]), PERMISSION_REQUESTS);
    }
}

private static boolean isPermissionGranted(Context context, String permission) {
    return ContextCompat.checkSelfPermission(context, permission) == PackageManager.PERMISSION_GRANTED;
}

private String[] getRequiredPermissions() {
    try {
        PackageInfo info = this.getPackageManager()
                .getPackageInfo(this.getPackageName(), PackageManager.GET_PERMISSIONS);
        String[] ps = info.requestedPermissions;
        if (ps != null && ps.length > 0) {
            return ps;
        } else {
            return new String[0];
        }
    } catch (RuntimeException e) {
        throw e;
    } catch (Exception e) {
        return new String[0];
    }
}
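The code above requests the permissions but doesn't show handling the user's response. Below is a minimal sketch of the onRequestPermissionsResult callback, assuming we simply re-check the grants (PERMISSION_REQUESTS is the request code used above; the Toast message is an assumption):
Java:
@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
        @NonNull int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    if (requestCode != PERMISSION_REQUESTS) {
        return;
    }
    // If any permission is still missing, tell the user; segmentation needs storage access.
    if (!allPermissionsGranted()) {
        Toast.makeText(this, "Storage permission is required", Toast.LENGTH_SHORT).show();
    }
}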
2. Create an image segmentation analyzer
Java:
MLImageSegmentationSetting setting = new MLImageSegmentationSetting.Factory()
        // Set the segmentation mode to human body segmentation.
        .setAnalyzerType(MLImageSegmentationSetting.BODY_SEG)
        .create();
this.analyzer = MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer(setting);
3. Use android.graphics.Bitmap to create an MLFrame object for the analyzer to detect images
Java:
MLFrame mlFrame = new MLFrame.Creator().setBitmap(this.originBitmap).create();
4. Call asyncAnalyseFrame for image segmentation
Java:
// Create a task to process the result returned by the analyzer.
Task<MLImageSegmentation> task = this.analyzer.asyncAnalyseFrame(mlFrame);
// Asynchronously process the result returned by the analyzer.
task.addOnSuccessListener(new OnSuccessListener<MLImageSegmentation>() {
    @Override
    public void onSuccess(MLImageSegmentation mlImageSegmentationResults) {
        if (mlImageSegmentationResults != null) {
            // Obtain the human body segment cut out from the image.
            foreground = mlImageSegmentationResults.getForeground();
            preview.setImageBitmap(MainActivity.this.foreground);
        }
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Segmentation failed; handle the error as needed.
    }
});
5. Change the image background
Java:
// Obtain an image from the album to serve as the new background.
backgroundBitmap = Utils.loadFromPath(this, id, targetedSize.first, targetedSize.second);
BitmapDrawable drawable = new BitmapDrawable(backgroundBitmap);
preview.setBackground(drawable);
preview.setImageBitmap(this.foreground);
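When you are done with segmentation, release the analyzer. A minimal sketch, assuming this is done in the activity's onDestroy (the analyzer's stop method may throw an IOException):
Java:
@Override
protected void onDestroy() {
    super.onDestroy();
    if (analyzer != null) {
        try {
            // Release the resources held by the segmentation analyzer.
            analyzer.stop();
        } catch (IOException e) {
            Log.e(TAG, "Failed to stop the analyzer", e);
        }
    }
}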
Result
To learn more, please visit:
>> HUAWEI Developers official website
>> Development Guide
>> Reddit to join developer discussions
>> GitHub to download the sample code
>> Stack Overflow to solve integration problems
Follow our official account for the latest HMS Core-related news and updates.
The source code will be shared, and it's quite friendly to developers who are new to Android. The whole process takes only about 30 minutes.
This is application-level development, so we won't go through the image segmentation algorithm itself. I used HUAWEI ML Kit, which provides the image segmentation capability, to develop this app. You will learn how to quickly develop an ID photo DIY applet using this SDK.
Background
You may have had this experience: all of a sudden, your school or company asks you to provide a one-inch or two-inch headshot, for example to apply for a passport or student card, with strict requirements on the background color. However, many people don't have time to go to a photo studio, or the photos they already have don't meet the background requirements. I had a similar experience: my school asked for a passport photo while the campus photo studio was closed, so I hurriedly took a photo with my phone, using a bedspread as the background. As a result, I was scolded by the teacher.
Years later, ML Kit's machine learning now offers image segmentation. Using this SDK to develop a small ID photo DIY app would have perfectly resolved that embarrassment.
Here is the demo for the result.
How effective is it? See for yourself: we just need to write a small app to quickly achieve this!
Core Tip: This SDK is free, and all Android models are covered!
Hands-On ID Photo Development
1. Preparation
1.1 Add the Huawei Maven Repository in the Project-Level Gradle File
Open the Android studio project level build.gradle file.
Add the following Maven addresses:
Code:
buildscript {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}

allprojects {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}
1.2 Add the SDK Dependency in the Application-Level build.gradle
Introduce the base SDK and the human body segmentation model package:
Code:
dependencies {
    implementation 'com.huawei.hms:ml-computer-vision:1.0.2.300'
    implementation 'com.huawei.hms:ml-computer-vision-image-segmentation-body-model:1.0.2.301'
}
1.3 Add the Model in the AndroidManifest.xml File
To enable the application to automatically update to the latest machine learning model after the user installs your app from HUAWEI AppGallery, add the following statement to the AndroidManifest.xml file:
Code:
<manifest ...>
    <application ...>
        <meta-data
            android:name="com.huawei.hms.ml.DEPENDENCY"
            android:value="imgseg" />
    </application>
</manifest>
1.4 Apply for the Camera and Storage Permissions in the AndroidManifest.xml File
Code:
<!-- Camera permission -->
<uses-permission android:name="android.permission.CAMERA" />
<!-- Storage permission -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
2. Two Key Steps of Code Development
2.1 Dynamic Permission Application
Code:
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    if (!allPermissionsGranted()) {
        getRuntimePermissions();
    }
}

@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
        @NonNull int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    if (requestCode != PERMISSION_REQUESTS) {
        return;
    }
    boolean isNeedShowDiag = false;
    for (int i = 0; i < permissions.length; i++) {
        if (permissions[i].equals(Manifest.permission.READ_EXTERNAL_STORAGE)
                && grantResults[i] != PackageManager.PERMISSION_GRANTED) {
            isNeedShowDiag = true;
        }
    }
    if (isNeedShowDiag && !ActivityCompat.shouldShowRequestPermissionRationale(this, Manifest.permission.READ_EXTERNAL_STORAGE)) {
        AlertDialog dialog = new AlertDialog.Builder(this)
                .setMessage(getString(R.string.camera_permission_rationale))
                .setPositiveButton(getString(R.string.settings), new DialogInterface.OnClickListener() {
                    @Override
                    public void onClick(DialogInterface dialog, int which) {
                        // Open this app's settings page so the user can grant the permission.
                        Intent intent = new Intent(Settings.ACTION_APPLICATION_DETAILS_SETTINGS);
                        intent.setData(Uri.parse("package:" + getPackageName()));
                        startActivityForResult(intent, 200);
                    }
                })
                .setNegativeButton(getString(R.string.cancel), new DialogInterface.OnClickListener() {
                    @Override
                    public void onClick(DialogInterface dialog, int which) {
                        finish();
                    }
                }).create();
        dialog.show();
    }
}
2.2 Creating an Image Segmentation Detector
The image segmentation detector can be created through the image segmentation detection configurator MLImageSegmentationSetting.
Code:
MLImageSegmentationSetting setting = new MLImageSegmentationSetting.Factory()
        .setAnalyzerType(MLImageSegmentationSetting.BODY_SEG)
        .setExact(true)
        .create();
this.analyzer = MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer(setting);
2.3 Create an MLFrame Object Through android.graphics.Bitmap for the Analyzer to Detect Images
Code:
MLFrame mlFrame = new MLFrame.Creator().setBitmap(this.originBitmap).create();
2.4 Call the asyncAnalyseFrame Method for Image Segmentation
Code:
// Create a task to process the result returned by the image segmentation detector.
Task<MLImageSegmentation> task = this.analyzer.asyncAnalyseFrame(mlFrame);
// Asynchronously process the result returned by the detector.
task.addOnSuccessListener(new OnSuccessListener<MLImageSegmentation>() {
    @Override
    public void onSuccess(MLImageSegmentation mlImageSegmentationResults) {
        // Transacting logic for segmentation success.
        if (mlImageSegmentationResults != null) {
            StillCutPhotoActivity.this.foreground = mlImageSegmentationResults.getForeground();
            StillCutPhotoActivity.this.preview.setImageBitmap(StillCutPhotoActivity.this.foreground);
            StillCutPhotoActivity.this.processedImage = ((BitmapDrawable) ((ImageView) StillCutPhotoActivity.this.preview).getDrawable()).getBitmap();
            StillCutPhotoActivity.this.changeBackground();
        } else {
            StillCutPhotoActivity.this.displayFailure();
        }
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Transacting logic for segmentation failure.
        StillCutPhotoActivity.this.displayFailure();
    }
});
2.5 Change the Picture Background
Code:
this.backgroundBitmap = BitmapUtils.loadFromPath(StillCutPhotoActivity.this, id, targetedSize.first, targetedSize.second);
BitmapDrawable drawable = new BitmapDrawable(backgroundBitmap);
this.preview.setDrawingCacheEnabled(true);
this.preview.setBackground(drawable);
this.preview.setImageBitmap(this.foreground);
this.processedImage = Bitmap.createBitmap(this.preview.getDrawingCache());
this.preview.setDrawingCacheEnabled(false);
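Note that the drawing cache APIs used above are deprecated on newer Android versions. As an alternative, here is a minimal sketch that composites the background and the segmented foreground with a Canvas (assuming both bitmaps are non-null; the scaling here is simplified):
Code:
// Composite the background and the cut-out foreground into one bitmap.
Bitmap output = Bitmap.createBitmap(backgroundBitmap.getWidth(),
        backgroundBitmap.getHeight(), Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(output);
canvas.drawBitmap(backgroundBitmap, 0, 0, null);
// Scale the foreground to the background size; adjust the placement as needed.
canvas.drawBitmap(Bitmap.createScaledBitmap(foreground,
        backgroundBitmap.getWidth(), backgroundBitmap.getHeight(), true), 0, 0, null);
this.processedImage = output;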
Conclusion
In this way, a small program of ID photo DIY has been made. Let's see the demo.
If you like hands-on work, you can also add features such as changing outfits. The source code has been uploaded to GitHub (the project directory is ID-Photo-DIY), and you are welcome to improve it there:
https://github.com/HMS-Core/hms-ml-demo/tree/master/ID-Photo-DIY
Based on the image segmentation capability, you can not only build this ID photo DIY app but also implement the following related functions:
1. Cut out portraits from everyday photos and swap in new backgrounds to create fun images, or blur the background for more beautiful, artistic shots.
2. Identify elements such as the sky, plants, food, cats and dogs, flowers, water, sand, buildings, and mountains in an image, and apply targeted beautification to them, such as making the sky bluer and the water clearer.
3. Identify objects in a video stream to add special effects or change the background.
For other functions, feel free to brainstorm!
For a more detailed development guide, please refer to the official website of Huawei developer Alliance:
https://developer.huawei.com/consumer/en/doc/development/HMS-Guides/ml-introduction-4
For more articles like this, visit the HUAWEI Developer Forum and Medium.
Background
I believe we all did dictation when we started learning a language. Nowadays, an important after-school task for primary school students learning a language is dictating the new words in the textbook, and many parents are familiar with this routine. However, on the one hand, a parent's pronunciation may not be standard; on the other hand, parents' time is precious. There are many dictation recordings on the market, in which broadcasters read out the word lists from the language textbooks for parents to download. But such recordings are not flexible: if the teacher assigns a few extra words that are not part of the after-class word list, the recordings won't meet the needs of parents and children. This article describes how to use the general text recognition and speech synthesis functions of ML Kit to implement an automatic voice broadcast app. You only need to take a photo of the dictation words or text, and the text in the photo will be read out automatically; the timbre and tone of the voice can be adjusted.
Development Preparations
Open the project-level build.gradle file
Choose allprojects > repositories and configure the Maven repository address of HMS SDK.
Code:
allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'http://developer.huawei.com/repo/' }
    }
}
Configure the Maven repository address of the HMS SDK in buildscript > repositories.
Code:
buildscript {
    repositories {
        google()
        jcenter()
        maven { url 'http://developer.huawei.com/repo/' }
    }
}
Choose buildscript > dependencies and configure the AGC plug-in.
Code:
dependencies {
    classpath 'com.huawei.agconnect:agcp:1.2.1.301'
}
Adding Compilation Dependencies
Open the application-level build.gradle file.
SDK integration:
Code:
dependencies {
    implementation 'com.huawei.hms:ml-computer-voice-tts:1.0.4.300'
    implementation 'com.huawei.hms:ml-computer-vision-ocr:1.0.4.300'
    implementation 'com.huawei.hms:ml-computer-vision-ocr-cn-model:1.0.4.300'
}
Add the AGC plug-in to the file header.
Code:
apply plugin: 'com.huawei.agconnect'
Specify permissions and features: Declare them in AndroidManifest.xml.
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
Key Development Steps
There are two main functions: one recognizes the text in the homework photo, and the other reads it aloud, using the OCR + TTS combination. After taking a photo, tap the play button to hear the text.
1. Dynamic permission application
Code:
private static final int PERMISSION_REQUESTS = 1;

@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // Check the camera permission.
    if (!allPermissionsGranted()) {
        getRuntimePermissions();
    }
}
2. Start the reading interface.
Code:
public void takePhoto(View view) {
    Intent intent = new Intent(MainActivity.this, ReadPhotoActivity.class);
    startActivity(intent);
}
3. Invoke createLocalTextAnalyzer() in the onCreate() method to create a device-side text recognizer.
Code:
private void createLocalTextAnalyzer() {
    MLLocalTextSetting setting = new MLLocalTextSetting.Factory()
            .setOCRMode(MLLocalTextSetting.OCR_DETECT_MODE)
            .setLanguage("zh")
            .create();
    this.textAnalyzer = MLAnalyzerFactory.getInstance().getLocalTextAnalyzer(setting);
}
4. Invoke createTtsEngine() in the onCreate() method to create a text-to-speech engine.
Code:
private void createTtsEngine() {
    MLTtsConfig mlConfigs = new MLTtsConfig()
            .setLanguage(MLTtsConstants.TTS_ZH_HANS)
            .setPerson(MLTtsConstants.TTS_SPEAKER_FEMALE_ZH)
            .setSpeed(0.2f)
            .setVolume(1.0f);
    this.mlTtsEngine = new MLTtsEngine(mlConfigs);
    MLTtsCallback callback = new MLTtsCallback() {
        @Override
        public void onError(String taskId, MLTtsError err) {
        }

        @Override
        public void onWarn(String taskId, MLTtsWarn warn) {
        }

        @Override
        public void onRangeStart(String taskId, int start, int end) {
        }

        @Override
        public void onEvent(String taskId, int eventName, Bundle bundle) {
            if (eventName == MLTtsConstants.EVENT_PLAY_STOP) {
                if (!bundle.getBoolean(MLTtsConstants.EVENT_PLAY_STOP_INTERRUPTED)) {
                    Toast.makeText(ReadPhotoActivity.this.getApplicationContext(), R.string.read_finish, Toast.LENGTH_SHORT).show();
                }
            }
        }
    };
    mlTtsEngine.setTtsCallback(callback);
}
5. Set the buttons for loading photos, taking photos, and reading aloud.
Code:
this.relativeLayoutLoadPhoto.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        ReadPhotoActivity.this.selectLocalImage(ReadPhotoActivity.this.REQUEST_CHOOSE_ORIGINPIC);
    }
});
this.relativeLayoutTakePhoto.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        ReadPhotoActivity.this.takePhoto(ReadPhotoActivity.this.REQUEST_TAKE_PHOTO);
    }
});
6. Start the text analyzer in the callbacks for taking and loading photos.
Code:
private void startTextAnalyzer() {
    if (this.isChosen(this.originBitmap)) {
        MLFrame mlFrame = new MLFrame.Creator().setBitmap(this.originBitmap).create();
        Task<MLText> task = this.textAnalyzer.asyncAnalyseFrame(mlFrame);
        task.addOnSuccessListener(new OnSuccessListener<MLText>() {
            @Override
            public void onSuccess(MLText mlText) {
                // Transacting logic for recognition success.
                if (mlText != null) {
                    ReadPhotoActivity.this.remoteDetectSuccess(mlText);
                } else {
                    ReadPhotoActivity.this.displayFailure();
                }
            }
        }).addOnFailureListener(new OnFailureListener() {
            @Override
            public void onFailure(Exception e) {
                // Transacting logic for recognition failure.
                ReadPhotoActivity.this.displayFailure();
            }
        });
    } else {
        Toast.makeText(this.getApplicationContext(), R.string.please_select_picture, Toast.LENGTH_SHORT).show();
    }
}
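The remoteDetectSuccess() helper referenced above isn't shown in the post. Here's one possible sketch, assuming sourceText is the string later passed to the TTS engine (the helper name and field come from the surrounding snippets; the body is an assumption):
Code:
private void remoteDetectSuccess(MLText mlText) {
    // Concatenate the recognized text blocks into one string for playback.
    StringBuilder builder = new StringBuilder();
    for (MLText.Block block : mlText.getBlocks()) {
        builder.append(block.getStringValue()).append("\n");
    }
    this.sourceText = builder.toString();
}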
7. After recognition succeeds, tap the play button to start the playback.
Code:
this.relativeLayoutRead.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        if (ReadPhotoActivity.this.sourceText == null) {
            Toast.makeText(ReadPhotoActivity.this.getApplicationContext(), R.string.please_select_picture, Toast.LENGTH_SHORT).show();
        } else {
            ReadPhotoActivity.this.mlTtsEngine.speak(sourceText, MLTtsEngine.QUEUE_APPEND);
            Toast.makeText(ReadPhotoActivity.this.getApplicationContext(), R.string.read_start, Toast.LENGTH_SHORT).show();
        }
    }
});
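The post doesn't show releasing the TTS engine. A minimal sketch, assuming this is done in onDestroy (stop() and shutdown() are MLTtsEngine methods):
Code:
@Override
protected void onDestroy() {
    super.onDestroy();
    if (mlTtsEngine != null) {
        // Stop any ongoing playback and release the engine.
        mlTtsEngine.stop();
        mlTtsEngine.shutdown();
    }
}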
Demo
If you have any questions about this process, you can visit the HUAWEI Developer Forum.
Seems quite simple and useful. I will try.
sanghati said:
Hi,
Nice article. Can you use ML kit for scanning product and find that product online to buy.
Thanks
Hi, if you want to scan products that you want to buy online, you can use Scan Kit. Refer to the document and get help from the HUAWEI Developer Forum.
For more information like this, you can visit the HUAWEI Developer Forum.
Original link: https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0201257887466590240&fid=0101187876626530001
Introduction
Richard Yu introduced Huawei HMS Core 4.0 to you at the launch event a while ago. Please check the launch event information:
What does the global release of HMS Core 4.0 mean?
Machine Learning Kit (MLKit) is one of the most important services.
What can MLKIT do? What problems can it solve during application development?
Today, let’s take face detection as an example to show you the powerful functions of MLKIT and the convenience it provides for developers.
1.1 Capabilities Provided by MLKIT Face Detection
First, let’s look at the face detection capability of Huawei Machine Learning Service (MLKIT).
As shown in the animation, face detection can recognize the face orientation, detect facial expressions (such as happy, disgusted, surprised, sad, and angry), detect facial attributes (such as gender, age, and wearables), detect whether the eyes are open or closed, and detect the coordinates of features such as the face, nose, eyes, lips, and eyebrows. Multiple faces can also be detected at the same time.
Tips: This function is free of charge and covers all Android models.
2 Development of the Multi-Face Smile Photographing Function
Today, I will use the multi-face recognition and expression detection capabilities of MLKIT to write a small demo that snaps a photo when people smile.
To download the Github demo source code, click here (the project directory is Smile-Camera).
2.1 Development Preparations
The preparations for developing any Huawei HMS kit are similar: add the Maven dependency and introduce the SDK.
1. Add the Huawei Maven repository to the project-level gradle.
Incrementally add the following Maven addresses:
Code:
buildscript {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}

allprojects {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}
2. Add the SDK dependency to the build.gradle file at the application level.
Introduce the facial recognition SDK and basic SDK.
Code:
dependencies {
    // Introduce the basic SDK.
    implementation 'com.huawei.hms:ml-computer-vision:1.0.2.300'
    // Introduce the face detection capability package.
    implementation 'com.huawei.hms:ml-computer-vision-face-recognition-model:1.0.2.300'
}
3. Add the model to the AndroidManifest.xml file so that it is downloaded automatically.
This is mainly used for model updates: after the algorithm is optimized, the new model can be automatically downloaded to the phone.
Code:
<manifest ...>
    <application ...>
        <meta-data
            android:name="com.huawei.hms.ml.DEPENDENCY"
            android:value="face" />
    </application>
</manifest>
4. Apply for camera and storage permissions in the AndroidManifest.xml file.
Code:
<!-- Camera permission -->
<uses-permission android:name="android.permission.CAMERA" />
<!-- Storage permission -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
2.2 Code development
1. Create a face analyzer and take a photo when a smile is detected.
The process is as follows:
1) Configure the analyzer parameters.
2) Pass the parameter settings to the analyzer.
3) In analyzer.setTransactor, override transactResult to process the face detection result. The detection returns a confidence level (the smiling probability), so you only need to compare it with a threshold.
Code:
private MLFaceAnalyzer analyzer;

private void createFaceAnalyzer() {
    MLFaceAnalyzerSetting setting =
            new MLFaceAnalyzerSetting.Factory()
                    .setFeatureType(MLFaceAnalyzerSetting.TYPE_FEATURES)
                    .setKeyPointType(MLFaceAnalyzerSetting.TYPE_UNSUPPORT_KEYPOINTS)
                    .setMinFaceProportion(0.1f)
                    .setTracingAllowed(true)
                    .create();
    this.analyzer = MLAnalyzerFactory.getInstance().getFaceAnalyzer(setting);
    this.analyzer.setTransactor(new MLAnalyzer.MLTransactor<MLFace>() {
        @Override
        public void destroy() {
        }

        @Override
        public void transactResult(MLAnalyzer.Result<MLFace> result) {
            SparseArray<MLFace> faceSparseArray = result.getAnalyseList();
            int flag = 0;
            for (int i = 0; i < faceSparseArray.size(); i++) {
                MLFaceEmotion emotion = faceSparseArray.valueAt(i).getEmotions();
                if (emotion.getSmilingProbability() > smilingPossibility) {
                    flag++;
                }
            }
            if (flag > faceSparseArray.size() * smilingRate && safeToTakePicture) {
                safeToTakePicture = false;
                mHandler.sendEmptyMessage(TAKE_PHOTO);
            }
        }
    });
}
Photographing and storage:
Code:
private void takePhoto() {
    this.mLensEngine.photograph(null,
            new LensEngine.PhotographListener() {
                @Override
                public void takenPhotograph(byte[] bytes) {
                    mHandler.sendEmptyMessage(STOP_PREVIEW);
                    Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
                    saveBitmapToDisk(bitmap);
                }
            });
}
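The saveBitmapToDisk() helper referenced above isn't shown in the post. One possible sketch, saving the photo through MediaStore (the helper name comes from the snippet above; the body is an assumption):
Code:
private void saveBitmapToDisk(Bitmap bitmap) {
    ContentValues values = new ContentValues();
    values.put(MediaStore.Images.Media.DISPLAY_NAME, "smile_" + System.currentTimeMillis() + ".jpg");
    values.put(MediaStore.Images.Media.MIME_TYPE, "image/jpeg");
    Uri uri = getContentResolver().insert(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, values);
    if (uri == null) {
        return;
    }
    try (OutputStream out = getContentResolver().openOutputStream(uri)) {
        // Write the captured frame as a JPEG file.
        bitmap.compress(Bitmap.CompressFormat.JPEG, 100, out);
    } catch (IOException e) {
        Log.e(TAG, "Failed to save the photo", e);
    }
}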
2. Create a visual engine to capture dynamic video streams from cameras and send the streams to the analyzer.
Code:
private void createLensEngine() {
    Context context = this.getApplicationContext();
    // Create a LensEngine.
    this.mLensEngine = new LensEngine.Creator(context, this.analyzer).setLensType(this.lensType)
            .applyDisplayDimension(640, 480)
            .applyFps(25.0f)
            .enableAutomaticFocus(true)
            .create();
}
3. Apply for permissions dynamically, and attach the analyzer and lens engine creation code.
Code:
@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    this.setContentView(R.layout.activity_live_face_analyse);
    if (savedInstanceState != null) {
        this.lensType = savedInstanceState.getInt("lensType");
    }
    this.mPreview = this.findViewById(R.id.preview);
    this.createFaceAnalyzer();
    this.findViewById(R.id.facingSwitch).setOnClickListener(this);
    // Check camera permissions.
    if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED) {
        this.createLensEngine();
    } else {
        this.requestCameraPermission();
    }
}
Code:
private void requestCameraPermission() {
    final String[] permissions = new String[]{Manifest.permission.CAMERA, Manifest.permission.WRITE_EXTERNAL_STORAGE};
    if (!ActivityCompat.shouldShowRequestPermissionRationale(this, Manifest.permission.CAMERA)) {
        ActivityCompat.requestPermissions(this, permissions, LiveFaceAnalyseActivity.CAMERA_PERMISSION_CODE);
        return;
    }
}
Code:
@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
        @NonNull int[] grantResults) {
    if (requestCode != LiveFaceAnalyseActivity.CAMERA_PERMISSION_CODE) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults);
        return;
    }
    if (grantResults.length != 0 && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
        this.createLensEngine();
        return;
    }
}
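The post doesn't show starting or releasing the camera pipeline. A minimal sketch, assuming mPreview is a SurfaceView, the preview starts in onResume, and everything is released in onDestroy (LensEngine.run() and release(), plus the analyzer's stop(), are the relevant methods):
Code:
@Override
protected void onResume() {
    super.onResume();
    if (this.mLensEngine != null) {
        try {
            // Start capturing frames and feeding them to the analyzer.
            this.mLensEngine.run(this.mPreview.getHolder());
        } catch (IOException e) {
            Log.e(TAG, "Failed to start the lens engine", e);
        }
    }
}

@Override
protected void onDestroy() {
    super.onDestroy();
    if (this.mLensEngine != null) {
        this.mLensEngine.release();
    }
    if (this.analyzer != null) {
        try {
            this.analyzer.stop();
        } catch (IOException e) {
            Log.e(TAG, "Failed to stop the analyzer", e);
        }
    }
}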
3 Conclusion
Is the development process simple or what? A new feature can be developed in 30 minutes. Let's experience the effect of the multi-face smile capture.
Multi-person smiling face snapshot:
Based on the face detection capability, what other functions can we build? Let your imagination run wild! Here are a few hints:
1. Add interesting decorative effects by identifying the locations of facial features such as ears, eyes, nose, mouth, and eyebrows.
2. Identify facial contours and stretch the contours to generate interesting portraits or develop facial beautification functions for contour areas.
3. Develop some parental control functions based on age identification and children’s infatuation with electronic products.
4. Develop the eye comfort feature by detecting the duration of eyes staring at the screen.
5. Implement liveness detection through random commands (such as shaking the head, blinking, and opening the mouth).
6. Recommend offerings to users based on their age and gender.
For details about the development guide, visit HUAWEI Developers.
Everyone's family has some old photos filed away in an album. Despite the simple backgrounds and casual poses, these photos reveal quite a bit, telling stories and providing insight on what life was like in the past.
In anticipation of his father's birthday, John, a programmer at Huawei, was racking his brains over what gift to get him. He thought about it for quite a while. Then suddenly, a glimpse at an old photo album piqued his interest. "Why not use my coding expertise to restore my father's old photo, and shed light on his youthful personality?", he mused. Intrigued by this thought, John started to look into how he could achieve this goal.
Image super-resolution in HUAWEI ML Kit was ultimately what he settled on. With this service, John was able to convert the wrinkled and blurry old photo into a hi-res image, and presented it to his father. His father was deeply touched by the gesture.
Actual Effects:
Image Super-Resolution
This service converts an unclear, low-resolution image into a high-resolution image, increasing pixel density and displaying details that were missed when the image was originally taken.
Image super-resolution is ideal for computer vision, where it helps enhance image recognition and analysis capabilities. The technology has improved rapidly and is playing a growing role in day-to-day work and life. It can be used to sharpen common images, such as portrait shots, as well as vital images in fields like medical imaging, security surveillance, and satellite imaging.
Image super-resolution offers both 1x and 3x super-resolution capabilities. 1x super-resolution removes compression noise, and 3x super-resolution effectively suppresses compression noise, while also providing a 3x enlargement capability.
The image super-resolution service can help enhance images of a wide range of objects, such as greenery, food, and employee ID cards. You can even use it to turn low-quality images, such as news images from the web, into clear, enlarged ones.
Development Preparations
For more details about configuring the Huawei Maven repository and integrating the image super-resolution SDK, please refer to the Development Guide of ML Kit on HUAWEI Developers.
Configuring the Integrated SDK
Open the build.gradle file in the app directory. Add build dependencies for the image super-resolution SDK under the dependencies block.
Code:
implementation 'com.huawei.hms:ml-computer-vision-imagesuperresolution:2.0.4.300'
implementation 'com.huawei.hms:ml-computer-vision-imagesuperresolution-model:2.0.4.300'
Configuring the AndroidManifest.xml File
Open the AndroidManifest.xml file in the main folder. Apply for the storage read permission as needed by adding the following statement before <application>:
Code:
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
Add the following statements in <application>. Then the app, after being installed, will automatically update the machine learning model to the device.
Code:
<meta-data
    android:name="com.huawei.hms.ml.DEPENDENCY"
    android:value="imagesuperresolution" />
Development Procedure
Configuring the Application for the Storage Read Permission
In onCreate() of MainActivity, check whether the app has the storage read permission. If not, apply for it through requestPermissions; if so, call startSuperResolutionActivity() to start super-resolution processing on the image.
Code:
if (ContextCompat.checkSelfPermission(this, Manifest.permission.READ_EXTERNAL_STORAGE)
        != PackageManager.PERMISSION_GRANTED) {
    ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.READ_EXTERNAL_STORAGE}, REQUEST_CODE);
} else {
    startSuperResolutionActivity();
}
Check the permission application results:
Code:
@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    if (requestCode == REQUEST_CODE) {
        if (grantResults[0] == PackageManager.PERMISSION_GRANTED) {
            startSuperResolutionActivity();
        } else {
            Toast.makeText(this, "Permission application failed, you denied the permission", Toast.LENGTH_SHORT).show();
        }
    }
}
After the permission request is complete, create a button that, when tapped, lets the user pick an image from storage.
Code:
private void selectLocalImage() {
    Intent intent = new Intent(Intent.ACTION_PICK, null);
    intent.setDataAndType(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, "image/*");
    startActivityForResult(intent, REQUEST_SELECT_IMAGE);
}
Configuring the Image Super-Resolution Analyzer
Before the app can perform super-resolution processing on the image, create and configure an analyzer. The example below configures two parameters for the 1x super-resolution capability and 3x super-resolution capability respectively. Which one of them is used depends on the value of selectItem.
Code:
private MLImageSuperResolutionAnalyzer createAnalyzer() {
    if (selectItem == INDEX_1X) {
        return MLImageSuperResolutionAnalyzerFactory.getInstance().getImageSuperResolutionAnalyzer();
    } else {
        MLImageSuperResolutionAnalyzerSetting setting = new MLImageSuperResolutionAnalyzerSetting.Factory()
                .setScale(MLImageSuperResolutionAnalyzerSetting.ISR_SCALE_3X)
                .create();
        return MLImageSuperResolutionAnalyzerFactory.getInstance().getImageSuperResolutionAnalyzer(setting);
    }
}
Constructing and Processing the Image
Before the app can perform super-resolution processing, convert the image into a bitmap in ARGB8888 color format and create an MLFrame object from it. After the image is selected, obtain its information by overriding onActivityResult.
Code:
@Override
protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == REQUEST_SELECT_IMAGE && resultCode == Activity.RESULT_OK) {
        if (data != null) {
            imageUri = data.getData();
        }
        reloadAndDetectImage(true, false);
    } else if (requestCode == REQUEST_SELECT_IMAGE && resultCode == Activity.RESULT_CANCELED) {
        finish();
    }
}
Create an MLFrame object using the bitmap.
Code:
srcBitmap = BitmapUtils.loadFromPathWithoutZoom(this, imageUri, IMAGE_MAX_SIZE, IMAGE_MAX_SIZE);
MLFrame frame = MLFrame.fromBitmap(srcBitmap);
Call the asynchronous method asyncAnalyseFrame to perform super-resolution processing on the image.
Code:
Task<MLImageSuperResolutionResult> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<MLImageSuperResolutionResult>() {
    @Override
    public void onSuccess(MLImageSuperResolutionResult result) {
        // Recognition success.
        desBitmap = result.getBitmap();
        setImage(desImageView, desBitmap);
        setImageSizeInfo(desBitmap.getWidth(), desBitmap.getHeight());
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Recognition failure.
        Log.e(TAG, "Failed." + e.getMessage());
        Toast.makeText(getApplicationContext(), e.getMessage(), Toast.LENGTH_SHORT).show();
    }
});
After the recognition is complete, stop the analyzer.
Code:
if (analyzer != null) {
    analyzer.stop();
}
Meanwhile, override onDestroy of the activity to release the bitmap resources.
Code:
@Override
protected void onDestroy() {
    super.onDestroy();
    if (srcBitmap != null) {
        srcBitmap.recycle();
    }
    if (desBitmap != null) {
        desBitmap.recycle();
    }
    if (analyzer != null) {
        analyzer.stop();
    }
}
References
To learn more, please visit:
Official webpages for Image Super-Resolution and ML Kit
HUAWEI Developers official website
Development Guide
Reddit to join developer discussions
GitHub or Gitee to download the demo and sample code
Stack Overflow to solve integration problems
Follow our official account for the latest HMS Core-related news and updates.
Original source
Overview
In this article, I will create a DoctorConsultApp Android application in which I will integrate HMS Core kits such as Huawei ID, Crash, and Analytics.
Huawei ID Service Introduction
Huawei ID login provides you with simple, secure, and quick sign-in and authorization functions. Instead of entering accounts and passwords and waiting for authentication, users can just tap the Sign in with HUAWEI ID button to quickly and securely sign in to your app with their HUAWEI IDs.
Prerequisite
Huawei phone with EMUI 3.0 or later.
Non-Huawei phone with Android 4.4 or later (API level 19 or higher).
HMS Core APK 4.0.0.300 or later.
Android Studio.
AppGallery account.
App Gallery Integration process
Sign In and Create or Choose a project on AppGallery Connect portal.
Navigate to Project settings and download the configuration file.
Navigate to General Information, and then provide Data Storage location.
App Development
Create A New Project.
Configure Project Gradle.
Code:
// Top-level build file where you can add configuration options common to all sub-projects/modules.
buildscript {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        classpath "com.android.tools.build:gradle:4.0.1"
        classpath 'com.huawei.agconnect:agcp:1.4.2.300'
        // NOTE: Do not place your application dependencies here; they belong
        // in the individual module build.gradle files
    }
}

allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}

task clean(type: Delete) {
    delete rootProject.buildDir
}
Configure App Gradle.
Code:
implementation 'com.huawei.hms:identity:5.3.0.300'
implementation 'com.huawei.agconnect:agconnect-auth:1.4.1.300'
implementation 'com.huawei.hms:hwid:5.3.0.302'
Configure AndroidManifest.xml.
Code:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
Create Activity class with XML UI.
MainActivity:
Code:
public class MainActivity extends AppCompatActivity {
    Toolbar t;
    DrawerLayout drawer;
    EditText nametext;
    EditText agetext;
    ImageView enter;
    RadioButton male;
    RadioButton female;
    RadioGroup rg;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        drawer = findViewById(R.id.draw_activity);
        t = (Toolbar) findViewById(R.id.toolbar);
        nametext = findViewById(R.id.nametext);
        agetext = findViewById(R.id.agetext);
        enter = findViewById(R.id.imageView7);
        male = findViewById(R.id.male);
        female = findViewById(R.id.female);
        rg = findViewById(R.id.rg1);
        ActionBarDrawerToggle toggle = new ActionBarDrawerToggle(this, drawer, t, R.string.navigation_drawer_open, R.string.navigation_drawer_close);
        drawer.addDrawerListener(toggle);
        toggle.syncState();
        NavigationView nav = findViewById(R.id.nav_view);
        enter.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                String name = nametext.getText().toString();
                String age = agetext.getText().toString();
                String gender = new String();
                int id = rg.getCheckedRadioButtonId();
                switch (id) {
                    case R.id.male:
                        gender = "Mr.";
                        break;
                    case R.id.female:
                        gender = "Ms.";
                        break;
                }
                Intent symp = new Intent(MainActivity.this, SymptomsActivity.class);
                symp.putExtra("name", name);
                symp.putExtra("gender", gender);
                startActivity(symp);
            }
        });
        nav.setNavigationItemSelectedListener(new NavigationView.OnNavigationItemSelectedListener() {
            @Override
            public boolean onNavigationItemSelected(@NonNull MenuItem menuItem) {
                switch (menuItem.getItemId()) {
                    case R.id.nav_sos:
                        Intent in = new Intent(MainActivity.this, call.class);
                        startActivity(in);
                        break;
                    case R.id.nav_share:
                        Intent myIntent = new Intent(Intent.ACTION_SEND);
                        myIntent.setType("text/plain");
                        startActivity(Intent.createChooser(myIntent, "SHARE USING"));
                        break;
                    case R.id.nav_hosp:
                        Intent browserIntent = new Intent(Intent.ACTION_VIEW);
                        browserIntent.setData(Uri.parse("https://www.google.com/maps/search/hospitals+near+me"));
                        startActivity(browserIntent);
                        break;
                    case R.id.nav_cntus:
                        Intent c_us = new Intent(MainActivity.this, activity_contact_us.class);
                        startActivity(c_us);
                        break;
                }
                drawer.closeDrawer(GravityCompat.START);
                return true;
            }
        });
    }
}
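The MainActivity above doesn't show the Huawei ID sign-in flow itself. Below is a minimal sketch using Account Kit's HuaweiIdAuthService from the hwid dependency added earlier (the request-code value and the logging are assumptions):
Code:
private static final int REQUEST_SIGN_IN = 1002;

private void signInWithHuaweiId() {
    HuaweiIdAuthParams authParams = new HuaweiIdAuthParamsHelper(HuaweiIdAuthParams.DEFAULT_AUTH_REQUEST_PARAM)
            .setIdToken()
            .createParams();
    HuaweiIdAuthService service = HuaweiIdAuthManager.getService(this, authParams);
    startActivityForResult(service.getSignInIntent(), REQUEST_SIGN_IN);
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == REQUEST_SIGN_IN) {
        Task<AuthHuaweiId> task = HuaweiIdAuthManager.parseAuthResultFromIntent(data);
        if (task.isSuccessful()) {
            // Sign-in succeeded; the account holds the user's ID token and profile.
            AuthHuaweiId account = task.getResult();
            Log.i("MainActivity", "Signed in as " + account.getDisplayName());
        } else {
            Log.e("MainActivity", "Sign-in failed: " + ((ApiException) task.getException()).getStatusCode());
        }
    }
}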
App Build Result
Tips and Tricks
Identity Kit displays the HUAWEI ID registration or sign-in page first. You can use the functions provided by the kit only after signing in with a registered HUAWEI ID.
Conclusion
In this article, we have learned how to integrate Huawei ID in an Android application. After reading this article, you can easily implement Huawei ID in the DoctorConsultApp application.
Thanks for reading! Be sure to like and comment on this article if you found it helpful. It means a lot to me.
References
HMS Docs:
https://developer.huawei.com/consum.../HMSCore-Guides/introduction-0000001050048870
Video Training:
https://developer.huawei.com/consumer/en/training/course/video/101583015541549183