How to Integrate HUAWEI ML Kit's Image Super-Resolution Capability - Huawei Developers

Have you ever been sent compressed images that have poor definition? Even when you zoom in, the image is still blurry. I recently received a ZIP file of travel photos from a trip I went on with a friend. After opening it, I found to my dismay that each image was either too dark, too dim or too blurry. How am I going to show off with such terrible photos? So, I sought help from the Internet, and luckily, I came across HUAWEI ML Kit's image super-resolution capability. The amazing thing is that this SDK is free of charge, and can be used with all Android phones.
Background
ML Kit's image super-resolution capability is backed by a deep neural network and provides two super-resolution capabilities for mobile apps:
1x super-resolution: Doesn’t change the size but removes compression noise, for vivid, natural images.
3x super-resolution: Provides 3x magnification (with 9x higher pixel resolution) while also removing compression noise, for clear details and textures.
Of course, these capabilities are not easy for non-professionals to develop themselves. But here is the sample code, so you can download and experience it yourself!
Application Scenarios
Image super-resolution is not just about optimizing faces and text; it’s useful in a huge range of situations. When shopping apps integrate 3x image super-resolution, they can give shoppers the option to zoom in on a product, without losing any image quality. For news and reading apps, the 1x image super-resolution capability can give users clearer images without changing the image resolution. Then, for photography apps which integrate image super-resolution, users can capture more vivid and natural images.
Code Development
Before we get started with API development, we need to make the necessary preparations. Also, ensure that the Maven repository address of the HMS Core SDK has been configured in your project and that the SDK of this service has been integrated.
Note
When using the image super-resolution capability, you need to set the value of targetSdkVersion to less than 29 in the build.gradle file.
1. Create an image super-resolution analyzer.
You can create the analyzer using the MLImageSuperResolutionAnalyzerSetting class.
Code:
// Method 1: Use the default parameter settings, that is, the 1x super-resolution capability.
MLImageSuperResolutionAnalyzer analyzer = MLImageSuperResolutionAnalyzerFactory.getInstance()
        .getImageSuperResolutionAnalyzer();
// Method 2: Use custom parameter settings. Currently, only 1x super-resolution is supported. More capabilities will be supported in the future.
MLImageSuperResolutionAnalyzerSetting setting = new MLImageSuperResolutionAnalyzerSetting.Factory()
        // Set the image super-resolution multiplier to 1x.
        .setScale(MLImageSuperResolutionAnalyzerSetting.ISR_SCALE_1X)
        .create();
MLImageSuperResolutionAnalyzer analyzer = MLImageSuperResolutionAnalyzerFactory.getInstance()
        .getImageSuperResolutionAnalyzer(setting);
2. Create an MLFrame object by using android.graphics.Bitmap. (Note that the bitmap must be in ARGB_8888 format; convert it if necessary, as shown in the sketch after the code below.)
Code:
// Create an MLFrame object using the bitmap. The parameter bitmap indicates the input image.
MLFrame frame = new MLFrame.Creator().setBitmap(bitmap).create();
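If your source bitmap is not already in ARGB_8888 format, it can be converted with the standard Android Bitmap API before building the MLFrame. Here is a minimal sketch, assuming sourceBitmap is the bitmap you decoded yourself:
Code:
// Convert the bitmap to ARGB_8888 if needed, then build the MLFrame from the converted copy.
Bitmap argbBitmap = sourceBitmap.getConfig() == Bitmap.Config.ARGB_8888
        ? sourceBitmap
        : sourceBitmap.copy(Bitmap.Config.ARGB_8888, false);
MLFrame frame = new MLFrame.Creator().setBitmap(argbBitmap).create();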
3. Perform super-resolution processing on the image. (For details about the error codes, please refer to ML Kit Error Codes).
Code:
Task<MLImageSuperResolutionResult> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<MLImageSuperResolutionResult>() {
    @Override
    public void onSuccess(MLImageSuperResolutionResult result) {
        // Processing logic for detection success.
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Processing logic for detection failure.
        if (e instanceof MLException) {
            MLException mlException = (MLException) e;
            // Obtain the error code. You can process the error code and customize the messages users see.
            int errorCode = mlException.getErrCode();
            // Obtain the error message. You can quickly locate the fault based on the error code.
            String errorMessage = mlException.getMessage();
        } else {
            // Other errors.
        }
    }
});
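As a hint for the success callback above, here is a minimal sketch of displaying the processed image, assuming that MLImageSuperResolutionResult exposes the output via getBitmap() (please verify against the current SDK reference) and that imageView is an ImageView in your own layout:
Code:
@Override
public void onSuccess(MLImageSuperResolutionResult result) {
    // Retrieve the super-resolved bitmap and show it on screen.
    Bitmap resultBitmap = result.getBitmap();
    if (resultBitmap != null) {
        imageView.setImageBitmap(resultBitmap);
    }
}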
4. Stop the analyzer to release recognition resources after the recognition is complete.
Code:
if (analyzer != null) {
    analyzer.stop();
}
Demo
Now, let's compare some before and after images, to see what this capability can do.
Original image
Image after 1x super-resolution
Original image
Image after 3x super-resolution
Related Links
With HUAWEI ML Kit, you can integrate a wide range of capabilities and services into your apps, including text recognition, card recognition, text translation, facial recognition, speech recognition, text to speech, image classification, image segmentation, and product visual search.
If you’re interested, go to HUAWEI ML Kit - About the Service to learn more.
To find out more about these capabilities and services, here are some useful resources:
https://developer.huawei.com/consumer/cn/doc/31403

How much time does it take to integrate ML Kit into a project? What is the use case of ML Kit's super-resolution capability?

Related

Detect Ingredients from Food Image: Combine the Forces of Huawei ML Kit and Image Kit

This week, I was asked to build an application that can detect base elements of food given in an image. This article will tell you about my journey building this application using Huawei Image Kit and Machine Learning Kit.
Huawei ML Kit provides many useful machine learning related features to developers and one of them is Image Classification. Thanks to this feature, ML Kit can describe images by classifying their elements. However, if you need to get the names of the elements that appear in an image, ML Kit may not provide the data you want efficiently. I will now show you how I tried to overcome this issue.
First of all, I started by integrating HMS Core into the application which is explained in detail here:
Android | Integrating Your Apps With Huawei HMS Core
Hi, this article explains how to integrate with HMS (Huawei Mobile Services) and how to configure AppGallery Connect Console project settings.
medium.com
Machine Learning Kit Part
After the HMS Core integration, I created a method which takes a bitmap and returns the classification of it. Thanks to Huawei ML Kit, this can be done in a few lines of code:
Java:
private void classifyPicture(Bitmap bitmap) {
    MLImageClassificationAnalyzer cloudAnalyzer = MLAnalyzerFactory.getInstance().getRemoteImageClassificationAnalyzer();
    MLFrame frame = MLFrame.fromBitmap(bitmap);
    Task<List<MLImageClassification>> task = cloudAnalyzer.asyncAnalyseFrame(frame);
    task.addOnSuccessListener(new OnSuccessListener<List<MLImageClassification>>() {
        @Override
        public void onSuccess(List<MLImageClassification> classifications) {
            // Here we get the classifications for the image.
        }
    }).addOnFailureListener(new OnFailureListener() {
        @Override
        public void onFailure(Exception e) {
        }
    });
}
When I tried this method, I observed that whenever I sent an image of a plate of food into it, ML Kit returned classifications like “Food, Dish, Meal, Lunch” instead of the actual elements of the food.
At this point, I understood that ML Kit wants to describe pictures in a general way, so it doesn’t always return the elements appearing in pictures.
Let’s see an example:
Before Image Kit
As you can see, even though the picture contains egg and salad, they don’t appear among the classifications.
Knowing that ML Kit has the power to detect these elements, I came up with an idea that would force ML Kit not to ignore them. My idea was to crop the image into small pieces and send them to ML Kit separately. I thought that ML Kit would be forced to return the elements in each piece this way, since there would be no other elements to complicate things.
Image Kit Part
At this point, I needed a way to crop the images in the background, so I decided to bring Image Kit into the field. Image Kit provides a layout element called “CropLayoutView” which helps users easily select a portion of an image to crop. However, I didn’t want users to interact with this view, because my aim was to crop the image in the background without the user even knowing.
Therefore, I decided to do a little trick by making this element invisible:
XML:
<com.huawei.hms.image.vision.crop.CropLayoutView
android:id="@+id/crop_layout_view"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:visibility="invisible" />
This way, the cropping process can be done in the background without showing anything in the UI.
Then, I wrote a method which takes a bitmap and returns a list of bitmaps consisting of cropped pieces of the main image:
Java:
private List<Bitmap> getCroppedImages(Bitmap selectedImageBitmap) {
    List<Bitmap> croppedImages = new ArrayList<>();
    croppedImages.add(selectedImageBitmap);
    int height = selectedImageBitmap.getHeight();
    int width = selectedImageBitmap.getWidth();
    croppedImages.add(cropImage(selectedImageBitmap, width * 20 / 100, height * 20 / 100, width * 80 / 100, height * 80 / 100));
    croppedImages.add(cropImage(selectedImageBitmap, width * 5 / 100, height * 25 / 100, width * 50 / 100, height * 75 / 100));
    croppedImages.add(cropImage(selectedImageBitmap, width * 50 / 100, height * 25 / 100, width * 95 / 100, height * 75 / 100));
    croppedImages.add(cropImage(selectedImageBitmap, width * 25 / 100, height * 5 / 100, width * 75 / 100, height * 50 / 100));
    croppedImages.add(cropImage(selectedImageBitmap, width * 25 / 100, height * 50 / 100, width * 75 / 100, height * 95 / 100));
    return croppedImages;
}

private Bitmap cropImage(Bitmap selectedImageBitmap, int left, int top, int right, int bottom) {
    cropLayoutView.setImageBitmap(selectedImageBitmap);
    cropLayoutView.setCropRect(new Rect(left, top, right, bottom));
    return cropLayoutView.getCroppedImage();
}
Let me explain what is going on here. First, we need the size of the image to decide how much to crop, so we get the height and width of the bitmap. Then, we define the crop regions using that height and width. The cropImage method simply crops the given image to the given rectangle with the help of Image Kit's CropLayoutView.
Note that the point where both coordinates are zero is the top-left corner of the image: moving from zero towards the height takes us towards the bottom of the image, and moving from zero towards the width takes us towards the right side. Keep this in mind when calculating your cropping points. For example, for a 1000 x 800 px bitmap, cropImage(selectedImageBitmap, 200, 160, 800, 640) crops the central region spanning from 20% to 80% of the width and height.
Here, I used 6 different images to be sent to ML Kit which are:
Uncropped, original version of the image
A piece from middle of the image
A piece from right side of the image
A piece from left side of the image
A piece from top side of the image
A piece from bottom side of the image
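To tie the two parts together, below is a minimal sketch of how the cropped pieces might be fed to the classifier and the labels merged. It reuses the analyzer setup from classifyPicture above and assumes MLImageClassification.getName() returns the label text (check the ML Kit reference); mergedLabels is a hypothetical field I introduce for collecting results:
Java:
private final Set<String> mergedLabels = Collections.synchronizedSet(new HashSet<String>());

private void classifyAllPieces(Bitmap selectedImageBitmap) {
    MLImageClassificationAnalyzer cloudAnalyzer =
            MLAnalyzerFactory.getInstance().getRemoteImageClassificationAnalyzer();
    for (Bitmap piece : getCroppedImages(selectedImageBitmap)) {
        MLFrame frame = MLFrame.fromBitmap(piece);
        cloudAnalyzer.asyncAnalyseFrame(frame)
                .addOnSuccessListener(new OnSuccessListener<List<MLImageClassification>>() {
                    @Override
                    public void onSuccess(List<MLImageClassification> classifications) {
                        // Collect the unique labels returned for this piece.
                        for (MLImageClassification classification : classifications) {
                            mergedLabels.add(classification.getName());
                        }
                    }
                })
                .addOnFailureListener(new OnFailureListener() {
                    @Override
                    public void onFailure(Exception e) {
                        // Ignore failures of individual pieces in this sketch.
                    }
                });
    }
}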
Conclusion
I used Image Kit to crop any image into pieces before sending it to ML Kit for image classification. Then, I sent these pieces to ML Kit separately and collected the results of each. I observed that this approach increased the number of elements detected dramatically.
Thanks to this approach, ML Kit now returns both egg and salad for the example picture that failed earlier:
After Image Kit
References
I have shown you the key points of the application but you can find the source code for the complete version on GitHub:
GitHub - bugra7/Xecipes
Contribute to bugra7/Xecipes development by creating an account on GitHub.
github.com
Note that you will not be able to run this application, because you don’t have its agconnect-services.json file. Therefore, you should only take it as a reference for creating your own application.
And lastly, if you have any questions, you are always welcome to ask them on HUAWEI Developer Forum:
HUAWEI Developer Forum | HUAWEI Developer
forums.developer.huawei.com
Hi, I have implemented scene detection using ML Kit, but it doesn't give me the correct result every time. I followed the official Huawei documentation for the implementation. Can you help me resolve this issue?

Using Motion Capture to Animate a Model

It's so rewarding to set the model you've created into motion. If only there were an easy way to do this… well, actually there is!
I had long sought out this kind of solution, and then voila! I got my hands on motion capture, a capability from HMS Core 3D Modeling Kit, which comes with technologies like human body detection, model acceleration, and model compression, as well as a monocular human pose estimation algorithm from the deep learning perspective.
Crucially, this capability does NOT require advanced devices — a mobile phone with an RGB camera is good enough on its own. The camera captures 3D data from 24 key skeletal points on the body, which the capability uses to seamlessly animate a model.
What makes the motion capture capability even better is its straightforward integration process, which I'd like to share with you.
Application Scenarios
Motion capture is ideal for 3D content creation for gaming, film & TV, and healthcare, among other similar fields. It can be used to animate characters and create videos for user generated content (UGC) games, animate virtual streamers in real time, and provide injury rehab, to cite just a few examples.
Integration Process
Preparations
Refer to the official instructions to complete all necessary preparations.
Configuring the Project
Before developing the app, there are a few more things you'll need to do: Configure app information in AppGallery Connect; make sure that the Maven repository address of the 3D Modeling SDK has been configured in the project, and that the SDK has been integrated.
1. Create a motion capture engine.
Java:
// Set necessary parameters as needed.
Modeling3dMotionCaptureEngineSetting setting = new Modeling3dMotionCaptureEngineSetting.Factory()
// Set the detection mode.
// Modeling3dMotionCaptureEngineSetting.TYPE_3DSKELETON_QUATERNION: skeleton point quaternions of a human pose.
// Modeling3dMotionCaptureEngineSetting.TYPE_3DSKELETON: skeleton point coordinates of a human pose.
.setAnalyzeType(Modeling3dMotionCaptureEngineSetting.TYPE_3DSKELETON_QUATERNION
| Modeling3dMotionCaptureEngineSetting.TYPE_3DSKELETON)
.create();
Modeling3dMotionCaptureEngine engine = Modeling3dMotionCaptureEngineFactory.getInstance().getMotionCaptureEngine(setting);
Modeling3dFrame encapsulates video frame or static image data sourced from a camera, as well as related data processing logic.
Customize the logic for processing the input video frames, to convert them to the Modeling3dFrame object for detection. The video frame format can be NV21.
Use android.graphics.Bitmap to convert the input image to the Modeling3dFrame object for detection. The image format can be JPG, JPEG, or PNG.
Java:
// Create a Modeling3dFrame object using a bitmap.
Modeling3dFrame frame = Modeling3dFrame.fromBitmap(bitmap);

// Create a Modeling3dFrame object using a video frame.
Modeling3dFrame.Property property = new Modeling3dFrame.Property.Creator().setFormatType(ImageFormat.NV21)
        // Set the frame width.
        .setWidth(width)
        // Set the frame height.
        .setHeight(height)
        // Set the video frame rotation angle.
        .setQuadrant(quadrant)
        // Set the video frame number.
        .setItemIdentity(frameIndex)
        .create();
Modeling3dFrame frame = Modeling3dFrame.fromByteBuffer(byteBuffer, property);
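For video input, the NV21 bytes typically come from a camera preview callback. Below is a minimal sketch using the legacy android.hardware.Camera API; the frameIndex counter and the already created engine are my own assumptions, and the buffer handling should be verified against the 3D Modeling Kit documentation:
Java:
// Feed NV21 preview frames from the camera into the motion capture engine.
camera.setPreviewCallback(new Camera.PreviewCallback() {
    private int frameIndex = 0; // Hypothetical counter used as the frame number.

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        Camera.Size previewSize = camera.getParameters().getPreviewSize();
        Modeling3dFrame.Property property = new Modeling3dFrame.Property.Creator()
                .setFormatType(ImageFormat.NV21)
                .setWidth(previewSize.width)
                .setHeight(previewSize.height)
                .setQuadrant(0)            // Adjust to match the device/display orientation.
                .setItemIdentity(frameIndex++)
                .create();
        Modeling3dFrame frame = Modeling3dFrame.fromByteBuffer(ByteBuffer.wrap(data), property);
        engine.asyncAnalyseFrame(frame);
    }
});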
2. Call the asynchronous or synchronous API for motion detection.
Sample code for calling the asynchronous API asyncAnalyseFrame:
Java:
Task<List<Modeling3dMotionCaptureSkeleton>> task = engine.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<List<Modeling3dMotionCaptureSkeleton>>() {
    @Override
    public void onSuccess(List<Modeling3dMotionCaptureSkeleton> results) {
        // Detection success.
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Detection failure.
    }
});
Sample code for calling the synchronous API analyseFrame:
Java:
SparseArray<Modeling3dMotionCaptureSkeleton> sparseArray = engine.analyseFrame(frame);
for (int i = 0; i < sparseArray.size(); i++) {
    // Process the detection result, for example sparseArray.valueAt(i).
}
3. Stop the motion capture engine to release detection resources once the detection is complete.
Java:
try {
    if (engine != null) {
        engine.stop();
    }
} catch (IOException e) {
    // Handle exceptions.
}
Result
To learn more, please visit:
>> 3D Modeling Kit official website
>> 3D Modeling Kit Development Guide
>> Reddit to join developer discussions
>> GitHub to download the sample code
>> Stack Overflow to solve integration problems
Follow our official account for the latest HMS Core-related news and updates.

How I Developed a Smile Filter for My App

I recently read an article that explained how we as human beings are hardwired to enter the fight-or-flight mode when we realize that we are being watched. This feeling is especially strong when somebody else is trying to take a picture of us, which is why many of us find it difficult to smile in photos. This effect is so strong that we've all had the experience of looking at a photo right after it was taken and noticing straight away that the photo needs to be retaken because our smile wasn't wide enough or didn't look natural. So, the next time someone criticizes my smile in a photo, I'm just going to tell them, "It's not my fault. It's literally an evolutionary trait!"
Or, instead of making such an excuse, what about turning to technology for help? Actually, I have tried using some photo editor apps to modify my portrait photos, making my facial expression look nicer by, for example, removing my braces, whitening my teeth, and erasing my smile lines. However, perhaps because of my rusty image editing skills, the modified images often turned out looking strange.
My lack of success with photo editing made me wonder: Wouldn't it be great if there was a function specially designed for people like me, who find it difficult to smile naturally in photos and who aren't good at photo editing, which could automatically give us picture-perfect smiles?
I then suddenly remembered that I had heard about an interesting function called smile filter that has been going viral on different apps and platforms. A smile filter is an app feature which can automatically add a natural-looking smile to a face detected in an image. I have tried it before and was really amazed by the result. In light of my sudden recall, I decided to create a demo app with a similar function, in order to figure out the principle behind it.
To provide my app with a smile filter, I chose to use the auto-smile capability provided by HMS Core Video Editor Kit. This capability automatically detects people in an image and then lightens up the detected faces with a smile (either closed- or open-mouth) that perfectly blends in with each person's facial structure. With the help of such a capability, a mobile app can create the perfect smile in seconds and save users from the hassle of having to use a professional image editing program.
Check the result out for yourselves:
Looks pretty natural, right? This is the result offered by my demo app integrated with the auto-smile capability. The original image looks like this:
Next, I will explain how I integrated the auto-smile capability into my app and share the relevant source code from my demo app.
Integration Procedure
Preparations
1. Configure necessary app information. This step requires you to register a developer account, create an app, generate a signing certificate fingerprint, configure the fingerprint, and enable required services.
2. Integrate the SDK of the kit.
3. Configure the obfuscation scripts.
4. Declare necessary permissions.
Project Configuration
1. Set the app authentication information. This can be done via an API key or an access token.
Using an API key: You only need to set the app authentication information once during app initialization.
Code:
MediaApplication.getInstance().setApiKey("your ApiKey");
Or, using an access token: You only need to set the app authentication information once during app initialization.
Code:
MediaApplication.getInstance().setAccessToken("your access token");
2. Set a License ID, which must be unique because it is used to manage the usage quotas of the service.
Code:
MediaApplication.getInstance().setLicenseId("License ID");
3. Initialize the runtime environment for the HuaweiVideoEditor object. Remember to release the HuaweiVideoEditor object when exiting the project.
Create a HuaweiVideoEditor object.
Code:
HuaweiVideoEditor editor = HuaweiVideoEditor.create(getApplicationContext());
Specify the preview area position. This area is used to render video images and is implemented by a SurfaceView created within the SDK. Before creating the area, specify its position in the app.
Code:
<LinearLayout
    android:id="@+id/video_content_layout"
    android:layout_width="0dp"
    android:layout_height="0dp"
    android:background="@color/video_edit_main_bg_color"
    android:gravity="center"
    android:orientation="vertical" />

// Specify the preview area position.
LinearLayout mSdkPreviewContainer = view.findViewById(R.id.video_content_layout);
// Specify the preview area layout.
editor.setDisplay(mSdkPreviewContainer);
Initialize the runtime environment. If license verification fails, LicenseException will be thrown.
After it is created, the HuaweiVideoEditor object does not occupy any system resources. You need to decide when to initialize its runtime environment; once you do, the necessary threads and timers are created within the SDK.
Code:
try {
    editor.initEnvironment();
} catch (LicenseException error) {
    SmartLog.e(TAG, "initEnvironment failed: " + error.getErrorMsg());
    finish();
    return;
}
Function Development
Code:
// Apply the auto-smile effect. Currently, this effect only supports image assets.
imageAsset.addFaceSmileAIEffect(new HVEAIProcessCallback() {
    @Override
    public void onProgress(int progress) {
        // Callback when the handling progress is received.
    }

    @Override
    public void onSuccess() {
        // Callback when the handling is successful.
    }

    @Override
    public void onError(int errorCode, String errorMessage) {
        // Callback when the handling failed.
    }
});
// Stop applying the auto-smile effect.
imageAsset.interruptFaceSmile();
// Remove the auto-smile effect.
imageAsset.removeFaceSmileAIEffect();
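For context, imageAsset above is an HVEImageAsset placed on the editor's timeline. Here is a minimal sketch of how it might be obtained, assuming the timeline and lane methods below match the current Video Editor Kit API and using a purely illustrative image path:
Code:
// Get the timeline from the initialized editor, add a video lane, and append the image to it.
HVETimeLine timeLine = editor.getTimeLine();
HVEVideoLane videoLane = timeLine.appendVideoLane();
// The returned HVEImageAsset is the object that the auto-smile effect is applied to.
HVEImageAsset imageAsset = videoLane.appendImageAsset("/sdcard/DCIM/portrait.jpg");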
And with that, I successfully integrated the auto-smile capability into my demo app, and now it can automatically add smiles to faces detected in the input image.
Conclusion
Research has demonstrated that it is normal for people to behave unnaturally when they are being photographed. Such unnaturalness becomes even more obvious when they try to smile. This explains why numerous social media apps and video/image editing apps have introduced smile filter functions, which allow users to easily and quickly add a natural-looking smile to faces in an image.
Among various solutions to such a function, HMS Core Video Editor Kit's auto-smile capability stands out by providing excellent, natural-looking results and featuring straightforward and quick integration.
What's better, the auto-smile capability can be used together with other capabilities from the same kit, to further enhance users' image editing experience. For example, when used in conjunction with the kit's AI color capability, you can add color to an old black-and-white photo and then use auto-smile to add smiles to the sullen expressions of the people in the photo. It's a great way to freshen up old and dreary photos from the past.
And that's just one way of using the auto-smile capability in conjunction with other capabilities. What ideas do you have? Looking forward to knowing your thoughts in the comments section.
References
How to Overcome Camera Shyness or Phobia
Introduction to Auto-Smile

How to Automatically Create a Scenic Timelapse Video

Have you ever watched a video of the northern lights? Mesmerizing light rays that swirl and dance through the star-encrusted sky. It's even more stunning when they are backdropped by crystal-clear waters that flow smoothly between and under ice crusts. Complementing each other, the moving sky and water compose a dynamic scene that reflects the constant rhythm of mother nature.
Now imagine that the video is frozen into an image: It still looks beautiful, but lacks the dynamism of the video. Such a contrast between still and moving images shows how videos are sometimes better than still images when it comes to capturing majestic scenery, since videos can convey more information and thus be more engaging.
This may be the reason why we sometimes regret just taking photos instead of capturing a video when we encounter beautiful scenery or a memorable moment.
In addition to this, when we try to add a static image to a short video, we will find that the transition between the image and other segments of the video appears very awkward, since the image is the only static segment in the whole video.
If we want to turn a static image into a dynamic video by adding some motion effects to the sky and water, one way to do this is to use a professional PC program to modify the image. However, this process is often very complicated and time-consuming: It requires adjustment of the timeline, frames, and much more, which can be a daunting prospect for amateur image editors.
Luckily, there are now numerous AI-driven capabilities that can automatically create time-lapse videos for users. I chose to use the auto-timelapse capability provided by HMS Core Video Editor Kit. It can automatically detect the sky and water in an image and produce vivid dynamic effects for them, just like this:
The movement speed and angle of the sky and water are customizable.
Now let's take a look at the detailed integration procedure for this capability, to better understand how such a dynamic effect is created.
Integration Procedure
Preparations
1. Configure necessary app information. This step requires you to register a developer account, create an app, generate a signing certificate fingerprint, configure the fingerprint, and enable the required services.
2. Integrate the SDK of the kit.
3. Configure the obfuscation scripts.
4. Declare necessary permissions.
Project Configuration
1. Set the app authentication information. This can be done via an API key or an access token.
Set an API key via the setApiKey method: You only need to set the app authentication information once during app initialization.
Code:
MediaApplication.getInstance().setApiKey("your ApiKey");
Or, set an access token by using the setAccessToken method: You only need to set the app authentication information once during app initialization.
Code:
MediaApplication.getInstance().setAccessToken("your access token");
2. Set a License ID. This ID should be unique because it is used to manage the usage quotas of the service.
Code:
MediaApplication.getInstance().setLicenseId("License ID");
3. Initialize the runtime environment for the HuaweiVideoEditor object. Remember to release the HuaweiVideoEditor object when exiting the project.
Create a HuaweiVideoEditor object.
Code:
HuaweiVideoEditor editor = HuaweiVideoEditor.create(getApplicationContext());
Specify the preview area position. This area is used to render video images and is implemented by a SurfaceView created within the SDK. Before creating the area, specify its position in the app.
Code:
<LinearLayout
    android:id="@+id/video_content_layout"
    android:layout_width="0dp"
    android:layout_height="0dp"
    android:background="@color/video_edit_main_bg_color"
    android:gravity="center"
    android:orientation="vertical" />

// Specify the preview area position.
LinearLayout mSdkPreviewContainer = view.findViewById(R.id.video_content_layout);
// Specify the preview area layout.
editor.setDisplay(mSdkPreviewContainer);
Initialize the runtime environment. If license verification fails, LicenseException will be thrown.
After it is created, the HuaweiVideoEditor object does not occupy any system resources. You need to decide when to initialize its runtime environment; once you do, the necessary threads and timers are created within the SDK.
Code:
try {
    editor.initEnvironment();
} catch (LicenseException error) {
    SmartLog.e(TAG, "initEnvironment failed: " + error.getErrorMsg());
    finish();
    return;
}
Function Development
Code:
// Initialize the auto-timelapse engine.
imageAsset.initTimeLapseEngine(new HVEAIInitialCallback() {
    @Override
    public void onProgress(int progress) {
        // Callback when the initialization progress is received.
    }

    @Override
    public void onSuccess() {
        // Callback when the initialization is successful.
    }

    @Override
    public void onError(int errorCode, String errorMessage) {
        // Callback when the initialization failed.
    }
});
// When the initialization is successful, check whether there is sky or water in the image.
// Note: motionType is assigned inside the callback below, so in a real app declare it as a
// member variable (or another effectively final holder) rather than a plain local variable.
int motionType = -1;
imageAsset.detectTimeLapse(new HVETimeLapseDetectCallback() {
    @Override
    public void onResult(int state) {
        // Record the state parameter, which is used to define a motion effect.
        motionType = state;
    }
});
// skySpeed indicates the speed at which the sky moves; skyAngle indicates the direction in which the sky moves; waterSpeed indicates the speed at which the water moves; waterAngle indicates the direction in which the water moves.
HVETimeLapseEffectOptions options =
        new HVETimeLapseEffectOptions.Builder().setMotionType(motionType)
                .setSkySpeed(skySpeed)
                .setSkyAngle(skyAngle)
                .setWaterAngle(waterAngle)
                .setWaterSpeed(waterSpeed)
                .build();
// Add the auto-timelapse effect.
imageAsset.addTimeLapseEffect(options, new HVEAIProcessCallback() {
    @Override
    public void onProgress(int progress) {
    }

    @Override
    public void onSuccess() {
    }

    @Override
    public void onError(int errorCode, String errorMessage) {
    }
});
// Stop applying the auto-timelapse effect.
imageAsset.interruptTimeLapse();
// Remove the auto-timelapse effect.
imageAsset.removeTimeLapseEffect();
Now, the auto-timelapse capability has been successfully integrated into an app.
Conclusion
When capturing scenic vistas, videos, which can show the dynamic nature of the world around us, are often a better choice than static images. In addition, when creating videos with multiple shots, dynamic pictures deliver a smoother transition effect than static ones.
However, users who are not familiar with the process of animating static images may find the results unsatisfying if they try to do so manually using computer software.
The good news is that there are now mobile apps integrated with capabilities such as Video Editor Kit's auto-timelapse feature that can create time-lapse effects for users. The generated effect appears authentic and natural, the capability is easy to use, and its integration is straightforward. With such capabilities in place, a video/image app can provide users with a more captivating user experience.
In addition to video/image editing apps, I believe the auto-timelapse capability can also be utilized by many other types of apps. What other kinds of apps do you think would benefit from such a feature? Let me know in the comments section.

HMS Core ML Kit Evolves Image Segmentation

Changing an image or video background has always been a hassle; the trickiest part is extracting the elements other than the background.
Traditionally, it requires us to use a PC image-editing program that allows us to select the element, add a mask, replace the canvas, and more. If the element has an extremely uneven border, then the whole process can be very time-consuming.
Luckily, ML Kit from HMS Core offers a solution that streamlines the process: the image segmentation service, which supports both images and videos. This service draws upon a deep learning framework, as well as detection and recognition technology. The service can automatically recognize — within seconds — the elements and scenario of an image or a video, delivering pixel-level recognition accuracy. Using a novel semantic segmentation framework, image segmentation labels each and every pixel in an image and supports 11 element categories, including humans, the sky, plants, food, buildings, and mountains.
This service is a great choice for entertainment apps. For example, an image-editing app can use the service to realize swift background replacement. A photo-taking app can count on this service to optimize different elements (for example, green plants) and make them appear more attractive.
Below is an example showing how the service works in an app.
Cutout is another field where image segmentation plays a role. Most cutout algorithms, however, cannot delicately determine fine border details such as those of hair. The team behind ML Kit's image segmentation has been working on algorithms designed for handling hair and highly hollowed-out subjects. As a result, the capability can now retain hair details during live streaming and image processing, delivering a better cutout effect.
Development Procedure
Before app development, there are some necessary preparations. In addition, the Maven repository address should be configured for the SDK, and the SDK should be integrated into the app project.
The image segmentation service offers three capabilities: human body segmentation, multiclass segmentation, and hair segmentation.
Human body segmentation: supports videos and images. The capability segments the human body from its background and is ideal for those who only need to segment the human body and background. The return value of this capability contains the coordinate array of the human body, human body image with a transparent background, and gray-scale image with a white human body and black background. Based on the return value, your app can further process an image to, for example, change the video background or cut out the human body.
Multiclass segmentation: offers the return value of the coordinate array of each element. For example, when the image processed by the capability contains four elements (human body, sky, plant, and cat & dog), the return value is the coordinate array of the four elements. Your app can further process these elements, such as replacing the sky.
Hair segmentation: segments hair from the background, with only images supported. The return value is a coordinate array of the hair element. For example, when the image processed by the capability is a selfie, the return value is the coordinate array of the hair element. Your app can then further process the element by, for example, changing the hair color.
Static Image Segmentation
1. Create an image segmentation analyzer.
Integrate the human body segmentation model package.
Code:
// Method 1: Use default parameter settings to configure the image segmentation analyzer.
// The default mode is human body segmentation in fine mode. All segmentation results of human body segmentation are returned (pixel-level label information, human body image with a transparent background, gray-scale image with a white human body and black background, and an original image for segmentation).
MLImageSegmentationAnalyzer analyzer = MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer();
// Method 2: Use MLImageSegmentationSetting to customize the image segmentation analyzer.
MLImageSegmentationSetting setting = new MLImageSegmentationSetting.Factory()
// Set whether to use fine segmentation. true indicates yes, and false indicates no (fast segmentation).
.setExact(false)
// Set the segmentation mode to human body segmentation.
.setAnalyzerType(MLImageSegmentationSetting.BODY_SEG)
// Set the returned result types.
// MLImageSegmentationScene.ALL: All segmentation results are returned (pixel-level label information, human body image with a transparent background, gray-scale image with a white human body and black background, and an original image for segmentation).
// MLImageSegmentationScene.MASK_ONLY: Only pixel-level label information and an original image for segmentation are returned.
// MLImageSegmentationScene.FOREGROUND_ONLY: A human body image with a transparent background and an original image for segmentation are returned.
// MLImageSegmentationScene.GRAYSCALE_ONLY: A gray-scale image with a white human body and black background and an original image for segmentation are returned.
.setScene(MLImageSegmentationScene.FOREGROUND_ONLY)
.create();
MLImageSegmentationAnalyzer analyzer = MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer(setting);
Integrate the multiclass segmentation model package.
When the multiclass segmentation model package is used for processing an image, an image segmentation analyzer can be created only by using MLImageSegmentationSetting.
Code:
MLImageSegmentationSetting setting = new MLImageSegmentationSetting
.Factory()
// Set whether to use fine segmentation. true indicates yes, and false indicates no (fast segmentation).
.setExact(true)
// Set the segmentation mode to image segmentation.
.setAnalyzerType(MLImageSegmentationSetting.IMAGE_SEG)
.create();
MLImageSegmentationAnalyzer analyzer = MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer(setting);
Integrate the hair segmentation model package.
When the hair segmentation model package is used for processing an image, a hair segmentation analyzer can be created only by using MLImageSegmentationSetting.
Code:
MLImageSegmentationSetting setting = new MLImageSegmentationSetting
.Factory()
// Set the segmentation mode to hair segmentation.
.setAnalyzerType(MLImageSegmentationSetting.HAIR_SEG)
.create();
MLImageSegmentationAnalyzer analyzer = MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer(setting);
2. Create an MLFrame object by using android.graphics.Bitmap for the analyzer to detect images. JPG, JPEG, and PNG images are supported. It is recommended that the image size range from 224 x 224 px to 1280 x 1280 px.
Code:
// Create an MLFrame object using the bitmap, which is the image data in bitmap format.
MLFrame frame = MLFrame.fromBitmap(bitmap);
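If the source image falls outside the recommended size range, it can be scaled down with standard Android APIs before the MLFrame is created. A minimal sketch (the 1280 px cap mirrors the recommendation above):
Code:
// Downscale very large bitmaps so that the longer edge does not exceed 1280 px.
int longerEdge = Math.max(bitmap.getWidth(), bitmap.getHeight());
if (longerEdge > 1280) {
    float scale = 1280f / longerEdge;
    bitmap = Bitmap.createScaledBitmap(bitmap,
            Math.round(bitmap.getWidth() * scale),
            Math.round(bitmap.getHeight() * scale),
            true);
}
MLFrame frame = MLFrame.fromBitmap(bitmap);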
3. Call asyncAnalyseFrame for image segmentation.
Code:
// Create a task to process the result returned by the analyzer.
Task<MLImageSegmentation> task = analyzer.asyncAnalyseFrame(frame);
// Asynchronously process the result returned by the analyzer.
task.addOnSuccessListener(new OnSuccessListener<MLImageSegmentation>() {
    @Override
    public void onSuccess(MLImageSegmentation segmentation) {
        // Callback when recognition is successful.
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Callback when recognition failed.
    }
});
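If the analyzer is configured with MLImageSegmentationScene.FOREGROUND_ONLY (as in method 2 above), the success callback receives the human body cutout with a transparent background. Here is a minimal sketch of using it, assuming the result exposes the cutout via getForeground() (check the ML Kit reference) and that imageView is an ImageView in your own layout:
Code:
@Override
public void onSuccess(MLImageSegmentation segmentation) {
    // Display the segmented human body (transparent background) in an ImageView.
    Bitmap foreground = segmentation.getForeground();
    if (foreground != null) {
        imageView.setImageBitmap(foreground);
    }
}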
4. Stop the analyzer and release the recognition resources when recognition ends.
Code:
if (analyzer != null) {
    try {
        analyzer.stop();
    } catch (IOException e) {
        // Exception handling.
    }
}
The asynchronous call mode is used in the preceding example. Image segmentation also supports synchronous call of the analyseFrame function to obtain the detection result:
Code:
SparseArray<MLImageSegmentation> segmentations = analyzer.analyseFrame(frame);
References
Home page of HMS Core ML Kit
Development Guide of HMS Core ML Kit
