For more information like this, you can visit the HUAWEI Developer Forum.
Original link: https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0202327021435990014&fid=0101187876626530001
Introduction
Image Kit provides two SDKs: the Image Vision SDK and the Image Render SDK. With them, you can add animations to your photos in minutes. The Image Render service provides 5 basic animation effects and 9 advanced effects.
Requirements
1. A Huawei device (non-Huawei devices are currently not supported).
2. EMUI 8.1 or later.
3. Minimum Android SDK version 26.
Use Cases
1. Image post-processing: provides 20 effects for processing images and producing high-quality results.
2. Theme design: applies animations to lock screens, wallpapers, and themes.
Steps
1. Create an app in Android Studio.
2. Configure the app in AGC.
3. Integrate the SDK into our new Android project.
4. Integrate the dependencies.
5. Sync the project.
Integration
Create an application in Android Studio.
App-level gradle dependencies:
Code:
apply plugin: 'com.android.application'
apply plugin: 'kotlin-android'
apply plugin: 'kotlin-android-extensions'
Image Kit dependency:
Code:
implementation 'com.huawei.hms:image-render:1.0.2.302'
Kotlin dependencies
Code:
implementation "org.jetbrains.kotlin:kotlin-stdlib:$kotlin_version"
Root-level gradle dependencies:
Code:
maven {url 'http://developer.huawei.com/repo/'}
classpath 'com.huawei.agconnect:agcp:1.3.1.300'
classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
To use the Image Render API, we need to provide resource files, including images and a manifest.xml file; the Image Render service parses this manifest.xml. The parameters supported by the ImageRender API are listed in the Image Kit documentation.
Create an ImageRender instance by calling the getInstance() method. To call this method, the app must implement the callback methods onSuccess() and onFailure(); onSuccess() is called when the ImageRender instance is successfully obtained.
Code:
fun initImageRender() {
    ImageRender.getInstance(this, object : RenderCallBack {
        override fun onSuccess(imageRender: ImageRenderImpl) {
            showToast("ImageRenderAPI success")
            imageRenderApi = imageRender
            useImageRender()
        }

        override fun onFailure(i: Int) {
            showToast("ImageRenderAPI failure, errorCode = $i")
        }
    })
}
To initialize the render service, we need to pass the source path and authJson; the service can be used only after authentication succeeds.
Code:
fun useImageRender() {
    val initResult: Int = imageRenderApi!!.doInit(sourcePath, authJson)
    showToast("DoInit result == $initResult")
    if (initResult == 0) {
        val renderView: RenderView = imageRenderApi!!.getRenderView()
        if (renderView.resultCode == ResultCode.SUCCEED) {
            val view = renderView.view
            if (null != view) {
                frameLayout!!.addView(view)
            }
        }
    } else {
        showToast("Do init fail, errorCode == $initResult")
    }
}
The Image Render service parses the image and script in sourcePath, and the getRenderView() API returns the rendered view to the app.
User interaction is supported for advanced animation views.
Code:
fun initAuthJson() {
    try {
        // `string` holds the authentication JSON obtained from AppGallery Connect.
        authJson = JSONObject(string)
    } catch (e: JSONException) {
        println(e)
    }
}
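The article does not show the content of authJson. According to the Image Kit documentation, it is a JSON string carrying the authentication parameters of your app from AppGallery Connect; the field names below follow that documentation and the values are placeholders, so verify them against the current docs before use.
Code:
{
    "projectId": "your-project-id",
    "appId": "your-app-id",
    "authApiKey": "your-api-key",
    "clientSecret": "your-client-secret",
    "clientId": "your-client-id",
    "token": "your-token"
}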
To play an animation, copy the corresponding effect resources to the source path and reinitialize the render view:
Code:
fun playAnimation(filterType: String) {
    if (!Utils.copyAssetsFilesToDirs(this, filterType, sourcePath!!)) {
        showToast("copy files failure, please check permissions")
        return
    }
    if (imageRenderApi != null && frameLayout!!.childCount > 0) {
        frameLayout!!.removeAllViews()
        imageRenderApi!!.removeRenderView()
        val initResult: Int = imageRenderApi!!.doInit(sourcePath, authJson)
        showToast("DoInit result == $initResult")
        if (initResult == 0) {
            val renderView: RenderView = imageRenderApi!!.getRenderView()
            if (renderView.resultCode == ResultCode.SUCCEED) {
                val view = renderView.view
                if (null != view) {
                    frameLayout!!.addView(view)
                }
            }
        } else {
            showToast("Do init fail, errorCode == $initResult")
        }
    }
}
Result:
Reference:
To know more about Image Kit, check the URLs below.
Image Kit:
https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides-V5/render-service-dev-0000001050197008-V5
Image Editor Article:
https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0201320928748890264&fid=0101187876626530001
GitHub:
https://github.com/DTSE-India-Community/HMS-Image-Kit
For more information about this, you can visit the HUAWEI Developer Forum.
Introduction
This article will guide you through using A/B testing in an Android project, and provides details for both HMS and GMS.
Steps
1. Create an app in Android Studio.
2. Configure the app in AGC.
3. Integrate the SDK into our new Android project.
4. Integrate the dependencies.
5. Sync the project.
Procedure
Step 1: Create an application in Android Studio.
HMS-related dependencies: add the dependencies below in the app-level build.gradle.
Code:
implementation 'com.huawei.agconnect:agconnect-remoteconfig:1.3.1.300'
apply plugin:'com.huawei.agconnect'
Add the dependencies below in the project-level build.gradle.
Code:
maven { url 'http://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.2.1.301'
GMS-related dependencies: add the dependencies below in the app-level build.gradle.
Code:
implementation 'com.google.android.gms:play-services-analytics:17.0.0'
implementation 'com.google.firebase:firebase-config:19.2.0'
Add the dependency below in the project-level build.gradle.
Code:
classpath 'com.google.gms:google-services:4.3.3'
Step 2: Create the MobileCheckService class. Using this class, you can identify whether the device has HMS or GMS.
Code:
class MobileCheckService {

    fun isGMSAvailable(context: Context?): Boolean {
        if (null != context) {
            val result: Int = GoogleApiAvailability.getInstance().isGooglePlayServicesAvailable(context)
            if (com.google.android.gms.common.ConnectionResult.SUCCESS == result) {
                return true
            }
        }
        return false
    }

    fun isHMSAvailable(context: Context?): Boolean {
        if (null != context) {
            val result: Int = HuaweiApiAvailability.getInstance().isHuaweiMobileServicesAvailable(context)
            if (com.huawei.hms.api.ConnectionResult.SUCCESS == result) {
                return true
            }
        }
        return false
    }
}
Step 3: Create an instance of MobileCheckService inside the activity class. Inside onCreate(), call checkAvailableMobileService(). This method checks whether the device has HMS or GMS.
Code:
private fun checkAvailableMobileService() {
    if (mCheckService.isHMSAvailable(this)) {
        Toast.makeText(baseContext, "HMS Mobile", Toast.LENGTH_LONG).show()
        configHmsTest()
    } else if (mCheckService.isGMSAvailable(this)) {
        Toast.makeText(baseContext, "GMS Mobile", Toast.LENGTH_LONG).show()
        configGmsTest()
    } else {
        Toast.makeText(baseContext, "NO Service", Toast.LENGTH_LONG).show()
    }
}
Step 4: If the device supports HMS, then use AGConnectConfig.
Code:
private fun configHmsTest() {
    val config = AGConnectConfig.getInstance()
    config.applyDefault(R.xml.remote_config_defaults)
    config.clearAll()
    config.fetch().addOnSuccessListener { configValues ->
        config.apply(configValues)
        config.mergedAll
        val sampleTest = config.getValueAsString("Festive_coupon")
        Toast.makeText(baseContext, sampleTest, Toast.LENGTH_LONG).show()
    }.addOnFailureListener {
        Toast.makeText(baseContext, "Fetch Fail", Toast.LENGTH_LONG).show()
    }
}
Step 5: If the device supports GMS, then use FirebaseRemoteConfig.
Code:
private fun configGmsTest() {
    val firebase = FirebaseRemoteConfig.getInstance()
    val configSettings = FirebaseRemoteConfigSettings.Builder().build()
    firebase.setConfigSettingsAsync(configSettings)
    firebase.setDefaultsAsync(R.xml.remote_config_defaults)
    firebase.fetch().addOnCompleteListener { task ->
        if (task.isSuccessful) {
            firebase.fetchAndActivate()
            val name = firebase.getString("Festive_coupon")
            Toast.makeText(baseContext, name, Toast.LENGTH_LONG).show()
        } else {
            Toast.makeText(baseContext, "Failed", Toast.LENGTH_LONG).show()
        }
    }
}
App Configuration in Firebase:
Note: For A/B testing using the HMS configuration, refer to:
https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0201248355275100167&fid=0101187876626530001
Step 1: To configure the app in Firebase, open the Firebase console: https://console.firebase.google.com/u/0/?pli=1
Step 2: Click Add project and add the required information, such as the app name, package name, and SHA-1.
Step 3: After the configuration succeeds, click A/B Testing in the Grow menu.
To start an A/B testing experiment, click Create experiment; it shows the list of supported experiment types. Using Firebase, you can run three kinds of experiments:
· Notifications
· Remote Config
· In-App Messaging
Notifications: used for sending messages to engage the right users at the right moment.
Remote Config: used to change app behavior dynamically through server-side configuration parameters.
In-App Messaging: used to send different in-app messages.
Step 4: Choose A/B Testing > Remote Config > Create a Remote Config experiment and provide the information required for the test, as follows.
Step 5: Choose A/B Testing > Remote Config > App_Behaviour; the following page is displayed.
Step 6: Click Start experiment; the A/B test is then triggered based on the experiment conditions.
Step 7: After the experiment completes successfully, we can get the report.
Conclusion:
Using A/B testing, you can control the entire experiment from the HMS or GMS dashboard; this form of testing is highly effective for developers.
Reference:
To know more about the Firebase console, follow this URL: https://firebase.google.com/docs/ab-testing
Share your thoughts on this article. If you have already worked with A/B tests, please share your experience of the differences between them with us.
For more information like this, you can visit the HUAWEI Developer Forum.
Introduction:
HMS ML Kit features the image super-resolution service, which provides 1x super-resolution capability. This feature removes the compression noise of images to obtain clear images.
Precautions:
1. Prior to using the image super-resolution service, it is necessary to convert images into bitmaps in ARGB format. After the service processes an image, the output is also a bitmap in ARGB format.
2. The maximum size of an input image is 1024 x 768 px or 768 x 1024 px. The minimum size is 64 x 64 px.
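As a reference for the first precaution, here is a minimal sketch of decoding a source image as an ARGB_8888 bitmap before passing it to the analyzer (imagePath is a placeholder for your image file path):
Code:
BitmapFactory.Options options = new BitmapFactory.Options();
// Decode into ARGB format as required by the service.
options.inPreferredConfig = Bitmap.Config.ARGB_8888;
Bitmap srcBitmap = BitmapFactory.decodeFile(imagePath, options);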
Integration:
1. Create a project in Android Studio and in Huawei AGC.
2. Provide the SHA-256 key in the App Information section.
3. Download the agconnect-services.json file from AGC and save it into the app directory.
4. In the root build.gradle, navigate to allprojects > repositories and buildscript > repositories and add the line below.
Code:
maven { url 'http://developer.huawei.com/repo/' }
5. In the app build.gradle, configure the Maven dependencies:
Code:
implementation 'com.huawei.hms:ml-computer-vision-imageSuperResolution:2.0.2.300'
implementation 'com.huawei.hms:ml-computer-vision-imageSuperResolution-model:2.0.2.300'
Apply plugin
Code:
apply plugin: 'com.huawei.agconnect'
6. Permissions in Manifest
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
Code Implementation:
Here is an image converter application that uses the HMS image super-resolution service. It can select an image from the gallery and also capture an image with the camera. Follow the steps below.
1. Create an image super-resolution analyzer.
Code:
private void createAnalyzer() {
    MLImageSuperResolutionAnalyzerSetting settings = new MLImageSuperResolutionAnalyzerSetting.Factory()
            // Set the scale of image super resolution to 1x.
            .setScale(MLImageSuperResolutionAnalyzerSetting.ISR_SCALE_1X)
            .create();
    analyzer = MLImageSuperResolutionAnalyzerFactory.getInstance().getImageSuperResolutionAnalyzer(settings);
}
2. Create an MLFrame object by using android.graphics.Bitmap.
Code:
MLFrame mlFrame = new MLFrame.Creator().setBitmap(srcBitmap).create();
3. Perform super-resolution processing on the image.
Code:
Task<MLImageSuperResolutionResult> task = analyzer.asyncAnalyseFrame(mlFrame);
task.addOnSuccessListener(new OnSuccessListener<MLImageSuperResolutionResult>() {
    @Override
    public void onSuccess(MLImageSuperResolutionResult result) {
        // Recognition success.
        Toast.makeText(getApplicationContext(), "Success", Toast.LENGTH_SHORT).show();
        setImage(result.getBitmap());
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Recognition failure.
        Toast.makeText(getApplicationContext(), "Failed:" + e.getMessage(), Toast.LENGTH_SHORT).show();
    }
});
4. After the recognition is complete, stop the analyzer to release recognition resources.
Code:
private void release() {
    if (analyzer == null) {
        return;
    }
    analyzer.stop();
}
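A typical place to call release() is when the activity is destroyed, for example:
Code:
@Override
protected void onDestroy() {
    super.onDestroy();
    // Stop the analyzer and release recognition resources.
    release();
}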
5. Capture a picture from the camera.
Code:
private void capturePictureFromCamera() {
    if (checkSelfPermission(Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED) {
        requestPermissions(new String[]{Manifest.permission.CAMERA}, MY_CAMERA_PERMISSION_CODE);
    } else {
        Intent cameraIntent = new Intent(android.provider.MediaStore.ACTION_IMAGE_CAPTURE);
        startActivityForResult(cameraIntent, CAMERA_REQUEST);
    }
}
6. Access an image from the gallery.
Code:
private void getImageFromGallery() {
    Intent intent = new Intent();
    intent.setAction(Intent.ACTION_GET_CONTENT);
    intent.setType("image/*");
    startActivityForResult(intent, GALLERY_REQUEST);
}
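The article does not show how the returned image reaches the analyzer. Below is a minimal onActivityResult() sketch under a few assumptions: CAMERA_REQUEST, GALLERY_REQUEST, and srcBitmap come from this sample, and analyzeImage() is a hypothetical helper that wraps steps 2 and 3 above.
Code:
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (resultCode != RESULT_OK || data == null) {
        return;
    }
    try {
        if (requestCode == CAMERA_REQUEST) {
            // The camera intent returns a thumbnail bitmap in the "data" extra.
            srcBitmap = (Bitmap) data.getExtras().get("data");
        } else if (requestCode == GALLERY_REQUEST) {
            // Load the selected image from its content URI.
            srcBitmap = MediaStore.Images.Media.getBitmap(getContentResolver(), data.getData());
        }
        if (srcBitmap != null) {
            // Hypothetical helper: creates an MLFrame from srcBitmap and calls asyncAnalyseFrame (steps 2 and 3).
            analyzeImage(srcBitmap);
        }
    } catch (IOException e) {
        Toast.makeText(this, "Failed to load image: " + e.getMessage(), Toast.LENGTH_SHORT).show();
    }
}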
Screenshots:
Conclusion:
The image super-resolution service is widely used in day-to-day life and supports common scenarios such as improving low-quality images on the network, obtaining clear images while reading news, and improving the clarity of identification card images. This service intelligently reduces image noise and provides a clear image without changing the resolution.
Reference:
HMSCore-Guides
Are there any image size limitations?
How to reset the mobile app?
I have installed an emulator to play the world's best game on my computer, but the emulator is not running smoothly.
Please share some good techniques to run it well so that I can enjoy the game.
Thank you.
Cool thing, I'll definitely have to try it, especially with my old photos.
Does this work with every image? I have some old black-and-white photos.
JohnMes said:
Cool thing, I'll definitely have to try it, especially with my old photos.
It will be a good choice.
Rushikesh787 said:
Does this work with every image? I have some old black-and-white photos.
I think it's worth a try. Maybe we'll get a pleasant surprise.
Will it convert images of all formats?
I would definitely want to try this out
Awesome
I would definitely try this app on my girlfriend's photo
When users search for places, they may not specify exactly what aspect of that place they are interested in. Search results should include information about both parent nodes (the place itself) and child nodes (related information), because it makes it easier for users to find the information they're looking for. For example, if a user searches for an airport, your app can also return information about child nodes, such as terminals, parking lots, and entrances and exits. This enables your app to provide more scenario-specific results, making it easier for users to explore their surroundings.
This post shows you how you can integrate Site Kit into your app and return information about both parent and child nodes for the places your users search for.
1. Preparations
Before you get started, there are a few preparations you'll need to make. First, make sure that you have configured the Maven repository address of the Site SDK in your project and integrated the Site SDK.
1.1 Configure the Maven repository address in the project-level build.gradle file.
Code:
buildscript {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
    // Add a dependency on the AppGallery Connect plugin.
    dependencies {
        classpath "com.android.tools.build:gradle:3.3.2"
    }
}
Code:
allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
1.2 Add a dependency on the Site SDK in the build.gradle file in the app directory.
Code:
dependencies {
    implementation 'com.huawei.hms:site:4.0.0.202'
}
2. Development Procedure
2.1 Create a SearchService object.
Code:
SearchService searchService = SearchServiceFactory.create(this, Utils.getApiKey());
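Note that the API key from AppGallery Connect should be URL-encoded before it is passed to the SDK (Utils.getApiKey() in this sample is assumed to take care of that). A minimal sketch of the encoding step, with "your_api_key" as a placeholder:
Code:
String encodedKey;
try {
    // Encode the raw API key so special characters do not break the request.
    encodedKey = URLEncoder.encode("your_api_key", "utf-8");
} catch (UnsupportedEncodingException e) {
    encodedKey = "your_api_key";
}
SearchService searchService = SearchServiceFactory.create(this, encodedKey);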
2.2 Create the SearchResultListener class so your app can process the search result.
The SearchResultListener class implements the SearchResultListener<TextSearchResponse> interface. The onSearchResult(TextSearchResponse results) method in this class is used to obtain the search result and implement the specific service.
Code:
SearchResultListener<TextSearchResponse> resultListener = new SearchResultListener<TextSearchResponse>() {
    @Override
    public void onSearchResult(TextSearchResponse results) {
        Log.d(TAG, "onTextSearchResult: " + results.toString());
        List<Site> siteList;
        if (results == null || results.getTotalCount() <= 0 || (siteList = results.getSites()) == null || siteList.size() <= 0) {
            resultTextView.setText("Result is Empty!");
            return;
        }
        for (Site site : siteList) {
            // Handle the search result as needed.
            ....
            // Obtain information about child nodes.
            if (site.getPoi() != null) {
                ChildrenNode[] childrenNodes = site.getPoi().getChildrenNodes();
                // Handle the information as needed.
                ....
            }
        }
    }

    @Override
    public void onSearchError(SearchStatus status) {
        resultTextView.setText("Error : " + status.getErrorCode() + " " + status.getErrorMessage());
    }
};
2.3 Create the TextSearchRequest class and set the request parameters.
Code:
TextSearchRequest request = new TextSearchRequest();
String query = "Josep Tarradellas Airport";
request.setQuery(query);
Double lat = 41.300621;
Double lng = 2.0797638;
request.setLocation(new Coordinate(lat, lng));
// Set to obtain child node information.
request.setChildren(true);
2.4 Set a request result handler and bind it with the request.
Code:
searchService.textSearch(request, resultListener);
Once you have completed the steps above, your app will be able to return information about both the parent node and its child nodes. The attachment in the original post shows how the search results look.
You can find the source code on GitHub.
For more details, you can go to:
· Our official website
· Our development guide
· Reddit to join our developer discussion
· GitHub to download demos and sample codes
· Stack Overflow to solve any integration problems
Nice write up.
Image segmentation technology is gathering steam thanks to developments in multiple fields. Take the autonomous vehicle as an example: it has been developing rapidly since last year and has become a showpiece for both well-established companies and start-ups. Most of them use computer vision, which includes image segmentation, as the technical basis for self-driving cars, and it is image segmentation that allows a car to understand the situation on the road and to tell the road from the people.
Image segmentation is not only applied to autonomous vehicles, but is also used in a number of different fields, including:
· Medical imaging, where it helps doctors make diagnoses and perform tests
· Satellite image analysis, where it helps analyze tons of data
· Media apps, where it cuts people out of videos to prevent bullet comments from obstructing them
It is a widespread application. I myself am also a fan of this technology. Recently, I've tried an image segmentation service from HMS Core ML Kit, which I found outstanding. This service has an original framework for semantic segmentation, which labels each and every pixel in an image, so the service can clearly, completely cut out something as delicate as a hair. The service also excels at processing images with different qualities and dimensions. It uses algorithms of structured learning to prevent white borders — which is a common headache of segmentation algorithms — so that the edges of the segmented image appear more natural.
I'm delighted to be able to share my experience of implementing this service here.
Preparations
First, configure the Maven repository and integrate the SDK of the service. I followed the instructions here to complete all of these steps.
1. Configure the Maven repository address
Code:
buildscript {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        ...
        classpath 'com.huawei.agconnect:agcp:1.4.1.300'
    }
}

allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
2. Add build dependencies
Code:
dependencies {
    // Import the base SDK.
    implementation 'com.huawei.hms:ml-computer-vision-segmentation:2.1.0.301'
    // Import the package of the human body segmentation model.
    implementation 'com.huawei.hms:ml-computer-vision-image-segmentation-body-model:2.1.0.303'
}
3. Add the permission in the AndroidManifest.xml file.
Code:
<!-- Permission to write to external storage. -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
Development Procedure
1. Dynamically request the necessary permissions
Java:
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    if (!allPermissionsGranted()) {
        getRuntimePermissions();
    }
}

private boolean allPermissionsGranted() {
    for (String permission : getRequiredPermissions()) {
        if (!isPermissionGranted(this, permission)) {
            return false;
        }
    }
    return true;
}

private void getRuntimePermissions() {
    List<String> allNeededPermissions = new ArrayList<>();
    for (String permission : getRequiredPermissions()) {
        if (!isPermissionGranted(this, permission)) {
            allNeededPermissions.add(permission);
        }
    }
    if (!allNeededPermissions.isEmpty()) {
        ActivityCompat.requestPermissions(
                this, allNeededPermissions.toArray(new String[0]), PERMISSION_REQUESTS);
    }
}

private static boolean isPermissionGranted(Context context, String permission) {
    if (ContextCompat.checkSelfPermission(context, permission) == PackageManager.PERMISSION_GRANTED) {
        return true;
    }
    return false;
}

private String[] getRequiredPermissions() {
    try {
        PackageInfo info = this.getPackageManager()
                .getPackageInfo(this.getPackageName(), PackageManager.GET_PERMISSIONS);
        String[] ps = info.requestedPermissions;
        if (ps != null && ps.length > 0) {
            return ps;
        } else {
            return new String[0];
        }
    } catch (RuntimeException e) {
        throw e;
    } catch (Exception e) {
        return new String[0];
    }
}
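To complete the flow, the permission result can be handled in onRequestPermissionsResult(); here is a minimal sketch that reuses the helpers above:
Java:
@Override
public void onRequestPermissionsResult(int requestCode, String[] permissions, int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    if (requestCode == PERMISSION_REQUESTS && allPermissionsGranted()) {
        // All required permissions granted; the app can now read and write images.
    }
}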
2. Create an image segmentation analyzer
Java:
MLImageSegmentationSetting setting = new MLImageSegmentationSetting.Factory()
        // Set the segmentation mode to human body segmentation.
        .setAnalyzerType(MLImageSegmentationSetting.BODY_SEG)
        .create();
this.analyzer = MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer(setting);
3. Use android.graphics.Bitmap to create an MLFrame object for the analyzer to detect images
Java:
MLFrame mlFrame = new MLFrame.Creator().setBitmap(this.originBitmap).create();
4. Call asyncAnalyseFrame for image segmentation
Java:
// Create a task to process the result returned by the analyzer.
Task<MLImageSegmentation> task = this.analyzer.asyncAnalyseFrame(mlFrame);
// Asynchronously process the result returned by the analyzer.
task.addOnSuccessListener(new OnSuccessListener<MLImageSegmentation>() {
    @Override
    public void onSuccess(MLImageSegmentation mlImageSegmentationResults) {
        if (mlImageSegmentationResults != null) {
            // Obtain the human body segment cut out from the image.
            foreground = mlImageSegmentationResults.getForeground();
            preview.setImageBitmap(MainActivity.this.foreground);
        }
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        return;
    }
});
5. Change the image background
Java:
// Obtain an image from the album.
backgroundBitmap = Utils.loadFromPath(this, id, targetedSize.first, targetedSize.second);
BitmapDrawable drawable = new BitmapDrawable(backgroundBitmap);
preview.setBackground(drawable);
preview.setImageBitmap(this.foreground);
MLFrame mlFrame = new MLFrame.Creator().setBitmap(this.originBitmap).create();
Result
To learn more, please visit:
>> HUAWEI Developers official website
>> Development Guide
>> Reddit to join developer discussions
>> GitHub to download the sample code
>> Stack Overflow to solve integration problems
Follow our official account for the latest HMS Core-related news and updates.
Background
Oh how great it is to be able to reset bank details from the comfort of home and avoid all the hassle of going to the bank, queuing up, and proving you are who you say you are.
All of this has become possible with the help of some tech magic known as face verification, which is perfect for verifying a user's identity remotely. I have been curious about how the tech works, so here it is: I decided to integrate the face verification service from HMS Core ML Kit into a demo app. Below is how I did it.
Development Process
Preparations
1. Make the necessary configurations as detailed here.
2. Configure the Maven repository address for the face verification service.
Open the project-level build.gradle file of the Android Studio project.
Add the Maven repository address and AppGallery Connect plugin. Go to allprojects > repositories and configure the Maven repository address for the face verification service.
Code:
allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
Go to buildscript > repositories to configure the Maven repository address.
Code:
buildscript {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
Go to buildscript > dependencies to add the plugin configuration.
Code:
buildscript {
    dependencies {
        classpath 'com.huawei.agconnect:agcp:1.3.1.300'
    }
}
Function Building
1. Create an instance of the face verification analyzer.
Code:
MLFaceVerificationAnalyzer analyzer = MLFaceVerificationAnalyzerFactory.getInstance().getFaceVerificationAnalyzer();
2. Create an MLFrame object via android.graphics.Bitmap. This object is used to set the face verification template image whose format can be JPG, JPEG, PNG, or BMP.
Code:
// Create an MLFrame object.
MLFrame templateFrame = MLFrame.fromBitmap(bitmap);
3. Set the template image. If the template image does not contain a face, the setting fails and the face verification service continues to use the previously set template.
Code:
List<MLFaceTemplateResult> results = analyzer.setTemplateFace(templateFrame);
for (int i = 0; i < results.size(); i++) {
    // Process the result of face detection in the template.
}
4. Use android.graphics.Bitmap to create an MLFrame object that is used to set the image for comparison. The image format can be JPG, JPEG, PNG, or BMP.
Code:
// Create an MLFrame object.
MLFrame compareFrame = MLFrame.fromBitmap(bitmap);
5. Perform face verification by calling the asynchronous or synchronous method. The returned verification result (MLFaceVerificationResult) contains the facial information obtained from the comparison image and the confidence that the faces in the comparison image and the template image belong to the same person.
Asynchronous method:
Code:
Task<List<MLFaceVerificationResult>> task = analyzer.asyncAnalyseFrame(compareFrame);
task.addOnSuccessListener(new OnSuccessListener<List<MLFaceVerificationResult>>() {
    @Override
    public void onSuccess(List<MLFaceVerificationResult> results) {
        // Callback when the verification is successful.
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Callback when the verification fails.
    }
});
Synchronous method:
Code:
SparseArray<MLFaceVerificationResult> results = analyzer.analyseFrame(compareFrame);
for (int i = 0; i < results.size(); i++) {
    // Process the verification result.
}
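For reference, here is a sketch of how the comparison result could be consumed. It assumes that MLFaceVerificationResult exposes the similarity score via getSimilarity() (check the current ML Kit API reference) and uses a purely illustrative threshold of 0.7.
Code:
for (int i = 0; i < results.size(); i++) {
    MLFaceVerificationResult result = results.valueAt(i);
    // Similarity between the face in the comparison image and the template face.
    float similarity = result.getSimilarity();
    // 0.7 is an illustrative threshold; tune it for your own scenario.
    boolean samePerson = similarity > 0.7f;
    Log.d(TAG, "similarity = " + similarity + ", same person: " + samePerson);
}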
6. When verification is complete, stop the analyzer to release the resources it occupies.
Code:
if (analyzer != null) {
    analyzer.stop();
}
This is how the face verification function is built. This kind of tech not only saves hassle, but is great for honing my developer skills.
References
Face Verification from HMS Core ML Kit
Why Facial Verification is the Biometric Technology for Financial Services in 2022