Heat Map with Huawei Map Kit - Huawei Developers

Huawei provides a map SDK for developers. With this SDK, we can show markers, cluster them, and draw routes and shapes. If you are not familiar with Huawei Map Kit, you can check this article for more information.
In this article, we will focus on how to use heat maps with Huawei Map Kit via an external library. Besides heat maps, this library also makes it possible to integrate extra flexible features such as customizable marker clustering, an icon generator, polyline decoding and encoding, spherical geometry, KML, and GeoJSON.
Huawei Map Kit was released at the end of 2019 and is updated very frequently by Huawei core developers, but it does not have an official helper utility library yet.
So what should we do if we need a feature which Huawei Map Kit does not support?
Power of Open Source
Google shares this Utility Library on GitHub, so other developers can see the code and contribute to it.
When we look at the repo, we see that the library contains some Google Map related classes.
Code:
import com.google.android.gms.maps.model.LatLng;
import com.google.android.gms.maps.model.Tile;
import com.google.android.gms.maps.model.TileProvider;
Since Huawei Map core SDK is very similar to Google Map SDK, we can replace them with Huawei related classes.
Code:
import com.huawei.hms.maps.model.LatLng;
import com.huawei.hms.maps.model.Tile;
import com.huawei.hms.maps.model.TileProvider;
The above example is only for the HeatmapTileProvider class. We should replace every GMS-related class in the library with its HMS counterpart; otherwise we will get type mismatch errors.
After replacing all Google dependencies with Huawei dependencies, we can use the library with Huawei Map Kit.
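The replacement is mechanical enough to script. Here is a minimal sketch, assuming the library sources are available as plain text; a real migration would walk the source tree and rewrite each file (the package prefixes are the real ones, everything else is illustrative):

```java
// Sketch: rewrite GMS Maps imports to their HMS equivalents in a source string.
class ImportRewriter {
    static String rewrite(String source) {
        return source
                // Model classes (LatLng, Tile, TileProvider, ...)
                .replace("com.google.android.gms.maps.model", "com.huawei.hms.maps.model")
                // Remaining Maps classes (GoogleMap -> HuaweiMap lives here too)
                .replace("com.google.android.gms.maps", "com.huawei.hms.maps");
    }
}
```

Class names that differ between the SDKs (such as GoogleMap vs. HuaweiMap) still need a manual pass; the prefix rewrite only covers the shared model classes.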
Heat Map
Heat maps are a different way of representing data using colors.
The library has components that enable us to easily make heat maps. One of them is the WeightedLatLng class, which can store latitude, longitude and the weight (intensity). The other one is simple LatLng class.
In this example we will use the simple LatLng class. Find a dummy JSON file which contains latitude and longitude data, just like below.
Code:
[
  {
    "lat": -35.9798,
    "lng": 142.917
  },
  {
    "lat": -38.3363,
    "lng": 143.783
  },
  ....
]
If we want to use WeightedLatLng, we also need an intensity value.
Code:
[
  {
    "density": 123.4,
    "lat": -35.9798,
    "lng": 142.917
  },
  {
    "density": 123.4,
    "lat": -38.3363,
    "lng": 143.783
  },
  ...
]
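When weighted data is rendered, intensities above the provider's maxIntensity are clamped. A tiny sketch of that normalization (the value 1000.0 matches the maxIntensity used in this article's builder call; the helper itself is illustrative, not part of the library API):

```java
// Sketch: clamp a raw density at maxIntensity and return the fraction in
// [0, 1] that the color gradient is effectively sampled with.
class WeightClamp {
    static double fraction(double density, double maxIntensity) {
        return Math.min(density, maxIntensity) / maxIntensity;
    }
}
```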
Put json file under the raw folder.
Code:
private ArrayList<LatLng> readItems(int resource) throws JSONException {
    ArrayList<LatLng> list = new ArrayList<>();
    InputStream inputStream = getResources().openRawResource(resource);
    String json = new Scanner(inputStream).useDelimiter("\\A").next();
    JSONArray array = new JSONArray(json);
    for (int i = 0; i < array.length(); i++) {
        JSONObject object = array.getJSONObject(i);
        double lat = object.getDouble("lat");
        double lng = object.getDouble("lng");
        list.add(new LatLng(lat, lng));
    }
    return list;
}
Then we read this raw file and convert it to a list of LatLng. Now our data is ready.
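The Scanner/useDelimiter("\\A") idiom in readItems() reads the whole stream as a single token. A self-contained sketch of the same trick against an in-memory stream instead of a raw resource:

```java
import java.io.InputStream;
import java.util.Scanner;

// Sketch: "\\A" matches only the start of input, so the Scanner's first
// token is the entire stream contents.
class SlurpStream {
    static String slurp(InputStream in) {
        Scanner s = new Scanner(in, "UTF-8").useDelimiter("\\A");
        return s.hasNext() ? s.next() : "";
    }
}
```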
After that, we will be creating a HeatMapTileProvider object using its builder.
Code:
mProvider = new HeatmapTileProvider.Builder().data(mDataList).build();
Use weightedData instead of data if you use WeightedLatLng:
Code:
mProvider = new HeatmapTileProvider.Builder()
        .weightedData(data)   // load our weighted data
        .radius(50)           // optional, in pixels, can be anything between 20 and 50
        .maxIntensity(1000.0) // set the maximum intensity
        .build();
Lastly, we have to create a TileOverlay and add this overlay to our map.
Code:
mOverlay = hMap.addTileOverlay(new TileOverlayOptions().tileProvider(mProvider));
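Behind the scenes, the map asks the TileProvider for tiles addressed by x/y/zoom coordinates. The standard Web Mercator ("slippy map") math below shows which tile a given LatLng falls into at a given zoom; this is general map math, not an HMS-specific API:

```java
// Sketch: convert a lat/lng pair to Web Mercator tile coordinates.
class TileMath {
    static int[] tileFor(double lat, double lng, int zoom) {
        int n = 1 << zoom; // tiles per axis at this zoom
        int x = (int) Math.floor((lng + 180.0) / 360.0 * n);
        double latRad = Math.toRadians(lat);
        int y = (int) Math.floor(
                (1.0 - Math.log(Math.tan(latRad) + 1.0 / Math.cos(latRad)) / Math.PI) / 2.0 * n);
        return new int[] { x, y };
    }
}
```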
Let's run the project and move the camera.
Customizing the Heat Map
We can style the heat map according to our requirements.
For example, we can change the display colors by passing a Gradient to the tile provider builder.
Code:
private static final int[] ALT_HEATMAP_GRADIENT_COLORS = {
    Color.argb(0, 0, 255, 255),           // transparent
    Color.argb(255 / 3 * 2, 0, 255, 255),
    Color.rgb(0, 191, 255),
    Color.rgb(0, 0, 127),
    Color.rgb(255, 0, 0)
};

public static final float[] ALT_HEATMAP_GRADIENT_START_POINTS = {
    0.0f, 0.10f, 0.20f, 0.60f, 1.0f
};

public static final Gradient ALT_HEATMAP_GRADIENT = new Gradient(ALT_HEATMAP_GRADIENT_COLORS,
        ALT_HEATMAP_GRADIENT_START_POINTS);
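To see how the start points partition the intensity range, here is a small sketch that picks the color stop for a given intensity fraction (the real Gradient class also interpolates between adjacent colors, which is omitted here; the helper is illustrative, not part of the library API):

```java
// Sketch: find the index of the last color stop whose start point is at or
// below the given intensity fraction in [0, 1].
class GradientBuckets {
    static int bucketFor(float fraction, float[] startPoints) {
        int bucket = 0;
        for (int i = 0; i < startPoints.length; i++) {
            if (fraction >= startPoints[i]) {
                bucket = i;
            }
        }
        return bucket;
    }
}
```

With the start points above, a fraction of 0.5 falls into the third stop (index 2), so mid-intensity areas render in the deep blue band before the gradient shifts toward red.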
Set the gradient while creating the provider, or after it has been created:
Code:
mProvider.setGradient(ALT_HEATMAP_GRADIENT);
mOverlay.clearTileCache();

// or while creating
mProvider = new HeatmapTileProvider.Builder()
        .weightedData(data) // load our weighted data
        .radius(50)
        .maxIntensity(1000.0)
        .gradient(ALT_HEATMAP_GRADIENT)
        .build();
We can also change the opacity and the radius of our heat map layer.
Code:
mProvider.setRadius(10); // Default 20
mProvider.setOpacity(0.4); // Default 0.7
mOverlay.clearTileCache();

// or while creating
mProvider = new HeatmapTileProvider.Builder()
        .weightedData(data) // load our weighted data
        .maxIntensity(1000.0)
        .gradient(ALT_HEATMAP_GRADIENT)
        .radius(10)
        .opacity(0.4)
        .build();
Summary
Google provides a utility library for Google Map, but by replacing the GMS-related classes with their HMS counterparts, we can use this utility library with Huawei Map Kit until Huawei develops its own.
You can also check this repo where all GMS related classes are replaced with HMS related classes.
For the original, you can visit https://forums.developer.huawei.com/forumPortal/en/topic/0203417811482880018

Very interesting.

Really helpful

Very useful. Can we show the shortest path using Huawei Map Kit?

Nice. Thank you.

Related

Solve a Word Search Game using Firebase ML Kit and Huawei ML Kit

Allow your users the freedom to choose their Android platform providing the same feature
Some time ago I developed a Word Search game solver Android application using the services from Firebase ML Kit.
Solve WordSearch games with Android and ML Kit
A Kotlin ML Kit Data Structure & Algorithm Story
It was an interesting trip discovering the features of a framework that lets developers use AI capabilities without knowing all the rocket science behind them.
Specifically, I used the document recognition feature to extract text from a word search game image.
After the text recognition phase, the output was cleaned and arranged into a matrix to be processed by the solver algorithm. The algorithm looked for all the words formed by grouping letters according to the rules of the game: contiguous letters in any straight direction (vertical, horizontal, diagonal).
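The direction scan described above can be sketched in plain Java. This is an illustrative reimplementation of the idea, not the app's actual solver code, and the grid and words used are made up for the demo:

```java
// Sketch: check whether a word appears in the letter grid along any of the
// eight straight directions, starting from any cell.
class WordSearch {
    private static final int[][] DIRS = {
            {0, 1}, {0, -1}, {1, 0}, {-1, 0}, {1, 1}, {1, -1}, {-1, 1}, {-1, -1}
    };

    static boolean contains(char[][] grid, String word) {
        for (int r = 0; r < grid.length; r++) {
            for (int c = 0; c < grid[0].length; c++) {
                for (int[] d : DIRS) {
                    if (matches(grid, word, r, c, d[0], d[1])) return true;
                }
            }
        }
        return false;
    }

    private static boolean matches(char[][] grid, String word, int r, int c, int dr, int dc) {
        for (int i = 0; i < word.length(); i++) {
            int rr = r + i * dr, cc = c + i * dc;
            if (rr < 0 || rr >= grid.length || cc < 0 || cc >= grid[0].length) return false;
            if (grid[rr][cc] != word.charAt(i)) return false;
        }
        return true;
    }
}
```

A real solver would run this check for every dictionary word, or walk the grid once collecting all candidate strings and filtering them against the dictionary.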
This app ran well on all the Android devices capable of running the Google Firebase SDK and the Google Mobile Services (GMS).
Since the second half of last year all new Huawei devices cannot run the GMS any more due to government restrictions, you can read more about this here:
[Update 14: Temporary License Extended Again]
Google has revoked Huawei's Android license
www.xda-developers.com
My app could not run on the brand new Huawei devices, so I looked for a way to make this case study app run on the new Huawei terminals.
Let’s follow my journey…
The Discovery of HMS ML Kit
I went through the Huawei documentation on the HUAWEI Developer site, the official site for HUAWEI developers.
Here you can find many SDKs AKA Kits offering a set of smart features to the developers.
I’ve found one offering the features that I was looking for: HMS ML Kit. It is quite similar to the one from Firebase as it allows the developer to use Machine Learning capabilities like Image, Text, Face recognition and so on.
Huawei ML Kit
In particular, for my specific use case, I used the text analyzer, which can run locally and take advantage of neural processing on NPU hardware.
Documentation HMS ML Kit Text recognition
Integrating HMS ML Kit was super easy. If you want to give it a try, it's just a matter of adding a dependency to your build.gradle file, enabling the service from the AppGallery web dashboard if you want to use the Cloud API, downloading the agconnect-services.json configuration file, and using it in your app.
You can refer to the official guide here for the needed steps:
Documentation HMS ML Kit
Architectural Approach
My first desire was to maintain and deploy only one apk so I wanted to integrate both the Firebase ML Kit SDK and the HMS ML Kit one.
I thought about the main feature: decode the image and get back the detected text together with the bounding boxes surrounding each character, so the spotted text can be displayed to the user.
This was defined by this interface
Code:
package com.laquysoft.wordsearchai.textrecognizer

import android.graphics.Bitmap

interface DocumentTextRecognizer {
    fun processImage(bitmap: Bitmap, success: (Document) -> Unit, error: (String?) -> Unit)
}
I’ve also defined my own data classes to have a common output format from both services
Code:
data class Symbol(
    val text: String?,
    val rect: Rect,
    val idx: Int = 0,
    val length: Int = 0
)

data class Document(val stringValue: String, val count: Int, val symbols: List<Symbol>)
Document represents the text result returned by the ML Kit services. It contains a list of Symbol (the recognized characters), each with its own character, the bounding box surrounding it (Rect), and its index in the detected string, since both ML Kit services group several characters into a single string with one bounding box.
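The per-character expansion described above can be sketched without the SDK types. This toy version encodes each character with its index and the word's length; the "c@0/3" string encoding is purely illustrative, not the app's real Symbol class:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: expand one recognized word (which shares a single bounding box)
// into per-character entries carrying the index and total length, mirroring
// the forEachIndexed loops in the recognizers below.
class SymbolExpander {
    static List<String> expand(String word) {
        List<String> symbols = new ArrayList<>();
        for (int idx = 0; idx < word.length(); idx++) {
            symbols.add(word.charAt(idx) + "@" + idx + "/" + word.length());
        }
        return symbols;
    }
}
```

The index/length pair is what lets the UI later split the word's single bounding box evenly across its characters.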
Then I created an object that instantiates the right service depending on which one (HMS or GMS) is running on the device:
Code:
object DocumentTextRecognizerService {

    private fun getServiceType(context: Context) = when {
        isGooglePlayServicesAvailable(context) -> ServiceType.GOOGLE
        isHuaweiMobileServicesAvailable(context) -> ServiceType.HUAWEI
        else -> ServiceType.GOOGLE
    }

    private fun isGooglePlayServicesAvailable(context: Context): Boolean {
        return GoogleApiAvailability.getInstance()
            .isGooglePlayServicesAvailable(context) == ConnectionResult.SUCCESS
    }

    private fun isHuaweiMobileServicesAvailable(context: Context): Boolean {
        return HuaweiApiAvailability.getInstance()
            .isHuaweiMobileServicesAvailable(context) == com.huawei.hms.api.ConnectionResult.SUCCESS
    }

    fun create(context: Context): DocumentTextRecognizer {
        val type = getServiceType(context)
        if (type == ServiceType.HUAWEI)
            return HMSDocumentTextRecognizer()
        return GMSDocumentTextRecognizer()
    }
}
This was pretty much all it took to make it work.
The ViewModel can use the service provided:
Code:
class WordSearchAiViewModel(
    private val resourceProvider: ResourceProvider,
    private val recognizer: DocumentTextRecognizer
) : ViewModel() {

    val resultList: MutableLiveData<List<String>> = MutableLiveData()
    val resultBoundingBoxes: MutableLiveData<List<Symbol>> = MutableLiveData()

    private lateinit var dictionary: List<String>

    fun detectDocumentTextIn(bitmap: Bitmap) {
        loadDictionary()
        recognizer.processImage(bitmap, {
            postWordsFound(it)
            postBoundingBoxes(it)
        },
        {
            Log.e("WordSearchAIViewModel", it)
        })
    }
The right recognizer is instantiated when the WordSearchAiViewModel itself is instantiated.
Running the app and choosing a word search game image on a Mate 30 Pro (an HMS device) shows this result
The Recognizer Brothers
You can check the code of the two recognizers below. Each one uses its SDK-specific implementation to get the result and adapt it to the common interface; you could use virtually any other service capable of doing the same.
Code:
package com.laquysoft.wordsearchai.textrecognizer

import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

class GMSDocumentTextRecognizer : DocumentTextRecognizer {

    private val detector = FirebaseVision.getInstance().onDeviceTextRecognizer

    override fun processImage(
        bitmap: Bitmap,
        success: (Document) -> Unit,
        error: (String?) -> Unit
    ) {
        val firebaseImage = FirebaseVisionImage.fromBitmap(bitmap)
        detector.processImage(firebaseImage)
            .addOnSuccessListener { firebaseVisionDocumentText ->
                if (firebaseVisionDocumentText != null) {
                    val words = firebaseVisionDocumentText.textBlocks
                        .flatMap { it.lines }
                        .flatMap { it.elements }
                    val symbols: MutableList<Symbol> = mutableListOf()
                    words.forEach {
                        val rect = it.boundingBox
                        if (rect != null) {
                            it.text.forEachIndexed { idx, value ->
                                symbols.add(Symbol(value.toString(), rect, idx, it.text.length))
                            }
                        }
                    }
                    val document = Document(
                        firebaseVisionDocumentText.text,
                        firebaseVisionDocumentText.textBlocks.size,
                        symbols
                    )
                    success(document)
                }
            }
            .addOnFailureListener { error(it.localizedMessage) }
    }
}
Code:
package com.laquysoft.wordsearchai.textrecognizer

import android.graphics.Bitmap
import com.huawei.hms.mlsdk.MLAnalyzerFactory
import com.huawei.hms.mlsdk.common.MLFrame

class HMSDocumentTextRecognizer : DocumentTextRecognizer {

    //private val detector = MLAnalyzerFactory.getInstance().remoteDocumentAnalyzer
    private val detector = MLAnalyzerFactory.getInstance().localTextAnalyzer

    override fun processImage(
        bitmap: Bitmap,
        success: (Document) -> Unit,
        error: (String?) -> Unit
    ) {
        val hmsFrame = MLFrame.fromBitmap(bitmap)
        detector.asyncAnalyseFrame(hmsFrame)
            .addOnSuccessListener { mlDocument ->
                if (mlDocument != null) {
                    val words = mlDocument.blocks
                        .flatMap { it.contents }
                        .flatMap { it.contents }
                    val symbols: MutableList<Symbol> = mutableListOf()
                    words.forEach {
                        val rect = it.border
                        it.stringValue.forEachIndexed { idx, value ->
                            symbols.add(Symbol(value.toString(), rect, idx, it.stringValue.length))
                        }
                    }
                    val document = Document(
                        mlDocument.stringValue,
                        mlDocument.blocks.size,
                        symbols
                    )
                    success(document)
                }
            }
            .addOnFailureListener { error(it.localizedMessage) }
    }
}
Conclusion
As good Android developers, we should develop and deploy our apps on all the platforms our users can reach, love, and adopt, without excluding anyone.
We should spend some time trying to give users the same experience everywhere. This is a small sample of that, and more will come in the future.

How to use HUAWEI ML Kit service to quickly develop a photo translation app

A photo translation app is quite useful when traveling abroad, and this article will help developers build such an app in a short time. We use HUAWEI ML Kit to build the app, which largely accelerates the whole development process.
Introduction
There must be a lot of friends who like to travel. Sometimes it's better to go abroad for a tour. Before the tour, we will make all kinds of strategies for eating, wearing, living, traveling and playing routes.
Imaginary tourism:
Before departure, the imagined tourist destination may have beautiful buildings:
Delicious food
Beautiful women
Carefree life
Actual tourism:
But in reality, if you go to a place where the language is different from your mother tongue, you may encounter the following problems:
A confusing map
Unreadable menu
Street sign
It's too hard to travel abroad without any translation tool!
Photo translator will help you
With text recognition and translation services, none of the above is a problem. There are only two steps to complete the development of this small photo translation application:
Text recognition
First take a photo, then send the image to the Huawei HMS ML Kit text recognition service.
Huawei's text recognition service provides both an on-device (offline) SDK and a cloud API. The on-device part is free and works in real time, while the cloud API supports more recognition types with higher accuracy. In this example, we use the cloud-side capabilities.
Photo translation app development
1 Development preparation
Since cloud services are used, you need to register a developer account with the Huawei Developer Alliance and enable these services in the cloud. We won't go into details here; just follow the official AppGallery Connect configuration and service enabling steps.
To register as a developer and enable the services, please go to:
https://developer.huawei.com/consumer/en/doc/development/HMS-Guides/ml-enable-service
1.1 Add the Maven repository in the project-level gradle file
Open the Android Studio project-level build.gradle file and add the Maven address:
Code:
buildscript {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}
allprojects {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}
1.2 Add the SDK dependencies in the application-level build.gradle
Integrate the SDK. (Since only cloud-side capabilities are used, only the SDK base packages need to be introduced.)
Code:
dependencies {
    implementation 'com.huawei.hms:ml-computer-vision:1.0.2.300'
    implementation 'com.huawei.hms:ml-computer-translate:1.0.2.300'
}
1.3 Apply for camera and storage permissions in the AndroidManifest.xml file
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
Two key steps of code development
2.1 Dynamic permission request
Code:
private static final int CAMERA_PERMISSION_CODE = 1;

@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // Checking camera permission
    if (!allPermissionsGranted()) {
        getRuntimePermissions();
    }
}
2.2 Create a cloud text analyzer. You can create a text analyzer from the text detection configurator MLRemoteTextSetting.
Code:
MLRemoteTextSetting setting = new MLRemoteTextSetting.Factory()
        .setTextDensityScene(MLRemoteTextSetting.OCR_LOOSE_SCENE)
        .create();
this.textAnalyzer = MLAnalyzerFactory.getInstance().getRemoteTextAnalyzer(setting);
2.3 Create an MLFrame object through android.graphics.Bitmap for the analyzer to detect pictures.
Code:
MLFrame mlFrame = new MLFrame.Creator().setBitmap(this.originBitmap).create();
2.4 Call the asyncAnalyseFrame method for text detection.
Code:
Task<MLText> task = this.textAnalyzer.asyncAnalyseFrame(mlFrame);
task.addOnSuccessListener(new OnSuccessListener<MLText>() {
    @Override
    public void onSuccess(MLText mlText) {
        // Transacting logic for recognition success.
        if (mlText != null) {
            RemoteTranslateActivity.this.remoteDetectSuccess(mlText);
        } else {
            RemoteTranslateActivity.this.displayFailure();
        }
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Transacting logic for recognition failure.
        RemoteTranslateActivity.this.displayFailure();
    }
});
2.5 Create a text translator. You can create a translator through the MLRemoteTranslateSetting class.
Code:
MLRemoteTranslateSetting.Factory factory = new MLRemoteTranslateSetting
        .Factory()
        // Set the target language code. The ISO 639-1 standard is used.
        .setTargetLangCode(this.dstLanguage);
if (!this.srcLanguage.equals("AUTO")) {
    // Set the source language code. The ISO 639-1 standard is used.
    factory.setSourceLangCode(this.srcLanguage);
}
this.translator = MLTranslatorFactory.getInstance().getRemoteTranslator(factory.create());
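Since both language codes must follow ISO 639-1, a quick pre-check against the JDK's Locale data can catch typos before making the remote call. This helper is an assumption-level sketch, not part of the ML Kit API; the "AUTO" sentinel from the snippet above is allowed through for the source language:

```java
import java.util.Arrays;
import java.util.Locale;

// Sketch: validate a language code against the JDK's ISO 639-1 list before
// passing it to setTargetLangCode/setSourceLangCode.
class LangCode {
    static boolean isValid(String code) {
        if ("AUTO".equals(code)) return true; // source-language auto-detect sentinel
        return Arrays.asList(Locale.getISOLanguages()).contains(code);
    }
}
```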
2.6 Call the asyncTranslate method to translate the content obtained by text recognition.
Code:
final Task<String> task = translator.asyncTranslate(this.sourceText);
task.addOnSuccessListener(new OnSuccessListener<String>() {
    @Override
    public void onSuccess(String text) {
        if (text != null) {
            RemoteTranslateActivity.this.remoteDisplaySuccess(text);
        } else {
            RemoteTranslateActivity.this.displayFailure();
        }
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        RemoteTranslateActivity.this.displayFailure();
    }
});
2.7 release resources after translation.
Code:
if (this.textAnalyzer != null) {
    try {
        this.textAnalyzer.close();
    } catch (IOException e) {
        SmartLog.e(RemoteTranslateActivity.TAG, "Stop analyzer failed: " + e.getMessage());
    }
}
if (this.translator != null) {
    this.translator.stop();
}
3 source code
The demo source code has been uploaded to GitHub (the project directory is Photo translate). You can use it as a reference and optimize it for your scenario.
https://github.com/HMS-MLKit/HUAWEI-HMS-MLKit-Sample
4 demo
5 Brainstorming
This app demonstrates how to use two cloud-side capabilities of Huawei HMS ML Kit: text recognition and translation. These capabilities can also help developers build many other interesting and powerful functions, such as:
[General text recognition]
1. Text recognition of bus license plates
2. Text recognition in document reading
[Card recognition]
1. The card number of a bank card can be identified through text recognition, for scenarios such as bank card binding
2. Besides bank cards, you can also identify various other card numbers in your life, such as membership and discount cards
3. It can also identify ID cards, Hong Kong and Macao passes, and other certificate numbers
[Translation]
1. Signpost and signboard translation
2. Document translation
3. Web page translation, such as identifying the language of a website's comment section and translating it into the language of the corresponding country
4. Translation of overseas product descriptions
5. Translation of restaurant menus
For more information, please visit:
https://developer.huawei.com/consumer/en/doc/development/HMS-Guides/ml-introduction-4
Reply to rikkirose
rikkirose said:
Thanks for the guide. I'm not sure that this application is suitable for high-quality translation of documents, as machine translators do this poorly, but otherwise it looks very simple and convenient.
Hi rikkirose, document translation is not yet supported; it is expected to be supported in August this year. Currently, for high-quality translation, the key optimized domains are news, travel, technology, and social. If your content is not within those scopes and you really want to try it, you can provide us a sample, and we can do verification and quality improvement for you.
Please feel free to email the sample and detailed requirements to: [email protected]
Hi,
Nice post. Can we use Huawei ML Kit to translate our communication into other languages? It would help tourists to communicate. Is it possible?
Very interesting, thanks

Develop an ID Photo DIY Applet with HMS MLKit SDK in 30 Min

For more information like this, you can visit the HUAWEI Developer Forum.
Original link: https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0201257812100840239&fid=0101187876626530001
This is application-level development, and we won't go through the image segmentation algorithm itself. We use Huawei ML Kit, which provides the image segmentation capability, to develop this app. Developers will learn how to quickly develop an ID photo DIY applet using this SDK.
Background
I don’t know if you have had such an experience. All of a sudden, schools or companies needed to provide one inch or two inch head photos of individuals. They needed to apply for a passport or student card which have requirements for the background color of the photos. However, many people don’t have time to take photos at the photo studio. Or they have taken them before, but the background color of the photos doesn’t meet the requirements. I had a similar experience. At that time, the school asked for a passport, and the school photo studio was closed again. I took photos with my mobile phone in a hurry, and then used the bedspread as the background to deal with it. As a result, I was scolded by the teacher.
Many years later, ML Kit machine learning offers an image segmentation function. Using this SDK to develop a small ID photo DIY program could perfectly solve the embarrassment of that year.
Here is the demo for the result.
Pretty effective, isn't it? We just need to write a small program to achieve this quickly!
Core Tip: This SDK is free, and all Android models are covered!
ID photo development in practice
1. Preparation
1.1 Add the Huawei Maven Repository in the Project-Level Gradle
Open the Android studio project level build.gradle file.
Add the following Maven addresses:
Code:
buildscript {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}
allprojects {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}
1.2 Add SDK Dependency in Application Level build.gradle
Introduce the basic SDK and the image segmentation body model package:
Code:
dependencies {
    implementation 'com.huawei.hms:ml-computer-vision:1.0.2.300'
    implementation 'com.huawei.hms:ml-computer-vision-image-segmentation-body-model:1.0.2.301'
}
1.3 Add the Model in the AndroidManifest.xml File
To let the application automatically update the latest machine learning model to the user's device after the user installs it from the Huawei AppGallery, add the following statement to the application's AndroidManifest.xml file:
Code:
<manifest ...>
    <application ...>
        <meta-data
            android:name="com.huawei.hms.ml.DEPENDENCY"
            android:value="imgseg" />
    </application>
</manifest>
1.4 Apply for Camera and Storage Permission in Android manifest.xml File
Code:
<!-- Uses storage permissions -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
2. Two Key Steps of Code Development
2.1 Dynamic Permission Request
Code:
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    if (!allPermissionsGranted()) {
        getRuntimePermissions();
    }
}

@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
        @NonNull int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    if (requestCode != PERMISSION_REQUESTS) {
        return;
    }
    boolean isNeedShowDiag = false;
    for (int i = 0; i < permissions.length; i++) {
        if (permissions[i].equals(Manifest.permission.READ_EXTERNAL_STORAGE)
                && grantResults[i] != PackageManager.PERMISSION_GRANTED) {
            isNeedShowDiag = true;
        }
    }
    if (isNeedShowDiag && !ActivityCompat.shouldShowRequestPermissionRationale(this, Manifest.permission.CALL_PHONE)) {
        AlertDialog dialog = new AlertDialog.Builder(this)
                .setMessage(getString(R.string.camera_permission_rationale))
                .setPositiveButton(getString(R.string.settings), new DialogInterface.OnClickListener() {
                    @Override
                    public void onClick(DialogInterface dialog, int which) {
                        Intent intent = new Intent(Settings.ACTION_APPLICATION_DETAILS_SETTINGS);
                        // Open the corresponding settings page based on the package name.
                        intent.setData(Uri.parse("package:" + getPackageName()));
                        startActivityForResult(intent, 200);
                    }
                })
                .setNegativeButton(getString(R.string.cancel), new DialogInterface.OnClickListener() {
                    @Override
                    public void onClick(DialogInterface dialog, int which) {
                        finish();
                    }
                }).create();
        dialog.show();
    }
}
2.2 Creating an Image Segmentation Analyzer
The image segmentation analyzer can be created through the image segmentation configurator MLImageSegmentationSetting.
Code:
MLImageSegmentationSetting setting = new MLImageSegmentationSetting.Factory()
        .setAnalyzerType(MLImageSegmentationSetting.BODY_SEG)
        .setExact(true)
        .create();
this.analyzer = MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer(setting);
2.3 Create an MLFrame Object through android.graphics.Bitmap for the Analyzer to Detect Pictures
Code:
MLFrame mlFrame = new MLFrame.Creator().setBitmap(this.originBitmap).create();
2.4 Call the asyncAnalyseFrame Method for Image Segmentation
Code:
// Create a task to asynchronously process the result returned by the image segmentation analyzer.
Task<MLImageSegmentation> task = this.analyzer.asyncAnalyseFrame(mlFrame);
task.addOnSuccessListener(new OnSuccessListener<MLImageSegmentation>() {
    @Override
    public void onSuccess(MLImageSegmentation mlImageSegmentationResults) {
        // Transacting logic for segmentation success.
        if (mlImageSegmentationResults != null) {
            StillCutPhotoActivity.this.foreground = mlImageSegmentationResults.getForeground();
            StillCutPhotoActivity.this.preview.setImageBitmap(StillCutPhotoActivity.this.foreground);
            StillCutPhotoActivity.this.processedImage = ((BitmapDrawable) ((ImageView) StillCutPhotoActivity.this.preview).getDrawable()).getBitmap();
            StillCutPhotoActivity.this.changeBackground();
        } else {
            StillCutPhotoActivity.this.displayFailure();
        }
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Transacting logic for segmentation failure.
        StillCutPhotoActivity.this.displayFailure();
    }
});
2.5 Change the Picture Background
Code:
this.backgroundBitmap = BitmapUtils.loadFromPath(StillCutPhotoActivity.this, id, targetedSize.first, targetedSize.second);
BitmapDrawable drawable = new BitmapDrawable(backgroundBitmap);
this.preview.setDrawingCacheEnabled(true);
this.preview.setBackground(drawable);
this.preview.setImageBitmap(this.foreground);
this.processedImage = Bitmap.createBitmap(this.preview.getDrawingCache());
this.preview.setDrawingCacheEnabled(false);
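The drawing-cache trick above composites the segmented foreground over the new background. The same idea can be shown pixel by pixel on plain ARGB int arrays; this is a simplification of what the View composition does (no partial alpha blending), with all values made up for the demo:

```java
// Sketch: paint foreground pixels over a background using the foreground's
// alpha channel. Mostly-opaque foreground pixels win; mostly-transparent
// ones let the background show through.
class Composite {
    static int[] over(int[] foreground, int[] background) {
        int[] out = new int[background.length];
        for (int i = 0; i < background.length; i++) {
            int alpha = (foreground[i] >>> 24) & 0xFF; // top byte of ARGB
            out[i] = alpha >= 128 ? foreground[i] : background[i];
        }
        return out;
    }
}
```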
Conclusion
In this way, a small ID photo DIY program has been made. Let's see the demo.
If you have strong hands-on skills, you can also add features such as changing suits. The source code has been uploaded to GitHub (the project directory is id-photo-diy), and you can improve this function there:
https://github.com/HMS-MLKit/HUAWEI-HMS-MLKit-Sample
Based on the image segmentation capability, you can not only build an ID photo DIY program but also implement the following related functions:
Portraits in daily life can be cut out, and interesting photos can be made by changing the background, or the background can be blurred to get more beautiful and artistic photos.
Identify the sky, plants, food, cats and dogs, flowers, water, sand, buildings, mountains, and other elements in an image, and apply special beautification to them, such as making the sky bluer and the water clearer.
Identify the objects in a video stream, edit special effects for it, and change the background.
For other functions, please brainstorm together!
For a more detailed development guide, please refer to the official HUAWEI Developer Alliance website:
https://developer.huawei.com/consumer/en/doc/development/HMS-Guides/ml-introduction-4
Previous link:
NO. 1:One article to understand Huawei HMS ML Kit text recognition, bank card recognition, general card identification
NO.2: Integrating MLkit and publishing ur app on Huawei AppGallery
NO.3.: Comparison Between Zxing and Huawei HMS Scan Kit
NO.4: How to use Huawei HMS MLKit service to quickly develop a photo translation app

Use MLKit Service to Quickly Develop A Photo Translation App

For more information like this, you can visit the HUAWEI Developer Forum.
A photo translation app is quite useful when traveling abroad, and this article will help developers build one in a short time.
We use HUAWEI ML Kit to build the app, which greatly accelerates the whole development process.
Introduction
Many of us like to travel, and sometimes it's great to go abroad for a tour. Before the trip, we make all kinds of plans for eating, dressing, accommodation, transport, and sightseeing routes.
Imaginary tourism:
Before departure, the imagined tourist destination may have beautiful buildings:
delicious food:
beautiful women:
Actual tourism:
But in reality, if you go somewhere the language differs from your mother tongue, you may encounter the following problems:
A confusing map
An unreadable menu
Street signs
Various goods
Traveling abroad without any translation tool is hard!
A photo translator will help you
With text recognition and translation services, none of the above is a problem. Completing a small photo translation application takes only two steps:
Text recognition
First take a photo, then send the image to the HUAWEI HMS ML Kit text recognition service.
HUAWEI's text recognition service provides both an on-device SDK and a cloud-side service. The on-device side is free and works in real time, while the cloud side supports more recognition types with higher accuracy. In this walkthrough, we use the cloud-side capabilities.
Photo translation app development
1 Development preparation
Because cloud services are used, you need to register a developer account with the HUAWEI Developer Alliance and enable these services in the cloud. We won't go into details here; just follow the official AppGallery Connect configuration and service enabling steps.
To register as a developer and enable the services, see:
https://developer.huawei.com/consumer/en/doc/development/HMS-Guides/ml-enable-service
1.1 Add the Maven repository in the project-level gradle file
Open the Android Studio project-level build.gradle file.
Add the Maven repository address.
Code:
buildscript {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}
allprojects {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}
1.2 Add SDK dependencies in the app-level build.gradle file
Integrate the SDK. (Because cloud capabilities are used, only the SDK base package needs to be introduced.)
Code:
dependencies {
    implementation 'com.huawei.hms:ml-computer-vision:1.0.2.300'
    implementation 'com.huawei.hms:ml-computer-translate:1.0.2.300'
}
1.3 Apply for camera and storage permissions in the AndroidManifest.xml file
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
2 Key steps of code development
2.1 Dynamic permission request
Code:
private static final int CAMERA_PERMISSION_CODE = 1;

@Override
public void onCreate(Bundle savedInstanceState) {
    // Check camera permission.
    if (!allPermissionsGranted()) {
        getRuntimePermissions();
    }
}
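The snippet calls allPermissionsGranted() and getRuntimePermissions(), which are not shown. Their core logic is simply filtering the required permissions down to the ones not yet granted; a minimal pure-Java sketch (the Permissions class and the granted predicate are hypothetical stand-ins for ContextCompat.checkSelfPermission):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class Permissions {
    // Returns the subset of required permissions that the predicate reports as not granted.
    public static List<String> missing(String[] required, Predicate<String> granted) {
        List<String> out = new ArrayList<>();
        for (String p : required) {
            if (!granted.test(p)) {
                out.add(p);
            }
        }
        return out;
    }
}
```

In the Activity, the predicate would delegate to ContextCompat.checkSelfPermission, and a non-empty result would be passed to ActivityCompat.requestPermissions with CAMERA_PERMISSION_CODE.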
2.2 Create a cloud text analyzer. You can create one from the text detection configurator MLRemoteTextSetting.
Code:
MLRemoteTextSetting setting = new MLRemoteTextSetting.Factory()
        .setTextDensityScene(MLRemoteTextSetting.OCR_LOOSE_SCENE)
        .create();
this.textAnalyzer = MLAnalyzerFactory.getInstance().getRemoteTextAnalyzer(setting);
2.3 Create an MLFrame object from an android.graphics.Bitmap for the analyzer to detect the picture.
Code:
MLFrame mlFrame = new MLFrame.Creator().setBitmap(this.originBitmap).create();
2.4 Call the asyncAnalyseFrame method for text detection.
Code:
Task<MLText> task = this.textAnalyzer.asyncAnalyseFrame(mlFrame);
task.addOnSuccessListener(new OnSuccessListener<MLText>() {
@Override public void onSuccess(MLText mlText) {
// Transacting logic for segment success.
if (mlText != null) {
RemoteTranslateActivity.this.remoteDetectSuccess(mlText);
} else {
RemoteTranslateActivity.this.displayFailure();
}
}
}).addOnFailureListener(new OnFailureListener() {
@Override public void onFailure(Exception e) {
// Transacting logic for segment failure.
RemoteTranslateActivity.this.displayFailure();
return;
}
});
2.5 Create a text translator. You can create one through the MLRemoteTranslateSetting class.
Code:
MLRemoteTranslateSetting.Factory factory = new MLRemoteTranslateSetting
.Factory()
// Set the target language code. The ISO 639-1 standard is used.
.setTargetLangCode(this.dstLanguage);
if (!this.srcLanguage.equals("AUTO")) {
// Set the source language code. The ISO 639-1 standard is used.
factory.setSourceLangCode(this.srcLanguage);
}
this.translator = MLTranslatorFactory.getInstance().getRemoteTranslator(factory.create());
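Since setSourceLangCode and setTargetLangCode expect ISO 639-1 codes, a small guard can catch typos before a network call is made. This sketch (a hypothetical LangCodes helper; "AUTO" is the sample's own auto-detect sentinel, not part of the standard) leans on java.util.Locale's built-in ISO language table:

```java
import java.util.Arrays;
import java.util.Locale;

public class LangCodes {
    // True if the code is the sample's auto-detect sentinel
    // or a valid two-letter ISO 639-1 language code.
    public static boolean isValid(String code) {
        if ("AUTO".equals(code)) {
            return true;
        }
        return code != null
                && code.length() == 2
                && Arrays.asList(Locale.getISOLanguages()).contains(code);
    }
}
```

A check like this is cheap insurance: rejecting "chinese" or "zh-CN" locally is faster and clearer than waiting for a cloud-side error.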
2.6 Call the asyncTranslate method to translate the content obtained from text recognition.
Code:
final Task<String> task = translator.asyncTranslate(this.sourceText);
task.addOnSuccessListener(new OnSuccessListener<String>() {
@Override public void onSuccess(String text) {
if (text != null) {
RemoteTranslateActivity.this.remoteDisplaySuccess(text);
} else {
RemoteTranslateActivity.this.displayFailure();
}
}
}).addOnFailureListener(new OnFailureListener() {
@Override public void onFailure(Exception e) {
RemoteTranslateActivity.this.displayFailure();
}
});
2.7 Release resources after translation is complete.
Code:
if (this.textAnalyzer != null) {
try {
this.textAnalyzer.close();
} catch (IOException e) {
SmartLog.e(RemoteTranslateActivity.TAG, "Stop analyzer failed: " + e.getMessage());
}
}
if (this.translator != null) {
this.translator.stop();
}
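The teardown above pairs a Closeable analyzer with a translator that exposes its own stop() method. The defensive close-and-log pattern for the analyzer generalizes to any Closeable; a minimal sketch (a hypothetical helper, not part of ML Kit):

```java
import java.io.Closeable;
import java.io.IOException;

public class Cleanup {
    // Closes a resource, swallowing IOException so teardown never crashes the app.
    // Returns true if the close succeeded (or there was nothing to close).
    public static boolean closeQuietly(Closeable c) {
        if (c == null) {
            return true;
        }
        try {
            c.close();
            return true;
        } catch (IOException e) {
            // In the app this failure would be logged, e.g. via SmartLog.e(TAG, ...).
            return false;
        }
    }
}
```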
3 Source code
The demo source code has been uploaded to GitHub (the project directory is Photo translate). You can use it as a reference and optimize it for your own scenario.
https://github.com/HMS-MLKit/HUAWEI-HMS-MLKit-Sample
4 Demo
5 Brainstorming
The app demonstrates how to use two cloud-side capabilities of HUAWEI HMS ML Kit: text recognition and translation. These services can also help developers build many other interesting and powerful functions, such as:
[General text recognition]
1. Text recognition of bus license plates
2. Text recognition in document reading
[Card recognition]
1. Recognizing a bank card number through text recognition, for scenarios such as bank card binding
2. Beyond bank cards, recognizing the various card numbers in daily life, such as membership and discount cards
3. Recognizing ID cards, Hong Kong and Macao permits, and other certificate numbers
[Translation]
1. Signpost and signboard translation
2. Document translation
3. Web page translation, such as identifying the language of a website's comment section and translating it into the corresponding language
4. Translation of overseas product descriptions
5. Translation of restaurant menus
For more reference, please click:
https://developer.huawei.com/consumer/en/doc/development/HMS-Guides/ml-introduction-4
Previous link:
NO. 1:One article to understand Huawei HMS ML Kit text recognition, bank card recognition, general card identification
NO.2: Integrating MLkit and publishing ur app on Huawei AppGallery
Does ML Kit work locally, or is the processing done on a server?

Note on Developing a Person Tracking Function

Background
Videos are memories, so why not spend more time making them look better? Many mobile apps on the market simply offer basic editing functions, such as applying filters and adding stickers. That said, it is not enough for those who want to create dynamic videos, where a moving person stays in focus. Traditionally, this requires a keyframe to be added and the video image to be manually adjusted, which could scare off many amateur video editors.
I am one of those people and I've been looking for an easier way of implementing this kind of feature. Fortunately for me, I stumbled across the track person capability from HMS Core Video Editor Kit, which automatically generates a video that centers on a moving person, as the images below show.
Before using the capability
After using the capability
Thanks to the capability, I can now confidently create a video with the person tracking effect.
Let's see how the function is developed.
Development Process
Preparations
Configure the app information in AppGallery Connect.
Project Configuration
1. Set the authentication information for the app via an access token or API key.
Use the setAccessToken method to set an access token during app initialization. This needs setting only once.
Code:
MediaApplication.getInstance().setAccessToken("your access token");
Or, use setApiKey to set an API key during app initialization. The API key needs to be set only once.
Code:
MediaApplication.getInstance().setApiKey("your ApiKey");
2. Set a unique License ID.
Code:
MediaApplication.getInstance().setLicenseId("License ID");
3. Initialize the runtime environment for HuaweiVideoEditor.
When creating a video editing project, first create a HuaweiVideoEditor object and initialize its runtime environment. Release this object when exiting a video editing project.
(1) Create a HuaweiVideoEditor object.
Code:
HuaweiVideoEditor editor = HuaweiVideoEditor.create(getApplicationContext());
(2) Specify the preview area position.
The area renders video images. This process is implemented via SurfaceView creation in the SDK. The preview area position must be specified before the area is created.
Code:
<LinearLayout
android:id="@+id/video_content_layout"
android:layout_width="0dp"
android:layout_height="0dp"
android:background="@color/video_edit_main_bg_color"
android:gravity="center"
android:orientation="vertical" />
// Specify the preview area position.
LinearLayout mSdkPreviewContainer = view.findViewById(R.id.video_content_layout);
// Configure the preview area layout.
editor.setDisplay(mSdkPreviewContainer);
(3) Initialize the runtime environment. A LicenseException will be thrown if license verification fails.
Creating the HuaweiVideoEditor object does not occupy system resources; you choose when to initialize the runtime environment yourself, at which point the necessary threads and timers are created in the SDK.
Code:
try {
editor.initEnvironment();
} catch (LicenseException error) {
SmartLog.e(TAG, "initEnvironment failed: " + error.getErrorMsg());
finish();
return;
}
4. Add a video or an image.
Create a video lane. Add a video or an image to the lane using the file path.
Code:
// Obtain the HVETimeLine object.
HVETimeLine timeline = editor.getTimeLine();
// Create a video lane.
HVEVideoLane videoLane = timeline.appendVideoLane();
// Add a video to the end of the lane.
HVEVideoAsset videoAsset = videoLane.appendVideoAsset("test.mp4");
// Add an image to the end of the video lane.
HVEImageAsset imageAsset = videoLane.appendImageAsset("test.jpg");
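Conceptually, a lane is an ordered list of assets where each appended asset starts where the previous one ends. That timeline arithmetic can be sketched in plain Java (a toy model for illustration, not the SDK's actual internals):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of a video lane: appended assets play back to back.
public class Lane {
    private final List<Long> durationsMs = new ArrayList<>();

    // Appends an asset of the given duration and returns
    // its start offset on the lane, in milliseconds.
    public long append(long durationMs) {
        long start = 0;
        for (long d : durationsMs) {
            start += d;
        }
        durationsMs.add(durationMs);
        return start;
    }
}
```

So after appending a 3-second video, an image asset appended next would begin at the 3-second mark of the lane.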
Function Building
Code:
// Initialize the capability engine.
visibleAsset.initHumanTrackingEngine(new HVEAIInitialCallback() {
@Override
public void onProgress(int progress) {
// Initialization progress.
}
@Override
public void onSuccess() {
// The initialization is successful.
}
@Override
public void onError(int errorCode, String errorMessage) {
// The initialization failed.
}
});
// Track a person using the coordinates. Coordinates of two vertices that define the rectangle containing the person are returned.
List<Float> rects = visibleAsset.selectHumanTrackingPerson(bitmap, position2D);
// Enable the effect of person tracking.
visibleAsset.addHumanTrackingEffect(new HVEAIProcessCallback() {
@Override
public void onProgress(int progress) {
// Handling progress.
}
@Override
public void onSuccess() {
// Handling successful.
}
@Override
public void onError(int errorCode, String errorMessage) {
// Handling failed.
}
});
// Interrupt the effect.
visibleAsset.interruptHumanTracking();
// Remove the effect.
visibleAsset.removeHumanTrackingEffect();
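Per the comment above, selectHumanTrackingPerson returns the coordinates of two vertices of the rectangle containing the person. Assuming an [x1, y1, x2, y2] layout (an assumption; check the Kit's API reference for the actual order), a small hypothetical helper can normalize the corners into left/top/right/bottom form for later hit testing or drawing:

```java
import java.util.List;

public class TrackRect {
    // Normalizes two corner points into {left, top, right, bottom}.
    // Assumes rects = [x1, y1, x2, y2]; this layout is an assumption,
    // not confirmed by the Video Editor Kit excerpt above.
    public static float[] normalize(List<Float> rects) {
        float left = Math.min(rects.get(0), rects.get(2));
        float right = Math.max(rects.get(0), rects.get(2));
        float top = Math.min(rects.get(1), rects.get(3));
        float bottom = Math.max(rects.get(1), rects.get(3));
        return new float[]{left, top, right, bottom};
    }
}
```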
References
The Importance of Visual Effects
Track Person
