How a Background Remover Is Born

Why Do I Need a Background Remover
A background removal tool is not really a new feature, but its importance has grown as the world shifted to online working and learning over the last few years. I did not realize how important this tool could be until just two weeks ago. On a warm, sunny morning, coffee in hand, I joined an online conference. During the conference, one of my colleagues pointed out that they could see my untidy desk and an overflowing bin in the background. Naturally, this left me feeling embarrassed. I just wished I could travel back in time and use a background remover.
Now, I cannot travel in time, but I can certainly create a background removal tool. So, with this new-found motivation, I looked online for some solutions, came across the body or head segmentation capability from HMS Core Video Editor Kit, and developed a demo app with it.
This service can separate the body or head from an input image or video and then generate a video, an image, or a sticker of the segmented part. In this way, the body or head segmentation service helps realize the background removal effect.
Now, let's go deeper into the technical details about the service.
How the Background Remover Is Implemented
The algorithm of the service performs a series of operations on the input video, including frame extraction, AI model processing, and encoding. Among these, the core is the AI model. How a service performs is affected by factors such as device computing power and power consumption. With this in mind, the development team equipped the service with a lightweight AI model that still does a good job of feature extraction, by applying measures such as compression, quantization, and pruning. As a result, the processing time of the AI model is kept relatively low without compromising segmentation accuracy.
The algorithm supports both images and videos. For an image, a single inference produces the segmentation result. A video, however, is a sequence of images: if the model's segmentation capability is poor, the accuracy for each frame will be low, the segmentation results of consecutive frames will differ from each other, and the result for the whole video will appear to jitter. To resolve this, the team adopted technologies such as inter-frame stabilization and an objective function for inter-frame consistency. These measures do not slow down model inference, yet they fully utilize the temporal information in a video. Consequently, the algorithm's inter-frame stability is improved, which contributes to an ideal segmentation effect.
Note that the service requires the input image or video to contain no more than five people, whose contours should be visible. The service supports common postures and motions of the people in the input image or video, such as standing, lying, walking, and sitting.
That concludes the technical basics of the service. Now let's see how it can be integrated into an app.
How to Equip an App with the Background Remover Functionality
Preparations
Go to AppGallery Connect and configure the app's details. In this step, we need to register a developer account, create an app, generate a signing certificate fingerprint, and activate the required services.
Integrate the HMS Core SDK.
Configure the obfuscation scripts.
Declare necessary permissions.
Setting Up a Video Editing Project
Prerequisites
1. Set the app authentication information either by:
Using an access token: Call the setAccessToken method to set an access token when the app is started. The access token needs to be set only once.
Code:
MediaApplication.getInstance().setAccessToken("your access token");
Using an API key: Call the setApiKey method to set an API key when the app is started. The API key needs to be set only once.
Code:
MediaApplication.getInstance().setApiKey("your ApiKey");
2. Set a License ID. The ID is used to manage the usage quotas of the kit, so make sure the ID is unique.
Code:
MediaApplication.getInstance().setLicenseId("License ID");
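These calls are typically made once during app startup, for example in a custom Application subclass. The sketch below is only an illustration: the class name, token, and license ID are placeholders, and imports are omitted as in the other snippets in this post. Remember to register the Application subclass in AndroidManifest.xml via the android:name attribute.
Code:
public class MyApplication extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        // Set the authentication information once at startup (access token or API key).
        MediaApplication.getInstance().setAccessToken("your access token");
        // Set a unique license ID, which is used to manage the kit's usage quotas.
        MediaApplication.getInstance().setLicenseId("License ID");
    }
}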
Initializing the Runtime Environment for the Entry Class
A HuaweiVideoEditor object serves as the entry class of a whole video editing project, and its lifecycle should match that of the project. Therefore, when creating a video editing project, create a HuaweiVideoEditor object first and then initialize its runtime environment. Remember to release this object when exiting the project.
1. Create a HuaweiVideoEditor object.
Code:
HuaweiVideoEditor editor = HuaweiVideoEditor.create(getApplicationContext());
2. Determine the preview area position.
This area renders video images, a process implemented by a SurfaceView created within the SDK. Make sure that the position of this area is specified before the area is created.
Code:
<LinearLayout
android:id="@+id/video_content_layout"
android:layout_width="0dp"
android:layout_height="0dp"
android:background="@color/video_edit_main_bg_color"
android:gravity="center"
android:orientation="vertical" />
// Specify the preview area position.
LinearLayout mSdkPreviewContainer = view.findViewById(R.id.video_content_layout);
// Specify the layout of the preview area.
editor.setDisplay(mSdkPreviewContainer);
3. Initialize the runtime environment of HuaweiVideoEditor. A LicenseException will be thrown if license verification fails.
After being created, the HuaweiVideoEditor object does not yet occupy any system resources. You need to choose when to initialize its runtime environment, at which point the necessary threads and timers are created within the SDK.
Code:
try {
editor.initEnvironment();
} catch (LicenseException error) {
SmartLog.e(TAG, "initEnvironment failed: " + error.getErrorMsg());
finish();
return;
}
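When the user exits the editing project, release the HuaweiVideoEditor object so that the threads and timers created during initialization are freed. A minimal sketch is shown below; it assumes stopEditor() is the release method (verify the exact API against the kit reference for your SDK version) and a standard Activity lifecycle.
Code:
@Override
protected void onDestroy() {
    super.onDestroy();
    // Release the editor when exiting the project. The stopEditor() method name is assumed here;
    // check the Video Editor Kit API reference for your SDK version.
    if (editor != null) {
        editor.stopEditor();
        editor = null;
    }
}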
Integrating the Segmentation Capability​
Code:
// Initialize the segmentation engine. segPart indicates the segmentation type, whose value is an integer. Value 1 indicates body segmentation, and a value other than 1 indicates head segmentation.
visibleAsset.initBodySegEngine(segPart, new HVEAIInitialCallback() {
@Override
public void onProgress(int progress) {
// Callback when the initialization progress is received.
}
@Override
public void onSuccess() {
// Callback when the initialization is successful.
}
@Override
public void onError(int errorCode, String errorMessage) {
// Callback when the initialization failed.
}
});
// After the initialization is successful, apply the segmentation effect.
visibleAsset.addBodySegEffect(new HVEAIProcessCallback() {
@Override
public void onProgress(int progress) {
// Callback when the application progress is received.
}
@Override
public void onSuccess() {
// Callback when the effect is successfully applied.
}
@Override
public void onError(int errorCode, String errorMsg) {
// Callback when the effect failed to be applied.
}
});
// Stop applying the segmentation effect.
visibleAsset.interruptBodySegEffect();
// Remove the segmentation effect.
visibleAsset.removeBodySegEffect();
// Release the segmentation engine.
visibleAsset.releaseBodySegEngine();
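To make the call sequence explicit: addBodySegEffect() should only be called after the engine initialization succeeds. The sketch below simply chains the two callbacks from the snippet above (visibleAsset and the callback types are the same ones shown there; error handling is kept minimal).
Code:
private void applyBackgroundRemoval(int segPart) {
    visibleAsset.initBodySegEngine(segPart, new HVEAIInitialCallback() {
        @Override
        public void onProgress(int progress) {
            // Optionally show the initialization progress.
        }

        @Override
        public void onSuccess() {
            // The engine is ready, so apply the segmentation effect now.
            visibleAsset.addBodySegEffect(new HVEAIProcessCallback() {
                @Override
                public void onProgress(int progress) {
                    // Optionally show the processing progress.
                }

                @Override
                public void onSuccess() {
                    // The background has been removed from the asset.
                }

                @Override
                public void onError(int errorCode, String errorMsg) {
                    // Handle the processing failure, for example by notifying the user.
                }
            });
        }

        @Override
        public void onError(int errorCode, String errorMessage) {
            // Handle the initialization failure.
        }
    });
}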
And now the app is capable of removing the image or video background.
This function is ideal for e-conferencing apps, where the background is not important. For learning apps, it allows teachers to change the background to the theme of the lesson, for better immersion. Not only that, but when it's used in a short video app, users can put themselves in unusual backgrounds, such as space and the sea, to create fun and fantasy-themed videos.
Have you got any better ideas of how to use the background remover? Let us know in the comments section below.
Wrap up
Background removal tools are trending among apps in different fields, given that such a tool helps images and videos look better by removing unnecessary or messy backgrounds, as well as protecting user privacy.
The body or head segmentation service from Video Editor Kit is one such solution for removing a background. It supports both images and videos, and outputs a video, an image, or a sticker of the segmented part for further editing. Its streamlined integration makes it a perfect choice for enhancing videos and images.

Related

ML Kit: Document Recognition Development Procedure

HUAWEI ML Kit allows your apps to easily leverage Huawei's long-term proven expertise in machine learning to support diverse artificial intelligence (AI) applications throughout a wide range of industries. Thanks to Huawei's technology accumulation, HUAWEI ML Kit provides diversified leading machine learning capabilities that are easy to use, helping you develop various AI apps.
The document recognition service can recognize text with paragraph formats in document images. Document recognition involves a large amount of computing. The SDK calls the cloud API to recognize documents only in asynchronous mode. With the SDK, you can quickly build various document recognition apps.
The document recognition service can extract text from document images to convert paper documents into electronic copies, greatly improving the information input efficiency and reducing labor costs. For example, when medical and legal files need to be stored electronically, you can use this service to generate well-structured documents based on the text information in the file images. In this way, you can quickly record and archive the files.
1. Create a document analyzer. You are advised to use the custom document recognition parameter class MLDocumentSetting to specify the languages to be recognized and other settings when creating the analyzer. In this way, you will get higher document recognition accuracy.
Code:
// Method 1: Use customized parameter settings.
MLDocumentSetting setting = new MLDocumentSetting.Factory()
// Specify the languages that can be recognized, which should comply with ISO 639-1.
.setLanguageList(new ArrayList<String>(){{this.add("zh"); this.add("en");}})
// Set the format of the returned text border box.
// MLRemoteTextSetting.NGON: Return the coordinates of the four vertices of the quadrilateral.
// MLRemoteTextSetting.ARC: Return the vertices of a polygon border in an arc. The coordinates of up to 72 vertices can be returned.
.setBorderType(MLRemoteTextSetting.ARC)
.create();
MLDocumentAnalyzer analyzer =
MLAnalyzerFactory.getInstance().getRemoteDocumentAnalyzer(setting);
// Method 2: Use the default parameter settings to automatically detect languages for text recognition. The format of the returned text box is MLRemoteTextSetting.NGON.
MLDocumentAnalyzer analyzer =
MLAnalyzerFactory.getInstance().getRemoteDocumentAnalyzer();
2. Create an MLFrame using android.graphics.Bitmap. JPG, JPEG, PNG, and BMP images are supported.
Code:
// Create an MLFrame object using the bitmap, which is the image data in bitmap format.
MLFrame frame = MLFrame.fromBitmap(bitmap);
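For completeness, here is one way the bitmap might be obtained before it is wrapped in an MLFrame, using the standard BitmapFactory API. The file path below is a placeholder.
Code:
// Decode a local image file into a bitmap (the path below is a placeholder).
Bitmap bitmap = BitmapFactory.decodeFile("/sdcard/Documents/sample_document.jpg");
if (bitmap == null) {
    Log.e(TAG, "Failed to decode the document image.");
    return;
}
MLFrame frame = MLFrame.fromBitmap(bitmap);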
3. Pass the MLFrame object to the asyncAnalyseFrame method for document recognition.
Code:
Task<MLDocument> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<MLDocument>() {
@Override
public void onSuccess(MLDocument document) {
// Recognition success.
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
// Recognition failure.
}
});
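Once onSuccess returns the MLDocument object, the recognized text can be read from it. The accessors used below (getStringValue() and getBlocks()) are assumptions based on the document result structure of ML Kit's text services; verify them against the API reference of the SDK version you integrate.
Code:
@Override
public void onSuccess(MLDocument document) {
    // getStringValue() and getBlocks() are assumed accessor names; check the API reference.
    String fullText = document.getStringValue();
    StringBuilder paragraphs = new StringBuilder();
    for (MLDocument.Block block : document.getBlocks()) {
        paragraphs.append(block.getStringValue()).append("\n");
    }
    Log.i(TAG, "Recognized document text: " + fullText);
}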
4. After the recognition is complete, stop the analyzer to release recognition resources.
Code:
if (analyzer != null) {
try {
analyzer.stop();
} catch (IOException e) {
// Exception handling.
}
}

All About Maps - Episode 1: Showing Routes from GPX files on Maps

For more articles like this one, visit the HUAWEI Developer Forum and Medium.
All About Maps
Let's talk about maps. I started an open source project called All About Maps (https://github.com/ulusoyca/AllAboutMaps). In this project, I aim to demonstrate how we can implement the same map-related use cases with different map providers in one codebase. We will use Mapbox Maps, Google Maps, and Huawei HMS Map Kit. This project uses the following libraries and patterns:
MVVM pattern with Android Jetpack Libraries
Kotlin Coroutines for asynchronous operations
Dagger2 Dependency Injection
Android Clean Architecture
Note: The codebase changes over time. You can always find the latest code in the develop branch. The code as of the time of writing can be seen by checking out the tag episode_1-parse-gpx:
https://github.com/ulusoyca/AllAboutMaps/tree/episode_1-parse-gpx/
Motivation
Why do we need maps in our apps? What features would a developer expect from a map SDK? Let's try to list some:
Showing a coordinate on a map with camera options (zoom, tilt, latitude, longitude, bearing)
Adding symbols, photos, polylines, polygons to map
Handling user gestures (click, pinch, and move events)
Showing maps with different map styles (Outdoor, Hybrid, Satellite, Winter, Dark, etc.)
Data visualization (heatmaps, charts, clusters, time-lapse effect)
Offline map visualization (providing map tiles without network connectivity)
Generating a snapshot image of a bounded region
We could probably add more items, but I believe this is the list of features that all map provider companies would most likely offer. Knowing that we can achieve the same tasks with different map providers, we should not create huge dependencies on any specific provider in our codebase. When a product owner (PO) tells developers to switch from Google Maps to Mapbox Maps or Huawei Maps, developers should never see it as a big deal. It is software development. Business as usual.
One might wonder why a PO would want to switch from one map provider to another. In many cases, the reason is not technical. For example, Google Play Services may not be available on some devices or in some regions, such as China. Another case is when a company X, which has a subscription to Mapbox, acquires a company Y which uses Google Maps. In this case, the transition to one provider is more efficient. Changes in terms of service and pricing might be other motivations.
We need competition in the market! Let's switch easily when needed. But how do dependencies make things worse? Problematic dependencies in a codebase are usually created by developing software as if there were no tomorrow. It is not always the developers' fault: tight schedules, anti-refactoring mindsets, and disorganized planning can lead to careless coding and, eventually, technical debt. In this project, I aim to show how we can confine the import lines below, belonging to three different map providers, to a minimum number of classes with minimal lines of code:
import com.huawei.hms.maps.*
import com.google.android.gms.maps.*
import com.mapbox.mapboxsdk.maps.*
It should be noted that the approach shown in this post is just one proposal. There are always alternative and better implementations. In the end, as software developers, we should deliver our tasks time-efficiently, without over-engineering.
About the project
On the home page of the project you will see the list of tutorials. Since this is the first blog post, there is only one item for now. To make life easier with RecyclerViews, I use the Epoxy library by Airbnb in the project. Once you click the buttons in the card, it will take you to the detail page. Using a bottom sheet, we can switch between map providers. Note that Huawei Map Kit requires a Huawei mobile phone.
In this first blog post, we will parse the GPX file of the 120 km route of the Cappadocia Ultra Trail race and show the route and checkpoints (food stations) on the map. I finished this race in 23 hours and 45 minutes, and you can read about my experience here (https://link.medium.com/uWmrWLAzR6). GPX is an open standard that contains route points, which construct a polyline, and waypoints, which mark locations of interest. In this case, the waypoints represent the food and aid stations in the race. We will show the route with a polyline and the waypoints with markers on the map.
Architecture
Architecture is definitely not an overrated concept. Since the early days of Android, we have been searching for the architectural patterns that best suit Android development. We have heard of MVC, MVP, MVVM, MVI, and many other patterns will emerge. Change and adaptation to new patterns are inevitable over time. We should keep in mind some basic and commonly accepted concepts, like the SOLID principles, separation of concerns, maintainability, readability, testability, etc., so that we can switch between patterns easily when needed.
Nowadays, the widely accepted architecture in the Android community is modularization with Clean Architecture. If you have more time to invest, I would strongly suggest Joe Birch's clean architecture tutorials. As Joe suggests in his tutorials, we do not have to apply every rule to the letter; instead, we take whatever we feel is needed. Here is my take and how I modularized the All About Maps app:
Note that dependency injection with Dagger2 is at the core of this implementation. If you are not familiar with the concept, I strongly suggest reading Nimrod Dayan's Dagger2 tutorial, the best one in the wild Dagger2 world.
Domain Module
Many of us are excited to start the implementation with the UI to see results immediately, but we should patiently build our blocks. We shall start with the domain module, since we will put our business logic there and define the entities and user interactions.
First question: What entities do we need for a Map app?
We don't have to define every entity at once. Since our first tutorial is about drawing polylines and symbols, we will need the following data:
LatLng, a class which holds a Latitude and a Longitude
Point, which represents a geo-coordinate
RouteInfo, which holds the points used to draw the route and the waypoints
Let's see the implementations:
Code:
inline class Latitude(val value: Float)
inline class Longitude(val value: Float)
Code:
data class LatLng(
val latitude: Latitude,
val longitude: Longitude
)
Code:
data class Point(
val latitude: Latitude,
val longitude: Longitude,
val altitude: Float? = null,
val name: String? = null
) {
val latLng: LatLng
get() = LatLng(latitude, longitude)
}
Code:
data class RouteInfo(
val routePoints: List<Point> = emptyList(),
val wayPoints: List<Point> = emptyList()
)
I could have used the Float primitive type for the Latitude and Longitude fields. However, I strongly suggest you take advantage of Kotlin inline classes. In my relatively long career of working on maps, I have spent hours on issues caused by mistakenly using longitude values for latitude.
Note that a LatLng class is available in all map SDKs. However, all the modules below the domain layer should use only our own LatLng to prevent dependencies on the map SDKs in those modules. In the app layer, we can map our LatLng class to the corresponding classes:
Code:
import com.ulusoy.allaboutmaps.domain.entities.LatLng
import com.mapbox.mapboxsdk.geometry.LatLng as MapboxLatLng
import com.huawei.hms.maps.model.LatLng as HuaweiLatLng
import com.google.android.gms.maps.model.LatLng as GoogleLatLng
fun LatLng.toMapboxLatLng() = MapboxLatLng(
latitude.value.toDouble(),
longitude.value.toDouble()
)
fun LatLng.toHuaweiLatLng() = HuaweiLatLng(
latitude.value.toDouble(),
longitude.value.toDouble()
)
fun LatLng.toGoogleLatLng() = GoogleLatLng(
latitude.value.toDouble(),
longitude.value.toDouble()
)
Second question: What actions can the user trigger?
The domain module contains the use cases (interactors) that an application can perform to achieve goals based on user interactions. The code in this module is less likely to change compared to other modules. Business is business. For example, this application has one job for now: showing the route info with a polyline and markers. It can get the route info from a web server, a database, or, in this case, from an application resource file, which is a GPX file. Neither the app module nor the domain module cares where the route points and waypoints are retrieved from. It is not their concern. The concerns are separated.
Let's see the use case definition in our domain module:
Code:
class GetRouteInfoUseCase
@Inject constructor(
private val routeInfoRepository: RouteInfoRepository
) {
suspend operator fun invoke(): RouteInfo {
return routeInfoRepository.getRouteInfo()
}
}
Code:
interface RouteInfoRepository {
suspend fun getRouteInfo(): RouteInfo
}
RouteInfoRepository is an interface that lives in the domain module and it is a contract between domain and datasource modules. Its concrete implementation lives in the datasource module.
Datasource Module
The datasource module is an abstraction world: life here is based on interfaces. The domain module communicates with the datasource module through the repository interface; the datasource module then orchestrates the data flow in the repository class and returns the final value.
Here, the domain module asks for the route info. The datasource module decides what to return after retrieving data from different data sources. For the sake of simplicity, in this case we have only one data source: a GPX parser. The route info is extracted from a GPX file. We don't know where, or how. Let's see the code:
Here is the concrete implementation of the RouteInfoRepository interface. The route info data source is injected into this class as a constructor parameter.
Code:
class RouteInfoDataRepository
@Inject constructor(
@Named("GPX_DATA_SOURCE")
private val gpxFileDatasource: RouteInfoDatasource
) : RouteInfoRepository {
override suspend fun getRouteInfo(): RouteInfo {
return gpxFileDatasource.parseGpxFile()
}
}
Here is our one and only route info data source: GpxFileDatasource. It still doesn't know how to parse a GPX file, but it knows where to get the data from, thanks to the GpxFileParser contract.
Code:
class GpxFileDatasource
@Inject constructor(
private val gpxFileParser: GpxFileParser
): RouteInfoDatasource {
override suspend fun parseGpxFile(): RouteInfo {
return gpxFileParser.parseGpxFile()
}
}
What is a GPX file? How is it parsed? Where is the file located? The datasource doesn't care about these details. It only knows that the concrete implementation of GpxFileParser will return the RouteInfo. Here is the contract between the datasource and the concrete implementation:
Code:
interface GpxFileParser {
suspend fun parseGpxFile(): RouteInfo
}
Is it already too confusing, with too many abstractions around? Is it over-engineering? You might be right and choose fewer abstractions when you have a single data source, as in this case. However, in the real world we have multiple data sources. Data is all around us: it may come from a web server, a database, or connected devices such as wearables. The benefit shows when things get more complicated with multiple data sources. Let's think about such a scenario:
The app asks for the route info through a use case class.
The domain module forwards the request to the data source.
The datasource module orchestrates the data in the repository class:
It first asks the web server (remote data source) for the route info. However, the user is offline, so the remote data source is not available.
It then checks what is available locally, starting with whether the route info is in the database.
It is not in the database, but we have a GPX file in the resource folder (I know it doesn't make much sense, but it serves as an example).
The repository class asks the GPX parser to parse the file and return the desired RouteInfo data.
Too complicated? Based on this scenario, the implementation in the repository class would be as simple as this:
Code:
class RouteInfoDataRepository
@Inject constructor(
@Named("GPX_DATA_SOURCE")
private val gpxFileDatasource: RouteInfoDatasource,
@Named("REMOTE_DATA_SOURCE")
private val remoteDatasource: RouteInfoDatasource,
@Named("DATABASE_SOURCE")
private val localDatasource: RouteInfoDatasource
) : RouteInfoRepository {
override suspend fun getRouteInfo(): RouteInfo? {
var routeInfo = remoteDatasource.parseGpxFile()
if (routeInfo == null) {
Timber.d("Route info is not available in remote source, now trying local database")
routeInfo = localDatasource.parseGpxFile()
if (routeInfo == null) {
Timber.d("Route info is not available in local database. Let's hope we have a gpx file in the app resource folder")
routeInfo = gpxFileDatasource.parseGpxFile()
}
}
return routeInfo
}
}
Thanks to Kotlin coroutines we can write these asynchronous operations sequentially.
For full content, you can visit HUAWEI Developer Forum.​

Implementing Real-Time Transcription in an Easy Way

Background
Real-time onscreen subtitles are a must-have function in an ordinary video app. However, developing such a function can prove costly for small and medium-sized developers, and even when implemented, speech recognition is often prone to inaccuracy. Fortunately, there's a better way: HUAWEI ML Kit, which is remarkably easy to integrate and makes real-time transcription an absolute breeze!
Introduction to ML Kit
ML Kit allows your app to leverage Huawei's longstanding machine learning prowess to apply cutting-edge artificial intelligence (AI) across a wide range of contexts. With Huawei's expertise built in, ML Kit provides a broad array of easy-to-use machine learning capabilities, which serve as the building blocks for tomorrow's cutting-edge AI apps. ML Kit capabilities include those related to:
Text (including text recognition, document recognition, and ID card recognition)
Language/Voice (such as real-time/on-device translation, automatic speech recognition, and real-time transcription)
Image (such as image classification, object detection and tracking, and landmark recognition)
Face/Body (such as face detection, skeleton detection, liveness detection, and face verification)
Natural language processing (text embedding)
Custom model (including the on-device inference framework and model development tool)
Real-time transcription is required to implement the function mentioned above. Let's take a look at how this works in practice:
Now let's move on to how to integrate this service.
Integrating Real-Time Transcription
Steps
1. Registering as a Huawei developer on HUAWEI Developers
2. Creating an app
Create an app in AppGallery Connect. For details, see Getting Started with Android.
3. Enabling ML Kit
4. Integrating the HMS Core SDK
Add the AppGallery Connect configuration file by completing the steps below:
Download and copy the agconnect-services.json file to the app directory of your Android Studio project.
Call setApiKey during app initialization.
To learn more, go to Adding the AppGallery Connect Configuration File.
5. Configuring the Maven repository address
Add build dependencies.
Import the real-time transcription SDK.
Code:
implementation 'com.huawei.hms:ml-computer-voice-realtimetranscription:2.2.0.300'
Add the AppGallery Connect plugin configuration.
Method 1: Add the following information under the declaration in the file header:
Code:
apply plugin: 'com.huawei.agconnect'
Method 2: Add the plugin configuration in the plugins block.
Code:
plugins {
id 'com.android.application'
// Add the following configuration:
id 'com.huawei.agconnect'
}
Please refer to Integrating the Real-Time Transcription SDK to learn more.
Setting the cloud authentication information
When using on-cloud services of ML Kit, you can set the API key or access token (recommended) in either of the following ways:
Access token
You can use the following API to initialize the access token when the app is started. The access token does not need to be set again once initialized.
MLApplication.getInstance().setAccessToken("your access token");
API key
You can use the following API to initialize the API key when the app is started. The API key does not need to be set again once initialized.
MLApplication.getInstance().setApiKey("your ApiKey");
For details, see Notes on Using Cloud Authentication Information.
Code Development​
Create and configure a speech recognizer.
Code:
MLSpeechRealTimeTranscriptionConfig config = new MLSpeechRealTimeTranscriptionConfig.Factory()
// Set the language. Currently, this service supports Mandarin Chinese, English, and French.
.setLanguage(MLSpeechRealTimeTranscriptionConstants.LAN_ZH_CN)
// Punctuate the text recognized from the speech.
.enablePunctuation(true)
// Set the sentence offset.
.enableSentenceTimeOffset(true)
// Set the word offset.
.enableWordTimeOffset(true)
// Set the application scenario. MLSpeechRealTimeTranscriptionConstants.SCENES_SHOPPING indicates shopping, which is supported only for Chinese. Under this scenario, recognition for the name of Huawei products has been optimized.
.setScenes(MLSpeechRealTimeTranscriptionConstants.SCENES_SHOPPING)
.create();
MLSpeechRealTimeTranscription mSpeechRecognizer = MLSpeechRealTimeTranscription.getInstance();
Create a speech recognition result listener callback.
Code:
// Use the callback to implement the MLSpeechRealTimeTranscriptionListener API and methods in the API.
protected class SpeechRecognitionListener implements MLSpeechRealTimeTranscriptionListener{
@Override
public void onStartListening() {
// The recorder starts to receive speech.
}
@Override
public void onStartingOfSpeech() {
// The user starts to speak, that is, the speech recognizer detects that the user starts to speak.
}
@Override
public void onVoiceDataReceived(byte[] data, float energy, Bundle bundle) {
// Return the original PCM stream and audio power to the user. This API is not running in the main thread, and the return result is processed in a sub-thread.
}
@Override
public void onRecognizingResults(Bundle partialResults) {
// Receive the recognized text from MLSpeechRealTimeTranscription.
}
@Override
public void onError(int error, String errorMessage) {
// Called when an error occurs in recognition.
}
@Override
public void onState(int state,Bundle params) {
// Notify the app of the status change.
}
}
The recognition result can be obtained from the listener callbacks, including onRecognizingResults. Design the UI content according to the obtained results. For example, display the text transcribed from the input speech.
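For example, the partially recognized text can be read from the Bundle passed to onRecognizingResults and pushed to a TextView. In the sketch below, the Bundle key and the subtitleTextView field are assumptions (take the exact key from MLSpeechRealTimeTranscriptionConstants in your SDK version).
Code:
@Override
public void onRecognizingResults(Bundle partialResults) {
    // The key below is an assumption; check MLSpeechRealTimeTranscriptionConstants for the
    // exact keys of recognizing/recognized results in your SDK version.
    ArrayList<String> results =
            partialResults.getStringArrayList(MLSpeechRealTimeTranscriptionConstants.RESULTS_RECOGNIZING);
    if (results != null && !results.isEmpty()) {
        final String text = results.get(0);
        // Listener callbacks may run off the main thread, so post UI updates to it.
        runOnUiThread(() -> subtitleTextView.setText(text));
    }
}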
Bind the speech recognizer.
Code:
mSpeechRecognizer.setRealTimeTranscriptionListener(new SpeechRecognitionListener());
Call startRecognizing to start speech recognition.
Code:
mSpeechRecognizer.startRecognizing(config);
Release resources after recognition is complete.
Code:
if (mSpeechRecognizer != null) {
mSpeechRecognizer.destroy();
}
(Optional) Obtain the list of supported languages.
Code:
MLSpeechRealTimeTranscription.getInstance()
.getLanguages(new MLSpeechRealTimeTranscription.LanguageCallback() {
@Override
public void onResult(List<String> result) {
Log.i(TAG, "support languages==" + result.toString());
}
@Override
public void onError(int errorCode, String errorMsg) {
Log.e(TAG, "errorCode:" + errorCode + "errorMsg:" + errorMsg);
}
});
We have finished the integration, so let's test it out on a simple screen.
Tap START RECORDING. The text recognized from the input speech will be displayed in the lower portion of the screen.
We've now built a simple audio transcription function.
Eager to build a fancier UI, with stunning animations, and other effects? By all means, take your shot!
For reference:
Real-Time Transcription
Sample Code for ML Kit
To learn more, please visit:
HUAWEI Developers official website
Development Guide
Reddit to join developer discussions
GitHub or Gitee to download the demo and sample code
Stack Overflow to solve integration problems
Follow our official account for the latest HMS Core-related news and updates.
Original Source

Building High-Precision Location Services with Location Kit

HUAWEI Location Kit provides you with the tools to build ultra-precise location services into your apps, by utilizing GNSS, Wi-Fi, base stations, and a range of cutting-edge hybrid positioning technologies. Location Kit-supported solutions give your apps a leg up in a ruthlessly competitive marketplace, making it easier than ever for you to serve a vast, global user base.
Location Kit currently offers three main functions: fused location, geofence, and activity identification. When used in conjunction with the Map SDK, which is supported in 200+ countries and regions and 100+ languages, you'll be able to bolster your apps with premium mapping services that enjoy a truly global reach.
Fused location provides easy-to-use APIs that are capable of obtaining the user's location with meticulous accuracy, and doing so while consuming a minimal amount of power. HW NLP, Huawei's exclusive network location service, makes use of crowdsourced data to achieve heightened accuracy. Such high-precision, cost-effective positioning has enormous implications for a broad array of mobile services, including ride hailing navigation, food delivery, travel, and lifestyle services, providing customers and service providers alike with the high-value, real time information that they need.
To avoid boring you with the technical details, we've provided some specific examples of how positioning systems, geofence, activity identification, map display and route planning services can be applied in the real world.
For instance, you can use Location kit to obtain the user's current location and create a 500-meter geofence radius around it, which can be used to determine the user's activity status when the geofence is triggered, then automatically plan a route based on this activity status (for example, plan a walking route when the activity is identified as walking), and have it shown on the map.
This article addresses the following functions:
Fused location: Incorporates GNSS, Wi-Fi, and base station data via easy-to-use APIs, making it easy for your app to obtain device location information.
Activity identification: Identifies the user's motion status, using the acceleration sensor, network information, and magnetometer, so that you can tailor your app to account for the user's behavior.
Geofence: Allows you to set virtual geographic boundaries via APIs, to send out timely notifications when users enter, exit, or remain within the boundaries.
Map display: Includes the map display, interactive features, map drawing, custom map styles, and a range of other features.
Route planning: Provides HTTP/HTTPS APIs for you to initiate requests using HTTP/HTTPS, and obtain the returned data in JSON format.
Usage scenarios:
Using high-precision positioning technology to obtain real time location and tracking data for delivery or logistics personnel, for optimally efficient services. In the event of accidents or emergencies, the location of personnel could also be obtained with ease, to ensure their quick rescue.
Creating a geofence in the system, which can be used to monitor an important or dangerous area at all times. If someone enters such an area without authorization, the system could send out a proactive alert. This solution can also be linked with onsite video surveillance equipment. When an alert is triggered, the video surveillance camera could pop up to provide continual monitoring, free of any blind spots.
Tracking patients with special needs in hospitals and elderly residents in nursing homes, in order to provide them with the best possible care. Positioning services could be linked with wearable devices, for attentive 24/7 care in real time.
Using the map to directly find destinations, and perform automatic route planning.
I. Advantages of Location Kit and Map Kit
Low-power consumption (Location Kit): Implements geofence using the chipset, for optimized power efficiency
High precision (Location Kit): Optimizes positioning accuracy in urban canyons, correctly identifying the roadside of the user. Sub-meter positioning accuracy in open areas, with RTK (Real-time kinematic) technology support. Personal information, activity identification, and other data are not uploaded to the server while location services are performed. As the data processor, Location Kit only uses data, and does not store it.
Personalized map displays (Map Kit): Offers enriching map elements and a wide range of interactive methods for building your map.
Broad-ranging place searches (Map Kit): Covers 130+ million POIs and 150+ million addresses, and supports place input prompts.
Global coverage: Supports 200+ countries/regions, and 40+ languages.
For more information and development guides, please visit: https://developer.huawei.com/consumer/en/hms/huawei-MapKit
II. Demo App Introduction
In order to illustrate how to integrate Location Kit and Map Kit both easily and efficiently, we've provided a case study here, which shows the simplest coding method for running the demo.
This app creates a geofence on the map based on the user's location when the app is opened. The user can drag the red marker to set a destination. After confirmation, when the user triggers the geofence condition, the app automatically detects their activity status and plans a route accordingly, such as a walking route if the activity status is walking, or a cycling route if the activity status is cycling. You can also implement real-time voice navigation for the planned route.
III. Development Practice
You need to set the priority (which is 100 by default) before requesting locations. To request the precise GPS location, set the priority to 100. To request the network location, set the priority to 102 or 104. If you only need to passively receive locations, set the priority to 105.
Parameters related to activity identification include VEHICLE (100), BIKE (101), FOOT (102), and STILL (103).
Geofence-related parameters include ENTER_GEOFENCE_CONVERSION (1), EXIT_GEOFENCE_CONVERSION (2), and DWELL_GEOFENCE_CONVERSION (4).
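These conversion values are bit flags and can be combined: 1 | 2 | 4 = 7 reports enter, exit, and dwell events, which is why setInitConversions(7) appears in the geofence code later in this post. Below is a minimal sketch of building a single geofence with these flags; it assumes the Geofence.Builder API of the Location Kit SDK (setUniqueId, setRoundArea, setConversions, setValidContinueTime), and the ID, coordinates, and radius are placeholders.
Code:
// Combine the conversion bit flags: enter (1) | exit (2) | dwell (4) = 7.
int conversions = Geofence.ENTER_GEOFENCE_CONVERSION
        | Geofence.EXIT_GEOFENCE_CONVERSION
        | Geofence.DWELL_GEOFENCE_CONVERSION;
Geofence geofence = new Geofence.Builder()
        .setUniqueId("my_geofence_id")
        // Latitude, longitude, and radius (in meters) of the fence; placeholder values.
        .setRoundArea(48.8583, 2.2945, 200)
        .setConversions(conversions)
        // Keep the geofence valid until it is removed explicitly.
        .setValidContinueTime(Geofence.GEOFENCE_NEVER_EXPIRE)
        .build();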
The following describes how to run the demo using source code, helping you understand the implementation details.
Preparations
Preparing Tools
Huawei phones (It is recommended that multiple devices be tested)
Android Studio
2. Registering as a Developer
Register as a Huawei developer.
Create an app in AppGallery Connect.
Create an app in AppGallery Connect by referring to Location Kit development preparations or Map Kit development preparations.
Enable Location Kit and Map Kit for the app on the Manage APIs page.
Add the SHA-256 certificate fingerprint.
Download the agconnect-services.json file and add it to the app directory of the project.
Create an Android demo project.
Learn about the function restrictions.
To use the route planning function of Map Kit, refer to Supported Countries/Regions (Route Planning).
To use other services of Map Kit, refer to Supported Countries/Regions.
Huawei phones:
- Fused location: EMUI 5.0 or later; HMS Core (APK) 4.0.0 or later
- Geofence: EMUI 8.0 or later; HMS Core (APK) 4.0.4 or later
- Activity identification: EMUI 9.1.1 or later; HMS Core (APK) 3.0.2 or later
Non-Huawei Android phones:
- Fused location: Android 8.0 or later (API level 26 or higher); HMS Core (APK) 4.0.1 or later
- Geofence: Android 5.0 or later (API level 21 or higher); HMS Core (APK) 4.0.4 or later
- Activity identification: Not supported
Running the Demo App
Install the app on the test device after successfully debugging the project in Android Studio.
Replace the project package name and JSON file with those of your own.
Tap the related button in the demo app to create a geofence with a radius of 200, centered on the current location automatically pinpointed by the demo app.
Drag the mark point on the map to select a destination.
View the route that is automatically planned based on the current activity status when the geofence is triggered.
The following figure shows the demo effect:
Key Steps
Add the Huawei Maven repository to the project-level build.gradle file.
Add the following Maven repository address to the project-level build.gradle file of your Android Studio project:
Code:
buildscript {
repositories {
maven { url 'http://developer.huawei.com/repo/'}
}
dependencies {
...
// Add the AppGallery Connect plugin configuration.
classpath 'com.huawei.agconnect:agcp:1.4.2.300'
}
}

allprojects {
repositories {
maven { url 'http://developer.huawei.com/repo/'}
}
}
Add dependencies on the SDKs in the app-level build.gradle file.
Code:
dependencies {
    implementation 'com.huawei.hms:location:5.1.0.300'
    implementation 'com.huawei.hms:maps:5.2.0.302'
}
3. Add the following configuration to the next line under apply plugin: 'com.android.application' in the file header:
apply plugin: 'com.huawei.agconnect'
Note:
You must configure apply plugin: 'com.huawei.agconnect' under apply plugin: 'com.android.application'.
The minimum Android API level (minSdkVersion) required for the HMS Core Map SDK is 19.
4. Declare system permissions in the AndroidManifest.xml file.
Location Kit uses GNSS, Wi-Fi, and base station data for fused location, enabling your app to quickly and accurately obtain users' location information. Therefore, Location Kit requires permissions to access the Internet and to obtain the fine and coarse location. If your app needs to continuously obtain location information while running in the background, you also need to declare the ACCESS_BACKGROUND_LOCATION permission in the AndroidManifest.xml file:
Code:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
<uses-permission android:name="android.permission.WAKE_LOCK" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
<uses-permission android:name="com.huawei.hms.permission.ACTIVITY_RECOGNITION" />
<uses-permission android:name="android.permission.ACTIVITY_RECOGNITION" />
Note: Because ACCESS_FINE_LOCATION, WRITE_EXTERNAL_STORAGE, READ_EXTERNAL_STORAGE, and ACTIVITY_RECOGNITION are dangerous system permissions, you need to apply for them dynamically. If the permissions are not granted, Location Kit will refuse to provide services for your app.
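Because these are dangerous permissions, request them at runtime before starting location updates. A minimal sketch using the standard ActivityCompat API is shown below; the request code value is arbitrary, and the user's choice should be handled in onRequestPermissionsResult.
Code:
String[] locationPermissions = {
        android.Manifest.permission.ACCESS_FINE_LOCATION,
        android.Manifest.permission.ACCESS_COARSE_LOCATION
};
if (ActivityCompat.checkSelfPermission(this, android.Manifest.permission.ACCESS_FINE_LOCATION)
        != PackageManager.PERMISSION_GRANTED) {
    // The request code (100) is arbitrary; handle the result in onRequestPermissionsResult.
    ActivityCompat.requestPermissions(this, locationPermissions, 100);
}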
Key Code
I. Map Display
Currently, the Map SDK supports two map containers: SupportMapFragment and MapView. This document uses the SupportMapFragment container.
Add a Fragment object in the layout file (for example: activity_main.xml), and set map attributes in the file.
Code:
<fragment
android:id="@+id/mapfragment_routeplanningdemo"
android:name="com.huawei.hms.maps.SupportMapFragment"
android:layout_width="match_parent"
android:layout_height="match_parent" />
To use a map in your app, implement the OnMapReadyCallback API.
public class RoutePlanningActivity extends AppCompatActivity implements OnMapReadyCallback
Load the SupportMapFragment in the onCreate method and call getMapAsync to register the callback.
Fragment fragment = getSupportFragmentManager().findFragmentById(R.id.mapfragment_routeplanningdemo);
if (fragment instanceof SupportMapFragment) {
SupportMapFragment mSupportMapFragment = (SupportMapFragment) fragment;
mSupportMapFragment.getMapAsync(this);
}
Call the onMapReady callback to obtain the HuaweiMap object.
Code:
@Override
public void onMapReady(HuaweiMap huaweiMap) {
hMap = huaweiMap;
hMap.setMyLocationEnabled(true);
hMap.getUiSettings().setMyLocationButtonEnabled(true);
}
II. Function Implementation
Check the permissions.
Code:
if (Build.VERSION.SDK_INT <= Build.VERSION_CODES.P) {
if (ActivityCompat.checkSelfPermission(context,
"com.huawei.hms.permission.ACTIVITY_RECOGNITION") != PackageManager.PERMISSION_GRANTED) {
String[] permissions = {"com.huawei.hms.permission.ACTIVITY_RECOGNITION"};
ActivityCompat.requestPermissions((Activity) context, permissions, 1);
Log.i(TAG, "requestActivityTransitionButtonHandler: apply permission");
}
} else {
if (ActivityCompat.checkSelfPermission(context,
"android.permission.ACTIVITY_RECOGNITION") != PackageManager.PERMISSION_GRANTED) {
String[] permissions = {"android.permission.ACTIVITY_RECOGNITION"};
ActivityCompat.requestPermissions((Activity) context, permissions, 2);
Log.i(TAG, "requestActivityTransitionButtonHandler: apply permission");
}
}
Check whether the location permissions have been granted. If not, the location cannot be obtained.
Code:
settingsClient.checkLocationSettings(locationSettingsRequest)
.addOnSuccessListener(locationSettingsResponse -> {
fusedLocationProviderClient
.requestLocationUpdates(mLocationRequest, mLocationCallback, Looper.getMainLooper())
.addOnSuccessListener(aVoid -> {
//Processing when the API call is successful.
});
})
.addOnFailureListener(e -> {});
if (null == mLocationCallbacks) {
mLocationCallbacks = new LocationCallback() {
@Override
public void onLocationResult(LocationResult locationResult) {
if (locationResult != null) {
List<HWLocation> locations = locationResult.getHWLocationList();
if (!locations.isEmpty()) {
for (HWLocation location : locations) {
hMap.moveCamera(CameraUpdateFactory.newLatLngZoom(new LatLng(location.getLatitude(), location.getLongitude()), 14));
latLngOrigin = new LatLng(location.getLatitude(), location.getLongitude());
if (null != mMarkerOrigin) {
mMarkerOrigin.remove();
}
MarkerOptions options = new MarkerOptions()
.position(latLngOrigin)
.title("Hello Huawei Map")
.snippet("This is a snippet!");
mMarkerOrigin = hMap.addMarker(options);
removeLocationUpdatesWith();
}
}
}
}
@Override
public void onLocationAvailability(LocationAvailability locationAvailability) {
if (locationAvailability != null) {
boolean flag = locationAvailability.isLocationAvailable();
Log.i(TAG, "onLocationAvailability isLocationAvailable:" + flag);
}
}
};
}
III. Geofence and Ground Overlay Creation
Create a geofence based on the current location and add a round ground overlay on the map.
Code:
GeofenceRequest.Builder geofenceRequest = new GeofenceRequest.Builder();
geofenceRequest.createGeofenceList(GeoFenceData.returnList());
geofenceRequest.setInitConversions(7);
try {
geofenceService.createGeofenceList(geofenceRequest.build(), pendingIntent)
.addOnCompleteListener(new OnCompleteListener<Void>() {
@Override
public void onComplete(Task<Void> task) {
if (task.isSuccessful()) {
Log.i(TAG, "add geofence success!");
if (null == hMap) {
return; }
if (null != mCircle) {
mCircle.remove();
mCircle = null;
}
mCircle = hMap.addCircle(new CircleOptions()
.center(latLngOrigin)
.radius(500)
.strokeWidth(1)
.fillColor(Color.TRANSPARENT));
} else {Log.w(TAG, "add geofence failed : " + task.getException().getMessage());}
}
});
} catch (Exception e) {
Log.i(TAG, "add geofence error:" + e.getMessage());
}
// Geofence service
Code:
<receiver
android:name=".GeoFenceBroadcastReceiver"
android:exported="true">
<intent-filter>
<action android:name=".GeoFenceBroadcastReceiver.ACTION_PROCESS_LOCATION" />
</intent-filter>
</receiver>
if (intent != null) {
final String action = intent.getAction();
if (ACTION_PROCESS_LOCATION.equals(action)) {
GeofenceData geofenceData = GeofenceData.getDataFromIntent(intent);
if (geofenceData != null && isListenGeofence) {
int conversion = geofenceData.getConversion();
MainActivity.setGeofenceData(conversion);
}
}
}
Mark the selected point on the map to obtain the destination information, check the current activity status, and plan routes based on the detected activity status.
Code:
hMap.setOnMapClickListener(latLng -> {
latLngDestination = new LatLng(latLng.latitude, latLng.longitude);
if (null != mMarkerDestination) {
mMarkerDestination.remove();
}
MarkerOptions options = new MarkerOptions()
.position(latLngDestination)
.title("Hello Huawei Map");
mMarkerDestination = hMap.addMarker(options);
if (identification.getText().equals("To exit the fence,Your activity is about to be detected.")) {
requestActivityUpdates(5000);
}
});
// Activity identification API
activityIdentificationService.createActivityIdentificationUpdates(detectionIntervalMillis, pendingIntent)
.addOnSuccessListener(new OnSuccessListener<Void>() {
@Override
public void onSuccess(Void aVoid) {
Log.i(TAG, "createActivityIdentificationUpdates onSuccess");
}
})
.addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
Log.e(TAG, "createActivityIdentificationUpdates onFailure:" + e.getMessage());
}
});
// URL of the route planning API (cycling route is used as an example): https://mapapi.cloud.huawei.com/mapApi/v1/routeService/bicycling?key=API KEY
NetworkRequestManager.getBicyclingRoutePlanningResult(latLngOrigin, latLngDestination,
new NetworkRequestManager.OnNetworkListener() {
@Override
public void requestSuccess(String result) {
generateRoute(result);
}
@Override
public void requestFail(String errorMsg) {
Message msg = Message.obtain();
Bundle bundle = new Bundle();
bundle.putString("errorMsg", errorMsg);
msg.what = 1;
msg.setData(bundle);
mHandler.sendMessage(msg);
}
});
Note:
The route planning function provides a set of HTTPS-based APIs used to plan routes for walking, cycling, and driving and calculate route distances. The APIs return route data in JSON format and provide the route planning capabilities.
The route planning function can plan walking, cycling, and driving routes.
You can try to plan a route from one point to another point and then draw the route on the map, achieving the navigation effects.
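For illustration, here is a rough sketch of what a handler like the generateRoute() call above might do with the returned JSON: parse the path points and draw them on the map with a polyline. The JSON field names (routes, paths, steps, polyline, lat, lng) are assumptions about the route planning response format, so adjust them to the actual payload.
Code:
private void generateRoute(String result) {
    try {
        // The field names below are assumptions about the route planning JSON; adjust as needed.
        JSONObject response = new JSONObject(result);
        JSONArray paths = response.getJSONArray("routes").getJSONObject(0).getJSONArray("paths");
        JSONArray steps = paths.getJSONObject(0).getJSONArray("steps");
        List<LatLng> points = new ArrayList<>();
        for (int i = 0; i < steps.length(); i++) {
            JSONArray polyline = steps.getJSONObject(i).getJSONArray("polyline");
            for (int j = 0; j < polyline.length(); j++) {
                JSONObject point = polyline.getJSONObject(j);
                points.add(new LatLng(point.getDouble("lat"), point.getDouble("lng")));
            }
        }
        // Draw the planned route on the map (switch to the main thread if called from a worker thread).
        hMap.addPolyline(new PolylineOptions().addAll(points).width(8f).color(Color.BLUE));
    } catch (JSONException e) {
        Log.e(TAG, "Failed to parse the route planning response: " + e.getMessage());
    }
}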
Related Parameters
In indoor environments, the navigation satellite signals are usually weak. Therefore, HMS Core (APK) will use the network location mode, which is relatively slow compared with the GNSS location. It is recommended that the test be performed outdoors.
In Android 9.0 or later, you are advised to test the geofence outdoors. In versions earlier than Android 9.0, you can test the geofence indoors.
Map Kit is unavailable in the Chinese mainland. Therefore, the Android SDK, JavaScript API, Static Map API, and Directions API are unavailable in the Chinese mainland. For details, please refer to Supported Countries/Regions.
In the Map SDK for Android 5.0.0.300 and later versions, you must set the API key before initializing a map. Otherwise, no map data will be displayed.
Currently, the driving route planning is unavailable in some countries and regions outside China. For details about the supported countries and regions, please refer to the Huawei official website.
Before building the APK, configure the obfuscation configuration file to prevent the HMS Core SDK from being obfuscated.
Open the obfuscation configuration file proguard-rules.pro in the app's root directory of your project and add configurations to exclude the HMS Core SDK from obfuscation.
If you are using AndResGuard, add its trustlist to the obfuscation configuration file.
For details, please visit the following link: https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides/android-sdk-config-obfuscation-scripts-0000001061882229
To learn more, visit the following links:
Documentation on the HUAWEI Developers website:
https://developer.huawei.com/consumer/en/hms/huawei-locationkit
https://developer.huawei.com/consumer/en/hms/huawei-MapKit
To download the demo and sample code, please visit GitHub.
To solve integration problems, please go to Stack Overflow at the following link:
https://stackoverflow.com/questions/tagged/huawei-mobile-services?tab=Newest
To learn more, please visit:
HUAWEI Developers official website
Development Guide
Reddit to join developer discussions
GitHub or Gitee to download the demo and sample code
Stack Overflow to solve integration problems
Follow our official account for the latest HMS Core-related news and updates.
Original Source
Can we get the location offline?

Implement Virtual Try-on With Hand Skeleton Tracking

You have likely seen user reviews complaining about online shopping experiences, in particular the inability to try on clothing items before purchase. Augmented reality (AR) enabled virtual try-on has resolved this longstanding issue, making it possible for users to try on items before purchase.
Virtual try-on allows the user to try on clothing, or accessories like watches, glasses, and makeup, virtually on their phone. Apps that offer AR try-on features empower their users to make informed purchases, based on which items look best and fit best, and therefore considerably improve the online shopping experience for users. For merchants, AR try-on can both boost conversion rates and reduce return rates, as customers are more likely to be satisfied with what they have purchased after the try-on. That is why so many online stores and apps are now providing virtual try-on features of their own.
When developing an online shopping app, AR is truly a technology that you can't miss. For example, if you are building an app or platform for watch sellers, you will want to provide a virtual watch try-on feature, which is dependent on real-time hand recognition and tracking. This can be done with remarkable ease in HMS Core AR Engine, which provides a wide range of basic AR capabilities, including hand skeleton tracking, human body tracking, and face tracking. Once you have integrated this tool kit, your users will be able to try on different watches virtually within your app before purchases. Better yet, the development process is highly streamlined. During the virtual try-on, the user's hand skeleton is recognized in real time by the engine, with a high degree of precision, and virtual objects are superimposed on the hand. The user can even choose to place an item on their fingertip! Next I will show you how you can implement this marvelous capability.
Demo​
Implementation
AR Engine provides a hand skeleton tracking capability, which identifies and tracks the positions and postures of up to 21 hand skeleton points, forming a hand skeleton model.
Thanks to the gesture recognition capability, the engine is able to provide AR apps with fun, interactive features. For example, your app will allow users to place virtual objects in specific positions, such as on the fingertips or in the palm, and enable the virtual hand to perform intricate movements.
Now I will show you how to develop an app that implements AR watch virtual try-on based on this engine.
Integration Procedure
Requirements on the Development Environment
JDK: 1.8.211 or later
Android Studio: 3.0 or later
minSdkVersion: 26 or later
targetSdkVersion: 29 (recommended)
compileSdkVersion: 29 (recommended)
Gradle version: 6.1.1 or later (recommended)
Make sure that you have downloaded the AR Engine APK from AppGallery and installed it on the device.
If you need to use multiple HMS Core kits, use the latest versions required for these kits.
Preparations
1. Before getting started, you will need to register as a Huawei developer and complete identity verification on the HUAWEI Developers website. You can click here to find out the detailed registration and identity verification procedure.
2. Before getting started, integrate the AR Engine SDK via the Maven repository into your development environment.
3. The procedure for configuring the Maven repository address in Android Studio varies for Gradle plugin earlier than 7.0, Gradle plugin 7.0, and Gradle plugin 7.1 or later. You need to configure it according to the specific Gradle plugin version.
4. Take Gradle plugin 7.0 as an example:
Open the project-level build.gradle file in your Android Studio project and configure the Maven repository address.
Go to buildscript > repositories and configure the Maven repository address for the SDK.
Code:
buildscript {
repositories {
google()
jcenter()
maven {url "https://developer.huawei.com/repo/" }
}
}
Open the project-level settings.gradle file and configure the Maven repository address for the HMS Core SDK.
Code:
dependencyResolutionManagement {
    repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS)
    repositories {
        google()
        jcenter()
        maven { url "https://developer.huawei.com/repo/" }
    }
}
5. Add the following build dependency in the dependencies block.
Code:
dependencies {
implementation 'com.huawei.hms:arenginesdk:{version}'
}
App Development
1. Check whether AR Engine has been installed on the current device. If so, your app will be able to run properly on the device. If not, you need to prompt the user to install AR Engine, for example, by redirecting the user to AppGallery and prompting them to install it. The sample code is as follows:
Code:
boolean isInstallArEngineApk =AREnginesApk.isAREngineApkReady(this);
if (!isInstallArEngineApk) {
// ConnectAppMarketActivity.class is the activity for redirecting users to AppGallery.
startActivity(new Intent(this, com.huawei.arengine.demos.common.ConnectAppMarketActivity.class));
isRemindInstall = true;
}
2. Initialize an AR scene. AR Engine supports five scenes, including motion tracking (ARWorldTrackingConfig) scene, face tracking (ARFaceTrackingConfig) scene, hand recognition (ARHandTrackingConfig) scene, human body tracking (ARBodyTrackingConfig) scene, and image recognition (ARImageTrackingConfig) scene.
Call ARHandTrackingConfig to initialize the hand recognition scene.
Code:
mArSession = new ARSession(context);
ARHandTrackingConfig config = new ARHandTrackingConfig(mArSession);
3. After obtaining an ARHandTrackingConfig object, you can set the front or rear camera. The sample code is as follows:
Code:
config.setCameraLensFacing(ARConfigBase.CameraLensFacing.FRONT);
4. After obtaining the config object, configure it in the ARSession, and start hand recognition.
Code:
mArSession.configure(config);
mArSession.resume();
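The AR session should also follow the Activity lifecycle so that the camera and compute resources are released when the app is not in the foreground. A minimal sketch, assuming the standard ARSession lifecycle methods (resume, pause, stop):
Code:
@Override
protected void onResume() {
    super.onResume();
    if (mArSession != null) {
        // Resume tracking when the activity returns to the foreground.
        mArSession.resume();
    }
}

@Override
protected void onPause() {
    super.onPause();
    if (mArSession != null) {
        // Pause tracking while the activity is in the background.
        mArSession.pause();
    }
}

@Override
protected void onDestroy() {
    super.onDestroy();
    if (mArSession != null) {
        // Release the session's resources when the activity is destroyed.
        mArSession.stop();
        mArSession = null;
    }
}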
5. Initialize the HandSkeletonLineDisplay class, which draws the hand skeleton based on the coordinates of the hand skeleton points.
Code:
class HandSkeletonLineDisplay implements HandRelatedDisplay {
    // Methods used in this class are as follows:
    // Initialization method.
    public void init() {
    }

    // Method for drawing the hand skeleton. When calling this method, you need to pass the ARHand objects to obtain data.
    public void onDrawFrame(Collection<ARHand> hands) {
        for (ARHand hand : hands) {
            // Call the getHandskeletonArray() method to obtain the coordinates of hand skeleton points.
            float[] handSkeletons = hand.getHandskeletonArray();
            // Pass handSkeletons to the method for updating data in real time.
            updateHandSkeletonLinesData(handSkeletons);
        }
    }

    // Method for updating the hand skeleton point connection data. Call this method when any frame is updated.
    public void updateHandSkeletonLinesData(float[] handSkeletons) {
        // Create and initialize the data stored in the buffer object.
        GLES20.glBufferData(…, mVboSize, …);
        // Update the data in the buffer object.
        GLES20.glBufferSubData(…, mPointsNum, …);
    }
}
6. Initialize the HandRenderManager class, which is used to render the data obtained from AR Engine.
Code:
public class HandRenderManager implements GLSurfaceView.Renderer {
    private ARSession mSession;

    // Set the ARSession object to obtain the latest data in the onDrawFrame method.
    public void setArSession(ARSession arSession) {
        mSession = arSession;
    }
}
7. Initialize the onDrawFrame() method in the HandRenderManager class.
Code:
public void onDrawFrame() {
// In this method, call methods such as setCameraTextureName() and update() to update the calculation result of ArEngine.
// Call this API when the latest data is obtained.
mSession.setCameraTextureName();
ARFrame arFrame = mSession.update();
ARCamera arCamera = arFrame.getCamera();
// Obtain the tracking result returned during hand tracking.
Collection<ARHand> hands = mSession.getAllTrackables(ARHand.class);
// Pass the obtained hands object in a loop to the method for updating gesture recognition information cyclically for processing.
for (ARHand hand : hands) {
updateMessageData(hand);
}
}
8. On the HandActivity page, set a render for SurfaceView.
Code:
mSurfaceView.setRenderer(mHandRenderManager);
// Set the rendering mode.
mSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_CONTINUOUSLY);
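The GLSurfaceView itself should also be paused and resumed together with the Activity. These calls belong in the same onResume() and onPause() overrides as the ARSession lifecycle calls shown earlier; they use the standard GLSurfaceView API.
Code:
// In the activity's onResume(), alongside mArSession.resume():
mSurfaceView.onResume();

// In the activity's onPause(), alongside mArSession.pause():
mSurfaceView.onPause();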
Conclusion
Augmented reality creates immersive digital experiences that bridge the digital and real worlds, making human-machine interactions more seamless than ever. Fields like gaming, online shopping, tourism, medical training, and interior decoration have seen surging demand for AR apps and devices. In particular, AR is expected to dominate the future of online shopping, as it offers immersive experiences based on real-time interactions with virtual products, which is what younger generations are seeking. This considerably improves the user's shopping experience and, as a result, helps merchants improve conversion rates and reduce return rates. If you are developing an online shopping app, virtual try-on is a must-have feature, and AR Engine can give you everything you need. Try the engine to experience the smart, interactive features it can bring to users, and how it can streamline your development.
