ML Kit: Document Recognition Development Procedure - Huawei Developers

HUAWEI ML Kit allows your apps to easily leverage Huawei's long-term proven expertise in machine learning to support diverse artificial intelligence (AI) applications across a wide range of industries. Drawing on Huawei's accumulated technology, HUAWEI ML Kit provides diverse, market-leading machine learning capabilities that are easy to use, helping you develop all kinds of AI apps.
The document recognition service can recognize text with paragraph formats in document images. Because document recognition involves a large amount of computing, the SDK calls the on-cloud API to recognize documents in asynchronous mode only. With the SDK, you can quickly build various document recognition apps.
The document recognition service can extract text from document images to convert paper documents into electronic copies, greatly improving the information input efficiency and reducing labor costs. For example, when medical and legal files need to be stored electronically, you can use this service to generate well-structured documents based on the text information in the file images. In this way, you can quickly record and archive the files.
1. Create a document analyzer. You are advised to create the analyzer with the custom parameter class MLDocumentSetting, specifying the languages to be recognized and other settings; this yields higher document recognition accuracy.
Code:
// Method 1: Use customized parameter settings.
MLDocumentSetting setting = new MLDocumentSetting.Factory()
        // Specify the languages that can be recognized, which should comply with ISO 639-1.
        .setLanguageList(new ArrayList<String>() {{ this.add("zh"); this.add("en"); }})
        // Set the format of the returned text border box.
        // MLRemoteTextSetting.NGON: Return the coordinates of the four vertices of the quadrilateral.
        // MLRemoteTextSetting.ARC: Return the vertices of a polygon border in an arc. The coordinates of up to 72 vertices can be returned.
        .setBorderType(MLRemoteTextSetting.ARC)
        .create();
MLDocumentAnalyzer analyzer = MLAnalyzerFactory.getInstance().getRemoteDocumentAnalyzer(setting);

// Method 2: Use the default parameter settings to automatically detect languages for text recognition.
// The format of the returned text box is MLRemoteTextSetting.NGON.
MLDocumentAnalyzer analyzer = MLAnalyzerFactory.getInstance().getRemoteDocumentAnalyzer();
2. Create an MLFrame using android.graphics.Bitmap. JPG, JPEG, PNG, and BMP images are supported.
Code:
// Create an MLFrame object using the bitmap, which is the image data in bitmap format.
MLFrame frame = MLFrame.fromBitmap(bitmap);
3. Pass the MLFrame object to the asyncAnalyseFrame method for document recognition.
Code:
Task<MLDocument> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<MLDocument>() {
    @Override
    public void onSuccess(MLDocument document) {
        // Recognition success.
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Recognition failure.
    }
});
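In onSuccess, the recognized content can be read from the returned MLDocument object. The snippet below is a minimal sketch of walking the result; getStringValue and getBlocks are the accessors assumed here, so verify the exact names against the MLDocument API reference.
Code:
// A minimal sketch: log the full text and each block's text.
// getStringValue/getBlocks are assumed accessors; verify against the MLDocument reference.
String fullText = document.getStringValue();
Log.d(TAG, "Full text: " + fullText);
for (MLDocument.Block block : document.getBlocks()) {
    Log.d(TAG, "Block: " + block.getStringValue());
}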
4. After the recognition is complete, stop the analyzer to release recognition resources.
Code:
if (analyzer != null) {
    try {
        analyzer.stop();
    } catch (IOException e) {
        // Exception handling.
    }
}

Related

Implementing Real-Time Transcription in an Easy Way

Background
The real-time onscreen subtitle is a must-have function in an ordinary video app. However, developing such a function can prove costly for small- and medium-sized developers. And even when implemented, speech recognition is often prone to inaccuracy. Fortunately, there's a better way — HUAWEI ML Kit, which is remarkably easy to integrate, and makes real-time transcription an absolute breeze!
Introduction to ML Kit
ML Kit allows your app to leverage Huawei's longstanding machine learning prowess to apply cutting-edge artificial intelligence (AI) across a wide range of contexts. With Huawei's expertise built in, ML Kit is able to provide a broad array of easy-to-use machine learning capabilities, which serve as the building blocks for tomorrow's cutting-edge AI apps. ML Kit capabilities include those related to:
Text (including text recognition, document recognition, and ID card recognition)
Language/Voice (such as real-time/on-device translation, automatic speech recognition, and real-time transcription)
Image (such as image classification, object detection and tracking, and landmark recognition)
Face/Body (such as face detection, skeleton detection, liveness detection, and face verification)
Natural language processing (text embedding)
Custom model (including the on-device inference framework and model development tool)
Real-time transcription is required to implement the function mentioned above. Let's take a look at how this works in practice:
Now let's move on to how to integrate this service.
Integrating Real-Time Transcription
Steps
1. Register as a Huawei developer on HUAWEI Developers.
2. Create an app in AppGallery Connect. For details, see Getting Started with Android.
3. Enable ML Kit.
4. Integrate the HMS Core SDK.
Add the AppGallery Connect configuration file by completing the steps below:
Download and copy the agconnect-services.json file to the app directory of your Android Studio project.
Call setApiKey during app initialization.
To learn more, go to Adding the AppGallery Connect Configuration File.
5. Configure the Maven repository address.
Add build dependencies.
Import the real-time transcription SDK.
Code:
implementation 'com.huawei.hms:ml-computer-voice-realtimetranscription:2.2.0.300'
Add the AppGallery Connect plugin configuration.
Method 1: Add the following information under the declaration in the file header:
Code:
apply plugin: 'com.huawei.agconnect'
Method 2: Add the plugin configuration in the plugins block.
Code:
plugins {
    id 'com.android.application'
    // Add the following configuration:
    id 'com.huawei.agconnect'
}
Please refer to Integrating the Real-Time Transcription SDK to learn more.
Setting the cloud authentication information
When using on-cloud services of ML Kit, you can set the API key or access token (recommended) in either of the following ways:
Access token
You can use the following API to initialize the access token when the app is started. The access token does not need to be set again once initialized.
Code:
MLApplication.getInstance().setAccessToken("your access token");
API key
You can use the following API to initialize the API key when the app is started. The API key does not need to be set again once initialized.
Code:
MLApplication.getInstance().setApiKey("your ApiKey");
For details, see Notes on Using Cloud Authentication Information.
Code Development
Create and configure a speech recognizer.
Code:
MLSpeechRealTimeTranscriptionConfig config = new MLSpeechRealTimeTranscriptionConfig.Factory()
        // Set the language. Currently, this service supports Mandarin Chinese, English, and French.
        .setLanguage(MLSpeechRealTimeTranscriptionConstants.LAN_ZH_CN)
        // Punctuate the text recognized from the speech.
        .enablePunctuation(true)
        // Set the sentence offset.
        .enableSentenceTimeOffset(true)
        // Set the word offset.
        .enableWordTimeOffset(true)
        // Set the application scenario. MLSpeechRealTimeTranscriptionConstants.SCENES_SHOPPING indicates shopping,
        // which is supported only for Chinese. Under this scenario, recognition of Huawei product names is optimized.
        .setScenes(MLSpeechRealTimeTranscriptionConstants.SCENES_SHOPPING)
        .create();
MLSpeechRealTimeTranscription mSpeechRecognizer = MLSpeechRealTimeTranscription.getInstance();
MLSpeechRealTimeTranscription mSpeechRecognizer = MLSpeechRealTimeTranscription.getInstance();
Create a speech recognition result listener callback.
Code:
// Use the callback to implement the MLSpeechRealTimeTranscriptionListener API and the methods in the API.
protected class SpeechRecognitionListener implements MLSpeechRealTimeTranscriptionListener {
    @Override
    public void onStartListening() {
        // The recorder starts to receive speech.
    }

    @Override
    public void onStartingOfSpeech() {
        // The user starts to speak, that is, the speech recognizer detects that the user starts to speak.
    }

    @Override
    public void onVoiceDataReceived(byte[] data, float energy, Bundle bundle) {
        // Return the original PCM stream and audio power to the user. This API is not running in the main thread, and the return result is processed in a sub-thread.
    }

    @Override
    public void onRecognizingResults(Bundle partialResults) {
        // Receive the recognized text from MLSpeechRealTimeTranscription.
    }

    @Override
    public void onError(int error, String errorMessage) {
        // Called when an error occurs during recognition.
    }

    @Override
    public void onState(int state, Bundle params) {
        // Notify the app of the status change.
    }
}
The recognition result can be obtained from the listener callbacks, including onRecognizingResults. Design the UI content according to the obtained results. For example, display the text transcribed from the input speech.
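For illustration, here is a hedged sketch of pulling the text out of onRecognizingResults and showing it on screen. The bundle key name and the transcriptView TextView are assumptions (the exact result keys are defined in MLSpeechRealTimeTranscriptionConstants), so verify both against the API reference.
Code:
@Override
public void onRecognizingResults(Bundle partialResults) {
    // RESULTS_RECOGNIZING is an assumed key; check MLSpeechRealTimeTranscriptionConstants for the exact result keys.
    final String text = partialResults.getString(MLSpeechRealTimeTranscriptionConstants.RESULTS_RECOGNIZING);
    // Callbacks may arrive off the main thread, so post UI updates to it.
    runOnUiThread(() -> transcriptView.setText(text)); // transcriptView: a hypothetical TextView in your layout.
}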
Bind the speech recognizer.
Code:
mSpeechRecognizer.setRealTimeTranscriptionListener(new SpeechRecognitionListener());
Call startRecognizing to start speech recognition.
Code:
mSpeechRecognizer.startRecognizing(config);
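Note that real-time transcription records speech from the microphone, so the app must hold the RECORD_AUDIO runtime permission before recognition starts. A minimal check could look like this:
Code:
// Recording requires the RECORD_AUDIO runtime permission on Android 6.0 and later.
if (ActivityCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO)
        != PackageManager.PERMISSION_GRANTED) {
    ActivityCompat.requestPermissions(this,
            new String[]{Manifest.permission.RECORD_AUDIO}, 100);
}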
Release resources after recognition is complete.
Code:
if (mSpeechRecognizer != null) {
    mSpeechRecognizer.destroy();
}
(Optional) Obtain the list of supported languages.
Code:
MLSpeechRealTimeTranscription.getInstance()
        .getLanguages(new MLSpeechRealTimeTranscription.LanguageCallback() {
            @Override
            public void onResult(List<String> result) {
                Log.i(TAG, "support languages==" + result.toString());
            }

            @Override
            public void onError(int errorCode, String errorMsg) {
                Log.e(TAG, "errorCode:" + errorCode + " errorMsg:" + errorMsg);
            }
        });
That completes the integration, so let's test it out on a simple screen.
Tap START RECORDING. The text recognized from the input speech will be displayed in the lower portion of the screen.
We've now built a simple audio transcription function.
Eager to build a fancier UI, with stunning animations, and other effects? By all means, take your shot!
For reference:
Real-Time Transcription
Sample Code for ML Kit
To learn more, please visit:
HUAWEI Developers official website
Development Guide
Reddit to join developer discussions
GitHub or Gitee to download the demo and sample code
Stack Overflow to solve integration problems
Follow our official account for the latest HMS Core-related news and updates.
Original Source

Building High-Precision Location Services with Location Kit

HUAWEI Location Kit provides you with the tools to build ultra-precise location services into your apps, by utilizing GNSS, Wi-Fi, base stations, and a range of cutting-edge hybrid positioning technologies. Location Kit-supported solutions give your apps a leg up in a ruthlessly competitive marketplace, making it easier than ever for you to serve a vast, global user base.
Location Kit currently offers three main functions: fused location, geofence, and activity identification. When used in conjunction with the Map SDK, which is supported in 200+ countries and regions and 100+ languages, you'll be able to bolster your apps with premium mapping services that enjoy a truly global reach.
Fused location provides easy-to-use APIs that are capable of obtaining the user's location with meticulous accuracy, while consuming a minimal amount of power. HW NLP, Huawei's exclusive network location service, makes use of crowdsourced data to achieve heightened accuracy. Such high-precision, cost-effective positioning has enormous implications for a broad array of mobile services, including ride hailing, navigation, food delivery, travel, and lifestyle services, providing customers and service providers alike with the high-value, real-time information that they need.
To avoid boring you with the technical details, we've provided some specific examples of how positioning systems, geofence, activity identification, map display and route planning services can be applied in the real world.
For instance, you can use Location Kit to obtain the user's current location and create a geofence with a 500-meter radius around it. When the geofence is triggered, the app can determine the user's activity status, automatically plan a route based on that status (for example, a walking route when the activity is identified as walking), and show the route on the map.
This article addresses the following functions:
Fused location: Incorporates GNSS, Wi-Fi, and base station data via easy-to-use APIs, making it easy for your app to obtain device location information.
Activity identification: Identifies the user's motion status, using the acceleration sensor, network information, and magnetometer, so that you can tailor your app to account for the user's behavior.
Geofence: Allows you to set virtual geographic boundaries via APIs, to send out timely notifications when users enter, exit, or remain within the boundaries.
Map display: Includes the map display, interactive features, map drawing, custom map styles, and a range of other features.
Route planning: Provides HTTP/HTTPS APIs for you to initiate requests using HTTP/HTTPS, and obtain the returned data in JSON format.
Usage scenarios:
Using high-precision positioning technology to obtain real time location and tracking data for delivery or logistics personnel, for optimally efficient services. In the event of accidents or emergencies, the location of personnel could also be obtained with ease, to ensure their quick rescue.
Creating a geofence in the system, which can be used to monitor an important or dangerous area at all times. If someone enters such an area without authorization, the system could send out a proactive alert. This solution can also be linked with onsite video surveillance equipment. When an alert is triggered, the video surveillance camera could pop up to provide continual monitoring, free of any blind spots.
Tracking patients with special needs in hospitals and elderly residents in nursing homes, in order to provide them with the best possible care. Positioning services could be linked with wearable devices, for attentive 24/7 care in real time.
Using the map to directly find destinations, and perform automatic route planning.
I. Advantages of Location Kit and Map Kit
Low-power consumption (Location Kit): Implements geofence using the chipset, for optimized power efficiency
High precision (Location Kit): Optimizes positioning accuracy in urban canyons, correctly identifying the roadside of the user. Sub-meter positioning accuracy in open areas, with RTK (Real-time kinematic) technology support. Personal information, activity identification, and other data are not uploaded to the server while location services are performed. As the data processor, Location Kit only uses data, and does not store it.
Personalized map displays (Map Kit): Offers enriching map elements and a wide range of interactive methods for building your map.
Broad-ranging place searches (Map Kit): Covers 130+ million POIs and 150+ million addresses, and supports place input prompts.
Global coverage: Supports 200+ countries/regions, and 40+ languages.
For more information and development guides, please visit: https://developer.huawei.com/consumer/en/hms/huawei-MapKit
II. Demo App Introduction
In order to illustrate how to integrate Location Kit and Map Kit both easily and efficiently, we've provided a case study here, which shows the simplest coding method for running the demo.
This app creates a geofence on the map based on the user's location when the app is opened. The user can drag the red marker to set a destination. Once confirmed, when the user triggers the geofence condition, the app automatically detects their activity status and plans a route accordingly, such as a walking route if the activity status is walking, or a cycling route if it is cycling. You can also implement real-time voice navigation for the planned route.
III. Development Practice
You need to set the priority (which is 100 by default) before requesting locations. To request the precise GPS location, set the priority to 100. To request the network location, set the priority to 102 or 104. If you only need to passively receive locations, set the priority to 105.
Parameters related to activity identification include VEHICLE (100), BIKE (101), FOOT (102), and STILL (103).
Geofence-related parameters include ENTER_GEOFENCE_CONVERSION (1), EXIT_GEOFENCE_CONVERSION (2), and DWELL_GEOFENCE_CONVERSION (4).
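As a quick sketch of how these priority values are used, the priority is set on the LocationRequest object that is later passed to requestLocationUpdates (constant names below are from the Location Kit SDK):
Code:
LocationRequest mLocationRequest = new LocationRequest();
// 100: precise GPS location (PRIORITY_HIGH_ACCURACY).
// 102/104: network location. 105: passively receive locations only.
mLocationRequest.setPriority(LocationRequest.PRIORITY_HIGH_ACCURACY);
mLocationRequest.setInterval(5000); // Request updates every 5 seconds.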
The following describes how to run the demo using source code, helping you understand the implementation details.
Preparations
1. Preparing tools
Huawei phones (testing on multiple devices is recommended)
Android Studio
2. Registering as a developer
Register as a Huawei developer.
Create an app in AppGallery Connect by referring to Location Kit development preparations or Map Kit development preparations.
Enable Location Kit and Map Kit for the app on the Manage APIs page.
Add the SHA-256 certificate fingerprint.
Download the agconnect-services.json file and add it to the app directory of the project.
Create an Android demo project.
3. Learning about the function restrictions
To use the route planning function of Map Kit, refer to Supported Countries/Regions (Route Planning).
To use other services of Map Kit, refer to Supported Countries/Regions.
| Device Type | Feature | OS Version | HMS Core (APK) Version |
| --- | --- | --- | --- |
| Huawei phones | Fused location | EMUI 5.0 or later | 4.0.0 or later |
| Huawei phones | Geofence | EMUI 8.0 or later | 4.0.4 or later |
| Huawei phones | Activity identification | EMUI 9.1.1 or later | 3.0.2 or later |
| Non-Huawei Android phones | Fused location | Android 8.0 or later (API level 26 or higher) | 4.0.1 or later |
| Non-Huawei Android phones | Geofence | Android 5.0 or later (API level 21 or higher) | 4.0.4 or later |
| Non-Huawei Android phones | Activity identification | Not supported | Not supported |
Running the Demo App
1. Install the app on the test device after debugging the project in Android Studio successfully.
2. Replace the project package name and JSON file with those of your own.
3. Tap the related button in the demo app to create a geofence with a 200-meter radius, centered on the current location automatically pinpointed by the demo app.
4. Drag the mark point on the map to select a destination.
5. View the route that is automatically planned based on the current activity status when the geofence is triggered.
The following figure shows the demo effect:
Key Steps
Add the Huawei Maven repository to the project-level build.gradle file.
Add the following Maven repository address to the project-level build.gradle file of your Android Studio project:
Code:
buildscript {
    repositories {
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        ...
        // Add the AppGallery Connect plugin configuration.
        classpath 'com.huawei.agconnect:agcp:1.4.2.300'
    }
}

allprojects {
    repositories {
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
Add dependencies on the SDKs in the app-level build.gradle file.
Code:
dependencies {
    implementation 'com.huawei.hms:location:5.1.0.300'
    implementation 'com.huawei.hms:maps:5.2.0.302'
}
3. Add the following configuration to the next line under apply plugin: 'com.android.application' in the file header:
Code:
apply plugin: 'com.huawei.agconnect'
Note:
You must configure apply plugin: 'com.huawei.agconnect' under apply plugin: 'com.android.application'.
The minimum Android API level (minSdkVersion) required for the HMS Core Map SDK is 19.
4. Declare system permissions in the AndroidManifest.xml file.
Location Kit uses GNSS, Wi-Fi, and base station data for fused location, enabling your app to quickly and accurately obtain users' location information. Therefore, Location Kit requires permissions to access Internet, obtain the fine location, and obtain the coarse location. If your app needs to continuously obtain the location information when it runs in the background, you also need to declare the ACCESS_BACKGROUND_LOCATION permission in the AndroidManifest.xml file:
Code:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
<uses-permission android:name="android.permission.WAKE_LOCK" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
<uses-permission android:name="com.huawei.hms.permission.ACTIVITY_RECOGNITION" />
<uses-permission android:name="android.permission.ACTIVITY_RECOGNITION" />
Note: Because the ACCESS_FINE_LOCATION, WRITE_EXTERNAL_STORAGE, READ_EXTERNAL_STORAGE, and ACTIVITY_RECOGNITION permissions are dangerous system permissions, you need to request them dynamically. If the permissions are not granted, Location Kit will refuse to provide services for your app.
Key Code
I. Map Display
Currently, the Map SDK supports two map containers: SupportMapFragment and MapView. This document uses the SupportMapFragment container.
Add a Fragment object in the layout file (for example: activity_main.xml), and set map attributes in the file.
Code:
<fragment
    android:id="@+id/mapfragment_routeplanningdemo"
    android:name="com.huawei.hms.maps.SupportMapFragment"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />
To use a map in your app, implement the OnMapReadyCallback API.
Code:
RoutePlanningActivity extends AppCompatActivity implements OnMapReadyCallback

Load the map fragment in the onCreate method and call getMapAsync to register the callback.
Code:
Fragment fragment = getSupportFragmentManager().findFragmentById(R.id.mapfragment_routeplanningdemo);
if (fragment instanceof SupportMapFragment) {
    SupportMapFragment mSupportMapFragment = (SupportMapFragment) fragment;
    mSupportMapFragment.getMapAsync(this);
}
Obtain the HuaweiMap object in the onMapReady callback.
Code:
@Override
public void onMapReady(HuaweiMap huaweiMap) {
    hMap = huaweiMap;
    hMap.setMyLocationEnabled(true);
    hMap.getUiSettings().setMyLocationButtonEnabled(true);
}
II. Function Implementation
Check the permissions.
Code:
if (Build.VERSION.SDK_INT <= Build.VERSION_CODES.P) {
    if (ActivityCompat.checkSelfPermission(context,
            "com.huawei.hms.permission.ACTIVITY_RECOGNITION") != PackageManager.PERMISSION_GRANTED) {
        String[] permissions = {"com.huawei.hms.permission.ACTIVITY_RECOGNITION"};
        ActivityCompat.requestPermissions((Activity) context, permissions, 1);
        Log.i(TAG, "requestActivityTransitionButtonHandler: apply permission");
    }
} else {
    if (ActivityCompat.checkSelfPermission(context,
            "android.permission.ACTIVITY_RECOGNITION") != PackageManager.PERMISSION_GRANTED) {
        String[] permissions = {"android.permission.ACTIVITY_RECOGNITION"};
        ActivityCompat.requestPermissions((Activity) context, permissions, 2);
        Log.i(TAG, "requestActivityTransitionButtonHandler: apply permission");
    }
}
Check whether the location permissions have been granted. If not, the location cannot be obtained.
Code:
settingsClient.checkLocationSettings(locationSettingsRequest)
        .addOnSuccessListener(locationSettingsResponse -> {
            fusedLocationProviderClient
                    .requestLocationUpdates(mLocationRequest, mLocationCallback, Looper.getMainLooper())
                    .addOnSuccessListener(aVoid -> {
                        // Processing when the API call is successful.
                    });
        })
        .addOnFailureListener(e -> {});

if (null == mLocationCallback) {
    mLocationCallback = new LocationCallback() {
        @Override
        public void onLocationResult(LocationResult locationResult) {
            if (locationResult != null) {
                List<HWLocation> locations = locationResult.getHWLocationList();
                if (!locations.isEmpty()) {
                    for (HWLocation location : locations) {
                        hMap.moveCamera(CameraUpdateFactory.newLatLngZoom(
                                new LatLng(location.getLatitude(), location.getLongitude()), 14));
                        latLngOrigin = new LatLng(location.getLatitude(), location.getLongitude());
                        if (null != mMarkerOrigin) {
                            mMarkerOrigin.remove();
                        }
                        MarkerOptions options = new MarkerOptions()
                                .position(latLngOrigin)
                                .title("Hello Huawei Map")
                                .snippet("This is a snippet!");
                        mMarkerOrigin = hMap.addMarker(options);
                        removeLocationUpdatesWith();
                    }
                }
            }
        }

        @Override
        public void onLocationAvailability(LocationAvailability locationAvailability) {
            if (locationAvailability != null) {
                boolean flag = locationAvailability.isLocationAvailable();
                Log.i(TAG, "onLocationAvailability isLocationAvailable:" + flag);
            }
        }
    };
}
III. Geofence and Ground Overlay Creation
Create a geofence based on the current location and add a round ground overlay on the map.
Code:
GeofenceRequest.Builder geofenceRequest = new GeofenceRequest.Builder();
geofenceRequest.createGeofenceList(GeoFenceData.returnList());
geofenceRequest.setInitConversions(7);
try {
    geofenceService.createGeofenceList(geofenceRequest.build(), pendingIntent)
            .addOnCompleteListener(new OnCompleteListener<Void>() {
                @Override
                public void onComplete(Task<Void> task) {
                    if (task.isSuccessful()) {
                        Log.i(TAG, "add geofence success!");
                        if (null == hMap) {
                            return;
                        }
                        if (null != mCircle) {
                            mCircle.remove();
                            mCircle = null;
                        }
                        mCircle = hMap.addCircle(new CircleOptions()
                                .center(latLngOrigin)
                                .radius(500)
                                .strokeWidth(1)
                                .fillColor(Color.TRANSPARENT));
                    } else {
                        Log.w(TAG, "add geofence failed : " + task.getException().getMessage());
                    }
                }
            });
} catch (Exception e) {
    Log.i(TAG, "add geofence error:" + e.getMessage());
}
Geofence service: register the geofence broadcast receiver in the AndroidManifest.xml file.
Code:
<receiver
    android:name=".GeoFenceBroadcastReceiver"
    android:exported="true">
    <intent-filter>
        <action android:name=".GeoFenceBroadcastReceiver.ACTION_PROCESS_LOCATION" />
    </intent-filter>
</receiver>

// Handle the geofence event in the receiver's onReceive method.
if (intent != null) {
    final String action = intent.getAction();
    if (ACTION_PROCESS_LOCATION.equals(action)) {
        GeofenceData geofenceData = GeofenceData.getDataFromIntent(intent);
        if (geofenceData != null && isListenGeofence) {
            int conversion = geofenceData.getConversion();
            MainActivity.setGeofenceData(conversion);
        }
    }
}
Mark the selected point on the map to obtain the destination information, check the current activity status, and plan routes based on the detected activity status.
Code:
hMap.setOnMapClickListener(latLng -> {
    latLngDestination = new LatLng(latLng.latitude, latLng.longitude);
    if (null != mMarkerDestination) {
        mMarkerDestination.remove();
    }
    MarkerOptions options = new MarkerOptions()
            .position(latLngDestination)
            .title("Hello Huawei Map");
    mMarkerDestination = hMap.addMarker(options);
    if (identification.getText().equals("To exit the fence,Your activity is about to be detected.")) {
        requestActivityUpdates(5000);
    }
});
// Activity identification API.
activityIdentificationService.createActivityIdentificationUpdates(detectionIntervalMillis, pendingIntent)
        .addOnSuccessListener(new OnSuccessListener<Void>() {
            @Override
            public void onSuccess(Void aVoid) {
                Log.i(TAG, "createActivityIdentificationUpdates onSuccess");
            }
        })
        .addOnFailureListener(new OnFailureListener() {
            @Override
            public void onFailure(Exception e) {
                Log.e(TAG, "createActivityIdentificationUpdates onFailure:" + e.getMessage());
            }
        });
// URL of the route planning API (a cycling route is used as an example):
// https://mapapi.cloud.huawei.com/mapApi/v1/routeService/bicycling?key=API KEY
NetworkRequestManager.getBicyclingRoutePlanningResult(latLngOrigin, latLngDestination,
        new NetworkRequestManager.OnNetworkListener() {
            @Override
            public void requestSuccess(String result) {
                generateRoute(result);
            }

            @Override
            public void requestFail(String errorMsg) {
                Message msg = Message.obtain();
                Bundle bundle = new Bundle();
                bundle.putString("errorMsg", errorMsg);
                msg.what = 1;
                msg.setData(bundle);
                mHandler.sendMessage(msg);
            }
        });
Note:
The route planning function provides a set of HTTPS-based APIs for planning walking, cycling, and driving routes and calculating route distances. The APIs return the route data in JSON format.
You can plan a route from one point to another and then draw it on the map, achieving a navigation-like effect.
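For reference, the request body of such an API is a JSON object carrying the origin and destination coordinates, roughly as follows. The field names here are an assumption based on common usage of the Directions API, so check the API reference for the exact schema.
Code:
{
    "origin": {"lng": 114.0545, "lat": 22.5445},
    "destination": {"lng": 114.0635, "lat": 22.5535}
}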
Related Parameters
In indoor environments, navigation satellite signals are usually weak, so HMS Core (APK) will use the network location mode, which is relatively slow compared with GNSS location. It is recommended that the test be performed outdoors.
In Android 9.0 or later, you are advised to test the geofence outdoors. In versions earlier than Android 9.0, you can test the geofence indoors.
Map Kit is unavailable in the Chinese mainland. Therefore, the Android SDK, JavaScript API, Static Map API, and Directions API are unavailable in the Chinese mainland. For details, please refer to Supported Countries/Regions.
In the Map SDK for Android 5.0.0.300 and later versions, you must set the API key before initializing a map. Otherwise, no map data will be displayed.
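In practice this means calling the API key setter once before any map instance is created, for example (a minimal sketch; replace the placeholder with your own key):
Code:
// Set the API key before initializing any map (Map SDK 5.0.0.300 or later).
MapsInitializer.setApiKey("your_api_key");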
Currently, the driving route planning is unavailable in some countries and regions outside China. For details about the supported countries and regions, please refer to the Huawei official website.
Before building the APK, configure the obfuscation configuration file to prevent the HMS Core SDK from being obfuscated.
Open the obfuscation configuration file proguard-rules.pro in the app's root directory of your project and add configurations to exclude the HMS Core SDK from obfuscation.
If you are using AndResGuard, add its trustlist to the obfuscation configuration file.
For details, please visit the following link: https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides/android-sdk-config-obfuscation-scripts-0000001061882229
To learn more, visit the following links:
Documentation on the HUAWEI Developers website:
https://developer.huawei.com/consumer/en/hms/huawei-locationkit
https://developer.huawei.com/consumer/en/hms/huawei-MapKit
To download the demo and sample code, please visit GitHub.
To solve integration problems, please go to Stack Overflow at the following link:
https://stackoverflow.com/questions/tagged/huawei-mobile-services?tab=Newest
To learn more, please visit:
HUAWEI Developers official website
Development Guide
Reddit to join developer discussions
GitHub or Gitee to download the demo and sample code
Stack Overflow to solve integration problems
Follow our official account for the latest HMS Core-related news and updates.
Original Source
Can we get the location offline?

Integration of Huawei Account kit and Analytics kit in Know My Board in your language - Part 1

Introduction
In this article, we will learn about the Know My Board application, and about integrating Huawei Account Kit and Analytics Kit into it.
It is a really interesting application, so have the patience to read the full article; great things take time.
Let us understand how this application works.
The purpose of this application is to showcase how easy it is to understand a foreign language using HMS kits. One of the most common problems encountered by foreigners while travelling is the language barrier. Our application's use case supports travellers not only in understanding travel directions, but throughout the journey, enhancing the local connection by translating food menus, shop details, and so on. This app helps you translate a foreign language and provides the results in English.
Huawei Kits Used in this application
Huawei Account kit
Huawei Analytics kit
Huawei ML Kit
Huawei Location Kit
Huawei Site kit
Huawei Map Kit
Huawei Account kit: Account Kit provides you with simple, secure, and quick sign-in and authorization functions. Instead of entering accounts and passwords and waiting for authentication, users can just tap the Sign in with HUAWEI ID button to quickly and securely sign in to your app with their HUAWEI IDs.
Huawei Analytics Kit: Analytics Kit is a one-stop user behaviour analysis platform for products such as mobile apps, web apps, quick apps, quick games, and mini-programs. It offers scenario-specific data collection, management, analysis, and usage, helping enterprises achieve effective user acquisition, product optimization, precise operations, and business growth.
Huawei ML Kit: We use the ML Kit text recognition service to fetch text from an image. Once the text is fetched, we use the text translation service to get the result in English.
Huawei Location Kit: Using Location Kit, we obtain the user's current location, and hence the current locale, which is shared with ML Kit. Using the same location coordinates, we call the translation API and show the translated results to the customer.
Huawei Map Kit: It is used to show the user's shared location and nearby landmarks on the map.
Huawei Site Kit: Uses the nearby search service to find landmarks to show on the map, which can be shared with other users on request.
Prerequisites
AppGallery Account
Android Studio 3.X
SDK Platform 19 or later
Gradle 4.6 or later
HMS Core (APK) 4.0.0.300 or later
Huawei Phone EMUI 3.0 or later
Non-Huawei Phone Android 4.4 or later
Account kit
Huawei Account Kit provides simple, secure, and quick sign-in and authorization functions. Users are not required to enter account details and wait for authorization; they can just tap the Sign in with HUAWEI ID button to quickly and securely sign in to your app with their Huawei IDs. This saves users a lot of time compared with entering all their details, and it is also very secure.
Analytics Kit
What is Mobile analytics?
Mobile analytics captures data from mobile app, website, and web app visitors to identify unique users, track their journeys, record their behaviour, and report on the app's performance. Similar to traditional web analytics, mobile analytics is used to improve conversions, and it is the key to crafting world-class mobile experiences.
A person truly knows a theoretical concept only when he or she can answer all the WH questions about it. To complete this guide, let's go through all of those questions.
1. Who has to use analytics?
2. Which one to use?
3. What is Huawei Analytics kit?
4. When to use HMS Analytics kit?
5. Why to use analytics kit?
6. Where to use analytics Kit?
Once you have the answers to all the above questions, you will have the theoretical knowledge. But to understand the results, you should also know the answer to the question below.
1. How to integrate Huawei analytics kit?
Who has to use the analytics kit?
The answer is very simple: the analytics kit is used in mobile and web applications, so of course the software developer has to use it.
Which one to use?
There are many analytics vendors in the market, but for mobile applications I recommend Huawei Analytics Kit. Now you will definitely ask why. To answer this, I'll give some reasons.
Very easy to integrate.
Documentation is very good.
The community is very active, and responses are fast.
Moreover, it is very similar to other vendors, so there is no need to learn new things.
You can see events in real time.
What is Huawei Analytics kit?
Huawei Analytics Kit is built on the HMS Core analytics SDK, which exposes the full range of analytics functionality to your app.
Huawei Analytics Kit offers a range of analytics models that help you analyse users' behaviour. With predefined and custom events, you can gain deeper insight into your users, products, and content, and understand how users behave on different platforms based on the user behaviour events and user attributes reported through your apps.
Huawei Analytics Kit, our one-stop analytics platform, provides developers with intelligent, convenient, and powerful analytics capabilities that you can use to optimize app performance and identify marketing channels.
Collect and report custom events.
Set a maximum of 25 user attributes.
Automate event collection and session calculation.
Preset event IDs and parameters.
When to use HMS Analytics kit?
Mobile app analytics are a developer's best friend. They help you understand how users behave and how your app can be optimized to reach your goals. Without mobile app analytics, you would be trying out different things blindly, without any data to back up your experiments.
That's why it's extremely important for developers to understand their mobile app analytics, to track their progress while working towards achieving their goals.
Why to use analytics kit?
Mobile app analytics are essential to the development process for many reasons. They give you insights into how users use your app, which parts of the app they interact with, and what actions they take within the app. You can use these insights to come up with an action plan for improving your product in the future, such as adding new features that users seem to need, improving existing ones in a way that makes users' lives easier, or removing features that users don't seem to use.
You'll also gain insight into whether you're achieving your goals for your mobile app, whether that's revenue, awareness, or other KPIs, and can then use the data you have to adjust your strategy and optimize your app to reach further goals.
When it comes to why, everyone thinks about benefits.
Benefits of Analytics
App analytics helps you to drive ROI over every aspect of performance.
App analytics helps you to gather accurate data to better serve customers.
App analytics allows you to drive personalized and customer-focused marketing.
App analytics allows you to track individual and group achievements of marketing goals from campaigns.
App analytics offers data-driven insights into issues concerning churn and retention.
Where to use analytics Kit?
This is a very important question, because you already know why to use the analytics kit. Wherever you want to understand user behaviour, which parts of the application users use regularly, and which functionality they use the most, you can use the analytics kit, whether in a mobile or a web application.
Service integration on AppGallery
1. Register a developer account in AppGallery Connect.
2. Create an app by referring to Creating a Project and Creating an App in the Project.
3. Set the data storage location based on the current location.
4. Enable the Account Kit and Analytics Kit services on AppGallery.
5. Generate a signing certificate fingerprint.
6. Configure the signing certificate fingerprint.
7. Add your agconnect-services.json file to the app directory.
Client development
1. Create android project in android studio IDE.
2. Add the maven URL inside the repositories of buildscript and allprojects respectively (project level build.gradle file).
Code:
maven { url 'https://developer.huawei.com/repo/' }
3. Add the classpath inside the dependency section of the project level build.gradle file.
Code:
classpath 'com.huawei.agconnect:agcp:1.6.0.300'
4. Add the plugin in the app-level build.gradle file.
Code:
apply plugin: 'com.huawei.agconnect'
5. Add the below library in the app-level build.gradle file dependencies section.
Code:
implementation 'com.huawei.hms:hwid:6.4.0.301'
implementation 'com.huawei.hms:hianalytics:6.4.1.302'
6. Add all the below permission in the AndroidManifest.xml.
XML:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.CHANGE_WIFI_STATE" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
7. Sync the project.
LoginViewModel.kt
Kotlin:
package com.huawei.hms.knowmyboard.dtse.activity.viewmodel

import android.app.Application
import android.graphics.Bitmap
import android.util.Log
import android.view.View
import android.widget.Toast
import androidx.lifecycle.AndroidViewModel
import androidx.lifecycle.LiveData
import androidx.lifecycle.MutableLiveData
import com.huawei.hms.common.ApiException
import com.huawei.hms.knowmyboard.dtse.activity.app.MyApplication.Companion.activity
import com.huawei.hms.location.LocationResult
import com.huawei.hms.support.account.AccountAuthManager
import com.huawei.hms.support.account.request.AccountAuthParams
import com.huawei.hms.support.account.request.AccountAuthParamsHelper
import com.huawei.hms.support.account.service.AccountAuthService
import java.util.ArrayList

class LoginViewModel(application: Application) : AndroidViewModel(application) {
    var service: AccountAuthService? = null
    private val message = MutableLiveData<String>()
    val textRecognized = MutableLiveData<ArrayList<String>>()
    val imagePath = MutableLiveData<Bitmap>()
    val locationResult = MutableLiveData<LocationResult>()

    fun sendData(msg: String) {
        message.value = msg
    }

    fun getMessage(): LiveData<String> {
        return message
    }

    fun setImage(imagePath: Bitmap) {
        this.imagePath.value = imagePath
    }

    fun setLocationResult(locationResult: LocationResult) {
        this.locationResult.value = locationResult
    }

    fun setTextRecognized(textRecognized: ArrayList<String>) {
        this.textRecognized.value = textRecognized
    }

    fun logoutHuaweiID() {
        if (service != null) {
            service!!.signOut()
            sendData("KnowMyBoard")
            Toast.makeText(getApplication(), "You are logged out from Huawei ID", Toast.LENGTH_LONG)
                .show()
        }
    }

    fun loginClicked(view: View?) {
        val authParams =
            AccountAuthParamsHelper(AccountAuthParams.DEFAULT_AUTH_REQUEST_PARAM).setAuthorizationCode()
                .createParams()
        service = AccountAuthManager.getService(activity, authParams)
        activity!!.startActivityForResult(service?.signInIntent, 8888)
    }

    fun cancelAuthorization() {
        if (service != null) {
            // service indicates the AccountAuthService instance generated using the getService method during the sign-in authorization.
            service!!.cancelAuthorization().addOnCompleteListener { task ->
                if (task.isSuccessful) {
                    // Processing after a successful authorization cancellation.
                    Log.i("TAG", "onSuccess: ")
                    sendData("KnowMyBoard")
                    Toast.makeText(getApplication(), "Cancelled authorization", Toast.LENGTH_LONG)
                        .show()
                } else {
                    // Handle the exception.
                    val exception = task.exception
                    if (exception is ApiException) {
                        val statusCode = exception.statusCode
                        Log.i("TAG", "onFailure: $statusCode")
                        Toast.makeText(
                            getApplication(),
                            "Failed to cancel authorization",
                            Toast.LENGTH_LONG
                        ).show()
                    }
                }
            }
        } else {
            Toast.makeText(getApplication(), "Login required", Toast.LENGTH_LONG).show()
        }
    }
}
UserData.kt
Kotlin:
package com.huawei.hms.knowmyboard.dtse.activity.model

import com.huawei.hms.support.api.entity.auth.Scope
import java.util.HashSet

class UserData {
    var uid: String? = null
    var openId: String? = null
    var displayName: String? = null
    var photoUriString: String? = null
    var accessToken: String? = null
    var status = 0
    var gender = 0
    var serviceCountryCode: String? = null
    var countryCode: String? = null
    var grantedScopes: Set<Scope>? = null
    var serverAuthCode: String? = null
    var unionId: String? = null
    var email: String? = null
    var extensionScopes: HashSet<Scope> = HashSet<Scope>()
    var idToken: String? = null
    var expirationTimeSecs: Long = 0
    var givenName: String? = null
    var familyName: String? = null
    var ageRange: String? = null
    var homeZone = 0
    var carrierId = 0
}
MainActivity.kt
Kotlin:
import android.app.ProgressDialog
import android.content.DialogInterface
import android.content.Intent
import android.graphics.Bitmap
import android.net.Uri
import android.os.Bundle
import android.provider.MediaStore
import android.util.Log
import androidx.appcompat.app.AlertDialog
import androidx.appcompat.app.AppCompatActivity
import androidx.lifecycle.ViewModelProvider
import com.google.gson.Gson
import com.huawei.hmf.tasks.OnFailureListener
import com.huawei.hmf.tasks.OnSuccessListener
import com.huawei.hms.common.ApiException
import com.huawei.hms.knowmyboard.dtse.R
import com.huawei.hms.knowmyboard.dtse.activity.app.MyApplication.Companion.activity
import com.huawei.hms.knowmyboard.dtse.activity.model.UserData
import com.huawei.hms.knowmyboard.dtse.activity.viewmodel.LoginViewModel
import com.huawei.hms.mlsdk.MLAnalyzerFactory
import com.huawei.hms.mlsdk.common.MLApplication
import com.huawei.hms.mlsdk.common.MLFrame
import com.huawei.hms.mlsdk.langdetect.MLLangDetectorFactory
import com.huawei.hms.mlsdk.langdetect.local.MLLocalLangDetector
import com.huawei.hms.mlsdk.langdetect.local.MLLocalLangDetectorSetting
import com.huawei.hms.mlsdk.model.download.MLModelDownloadListener
import com.huawei.hms.mlsdk.model.download.MLModelDownloadStrategy
import com.huawei.hms.mlsdk.text.MLLocalTextSetting
import com.huawei.hms.mlsdk.text.MLText
import com.huawei.hms.mlsdk.text.MLTextAnalyzer
import com.huawei.hms.mlsdk.translate.MLTranslatorFactory
import com.huawei.hms.mlsdk.translate.local.MLLocalTranslateSetting
import com.huawei.hms.mlsdk.translate.local.MLLocalTranslator
import com.huawei.hms.support.account.AccountAuthManager
import java.io.IOException
import java.lang.Exception
import java.util.ArrayList

class MainActivity : AppCompatActivity() {
    var loginViewModel: LoginViewModel? = null
    private var mTextAnalyzer: MLTextAnalyzer? = null
    var imagePath: Uri? = null
    var bitmap: Bitmap? = null
    var result = ArrayList<String>()
    var myLocalLangDetector: MLLocalLangDetector? = null
    var myLocalTranslator: MLLocalTranslator? = null
    var textRecognized: String? = null
    var progressDialog: ProgressDialog? = null

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        loginViewModel = ViewModelProvider(this).get(LoginViewModel::class.java)
        activity = this
        progressDialog = ProgressDialog(this)
        progressDialog!!.setCancelable(false)
    }

    override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
        // Process the authorization result to obtain the authorization code from AuthAccount.
        super.onActivityResult(requestCode, resultCode, data)
        if (requestCode == 8888) {
            val authAccountTask = AccountAuthManager.parseAuthResultFromIntent(data)
            if (authAccountTask.isSuccessful) {
                // The sign-in is successful, and the user's ID information and authorization code are obtained.
                val authAccount = authAccountTask.result
                val userData = UserData()
                userData.accessToken = authAccount.accessToken
                userData.countryCode = authAccount.countryCode
                userData.displayName = authAccount.displayName
                userData.email = authAccount.email
                userData.familyName = authAccount.familyName
                userData.givenName = authAccount.givenName
                userData.idToken = authAccount.idToken
                userData.openId = authAccount.openId
                userData.uid = authAccount.uid
                userData.photoUriString = authAccount.avatarUri.toString()
                userData.unionId = authAccount.unionId
                val gson = Gson()
                Log.e("TAG", "sign in success : " + gson.toJson(authAccount))
                loginViewModel = ViewModelProvider(this@MainActivity).get(
                    LoginViewModel::class.java
                )
                loginViewModel!!.sendData(authAccount.displayName)
                progressDialog!!.dismiss()
            } else {
                // The sign-in failed.
                Log.e(
                    "TAG",
                    "sign in failed:" + (authAccountTask.exception as ApiException).statusCode
                )
                progressDialog!!.dismiss()
            }
        }
    }
}
Analytics kit
Kotlin:
package com.huawei.hms.knowmyboard.dtse.activity.app

import android.app.Activity
import android.app.Application
import android.os.Bundle
import com.huawei.hms.analytics.HiAnalytics
import com.huawei.hms.analytics.HiAnalyticsInstance

class MyApplication : Application() {
    override fun onCreate() {
        super.onCreate()
        // Initialize Analytics Kit with the application context.
        instance = HiAnalytics.getInstance(this)
        instance?.setUserProfile("userKey", "value")
        // Enable tracking of the custom event in proper positions of the code.
        val bundle = Bundle()
        bundle.putString("language", "English")
        bundle.putString("Country", "India")
        instance?.onEvent("board_details", bundle)
    }

    companion object {
        var activity: Activity? = null
        var instance: HiAnalyticsInstance? = null
    }
}
Enable Analytics Kit in debug mode using the command below:
Code:
adb shell setprop debug.huawei.hms.analytics.app <package_name>
Disable the debug mode using the command below:
Code:
adb shell setprop debug.huawei.hms.analytics.app .none.
Result
Tips and Tricks
1. Make sure you are already registered as a Huawei developer.
2. Set the minSdkVersion to 21 or later; otherwise, you will get an AndroidManifest merge issue.
3. Make sure you have added the agconnect-services.json file to the app folder.
4. Make sure you have added the SHA-256 fingerprint without fail.
5. Make sure all the dependencies are added properly.
6. If you get the 6003 error code, there may be a SHA-256 certificate issue.
Conclusion
In this article, we have learnt how to integrate Huawei Account Kit and Analytics Kit in an application using Android Studio and Kotlin. We also learnt the application details and how the application works. Beyond Account Kit, HMS Core provides a wide range of services to enhance the user experience on the Huawei platform. We covered the steps to create a project and to integrate Account Kit and Analytics Kit.
Reference
Account Kit - Official document
Account Kit - Training Video
Analytics Kit - Official document

How a Background Remover Is Born

Why Do I Need a Background Remover
A background removal tool is not really a new feature; rather, its importance has grown as the world shifted to online working and learning over the last few years. And I did not realize how important this tool could be until just two weeks ago. On a warm, sunny morning, coffee in hand, I joined an online conference. During the conference, one of my colleagues pointed out that they could see my untidy desk and an overflowing bin in the background. Naturally, this left me feeling embarrassed. I just wished I could travel back in time to use a background remover.
Now, I cannot travel in time, but I can certainly create a background removal tool. So, with this new-found motive, I looked online for some solutions and came across the body or head segmentation capability from HMS Core Video Editor Kit, and developed a demo app with it.
This service can divide the body or head from an input image or video and then generate a video, an image, or a sticker of the divided part. In this way, the body or head segmentation service helps realize the background removal effect.
Now, let's go deeper into the technical details about the service.
How the Background Remover Is Implemented
The algorithm of the service performs a series of operations on the input video, including extracting frames, processing the video with an AI model, and encoding. Among all these, the core is the AI model. How a service performs is affected by factors like device computing power and power consumption. Considering these, the development team equipped the service with a lightweight AI model that still does a good job in feature extraction, by taking measures like compression, quantization, and pruning. In this way, the processing duration of the AI model is decreased to a relatively low level, without compromising segmentation accuracy.
The mentioned algorithm supports both images and videos. An image takes the algorithm a single inference to produce the segmentation result. A video is actually a collection of images: if a model has poor segmentation capability, the segmentation accuracy for each image will be low, the segmentation results of consecutive images will differ from each other, and the segmentation result of the whole video will appear to shake. To resolve this, the team adopted technologies like inter-frame stabilization and an objective function for inter-frame consistency. These measures do not compromise the model inference speed, yet fully utilize the time-sequence information of the video. Consequently, the algorithm's inter-frame stability is improved, which contributes to an ideal segmentation effect.
By the way, the service requires that the input image or video contains up to 5 people whose contours should be visible. Besides, the service supports common motions of the people in the input image or video, like standing, lying, walking, sitting, and more.
The technical basics of the service conclude here, and let's see how it can be integrated with an app.
How to Equip an App with the Background Remover Functionality
Preparations
Go to AppGallery Connect and configure the app's details. In this step, we need to register a developer account, create an app, generate a signing certificate fingerprint, and activate the required services.
Integrate the HMS Core SDK.
Configure the obfuscation scripts.
Declare necessary permissions.
Setting Up a Video Editing Project
Prerequisites
1. Set the app authentication information either by:
Using an access token: Call the setAccessToken method to set an access token when the app is started. The access token needs to be set only once.
Code:
MediaApplication.getInstance().setAccessToken("your access token");
Using an API key: Call the setApiKey method to set an API key when the app is started. The API key needs to be set only once.
Code:
MediaApplication.getInstance().setApiKey("your ApiKey");
2. Set a License ID. The ID is used to manage the usage quotas of the kit, so make sure the ID is unique.
Code:
MediaApplication.getInstance().setLicenseId("License ID");
Initializing the Runtime Environment for the Entry Class
A HuaweiVideoEditor object serves as the entry class of a whole video editing project. The lifecycle of this object and the project should be the same. Therefore, when creating a video editing project, create a HuaweiVideoEditor object first and then initialize its runtime environment. Remember to release this object when exiting the project.
1. Create a HuaweiVideoEditor object.
Code:
HuaweiVideoEditor editor = HuaweiVideoEditor.create(getApplicationContext());
2. Determine the preview area position.
This area renders video images, a process that is implemented by creating SurfaceView within the SDK. Make sure that the position of this area is specified before the area is created.
Code:
<LinearLayout
    android:id="@+id/video_content_layout"
    android:layout_width="0dp"
    android:layout_height="0dp"
    android:background="@color/video_edit_main_bg_color"
    android:gravity="center"
    android:orientation="vertical" />

// Specify the preview area position.
LinearLayout mSdkPreviewContainer = view.findViewById(R.id.video_content_layout);
// Specify the layout of the preview area.
editor.setDisplay(mSdkPreviewContainer);
3. Initialize the runtime environment of HuaweiVideoEditor. LicenseException will be reported when the license verification fails.
The HuaweiVideoEditor object, after being created, has not occupied any system resources. We need to manually set the time for initializing its runtime environment, and then the necessary threads and timers will be created in the SDK.
Code:
try {
    editor.initEnvironment();
} catch (LicenseException error) {
    SmartLog.e(TAG, "initEnvironment failed: " + error.getErrorMsg());
    finish();
    return;
}
Integrating the Segmentation Capability
Code:
// Initialize the segmentation engine. segPart indicates the segmentation type, whose value is an integer. Value 1 indicates body segmentation, and a value other than 1 indicates head segmentation.
visibleAsset.initBodySegEngine(segPart, new HVEAIInitialCallback() {
@Override
public void onProgress(int progress) {
// Callback when the initialization progress is received.
}
@Override
public void onSuccess() {
// Callback when the initialization is successful.
}
@Override
public void onError(int errorCode, String errorMessage) {
// Callback when the initialization failed.
}
});
// After the initialization is successful, apply the segmentation effect.
visibleAsset.addBodySegEffect(new HVEAIProcessCallback() {
@Override
public void onProgress(int progress) {
// Callback when the application progress is received.
}
@Override
public void onSuccess() {
// Callback when the effect is successfully applied.
}
@Override
public void onError(int errorCode, String errorMsg) {
// Callback when the effect failed to be applied.
}
});
// Stop applying the segmentation effect.
visibleAsset.interruptBodySegEffect();
// Remove the segmentation effect.
visibleAsset.removeBodySegEffect();
// Release the segmentation engine.
visibleAsset.releaseBodySegEngine();
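The calls above are listed individually, but as the comments note, the effect should only be applied once the engine reports success. Here is one way to chain the same calls (a sketch using only the APIs shown above; the segPart value follows the comment in the snippet):
Code:
int segPart = 1; // 1: body segmentation; any other value: head segmentation.
visibleAsset.initBodySegEngine(segPart, new HVEAIInitialCallback() {
    @Override
    public void onProgress(int progress) {
        // Optionally surface the initialization progress in the UI.
    }
    @Override
    public void onSuccess() {
        // The engine is ready, so apply the segmentation effect now.
        visibleAsset.addBodySegEffect(new HVEAIProcessCallback() {
            @Override
            public void onProgress(int progress) {
            }
            @Override
            public void onSuccess() {
                // Background removed; continue editing or export the asset.
            }
            @Override
            public void onError(int errorCode, String errorMsg) {
                // Applying the effect failed; release the engine.
                visibleAsset.releaseBodySegEngine();
            }
        });
    }
    @Override
    public void onError(int errorCode, String errorMessage) {
        // Initialization failed; the engine cannot be used.
    }
});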
And now the app is capable of removing the image or video background.
This function is ideal for e-conferencing apps, where the background is not important. For learning apps, it allows teachers to change the background to the theme of the lesson, for better immersion. Not only that, but when it's used in a short video app, users can put themselves in unusual backgrounds, such as space and the sea, to create fun and fantasy-themed videos.
Have you got any better ideas of how to use the background remover? Let us know in the comments section below.
Wrap up​Background removal tools are trending among apps in various fields, since such a tool makes images and videos look better by removing unnecessary or messy backgrounds, and it also helps protect user privacy.
The body or head segmentation service from Video Editor Kit is one such solution for removing a background. It supports both images and videos, and outputs a video, an image, or a sticker of the segmented part for further editing. Its streamlined integration makes it a perfect choice for enhancing videos and images.

Build a Seamless Sign-in Experience Across Different Apps and Platforms with Keyring

Mobile apps have significantly changed the way we live, bringing about greater convenience. With our phones, we can easily book hotels online while sightseeing, buy train and flight tickets for business trips, or simply pay for dinner with scan-and-pay.
There is rarely a one-app-fits-all approach to offering such services, so users have to switch back and forth between multiple apps. This also requires them to register and sign in to each app, which is a hassle in itself: they must complete complex registration processes and repeatedly enter their account names and passwords.
In addition, as technology develops, a developer usually has multiple Android apps and app versions for different platforms, such as a quick app and a web app. If users have to repeatedly sign in to different apps or versions from the same developer, the churn rate will likely increase. What's more, the developer may even need to pay for sending SMS messages if users choose to sign in through SMS verification codes.
Is there anything the developer can do to streamline the sign-in process between different apps and platforms so that users do not need to enter their account names and passwords again and again?
Well, fortunately, HMS Core Keyring makes this possible. Keyring is a Huawei service that offers credential management APIs for storing user credentials locally on users' Android phones and tablets, and for sharing them between different apps and between different platform versions of an app. You can call the relevant APIs in your Android apps, web apps, or quick apps to use Keyring services, for example, to encrypt users' sign-in credentials for local storage on their devices and share those credentials across apps and platforms, creating a seamless sign-in experience. Moreover, all credentials are stored in Keyring regardless of which type of API you call, enabling unified credential management and sharing.
In this article, I'll share how I used Keyring to manage and share sign-in credentials of users. I hope this will help you.
Advantages​
First, I'd like to explain some advantages of Keyring.
Building a seamless sign-in experience
Your app can call Keyring APIs to obtain sign-in credentials stored on user devices, for easy sign-in.
Ensuring data security and reliability
Keyring encrypts sign-in credentials of users for local storage on user devices and synchronizes the credentials between devices via end-to-end encryption technology. The encrypted credentials cannot be decrypted on the cloud.
Reducing the churn rate during sign-in
Keyring can simplify the sign-in process for your apps, thus reducing the user churn rate.
Reducing the operations cost
With Keyring, you can cut operations costs, such as the expense of sending the SMS verification codes users rely on to sign in to your app.
Development Procedure​
Next, let's look at how to integrate Keyring. Before getting started, you will need to make some preparations, such as registering as a Huawei developer, generating and configuring your signing certificate fingerprint in AppGallery Connect, and enabling Keyring. You can click here to learn about the detailed preparation steps, which are not covered in this article.
After making necessary preparations, you can now start integrating the Keyring SDK. I'll detail the implementation steps in two scenarios.
User Sign-in Scenario​
In this scenario, you need to follow the steps below to implement relevant logic.
1. Initialize the CredentialClient object in the onCreate method of your activity. Below is a code snippet example.
Code:
CredentialClient credentialClient = CredentialManager.getCredentialClient(this);
2. Check whether a credential is available. Below is a code snippet example.
Code:
List<AppIdentity> trustedAppList = new ArrayList<>();
trustedAppList.add(new AndroidAppIdentity("yourAppName", "yourAppPackageName", "yourAppCodeSigningCertHash"));
trustedAppList.add(new WebAppIdentity("youWebSiteName", "www.yourdomain.com"));
trustedAppList.add(new WebAppIdentity("youWebSiteName", "login.yourdomain.com"));
SharedCredentialFilter sharedCredentialFilter = SharedCredentialFilter.acceptTrustedApps(trustedAppList);
credentialClient.findCredential(sharedCredentialFilter, new CredentialCallback<List<Credential>>() {
@Override
public void onSuccess(List<Credential> credentials) {
if (credentials.isEmpty()) {
Toast.makeText(MainActivity.this, R.string.no_available_credential, Toast.LENGTH_SHORT).show();
} else {
for (Credential credential : credentials) {
// Process each available credential here, for example, by reading its content for automatic sign-in.
}
}
}
@Override
public void onFailure(long errorCode, CharSequence description) {
Toast.makeText(MainActivity.this, R.string.query_credential_failed, Toast.LENGTH_SHORT).show();
}
});
3. Call the Credential.getContent method to obtain the credential content, receiving the result through CredentialCallback<T>. Below is a code snippet example.
Code:
// The credential obtained in the previous step.
private Credential mCredential;
mCredential.getContent(new CredentialCallback<byte[]>() {
@Override
public void onSuccess(byte[] bytes) {
String hint = String.format(getResources().getString(R.string.get_password_ok),
new String(bytes));
Toast.makeText(MainActivity.this, hint, Toast.LENGTH_SHORT).show();
mResult.setText(new String(bytes));
}
@Override
public void onFailure(long l, CharSequence charSequence) {
Toast.makeText(MainActivity.this, R.string.get_password_failed,
Toast.LENGTH_SHORT).show();
mResult.setText(R.string.get_password_failed);
}
});
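Putting steps 2 and 3 together, here is a minimal sketch that reads the first available credential directly from the query callback. It assumes credentialClient, sharedCredentialFilter, and the mResult text view are set up as shown earlier, and that a single matching credential is enough.
Code:
credentialClient.findCredential(sharedCredentialFilter, new CredentialCallback<List<Credential>>() {
    @Override
    public void onSuccess(List<Credential> credentials) {
        if (!credentials.isEmpty()) {
            credentials.get(0).getContent(new CredentialCallback<byte[]>() {
                @Override
                public void onSuccess(byte[] bytes) {
                    // The decrypted credential content, e.g. a password.
                    mResult.setText(new String(bytes));
                }
                @Override
                public void onFailure(long errorCode, CharSequence description) {
                    mResult.setText(R.string.get_password_failed);
                }
            });
        }
    }
    @Override
    public void onFailure(long errorCode, CharSequence description) {
        // Query failed; fall back to manual sign-in.
    }
});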
4. When a user enters a new credential, call the credential saving API to save it. Below is a code snippet example.
Code:
AndroidAppIdentity app2 = new AndroidAppIdentity(sharedToAppName,
sharedToAppPackage, sharedToAppCertHash);
List<AppIdentity> sharedAppList = new ArrayList<>();
sharedAppList.add(app2);
Credential credential = new Credential(username, CredentialType.PASSWORD, userAuth,
password.getBytes());
credential.setDisplayName("user_niceday");
credential.setSharedWith(sharedAppList);
credential.setSyncable(true);
credentialClient.saveCredential(credential, new CredentialCallback<Void>() {
@Override
public void onSuccess(Void unused) {
Toast.makeText(MainActivity.this,
R.string.save_credential_ok,
Toast.LENGTH_SHORT).show();
}
@Override
public void onFailure(long errorCode, CharSequence description) {
Toast.makeText(MainActivity.this,
R.string.save_credential_failed + " " + errorCode + ":" + description,
Toast.LENGTH_SHORT).show();
}
});
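Note that the snippet above references variables (username, password, userAuth, and the sharedTo* values) that are defined elsewhere in the host app. Below is a sketch of how they might be populated; all names are illustrative, and the exact semantics of the userAuth flag should be checked against the Keyring documentation.
Code:
// Values typically collected from the sign-in form; the view names are hypothetical.
String username = usernameEditText.getText().toString();
String password = passwordEditText.getText().toString();
// Assumption: whether reading the credential back requires user verification.
boolean userAuth = true;
// Identity of the other app that should be able to read this credential.
String sharedToAppName = "your other app name";
String sharedToAppPackage = "your other app package name";
String sharedToAppCertHash = "your other app signing cert hash";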
User Sign-out Scenario​
Similarly, follow the steps below to implement relevant logic.
1. Initialize the CredentialClient object in the onCreate method of your activity. Below is a code snippet example.
Code:
CredentialClient credentialClient = CredentialManager.getCredentialClient(this);
2. Check whether a credential is available. Below is a code snippet example.
Code:
List<AppIdentity> trustedAppList = new ArrayList<>();
trustedAppList.add(new AndroidAppIdentity("yourAppName", "yourAppPackageName", "yourAppCodeSigningCertHash"));
trustedAppList.add(new WebAppIdentity("youWebSiteName", "www.yourdomain.com"));
trustedAppList.add(new WebAppIdentity("youWebSiteName", "login.yourdomain.com"));
SharedCredentialFilter sharedCredentialFilter = SharedCredentialFilter.acceptTrustedApps(trustedAppList);
credentialClient.findCredential(sharedCredentialFilter, new CredentialCallback<List<Credential>>() {
@Override
public void onSuccess(List<Credential> credentials) {
if (credentials.isEmpty()) {
Toast.makeText(MainActivity.this, R.string.no_available_credential, Toast.LENGTH_SHORT).show();
} else {
for (Credential credential : credentials) {
// Further process the available credentials, including obtaining the credential information and content and deleting the credentials.
}
}
}
@Override
public void onFailure(long errorCode, CharSequence description) {
Toast.makeText(MainActivity.this, R.string.query_credential_failed, Toast.LENGTH_SHORT).show();
}
});
3. Call the deleteCredential method to delete the credential and obtain the result from CredentialCallback. Below is a code snippet example.
Code:
credentialClient.deleteCredential(credential, new CredentialCallback<Void>() {
@Override
public void onSuccess(Void unused) {
String hint = String.format(getResources().getString(R.string.delete_ok),
credential.getUsername());
Toast.makeText(MainActivity.this, hint, Toast.LENGTH_SHORT).show();
}
@Override
public void onFailure(long errorCode, CharSequence description) {
String hint = String.format(getResources().getString(R.string.delete_failed),
description);
Toast.makeText(MainActivity.this, hint, Toast.LENGTH_SHORT).show();
}
});
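For a complete sign-out handler, the query and deletion steps can be combined: find the stored credentials and delete each one. A minimal sketch using only the calls shown above, assuming credentialClient and sharedCredentialFilter are set up as in steps 1 and 2:
Code:
credentialClient.findCredential(sharedCredentialFilter, new CredentialCallback<List<Credential>>() {
    @Override
    public void onSuccess(List<Credential> credentials) {
        for (Credential credential : credentials) {
            credentialClient.deleteCredential(credential, new CredentialCallback<Void>() {
                @Override
                public void onSuccess(Void unused) {
                    // Credential removed from Keyring.
                }
                @Override
                public void onFailure(long errorCode, CharSequence description) {
                    // Keep the credential and surface the error.
                }
            });
        }
    }
    @Override
    public void onFailure(long errorCode, CharSequence description) {
        // Query failed; nothing to delete.
    }
});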
Keyring offers two modes for sharing credentials: using API parameters and using Digital Asset Links. I will detail both modes below.
Sharing Credentials Using API Parameters​
In this mode, when calling the saveCredential method to save credentials, you call the setSharedWith method on the Credential object to implement credential sharing. A credential can be shared with a maximum of 128 apps.
The sample code is as follows:
Code:
AndroidAppIdentity app1 = new AndroidAppIdentity("your android app name",
"your android app package name", "3C:99:C3:....");
QuickAppIdentity app2 = new QuickAppIdentity("your quick app name",
"your quick app package name", "DC:99:C4:....");
List<AppIdentity> sharedAppList = new ArrayList<>(); // List of apps with which the credential is shared.
sharedAppList.add(app1);
sharedAppList.add(app2);
Credential credential = new Credential("username", CredentialType.PASSWORD, true,
"password".getBytes());
credential.setSharedWith(sharedAppList); // Set the credential sharing relationship.
credentialClient.saveCredential(credential, new CredentialCallback<Void>() {
@Override
public void onSuccess(Void unused) {
Toast.makeText(MainActivity.this,
R.string.save_credential_ok,
Toast.LENGTH_SHORT).show();
}
@Override
public void onFailure(long errorCode, CharSequence description) {
Toast.makeText(MainActivity.this,
R.string.save_credential_failed + " " + errorCode + ":" + description,
Toast.LENGTH_SHORT).show();
}
});
Sharing Credentials Using Digital Asset Links​
In this mode, you can add credential sharing relationships in the AndroidManifest.xml file of your Android app. The procedure is as follows:
1. Add the following content to the <application> element in the AndroidManifest.xml file:
Code:
<application>
<meta-data
android:name="asset_statements"
android:value="@string/asset_statements" />
</application>
2. Add the following content to the res\values\strings.xml file:
Code:
<string name="asset_statements">your digital asset links statements</string>
The Digital Asset Links statements are JSON strings that comply with the Digital Asset Links protocol. (Remember to escape the double quotation marks when embedding the JSON in a string resource.) The sample code is as follows:
Code:
[{
"relation": ["delegate_permission/common.get_login_creds"],
"target": {
"namespace": "web",
"site": "https://developer.huawei.com" // Set your website domain name.
}
},
{
"relation": ["delegate_permission/common.get_login_creds"],
"target": {
"namespace": "android_app",
"package_name": "your android app package name",
"sha256_cert_fingerprints": [
"F2:52:4D:..."
]
}
},
{
"relation": ["delegate_permission/common.get_login_creds"],
"target": {
"namespace": "quick_app",
"package_name": "your quick app package name",
"sha256_cert_fingerprints": [
"C3:68:9F:..."
]
}
}
]
The relation attribute has a fixed value of ["delegate_permission/common.get_login_creds"], indicating that the credential is shared with apps described in the target attribute.
And that's all for integrating Keyring. That was pretty straightforward, right? You can click here to find out more about Keyring and try it out.
Conclusion​
More and more developers are prioritizing a seamless sign-in experience to retain users and reduce churn. This is especially true for developers with multiple apps and app versions for different platforms, because it lets them share the user base across those apps. There are many ways to achieve this; as I illustrated earlier in this article, my solution is to integrate Keyring, which has turned out to be very effective. If you have similar demands, give this service a try, and you may be pleasantly surprised.
Did I miss anything? Let me know in the comments section below.
