1. Intro
If you're looking to develop a customized and cost-effective deep learning model, you'd be remiss not to try the recently released custom model service in HUAWEI ML Kit. This service gives you the tools to manage the size of your model, and provides simple APIs for you to perform inference. The following is a demonstration of how you can run your model on a device at a minimal cost.
The service allows you to pre-train image classification models, and the steps below show you the process for training and using a custom model.
2. Implementation
a. Install HMS Toolkit from Android Studio Marketplace. After the installation, restart Android Studio.
b. Transfer learning by using AI Create.
Basic configuration
* Note: First install a Python IDE.
AI Create uses MindSpore as the training framework and MindSpore Lite as the inference framework. Follow the steps below to complete the basic configuration.
i) In Coding Assistant, go to AI > AI Create.
ii) Select Image or Text, and click on Confirm.
iii) Restart the IDE, then select Image or Text again and click on Confirm. The MindSpore tool will be automatically installed.
HMS Toolkit allows you to generate an API or demo project in one click, enabling you to quickly verify and call the image classification model in your app.
Before using the transfer learning function for image classification, prepare image resources for training as needed.
*Note: The images should be clear and placed in different folders by category.
Model training
Train the image classification model pre-trained in ML Kit to learn hundreds of images in specific fields (such as vehicles and animals) in a matter of minutes. A new model will then be generated, which will be capable of automatically classifying images. Follow the steps below for model training.
i) In Coding Assistant, go to AI > AI Create > Image.
ii) Set the following parameters and click on Confirm.
Operation type: Select New Model.
Model Deployment Location: Select Deployment Cloud.
iii) Drag or add the image folders to the Please select train image folder area.
iv) Set Output model file path and train parameters.
v) Retain the default values of the train parameters as shown below:
Iteration count: 100
Learning rate: 0.01
vi) Click on Create Model to start training and to generate an image classification model.
After the model is generated, view the model learning results (training precision and verification precision), corresponding learning parameters, and training data.
Model verification
After the model training is complete, you can verify the model by adding the image folders in the Please select test image folder area under Add test image. The tool will automatically use the trained model to perform the test and display the test results.
Click on Generate Demo to have HMS Toolkit generate a demo project, which automatically integrates the trained model. You can directly run and build the demo project to generate an APK file, and run the file on a simulator or real device to verify image classification performance.
c. Use the model.
Upload the model
The image classification service classifies elements in images into logical categories, such as people, objects, environments, activities, or artwork, to define image themes and application scenarios. The service supports both on-device and on-cloud recognition modes, and offers the pre-trained model capability.
To upload your model to the cloud, perform the following steps:
i) Sign in to AppGallery Connect and click on My projects.
ii) Go to ML Kit > Custom ML to open the model upload page. On this page, you can also upgrade existing models.
Load the remote model
Before loading a remote model, check whether the remote model has been downloaded. Load the local model if the remote model has not been downloaded.
Code:
localModel = new MLCustomLocalModel.Factory("localModelName")
        .setAssetPathFile("assetpathname")
        .create();
remoteModel = new MLCustomRemoteModel.Factory("yourremotemodelname").create();
MLLocalModelManager.getInstance()
        // Check whether the remote model exists.
        .isModelExist(remoteModel)
        .addOnSuccessListener(new OnSuccessListener<Boolean>() {
            @Override
            public void onSuccess(Boolean isDownloaded) {
                MLModelExecutorSettings settings;
                // If the remote model exists, load it first. Otherwise, load the existing local model.
                if (isDownloaded) {
                    settings = new MLModelExecutorSettings.Factory(remoteModel).create();
                } else {
                    settings = new MLModelExecutorSettings.Factory(localModel).create();
                }
                final MLModelExecutor modelExecutor = MLModelExecutor.getInstance(settings);
                executorImpl(modelExecutor, bitmap);
            }
        })
        .addOnFailureListener(new OnFailureListener() {
            @Override
            public void onFailure(Exception e) {
                // Exception handling.
            }
        });
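If the remote model has not been downloaded yet, you can trigger the download with the model manager before falling back to the local model. The snippet below is a minimal sketch based on the kit's download API; the strategy options shown (such as requiring Wi-Fi) are illustrative assumptions, so check the Development Guide for the exact signatures.
Code:
MLModelDownloadStrategy strategy = new MLModelDownloadStrategy.Factory()
        // Illustrative assumption: restrict the download to Wi-Fi.
        .needWifi()
        .create();
MLLocalModelManager.getInstance()
        .downloadModel(remoteModel, strategy)
        .addOnSuccessListener(new OnSuccessListener<Void>() {
            @Override
            public void onSuccess(Void aVoid) {
                // The remote model is now available and can be loaded for inference.
            }
        })
        .addOnFailureListener(new OnFailureListener() {
            @Override
            public void onFailure(Exception e) {
                // Download failed; keep using the local model.
            }
        });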
Perform inference using the model inference engine
Set the input and output formats, input the image data to the inference engine, and use the loaded modelExecutor (an MLModelExecutor object) to perform the inference.
Code:
private void executorImpl(final MLModelExecutor modelExecutor, Bitmap bitmap) {
    // Prepare input data. The model in this example expects a 1 x 224 x 224 x 3 input.
    final Bitmap inputBitmap = Bitmap.createScaledBitmap(bitmap, 224, 224, true);
    final float[][][][] input = new float[1][224][224][3];
    for (int i = 0; i < 224; i++) {
        for (int j = 0; j < 224; j++) {
            int pixel = inputBitmap.getPixel(i, j);
            // Normalize each channel to the range expected by the model. The batch index is 0.
            input[0][j][i][0] = (Color.red(pixel) - 127) / 128.0f;
            input[0][j][i][1] = (Color.green(pixel) - 127) / 128.0f;
            input[0][j][i][2] = (Color.blue(pixel) - 127) / 128.0f;
        }
    }
    MLModelInputs inputs = null;
    try {
        inputs = new MLModelInputs.Factory().add(input).create();
        // If the model requires multiple inputs, call add() once for each of them so that
        // all input data can be passed to the inference engine at a time.
    } catch (MLException e) {
        // Handle the input data formatting exception.
    }
    // Perform inference. inOutSettings (an MLModelInputOutputSettings object) describes the input
    // and output formats; see the snippet after this method for how to create it. Use
    // addOnSuccessListener to listen for successful inference (processed in the onSuccess callback)
    // and addOnFailureListener to listen for inference failure (processed in the onFailure callback).
    modelExecutor.exec(inputs, inOutSettings).addOnSuccessListener(new OnSuccessListener<MLModelOutputs>() {
        @Override
        public void onSuccess(MLModelOutputs mlModelOutputs) {
            float[][] output = mlModelOutputs.getOutput(0);
            // The inference result is in the output array and can be further processed.
        }
    }).addOnFailureListener(new OnFailureListener() {
        @Override
        public void onFailure(Exception e) {
            // Inference exception.
        }
    });
}
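The inOutSettings object used above tells the inference engine the tensor shapes the model expects. Below is a minimal sketch of creating it, assuming a float32 image classification model with a 1 x 224 x 224 x 3 input and a single output vector; OUTPUT_SIZE is a placeholder for your model's category count.
Code:
// OUTPUT_SIZE is a placeholder for the number of categories your model outputs.
final int OUTPUT_SIZE = 1000;
MLModelInputOutputSettings inOutSettings = null;
try {
    inOutSettings = new MLModelInputOutputSettings.Factory()
            // Input: 1 image, 224 x 224 pixels, 3 channels, float32.
            .setInputFormat(0, MLModelDataType.FLOAT32, new int[]{1, 224, 224, 3})
            // Output: 1 probability vector with OUTPUT_SIZE entries, float32.
            .setOutputFormat(0, MLModelDataType.FLOAT32, new int[]{1, OUTPUT_SIZE})
            .create();
} catch (MLException e) {
    // Handle the settings creation exception.
}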
3. Summary
By utilizing Huawei's deep learning framework, you'll be able to create and use a deep learning model for your app by following just a few steps!
Furthermore, the custom model service for ML Kit is compatible with all mainstream model inference platforms and frameworks on the market, including MindSpore, TensorFlow Lite, Caffe, and ONNX. Different models can be converted into the .ms format and run seamlessly within the on-device inference framework.
Custom models can be deployed to the device in a smaller size after being quantized and compressed. To further reduce the APK size, you can host your models on the cloud. With this service, even a novice in the field of deep learning is able to quickly develop an AI-driven app which serves a specific purpose.
Learn More
For more information, please visit HUAWEI Developers.
For detailed instructions, please visit Development Guide.
You can join the HMS Core developer discussion on Reddit.
You can download the demo and sample code from GitHub.
To solve integration problems, please go to Stack Overflow.
It seems there are many new functions on ML Kit. Wonderful!
Hi, have you tried searching for public APIs?
Related
Introduction
In this article, we will learn how to implement the Huawei Safety Detect Kit in mobile applications. Mobile devices have become more popular than laptops, and nowadays users perform nearly all their activities on them, from watching the news and checking emails to online shopping and bank transactions. Through these apps, businesses can gather usable information, which can help them make precise decisions for better services.
What is Huawei Safety Detect Service?
Safety Detect builds robust security capabilities, including system integrity check (SysIntegrity), app security check (AppsCheck), malicious URL check (URLCheck), fake user detection (UserDetect), and malicious Wi-Fi detection (WifiDetect), into your app, effectively protecting it against security threats.
1. SysIntegrity API: Checks whether the device running your app is secure, for example, whether it is rooted.
2. AppsCheck API: Checks for malicious apps and provides you with a list of malicious apps.
3. URLCheck API: Determines the threat type of a specific URL.
4. UserDetect API: Checks whether your app is interacting with a fake user.
5. WifiDetect API: Checks whether the Wi-Fi to be connected is secure.
Why Security is required for Apps
Mobile app security comprises measures to protect applications from threats like malware and other digital fraud that put critical personal and financial information at risk from hackers. To guard against all of these, we need to integrate Safety Detect.
What restrictions exist?
Currently there are two restrictions, concerning WifiDetect and UserDetect.
1. The WifiDetect function is available only in the Chinese mainland.
2. The UserDetect function is not available in the Chinese mainland.
Advantages
1. Provides a Trusted Execution Environment (TEE) to check system integrity.
2. Makes building security into your app easy with a rapid integration wizard.
3. Checks security for a diversity of apps: e-commerce, finance, multimedia, and news.
Requirements
1. Any operating system (e.g. macOS, Linux, or Windows)
2. Any IDE with the Flutter SDK installed (e.g. IntelliJ, Android Studio, or VS Code)
3. A little knowledge of Dart and Flutter.
4. A Brain to think
Setting up the project
1. Before you start creating the application, make sure the project is connected to AppGallery. For more information, check this link.
2. After that, follow the URL for cross-platform plugins and download the required plugins.
3. Enable Safety Detect in the Manage APIs section and add the plugin.
4. After completing all the above steps, add the required kits' Flutter plugins as dependencies to the pubspec.yaml file. You can find all the plugins on pub.dev with their latest versions.
Code:
huawei_safetydetect:
  path: ../huawei_safetydetect/
After adding them, run the flutter pub get command. Now all the plugins are ready to use.
Note: Set multiDexEnabled to true in the app-level build.gradle file (in the android/app directory) so that the app will not crash.
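If you prefer pulling the plugin from pub.dev instead of referencing a local path, the dependency takes the usual versioned form. The version below is only a placeholder; check pub.dev for the latest release.
Code:
dependencies:
  # Placeholder version; check pub.dev for the latest release.
  huawei_safetydetect: ^6.1.0+303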
Why we need SysIntegrity API and How to Use?
The SysIntegrity API is called to check the system integrity of a device. If the device is not safe, appropriate measures are taken.
Before implementing this API, we need to make sure the latest version of HMS Core is installed on the user's device.
Obtain a nonce value, which will be used to determine whether the returned result corresponds to the request and has not been affected by a replay attack. The nonce value must contain a minimum of 16 bytes and is intended to be used only once. Your app ID is also required as an input parameter.
Code:
getAppId() async {
  String appID = await SafetyDetect.getAppID;
  setState(() {
    appId = appID;
  });
}

checkSysIntegrity() async {
  // Generate a random nonce of at least 16 bytes (24 here).
  Random secureRandom = Random.secure();
  List<int> randomIntegers = [];
  for (var i = 0; i < 24; i++) {
    randomIntegers.add(secureRandom.nextInt(255));
  }
  Uint8List nonce = Uint8List.fromList(randomIntegers);
  try {
    String result = await SafetyDetect.sysIntegrity(nonce, appId);
    // The result is a JWS string; the payload is its second, Base64URL-encoded segment.
    // normalize() restores the padding that JWS segments omit.
    List<String> jwsSplit = result.split(".");
    String decodedText =
        utf8.decode(base64Url.decode(base64Url.normalize(jwsSplit[1])));
    showToast("SysIntegrityCheck result is: $decodedText");
  } on PlatformException catch (e) {
    showToast("Error occurred while getting SysIntegrityResult. Error is: $e");
  }
}
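Once decoded, the payload is a JSON object. Below is a minimal sketch of checking it, assuming the payload contains a basicIntegrity field as described in the Safety Detect documentation:
Code:
// Minimal sketch: decide whether the device passed the integrity check.
// Assumes the decoded JWS payload is JSON with a "basicIntegrity" field.
bool isDeviceSafe(String decodedPayload) {
  final Map<String, dynamic> payload = jsonDecode(decodedPayload);
  return payload['basicIntegrity'] == true;
}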
Why we need AppsCheck API and How to Use?
You can obtain a list of all malicious applications and evaluate whether to restrict the behaviour of your application based on the risk.
You can directly call the getMaliciousAppsList() method to get all the malicious apps.
Code:
void getMaliciousAppsList() async {
  List<MaliciousAppData> maliciousApps = [];
  maliciousApps = await SafetyDetect.getMaliciousAppsList();
  setState(() {
    showToast("Malicious apps: ${maliciousApps.toString()}");
  });
}
The returned task contains a list of malicious applications. For each application in the list, you can find out the package name, SHA-256 value, and category.
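As a hedged sketch, iterating over that list might look like the following; the field names apkPackageName, apkSha256, and apkCategory are assumptions about the plugin's data model, so verify them against the MaliciousAppData class in the plugin's API reference.
Code:
// Hedged sketch: log each detected app's details. The field names are assumptions.
void logMaliciousApps(List<MaliciousAppData> maliciousApps) {
  for (final app in maliciousApps) {
    print("package: ${app.apkPackageName}, "
        "sha256: ${app.apkSha256}, category: ${app.apkCategory}");
  }
}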
Why we need User Detect API and How to Use?
This API can help your app prevent batch registration, credential stuffing attacks, activity bonus hunting, and content crawling. If a user is suspicious or risky, a verification code is sent to the user for secondary verification. If the detection result indicates that the user is a real one, the user can sign in to the app. Otherwise, the user is not allowed to access the main page.
Code:
void _signInHuawei() async {
  final helper = new HmsAuthParamHelper();
  helper
    ..setAccessToken()
    ..setIdToken()
    ..setProfile()
    ..setEmail()
    ..setAuthorizationCode();
  try {
    HmsAuthHuaweiId authHuaweiId =
        await HmsAuthService.signIn(authParamHelper: helper);
    StorageUtil.putString("Token", authHuaweiId.accessToken);
  } on Exception catch (e) {}
}

userDetection() async {
  try {
    String token = await SafetyDetect.userDetection(appId);
    print("User verification succeeded, user token: $token");
    if (token != null) {
      // The user passed the check; navigate to the main page.
      Navigator.push(
        context,
        MaterialPageRoute(builder: (context) => HomePageScreen()),
      );
    }
  } on PlatformException catch (e) {
    print("Error occurred: " + e.code + ":" + SafetyDetectStatusCodes[e.code]);
  }
}
Why we need URLCheck API and How to Use?
You can identify dangerous URLs using the URL Check API. Currently, the URL check can determine MALWARE and PHISHING threats. When a user visits a URL, this API checks whether the URL is a malicious one. If so, you can evaluate the risk, alert the user about it, or block the URL.
Code:
InkWell(
  onTap: () {
    loadUrl();
  },
  child: Text(
    'Visit: $url',
    style: TextStyle(color: textColor),
  ),
)

void loadUrl() async {
  Future.delayed(const Duration(seconds: 5), () async {
    urlCheck();
  });
}

void urlCheck() async {
  List<UrlThreatType> threatTypes = [
    UrlThreatType.malware,
    UrlThreatType.phishing
  ];
  List<UrlCheckThreat> urlCheckResults =
      await SafetyDetect.urlCheck(url, appId, threatTypes);
  if (urlCheckResults.length == 0) {
    showToast("No threat is detected for the URL");
  } else {
    urlCheckResults.forEach((element) {
      print("${element.getUrlThreatType} is detected on the URL");
    });
  }
}
Why we need WifiDetect API and How to Use?
This API checks the characteristics of the Wi-Fi network and router to be connected, analyzes the Wi-Fi information, and returns the classified Wi-Fi detection results, helping you prevent possible attacks on your app from malicious Wi-Fi networks. If attacks are detected, the app can interrupt the user operation or ask for the user's permission to continue.
Code:
@override
void initState() {
  getWifiDetectStatus();
  super.initState();
}

getWifiDetectStatus() async {
  try {
    WifiDetectResponse wifiDetectStatus =
        await SafetyDetect.getWifiDetectStatus();
    ApplicationUtils.displayToast(
        'Wifi detect status is: ${wifiDetectStatus.getWifiDetectType.toString()}');
  } on PlatformException catch (e) {
    if (e.code.toString() == "19003") {
      // Error code 19003: the API is unavailable in this region.
      ApplicationUtils.displayToast('The WifiDetect API is unavailable in this region');
    }
  }
}
Note: Currently, this API is supported only in the Chinese mainland.
Demo
Tips & Tricks
1. Download latest HMS Flutter plugin.
2. Set minSDK version to 19 or later.
3. Do not forget to click Pub get after adding dependencies.
4. Latest HMS Core APK is required.
Conclusion
These were some of the best practices that a mobile app developer must follow in order to have a fully secure and difficult-to-crack application.
In the near future, security will act as one of the differentiating and competing innovations in the app world, with customers preferring secure apps to maintain the privacy of their data over other mobile applications.
Thanks for reading! If you enjoyed this story, please click the Like button and Follow. Feel free to leave a Comment below.
Reference
Safety detect Kit URL
1. Background
Cordova is an open-source cross-platform development framework that allows you to use HTML and JavaScript to develop apps across multiple platforms, such as Android and iOS. So how exactly does Cordova enable apps to run on different platforms and implement the functions? The abundant plugins in Cordova are the main reason, and free you to focus solely on app functions, without having to interact with the APIs at the OS level.
HMS Core provides a set of Cordova-related plugins, which enable you to integrate kits with greater ease and efficiency.
2. Introduction
Here, I'll use the Cordova plugin in HUAWEI Push Kit as an example to demonstrate how to call Java APIs in JavaScript through JavaScript-Java messaging.
The following implementation principles can be applied to all other kits, except for Map Kit and Ads Kit (which will be detailed later), and help you master troubleshooting solutions.
3. Basic Structure of Cordova
When you call loadUrl in MainActivity, CordovaWebView will be initialized and Cordova starts up. In this case, CordovaWebView will create PluginManager, NativeToJsMessageQueue, as well as ExposedJsApi of JavascriptInterface. ExposedJsApi and NativeToJsMessageQueue will play a role in the subsequent communication.
During the plugin loading, all plugins in the configuration file will be read when the PluginManager object is created, and plugin mappings will be created. When the plugin is called for the first time, instantiation is conducted and related functions are executed.
A message can be returned from Java to JavaScript in synchronous or asynchronous mode. In Cordova, set async in the method to distinguish the two modes.
In synchronous mode, Cordova obtains data from the header of the NativeToJsMessageQueue queue, finds the message request based on callbackID, and returns the data to the success method of the request.
In asynchronous mode, Cordova calls the loop method to continuously obtain data from the NativeToJsMessageQueue queue, finds the message request, and returns the data to the success method of the request.
In the Cordova plugin of Push Kit, the synchronization mode is used.
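To make this messaging concrete, here is a hedged sketch of the raw JavaScript-side bridge call that wrappers like HmsPush.js build on. The service name is the one a plugin registers in plugin.xml, and the args layout mirrors the execute() methods shown later; treat both as illustrative assumptions rather than the plugin's exact source.
Code:
// Hedged sketch of the underlying Cordova bridge call.
cordova.exec(
    function (result) { console.log('success: ' + result); }, // success callback
    function (error) { console.log('failure: ' + error); },   // failure callback
    'HmsPushMessaging', // service (plugin) name from plugin.xml (assumption)
    'turnOnPush',       // action dispatched by execute()
    ['turnOnPush']      // args array; the plugin reads per-action args from index 1
);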
4. Plugin Call
You may still be unclear on how the process works, based on the description above, so I've provided the following procedure:
1. Install the plugin.
Run the cordova plugin add @hmscore/cordova-plugin-hms-push command to install the latest plugin. After the command is executed, the plugin information is added to the plugins directory.
The plugin.xml file records all information to be used, such as JavaScript and Android classes. During the plugin initialization, the classes will be loaded to Cordova. If a method or API is not configured in the file, it is unable to be used.
2. Create a message mapping.
The plugin provides the methods for creating mappings for the following messages:
1) HmsMessaging
In the HmsPush.js file, call the runHmsMessaging API in asynchronous mode to transfer the message to the Android platform. The Android platform returns the result through Promise.
The message will be transferred to the HmsPushMessaging class. The execute method in HmsPushMessaging can transfer the message to a method for processing based on the action type in the message.
Code:
public void execute(String action, final JSONArray args, final CallbackContext callbackContext)
    throws JSONException {
    hmsLogger.startMethodExecutionTimer(action);
    switch (action) {
        case "isAutoInitEnabled":
            isAutoInitEnabled(callbackContext);
            break;
        case "setAutoInitEnabled":
            setAutoInitEnabled(args.getBoolean(1), callbackContext);
            break;
        case "turnOffPush":
            turnOffPush(callbackContext);
            break;
        case "turnOnPush":
            turnOnPush(callbackContext);
            break;
        case "subscribe":
            subscribe(args.getString(1), callbackContext);
            break;
The processing method returns the result to JavaScript. The result will be written to the nativeToJsMessageQueue queue.
Code:
callBack.sendPluginResult(new PluginResult(PluginResult.Status.OK,autoInit));
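For context, a processing method such as isAutoInitEnabled might look like the following hedged sketch (not the plugin's actual source): it queries HmsMessaging from the Push SDK and hands the value back through sendPluginResult, which writes it to the NativeToJsMessageQueue.
Code:
// Hedged sketch of a processing method; the real plugin's code may differ.
private void isAutoInitEnabled(CallbackContext callbackContext) {
    // Query the Push SDK for the current auto-init state.
    boolean autoInit = HmsMessaging.getInstance(cordova.getContext()).isAutoInitEnabled();
    // Return the result to JavaScript.
    callbackContext.sendPluginResult(new PluginResult(PluginResult.Status.OK, autoInit));
}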
2) HmsInstanceId
In the HmsPush.js file, call the runHmsInstance API in asynchronous mode to transfer the message to the Android platform. The Android platform returns the result through Promise.
The message will be transferred to the HmsPushInstanceId class. The execute method in HmsPushInstanceId can transfer the message to a method for processing based on the action type in the message.
Code:
public void execute(String action, final JSONArray args, final CallbackContext callbackContext) throws JSONException {
    if (!action.equals("init"))
        hmsLogger.startMethodExecutionTimer(action);
    switch (action) {
        case "init":
            Log.i("HMSPush", "HMSPush initialized ");
            break;
        case "enableLogger":
            enableLogger(callbackContext);
            break;
        case "disableLogger":
            disableLogger(callbackContext);
            break;
        case "getToken":
            getToken(args.length() > 1 ? args.getString(1) : Core.HCM, callbackContext);
            break;
        case "getAAID":
            getAAID(callbackContext);
            break;
        case "getCreationTime":
            getCreationTime(callbackContext);
            break;
Similarly, the processing method returns the result to JavaScript. The result will be written to the nativeToJsMessageQueue queue.
Code:
callBack.sendPluginResult(new PluginResult(PluginResult.Status.OK,autoInit));
This process is similar to that for HmsPushMessaging. The main difference is that HmsInstanceId is used for HmsPushInstanceId-related APIs, and HmsMessaging is used for HmsPushMessaging-related APIs.
3) localNotification
In the HmsLocalNotification.js file, call the run API in asynchronous mode to transfer the message to the Android platform. The Android platform returns the result through Promise.
The message will be transferred to the HmsLocalNotification class. The execute method in HmsLocalNotification can transfer the message to a method for processing based on the action type in the message.
Code:
public void execute(String action, final JSONArray args, final CallbackContext callbackContext) throws JSONException {
    switch (action) {
        case "localNotification":
            localNotification(args, callbackContext);
            break;
        case "localNotificationSchedule":
            localNotificationSchedule(args.getJSONObject(1), callbackContext);
            break;
        case "cancelAllNotifications":
            cancelAllNotifications(callbackContext);
            break;
        case "cancelNotifications":
            cancelNotifications(callbackContext);
            break;
        case "cancelScheduledNotifications":
            cancelScheduledNotifications(callbackContext);
            break;
        case "cancelNotificationsWithId":
            cancelNotificationsWithId(args.getJSONArray(1), callbackContext);
            break;
Call sendPluginResult to return the result. However, for localNotification, the result will be returned after the notification is sent.
3. Perform message push event callback.
In addition to the method calling, message push involves listening for many events, for example, receiving common messages, data messages, and tokens.
The callback process starts from Android.
In Android, the callback method is defined in HmsPushMessageService.java.
Based on the SDK requirements, you can opt to redefine certain callback methods, such as onMessageReceived, onDeletedMessages, and onNewToken.
When an event is triggered, an event notification is sent to JavaScript.
Code:
public static void runJS(final CordovaPlugin plugin, final String jsCode) {
    if (plugin == null)
        return;
    Log.d(TAG, "runJS()");
    plugin.cordova.getActivity().runOnUiThread(() -> {
        CordovaWebViewEngine engine = plugin.webView.getEngine();
        if (engine == null) {
            plugin.webView.loadUrl("javascript:" + jsCode);
        } else {
            engine.evaluateJavascript(jsCode, (result) -> {
            });
        }
    });
}
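As a hedged usage sketch, a push callback could forward a new token to JavaScript through runJS like this; pluginInstance and the hmsDispatchEvent JavaScript function are placeholders standing in for the plugin's real event-publishing path.
Code:
// Hypothetical usage of runJS(); "pluginInstance" and "hmsDispatchEvent" are
// placeholders, not names from the actual plugin source.
@Override
public void onNewToken(String token) {
    super.onNewToken(token);
    String jsCode = "hmsDispatchEvent('TOKEN_RECEIVED_EVENT', '" + token + "');";
    runJS(pluginInstance, jsCode);
}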
Each event is defined and registered in HmsPushEvent.js.
Code:
exports.REMOTE_DATA_MESSAGE_RECEIVED = "REMOTE_DATA_MESSAGE_RECEIVED";
exports.TOKEN_RECEIVED_EVENT = "TOKEN_RECEIVED_EVENT";
exports.ON_TOKEN_ERROR_EVENT = "ON_TOKEN_ERROR_EVENT";
exports.NOTIFICATION_OPENED_EVENT = "NOTIFICATION_OPENED_EVENT";
exports.LOCAL_NOTIFICATION_ACTION_EVENT = "LOCAL_NOTIFICATION_ACTION_EVENT";
exports.ON_PUSH_MESSAGE_SENT = "ON_PUSH_MESSAGE_SENT";
exports.ON_PUSH_MESSAGE_SENT_ERROR = "ON_PUSH_MESSAGE_SENT_ERROR";
exports.ON_PUSH_MESSAGE_SENT_DELIVERED = "ON_PUSH_MESSAGE_SENT_DELIVERED";
Code:
function onPushMessageSentDelivered(result) {
  window.registerHMSEvent(exports.ON_PUSH_MESSAGE_SENT_DELIVERED, result);
}
exports.onPushMessageSentDelivered = onPushMessageSentDelivered;
Please note that the event initialization needs to be performed during app development. Otherwise, the event listening will fail. For more details, please refer to eventListeners.js in the demo.
If the callback has been triggered in Java, but is not received in JavaScript, check whether the event initialization is performed.
In doing so, when an event is triggered in Android, JavaScript will be able to receive and process the message. You can also refer to this process to add an event.
5. Summary
The description above illustrates how the plugin implements the JavaScript-Java communications. The methods of most kits can be called in a similar manner. However, Map Kit, Ads Kit, and other kits that need to display images or videos (such as maps and native ads) require a different method, which will be introduced in a later article.
To learn more, please visit:
HUAWEI Developers official website
Development Guide
Reddit to join developer discussions
GitHub or Gitee to download the demo and sample code
Stack Overflow to solve integration problems
Follow our official account for the latest HMS Core-related news and updates.
Original Source
Personalized health records and visual tools have been a godsend for digital health management, giving users the tools to conveniently track their health on their mobile phones. From diet to weight and fitness and beyond, storing, managing, and sharing health data has never been easier. Users can track their health over a specific period of time, like a week or a month, to identify potential diseases in a timely manner, and to lead a healthy lifestyle. Moreover, with personalized health records in hand, trips to the doctor now lead to quicker and more accurate diagnoses. Health Kit takes this new paradigm into overdrive, opening up a wealth of capabilities that can endow your health app with nimble, user-friendly features.
With the basic capabilities of Health Kit integrated, your app will be able to obtain users' health data on the cloud from the Huawei Health app, after obtaining users' authorization, and then display the data to users.
Effects
This demo is modified based on the sample code of Health Kit's basic capabilities. You can download the demo and try it out to build your own health app.
Preparations
Registering an Account and Applying for the HUAWEI ID Service
Health Kit uses the HUAWEI ID service, and therefore you need to apply for the HUAWEI ID service first. Skip this step if you have done so for your app.
Applying for the Health Kit Service
Apply for the data read and write scopes for your app. Find the Health Kit service in the Development section on HUAWEI Developers, and apply for the Health Kit service. Select the data scopes required by your app. In the demo, the height and weight data are applied for, which are unrestricted data and will be quickly approved after your application is submitted. If you want to apply for restricted data scopes such as heart rate, blood pressure, blood glucose, and blood oxygen saturation, your application will be manually reviewed.
Integrating the HMS Core SDK
Before getting started, integrate the Health SDK of the basic capabilities into the development environment.
Use Android Studio to open the project, and find and open the build.gradle file in the root directory of the project. Go to allprojects > repositories and buildscript > repositories to add the Maven repository address for the SDK.
Code:
maven {url 'https://developer.huawei.com/repo/'}
Open the app-level build.gradle file and add the following build dependency to the dependencies block.
Code:
implementation 'com.huawei.hms:health:{version}'
Open the modified build.gradle file again. You will find a Sync Now link in the upper right corner of the page. Click Sync Now and wait until the synchronization is complete.
Configuring the Obfuscation Configuration File
Before building the APK, configure the obfuscation configuration file to prevent the HMS Core SDK from being obfuscated.
Open the obfuscation configuration file proguard-rules.pro in the app's root directory of the project, and add configurations to exclude the HMS Core SDK from obfuscation.
Code:
-ignorewarnings
-keepattributes *Annotation*
-keepattributes Exceptions
-keepattributes InnerClasses
-keepattributes Signature
-keepattributes SourceFile,LineNumberTable
-keep class com.huawei.hianalytics.**{*;}
-keep class com.huawei.updatesdk.**{*;}
-keep class com.huawei.hms.**{*;}
Importing the Certificate Fingerprint, Changing the Package Name, and Configuring the JDK Build Version
Import the keystore file generated when the app is created. After the import, open the app-level build.gradle file to view the import result.
Change the app package name to the one you set when applying for the HUAWEI ID service.
Open the app-level build.gradle file and add the compileOptions configuration to the android block as follows:
Code:
compileOptions {
    sourceCompatibility = '1.8'
    targetCompatibility = '1.8'
}
Main Implementation Code
1. Start the screen for login and authorization.
Code:
/**
 * Add scopes that you are going to apply for and obtain the authorization intent.
 */
private void requestAuth() {
    // Add scopes that you are going to apply for. The following is only an example.
    // You need to add scopes for your app according to your service needs.
    String[] allScopes = Scopes.getAllScopes();
    // Obtain the authorization intent.
    // True indicates that the Huawei Health app authorization process is enabled; False otherwise.
    Intent intent = mSettingController.requestAuthorizationIntent(allScopes, true);
    // The authorization screen is displayed.
    startActivityForResult(intent, REQUEST_AUTH);
}
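After the authorization screen returns, the result can be parsed in onActivityResult. The following is a hedged sketch based on the kit's sample code; parseHealthKitAuthResultFromIntent is assumed to be exposed by the setting controller as in that sample.
Code:
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == REQUEST_AUTH) {
        // Parse the authorization result (assumed API, as used in the kit's sample code).
        HealthKitAuthResult result = mSettingController.parseHealthKitAuthResultFromIntent(data);
        if (result != null && result.isSuccess()) {
            // Authorization granted; health data can now be read.
        } else {
            // Authorization failed or was canceled by the user.
        }
    }
}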
2. Call readLatestData() of the DataController class in com.huawei.hms.hihealth to read the latest health-related data, including height, weight, heart rate, blood pressure, blood glucose, and blood oxygen.
Code:
/**
 * Read the latest data according to the data type.
 *
 * @param view (indicating a UI object)
 */
public void readLatestData(View view) {
    // 1. Call the data controller using the specified data type (DT_INSTANTANEOUS_HEIGHT) to query data.
    // Query the latest data of this data type.
    List<DataType> dataTypes = new ArrayList<>();
    dataTypes.add(DataType.DT_INSTANTANEOUS_HEIGHT);
    dataTypes.add(DataType.DT_INSTANTANEOUS_BODY_WEIGHT);
    dataTypes.add(DataType.DT_INSTANTANEOUS_HEART_RATE);
    dataTypes.add(DataType.DT_INSTANTANEOUS_STRESS);
    dataTypes.add(HealthDataTypes.DT_INSTANTANEOUS_BLOOD_PRESSURE);
    dataTypes.add(HealthDataTypes.DT_INSTANTANEOUS_BLOOD_GLUCOSE);
    dataTypes.add(HealthDataTypes.DT_INSTANTANEOUS_SPO2);
    Task<Map<DataType, SamplePoint>> readLatestDatas = dataController.readLatestData(dataTypes);
    // 2. Calling the data controller to query the latest data is an asynchronous operation.
    // Therefore, a listener needs to be registered to monitor whether the data query is successful or not.
    readLatestDatas.addOnSuccessListener(new OnSuccessListener<Map<DataType, SamplePoint>>() {
        @Override
        public void onSuccess(Map<DataType, SamplePoint> samplePointMap) {
            logger("Success read latest data from HMS core");
            if (samplePointMap != null) {
                for (DataType dataType : dataTypes) {
                    if (samplePointMap.containsKey(dataType)) {
                        showSamplePoint(samplePointMap.get(dataType));
                        handleData(dataType);
                    } else {
                        logger("The DataType " + dataType.getName() + " has no latest data");
                    }
                }
            }
        }
    });
    readLatestDatas.addOnFailureListener(new OnFailureListener() {
        @Override
        public void onFailure(Exception e) {
            String errorCode = e.getMessage();
            String errorMsg = HiHealthStatusCodes.getStatusCodeMessage(Integer.parseInt(errorCode));
            logger(errorCode + ": " + errorMsg);
        }
    });
}
Each SamplePoint object in the returned map contains the specific data type and data value. You can obtain the corresponding data by parsing the object.
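For reference, a hedged sketch of what showSamplePoint might do is shown below: it iterates over the fields defined by the sample point's data type. getFields and getFieldValue follow the Health Kit data model, but verify the exact signatures in the API reference.
Code:
// Hedged sketch: print every field of a sample point, such as height or weight.
private void showSamplePoint(SamplePoint samplePoint) {
    if (samplePoint == null) {
        return;
    }
    for (Field field : samplePoint.getDataType().getFields()) {
        // getFieldValue returns the stored value for this field (assumed API).
        logger(field.getName() + ": " + samplePoint.getFieldValue(field));
    }
}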
Conclusion
Personal health records make it much easier for users to stay informed about their health. The health records help track health data over specific periods of time, such as week-by-week or month-by-month, providing invaluable insight, to make proactive health a day-to-day reality. When developing a health app, integrating data-related capabilities can help streamline the process, allowing you to focus your energy on app design and user features, to bring users a smart handy health assistant.
Reference
HUAWEI Developers
HMS Core Health Kit Development Guide
Integrating the HMS Core SDK
Audio is the soul of media, and for mobile apps in particular, it engages with users more, adds another level of immersion, and enriches content.
This is a major driver of my obsession for developing audio-related functions. In my recent post that tells how I developed a portrait retouching function for a live-streaming app, I mentioned that I wanted to create a solution that can retouch music. I know that a technology called spatial audio can help with this, and — guess what — I found a synonymous capability in HMS Core Audio Editor Kit, which can be integrated independently, or used together with other capabilities in the UI SDK of this kit.
I chose to integrate the UI SDK into my demo first, which is loaded with not only the kit's capabilities, but also a ready-to-use UI. This allows me to give the spatial audio capability a try and frees me from designing the UI. Now let's dive into the development procedure of the demo.
Development Procedure
Preparations
1. Prepare the development environment, which has requirements on both software and hardware. These are:
Software requirements:
JDK version: 1.8 or later
Android Studio version: 3.X or later
minSdkVersion: 24 or later
targetSdkVersion: 33 (recommended)
compileSdkVersion: 30 (recommended)
Gradle version: 4.6 or later (recommended)
Hardware requirements: a phone running EMUI 5.0 or later, or a phone running Android whose version ranges from Android 7.0 to Android 13.
2. Configure app information in a platform called AppGallery Connect, and go through the process of registering as a developer, creating an app, generating a signing certificate fingerprint, configuring the signing certificate fingerprint, enabling the kit, and managing the default data processing location.
3. Integrate the HMS Core SDK.
4. Add necessary permissions in the AndroidManifest.xml file, including the vibration permission, microphone permission, storage write permission, storage read permission, Internet permission, network status access permission, and permission to obtaining the changed network connectivity state.
When the app's Android SDK version is 29 or later, add the following attribute to the application element, which is used for obtaining the external storage permission.
Code:
<application
    android:requestLegacyExternalStorage="true"
    …… >
SDK Integration
1. Initialize the UI SDK and set the app authentication information. If the information is not set, this may affect some functions of the SDK.
Code:
// Obtain the API key from the agconnect-services.json file.
// It is recommended that the key be stored on cloud, which can be obtained when the app is running.
String api_key = AGConnectInstance.getInstance().getOptions().getString("client/api_key");
// Set the API key.
HAEApplication.getInstance().setApiKey(api_key);
2. Create AudioFilePickerActivity, which is a customized activity used for audio file selection.
Code:
/**
 * Customized activity, used for audio file selection.
 */
public class AudioFilePickerActivity extends AppCompatActivity {

    @Override
    protected void onCreate(@Nullable Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        performFileSearch();
    }

    private void performFileSearch() {
        // Select multiple audio files.
        registerForActivityResult(new ActivityResultContracts.GetMultipleContents(), new ActivityResultCallback<List<Uri>>() {
            @Override
            public void onActivityResult(List<Uri> result) {
                handleSelectedAudios(result);
                finish();
            }
        }).launch("audio/*");
    }

    /**
     * Process the selected audio files, turning the URIs into paths as needed.
     *
     * @param uriList indicates the selected audio files.
     */
    private void handleSelectedAudios(List<Uri> uriList) {
        // Check whether the audio files exist.
        if (uriList == null || uriList.size() == 0) {
            return;
        }
        ArrayList<String> audioList = new ArrayList<>();
        for (Uri uri : uriList) {
            // Obtain the real path.
            String filePath = FileUtils.getRealPath(this, uri);
            audioList.add(filePath);
        }
        // Return the audio file path to the audio editing UI.
        Intent intent = new Intent();
        // Use HAEConstant.AUDIO_PATH_LIST that is provided by the SDK.
        intent.putExtra(HAEConstant.AUDIO_PATH_LIST, audioList);
        // Use HAEConstant.RESULT_CODE as the result code.
        this.setResult(HAEConstant.RESULT_CODE, intent);
        finish();
    }
}
The FileUtils utility class is used for obtaining the real path, which is detailed here. Below is the path to this class.
Code:
app/src/main/java/com/huawei/hms/audioeditor/demo/util/FileUtils.java
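If you are building this utility yourself, one common approach is to copy the picked content into the app cache and return that file's path. The following is a hedged sketch under that assumption; the demo's FileUtils may resolve paths differently.
Code:
// Hedged sketch of a getRealPath implementation: copy the content URI's stream
// into the app cache and return the cached file's path. Not the demo's actual code.
public static String getRealPath(Context context, Uri uri) {
    try (InputStream in = context.getContentResolver().openInputStream(uri)) {
        if (in == null) {
            return null;
        }
        File outFile = new File(context.getCacheDir(), "audio_" + System.currentTimeMillis());
        try (OutputStream out = new FileOutputStream(outFile)) {
            byte[] buffer = new byte[8192];
            int len;
            while ((len = in.read(buffer)) != -1) {
                out.write(buffer, 0, len);
            }
        }
        return outFile.getAbsolutePath();
    } catch (IOException e) {
        return null;
    }
}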
3. Add the action value to AudioFilePickerActivity in AndroidManifest.xml. The SDK would direct to a screen according to this action.
Code:
<activity
    android:name=".AudioFilePickerActivity"
    android:exported="false">
    <intent-filter>
        <action android:name="com.huawei.hms.audioeditor.chooseaudio" />
        <category android:name="android.intent.category.DEFAULT" />
    </intent-filter>
</activity>
4. Launch the audio editing screen via either:
Mode 1: Launch the screen without input parameters. In this mode, the default configurations of the SDK are used.
Code:
HAEUIManager.getInstance().launchEditorActivity(this);
Audio editing screens
Mode 2: Launch the audio editing screen with input parameters. This mode lets you set the menu list and customize the path for an output file. On top of this, the mode also allows for specifying the input audio file paths, setting the draft mode, and more.
Launch the screen with the menu list and customized output file path:
Code:
// List of level-1 menus. Below are just some examples:
ArrayList<Integer> menuList = new ArrayList<>();
// Add audio.
menuList.add(MenuCommon.MAIN_MENU_ADD_AUDIO_CODE);
// Record audio.
menuList.add(MenuCommon.MAIN_MENU_AUDIO_RECORDER_CODE);
// List of level-2 menus, which are displayed after audio files are input and selected.
ArrayList<Integer> secondMenuList = new ArrayList<>();
// Split audio.
secondMenuList.add(MenuCommon.EDIT_MENU_SPLIT_CODE);
// Delete audio.
secondMenuList.add(MenuCommon.EDIT_MENU_DEL_CODE);
// Adjust the volume.
secondMenuList.add(MenuCommon.EDIT_MENU_VOLUME2_CODE);
// Customize the output file path.
String exportPath = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_MUSIC).getPath() + "/";
AudioEditorLaunchOption.Builder audioEditorLaunch = new AudioEditorLaunchOption.Builder()
        // Set the level-1 menus.
        .setCustomMenuList(menuList)
        // Set the level-2 menus.
        .setSecondMenuList(secondMenuList)
        // Set the output file path.
        .setExportPath(exportPath);
// Launch the audio editing screen with the menu list and customized output file path.
try {
    HAEUIManager.getInstance().launchEditorActivity(this, audioEditorLaunch.build(), new LaunchCallback() {
        @Override
        public void onFailed(int errCode, String errMsg) {
            Toast.makeText(mContext, errMsg, Toast.LENGTH_SHORT).show();
        }
    });
} catch (IOException e) {
    e.printStackTrace();
}
Level-1 menus
Level-2 menus
Launch the screen with the specified input audio file paths:
Code:
// Set the input audio file paths.
ArrayList<AudioInfo> audioInfoList = new ArrayList<>();
// Example of an audio file path:
String audioPath = "/storage/emulated/0/Music/Dream_It_Possible.flac";
// Create an instance of AudioInfo and pass the audio file path.
AudioInfo audioInfo = new AudioInfo(audioPath);
// Set the audio name.
audioInfo.setAudioName("Dream_It_Possible");
audioInfoList.add(audioInfo);
AudioEditorLaunchOption.Builder audioEditorLaunch = new AudioEditorLaunchOption.Builder()
// Set the input audio file paths.
.setFilePaths(audioInfoList);
// Launch the audio editing screen with the specified input audio file paths.
try {
HAEUIManager.getInstance().launchEditorActivity(this, audioEditorLaunch.build(), new LaunchCallback() {
@Override
public void onFailed(int errCode, String errMsg) {
Toast.makeText(mContext, errMsg, Toast.LENGTH_SHORT).show();
}
});
} catch (IOException e) {
e.printStackTrace();
}
Launch the screen with drafts:
Code:
// Obtain the draft list. For example:
List<DraftInfo> draftList = HAEUIManager.getInstance().getDraftList();
// Specify the first draft in the draft list.
String draftId = null;
if (!draftList.isEmpty()) {
draftId = draftList.get(0).getDraftId();
}
AudioEditorLaunchOption.Builder audioEditorLaunch = new AudioEditorLaunchOption.Builder()
// Set the draft ID, which can be null.
.setDraftId(draftId)
// Set the draft mode. NOT_SAVE is the default value, which indicates not to save a project as a draft.
.setDraftMode(AudioEditorLaunchOption.DraftMode.SAVE_DRAFT);
// Launch the audio editing screen with drafts.
try {
HAEUIManager.getInstance().launchEditorActivity(this, audioEditorLaunch.build(), new LaunchCallback() {
@Override
public void onFailed(int errCode, String errMsg) {
Toast.makeText(mContext, errMsg, Toast.LENGTH_SHORT).show();
}
});
} catch (IOException e) {
e.printStackTrace();
}
And just like that, SDK integration is complete, and the prototype of the audio editing app I want is ready to use.
Not bad. It has all the necessary functions of an audio editing app, and best of all, it's pretty easy to develop, thanks to the all-in-one and ready-to-use SDK.
Anyway, I tried the spatial audio function preset in the SDK and I found I could effortlessly add more width to a song. However, I also want a customized UI for my app, instead of simply using the one provided by the UI SDK. So my next step is to create a demo with the UI that I have designed and the spatial audio function.
Afterthoughts
Truth be told, the integration process wasn't as smooth as it seemed. I encountered two issues, but luckily, after doing some of my own research and contacting the kit's technical support team, I was able to fix the issues.
The first issue I came across was that after touching the Add effects and AI dubbing buttons, the UI displayed The token has expired or is invalid, and the Android Studio console printed the HAEApplication: please set your app apiKey log. The reason for this was that the app's authentication information was not configured. There are two ways of configuring this. The first was introduced in the first step of SDK Integration of this post, while the second was to use the app's access token, which had the following code:
Code:
HAEApplication.getInstance().setAccessToken("your access token");
The second issue — which is actually another result of unconfigured app authentication information — is the Something went wrong error displayed on the screen after an operation. To solve it, first make sure that the app authentication information is configured. Once this is done, go to AppGallery Connect to check whether Audio Editor Kit has been enabled for the app. If not, enable it. Note that because of caches (of either the mobile phone or server), it may take a while before the kit works for the app.
Also, in the Preparations part, I skipped the step for configuring obfuscation scripts before adding necessary permissions. This step is, according to technical support, necessary for apps that aim to be officially released. The app I have covered in this post is just a demo, so I just skipped this step.
Takeaway
No app would be complete without audio, and with spatial audio, you can deliver an even more immersive audio experience to your users.
Developing a spatial audio function for a mobile app can be a piece of cake thanks to HMS Core Audio Editor Kit. The spatial audio capability can be integrated either independently or together with other capabilities via the UI SDK, which delivers a ready-to-use UI, so that you can skip the tricky bits and focus more on what matters to users.