I have written a series of articles on Quick App. If you are new to Quick App, refer to my previous articles:
Quick App set up
A Novice Journey Towards Quick App ( Part 2 )
A Novice Journey Towards Quick App ( Part 3 )
Multilingual
When it comes to multilingual support, the first question is why an application should provide it at all. Let's clear that up.
Multilingual support helps you take your business into neighboring countries. Sounds like success, right? You ultimately gain more customers, cover wider ground, see revenues rise, and your business grows even larger.
Everybody says that English is the universal language of business, but that often makes us forget that not everyone is fluent in English.
Advantages of Multilingual
1. Build new relationships with users.
In any new relationship, the mother tongue matters a lot; it creates a different feel. If your application supports regional languages, users connect with it far more easily.
2. Grow your application's reputation globally.
If a quick app supports multiple languages, it attracts users from different countries, and the application's reputation grows globally.
Now let's understand how Quick App supports multiple languages.
The number of quick app users around the world is increasing rapidly, so make sure your app can be used globally; Huawei Quick App supports multiple languages. Quick apps can be properly displayed in different languages, and layout adaptation for different script writing directions is supported.
Some languages are written from left to right, such as English, Kannada, Hindi, Telugu, and Tamil, while others, such as Urdu, are written from right to left.
For example, English and Hindi use left-to-right alignment, whereas Urdu uses right-to-left alignment.
How to get the layout direction?
Step 1: Add the following configuration to the features attribute in the manifest.json file.
Code:
{
"name": "system.configuration"
}
Step 2: Import the module by adding the following code to the <script> block of the page where the API will be called.
Code:
import configuration from '@system.configuration'
Step 3: Call the configuration.getLayoutDirection() in onInit() to obtain the current script writing direction.
Code:
onInit: function () {
const dir = configuration.getLayoutDirection()
if (dir === "ltr") {
// set style from left-to-right
} else if (dir === "rtl") {
// Set style to right-to-left
}
}
Step 4: Add a new language JSON file in the i18n folder. This is similar to adding strings files for different languages in Android; a sketch of such a file is shown below.
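As a rough sketch (assuming the conventional i18n/en.json layout with a message root object; check the official Quick App i18n documentation for the exact structure), a language file could look like this, with a sibling file such as i18n/hi.json carrying the same keys translated into Hindi:
Code:
{
  "message": {
    "title": "Quick App Demo",
    "welcome": "Welcome!"
  }
}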
What will happen when configuration changes?
When the app configuration changes, for example when the language, country/region, or script writing direction changes, the onConfigurationChanged() callback is invoked. In the callback, check whether types contains layoutDirection. If it does, the system language has changed, and the component layout needs to be adapted based on the current script writing direction. For details about onConfigurationChanged, refer to Life Cycle Interface.
Code:
onConfigurationChanged(e) {
var types = e.types;
if (types.includes('layoutDirection')) {
var direction = configuration.getLayoutDirection()
// Customize the component layout based on the current script writing direction.
}
}
Setting the Component Display Direction
You can specify the component display direction by setting the dir attribute or style of a component. For details, refer to the description of the dir attribute in Common Attributes and Common Styles. The dir setting behaves specially for some components; a minimal dir snippet is shown right below, followed by a fuller example.
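As a minimal, hypothetical illustration of the attribute itself, dir can be set directly on a container; the values shown here follow the usual ltr/rtl/auto convention, but confirm the accepted values in Common Attributes.
Code:
<template>
  <!-- Force right-to-left layout for this container; use "ltr" or "auto" as needed. -->
  <div dir="rtl">
    <text>Example text</text>
  </div>
</template>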
Code:
<template>
  <div>
    <div style="width: 600px;padding:50px">
      <text style="width: 200px;">show dialog!</text>
      <image src="{{nextImg}}"></image>
    </div>
  </div>
</template>
<style>
  @import "../Common/common.css";
</style>
<script>
import configuration from '@system.configuration'
export default {
data: {
nextImg: "/Common/next.png"
},
onInit: function () {
const dir = configuration.getLayoutDirection()
if (dir === "ltr") {
// When the script writing direction is left-to-right, you need to set the attribute or style of the component. For example, you can configure an image as being displayed in the command mode.
this.nextImg = "/Common/next.png";
} else if (dir === "rtl") {
// When the script writing direction is right-to-left, you need to set the attribute or style of the component. For example, you can mirror an image.
this.nextImg = "/Common/next_mirror.png";
}
},
onConfigurationChanged(e) {
var that = this;
var types = e.types;
if (types.includes('layoutDirection')) {
var dir = configuration.getLayoutDirection()
// You can perform customization based on the script writing direction.
if (dir === "ltr") {
// When the script writing direction is left-to-right, you need to set the attribute or style of the component. For example, you can configure an image as being displayed in the common mode.
that.nextImg = "/Common/next.png";
} else if (dir === "rtl") {
// When the script writing direction is right-to-left, you need to set the attribute or style of the component. For example, you can mirror an image.
that.nextImg = "/Common/next_mirror.png";
}
}
}
}
</script>
Deeplink
Standard links (deep links) are provided for quick apps so that users can tap a link and open the quick app directly. As long as the Quick App Center is installed on the device, you can use deep links.
Supported deep link formats
1. hap://app/<package>/[path][?key=value]
2. https://hapjs.org/app/<package>/[path][?key=value]
3. hwfastapp://<package>/[path][?key=value]
package: app package name, which is mandatory.
path: path of an in-app page. This parameter is optional. The default value is the home page.
key-value: parameter to be passed to the page. This parameter is optional. Multiple key-value pairs are allowed. The passed parameter values may be obtained by other apps. Therefore, you are advised not to transfer data with high security sensitivity.
Follow these steps:
Step 1: Add the following configuration to the features attribute in the manifest.json file.
Code:
{
"name": "system.router"
}
Step 2: Add the following import to the <script> block of the page where the API will be called.
Code:
import router from '@system.router'
Step 3: Call the deep link in one quick app to open another quick app.
Code:
import router from '@system.router'
router.push({
uri: 'hap://app/com.example.quickapp/page?key=value'
})
Step 4: Design a web page that opens the quick app.
Code:
<html>
<head>
<meta charset="UTF-8">
<title></title>
</head>
<body>
<!-- Hypothetical deep link; replace the package name, path, and parameters with your own. -->
<a href="hap://app/com.example.quickapp/page?key=value">Open Quick App.</a>
</body>
</html>
Step 5: Receive the parameters on the target page.
Code:
export default {
data: {
name: "" // The received parameter is empty by default.
},
onInit() {
this.$page.setTitleBar({text: 'A'});
console.log(this.name); // Use this.name in onInit() to read the parameter passed in the deep link.
}
}
Conclusion
In this article, we have learned how to support multiple languages and deep links in Quick App. In the coming days I will come up with a new article.
Reference
Deep link
Multilingual
This article is originally from HUAWEI Developer Forum
Forum link: https://forums.developer.huawei.com/forumPortal/en/home
HMS In-App Messaging is a tool to send relevant messages to target users actively using our app to encourage them to use key app functions. For example, we can send in-app messages to encourage users to subscribe to certain products, provide tips on passing a game level, or recommend activities of a restaurant.
In-App Messaging also allows us to customize our messages and the way they are sent, and to define events that trigger message sending to our users at the right moment.
Today I am going to serve you a recipe to integrate In-App Messaging in your apps within 10 minutes.
Key Ingredients Needed
· 1 Android App Project, which supports Android 4.2 and later version.
· 1 SHA Key
· 1 Huawei Developer Account
· 1 Huawei phone with HMS 4.0.0.300 or later
Preparation Needed
· First we need to create an app or project in Huawei AppGallery Connect.
· Provide the SHA key and app package name of the Android project in the App Information section.
· Provide the data storage location in the Convention section under Project settings.
· Enable the App Messaging setting in the Manage APIs section.
· After completing all the above points, we need to download the agconnect-services.json file from the App Information section. Copy and paste the JSON file into the app folder of the Android project.
· Copy and paste the below Maven URL inside the repositories of both buildscript and allprojects (project-level build.gradle file):
Code:
maven { url 'http://developer.huawei.com/repo/' }
· Copy and paste the below classpath inside the dependencies section of the project-level build.gradle file:
Code:
classpath 'com.huawei.agconnect:agcp:1.3.1.300'
· Copy and paste the below plugin into the app-level build.gradle file:
Code:
apply plugin: 'com.huawei.agconnect'
· Copy and paste the below library into the dependencies section of the app-level build.gradle file:
Code:
implementation 'com.huawei.agconnect:agconnect-appmessaging:1.3.1.300'
· Add the below permissions to the AndroidManifest.xml file (a manifest sketch follows this list).
a) android.permission.INTERNET
b) android.permission.ACCESS_NETWORK_STATE
· Now sync the Gradle files.
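For reference, here is a rough sketch of where those two permission entries sit in AndroidManifest.xml; the package name is hypothetical.
Code:
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.inappmessaging"> <!-- hypothetical package name -->

    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />

    <application>
        <!-- activities and other components go here -->
    </application>
</manifest>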
Android Code
1. First we need the AAID for later use in sending in-app messages. To obtain the AAID, we will use the getAAID() method.
2. Add the following code to your project to obtain the AAID:
Code:
HmsInstanceId inst = HmsInstanceId.getInstance(this);
Task<AAIDResult> idResult = inst.getAAID();
idResult.addOnSuccessListener(new OnSuccessListener<AAIDResult>() {
@Override
public void onSuccess(AAIDResult aaidResult) {
String aaid = aaidResult.getId();
Log.d(TAG, "getAAID success:" + aaid );
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
Log.d(TAG, "getAAID failure:" + e);
}
});
3. To initialize the AGConnectAppMessaging instance, we will use:
Code:
AGConnectAppMessaging appMessaging = AGConnectAppMessaging.getInstance();
4. To allow data synchronization from the AppGallery Connect server, we will use:
Code:
appMessaging.setFetchMessageEnable(true);
5. To enable message display, we will use:
Code:
appMessaging.setDisplayEnable(true);
6. To specify that the in-app message data must be obtained from the AppGallery Connect server by force, we will use:
Code:
appMessaging.setForceFetch();
Since we are using a test device to demonstrate In-App Messaging, we use the setForceFetch API. The setForceFetch API can be used only for message testing. Also, in-app messages can only be displayed to users who have installed our officially released app version.
Till now we have added the libraries and written the code in Android Studio using Java; tossed together in the heated pan, the result will look like this:
Code:
public class MainActivity extends AppCompatActivity {
private String TAG = "MainActivity";
private AGConnectAppMessaging appMessaging;
private HiAnalyticsInstance instance;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
HmsInstanceId inst = HmsInstanceId.getInstance(this);
Task<AAIDResult> idResult = inst.getAAID();
idResult.addOnSuccessListener(new OnSuccessListener<AAIDResult>() {
@Override
public void onSuccess(AAIDResult aaidResult) {
String aaid = aaidResult.getId();
Log.d(TAG, "getAAID success:" + aaid );
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
Log.d(TAG, "getAAID failure:" + e);
}
});
HiAnalyticsTools.enableLog();
instance = HiAnalytics.getInstance(this);
instance.setAnalyticsEnabled(true);
instance.regHmsSvcEvent();
inAppMessaging();
}
private void inAppMessaging() {
appMessaging = AGConnectAppMessaging.getInstance();
appMessaging.setFetchMessageEnable(true);
appMessaging.setDisplayEnable(true);
appMessaging.setForceFetch();
}
}
App Gallery Connect
Before we start using one of our key ingredients, i.e. Huawei AppGallery Connect with a Huawei Developer Account, to serve In-App Messages, I would like to inform you that we can serve the main dish in three different flavours or types:
A. Pop-up Message Flavour
B. Banner Message Flavour
C. Image Message Flavour
In-App Message Using Pop-up Flavour
Let’s serve the main dish using Pop-up Message Flavour.
Step 1: Go to Huawei App Gallery Connect
Step 2: Select My projects.
Step 3: After selecting My projects, select the project which we created earlier. It will look like this:
Step 4: After selecting the project, we will select App Messaging from the menu. It will look like this:
Step 5: Click the New button to create a new in-app message to send to the device.
Step 6: Provide the Name and Description. It will look like this:
Step 7: In the Set style and content section, we have to select the type (which is Pop-up), provide a title, a message body, and choose colour for title text, body text and the background of the message. It will look like this:
Step 8: After providing the details in the Set style and content section, we will move on to the Image section. We will provide two images here, for the portrait and landscape modes of the app. Remember, for portrait the image aspect ratio should be 3:2 (300x200), and for landscape the image aspect ratio should be either 1:1 (100x100) or 3:2 (300x200). It will look like this:
Step 9: We can also provide a button in the pop-up message using the Button section. The button contains an action, and this action has two options: we can provide the user with a Disable Message action, or redirect the user to a particular URL. The section will look like this:
Step 10: We will now move to the next section, i.e. the Select Sending Target section. Here we can click New condition to add a condition for matching target users. Condition types include app version, OS version, language, country/region, audience, user attributes, last interaction, and initial startup.
Note: In this article, we didn't use any condition. But you are free to use any condition to send targeted in-app messages.
The section will look like this:
Step 11: The next section is the Set Sending Time section.
a) We can schedule a date and time to send the message.
b) We can also provide an end date and time to stop sending the message.
c) We can also display the message on an event by using the trigger event functionality. For example, we can display a discount on an item in a shopping app. A trigger event is required for each in-app message.
d) We can also flexibly set the frequency for displaying the message.
This is how it will look:
Step 12: The last section is the Set Conversion Event section. As of now, we will keep it as None. It will look like this:
Step 13: Click Save in the upper right corner to complete message creation. We can also click Preview to preview the display effect of our message on a mobile phone or tablet.
Note: Do not hit the publish button yet. Just save it.
Step 14: In-app messages can only be displayed to users who have installed our officially released app version. App Messaging allows us to test an in-app message while our app is still under test. To do that, we need to find the message that we need to test and click Test in the Operation column, as shown below:
Step 15: Click Add test user and enter the AAID of the test device in the text box. Also, run the app in order to find the AAID of the test device in the Logcat window of Android Studio.
Step 16: Finally, we will publish the message. We will select Publish in the Operation column. It will look like this:
The Dish
For more information like this, you can visit the HUAWEI Developer Forum.
Original link: https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0201327829880650040&fid=0101187876626530001
“Native advertising” is a term first coined by Fred Wilson at the Online Media, Marketing, and Advertising Conference in 2011. Native advertising is a form of paid media where the ad experience follows the natural form and function of the user experience in which it is placed.
Huawei Ads Kit offers a variety of ad formats to help you reach your consumer marketing goals in today's competitive world.
This article demonstrates an easy and efficient way to implement native ads within your application using Huawei Ads Kit.
Prerequisite
1. Must have a Huawei Developer Account
2. Must have a Huawei phone with HMS 4.0.0.300 or later
3. React Native environment with Android Studio, Node Js and Visual Studio code.
Major Dependencies
1. React Native CLI : 2.0.1
2. Gradle Version: 6.0.1
3. Gradle Plugin Version: 3.5.2
4. React Native Ads Kit SDK: 4.0.4
5. react-native-hms-ads Gradle dependency
6. AGCP gradle dependency
Development Overview
Preparation
1. Create an app or project in Huawei AppGallery Connect.
2. Provide the SHA key and app package name of the project in the App Information section and enable the required API.
3. Create a React Native project using the below command:
“react-native init project name”
4. Download the React Native Ads Kit SDK and paste it under the node_modules directory of the React Native project.
Tips
1. agconnect-services.json is not required for integrating the hms-ads-sdk.
2. Run the below commands in the project directory using the CLI if you cannot find node_modules:
Code:
npm install
npm link
Integration
1. Configure the project-level (android) build.gradle file.
Add the following to both buildscript/repositories and allprojects/repositories:
Code:
maven {url 'http://developer.huawei.com/repo/'}
2. Configure the app-level build.gradle file (add to dependencies):
Code:
implementation project(':react-native-hms-ads')
3. Linking the HMS Ads Kit Sdk.
Run the below command in the project directory:
Code:
react-native link react-native-hms-ads
Development Process
Once the SDK is integrated and ready to use, add the code shown in the following sections to your App.js file, which imports the required APIs. The work is split into four parts:
Adding a Native Ad
Configuring Properties
Executing Commands
Testing
HMS native ads align with the application and device layout seamlessly; however, they can be customized as well.
Adding a Native Ad
The HMSNative module is added to the application to work with native ad components. The height can also be customized with the help of the style prop.
Code:
import {
HMSNative,
} from'react-native-hms-ads';
style={{height:322}}/>
Configuring Properties
Native ad properties can be configured in various ways:
1. For handling different media types, create different ad slot IDs for different media-type ads.
2. Custom listeners can also be added to listen for different events.
3. Specific native ads can be implemented and requested to target a specific audience.
4. The position of the ad component can be customized and adjusted.
5. View options for the texts on the native ad components can also be changed.
Import the below modules to customize the native ads as required:
Code:
import {
HMSNative,
NativeMediaTypes,
ContentClassification,
Gender,
NonPersonalizedAd,
TagForChild,
UnderAge,
ChoicesPosition,
Direction,
AudioFocusType,
ScaleType
} from 'react-native-hms-ads';
Note: Developers can use Publisher Services to create the ad slot IDs. Refer to this article to learn the process for creating the slot IDs.
Create a function in the App.js file as below and add the ad slot IDs:
Code:
const Native = () => {
let nativeAdIds = {};
nativeAdIds[NativeMediaTypes.VIDEO] = 'testy63txaom86';
nativeAdIds[NativeMediaTypes.IMAGE_SMALL] = 'testb65czjivt9';
nativeAdIds[NativeMediaTypes.IMAGE_LARGE] = 'testu7m3hc4gvm';
Code:
//Setting up the media type
const [displayForm, setDisplayForm] = useState({
mediaType: NativeMediaTypes.VIDEO,
adId: nativeAdIds.video,
});
Code:
//Add below code for different configuration
nativeConfig={{
choicesPosition: ChoicesPosition.BOTTOM_RIGHT,
mediaDirection: Direction.ANY,
// mediaAspect: 2,
// requestCustomDislikeThisAd: false,
// requestMultiImages: false,
// returnUrlsForImages: false,
// adSize: {
// height: 100,
// width: 100,
// },
videoConfiguration: {
audioFocusType: AudioFocusType.NOT_GAIN_AUDIO_FOCUS_ALL,
// clickToFullScreenRequested: true,
// customizeOperateRequested: true,
startMuted: true,
},
}}
viewOptions={{
showMediaContent: false,
mediaImageScaleType: ScaleType.FIT_CENTER,
// adSourceTextStyle: {color: 'red'},
// adFlagTextStyle: {backgroundColor: 'red', fontSize: 10},
// titleTextStyle: {color: 'red'},
descriptionTextStyle: {visibility: false},
callToActionStyle: {color: 'black', fontSize: 12},
}}
Executing Commands
To load the ad on a button click, the loadAd() API is used. (The button snippets in this section assume the standard react-native Button component as the trigger.)
Code:
title="Load"
onPress={() => {
if (adNativeElement !== null) {
adNativeElement.loadAd();
}
}}
To dislike the ad on the button click, the dislikeAd() API is used:
Code:
title="Dislike"
color="orange"
onPress={() => {
if (adNativeElement !== null) {
adNativeElement.dislikeAd('Because I dont like it');
}
}}
To allow custom clicks on the ad, the setAllowCustomClick() API is used on the button click:
Code:
title="Allow custom click"
onPress={() => {
if (adNativeElement !== null) {
adNativeElement.setAllowCustomClick();
}
}}
To report an ad impression on the button click:
Code:
title="Record impression"
color="red"
onPress={() => {
if (adNativeElement !== null) {
adNativeElement.recordImpressionEvent({
impressed: true,
isUseful: 'nope',
});
}
}}
To navigate to the 'Why this ad' page, the gotoWhyThisAdPage() API is used:
Code:
title="Go to Why Page"
color="purple"
onPress={() => {
if (adNativeElement !== null) {
adNativeElement.gotoWhyThisAdPage();
}
}}
To record a click event on the button click, the recordClickEvent() API is used:
Code:
title="Record click event"
color="green"
onPress={() => {
if (adNativeElement !== null) {
adNativeElement.recordClickEvent();
}
}}
The below listeners are implemented for the start, play, and end events of video ads:
Code:
onVideoStart={(e) => toast('HMSNative onVideoStart')}
onVideoPlay={(e) => toast('HMSNative onVideoPlay')}
onVideoEnd={(e) => toast('HMSNative onVideoEnd')}
Testing
Run the below command to build the project:
Code:
react-native run-android
After a successful build, run the below command in the android directory of the project to create the signed APK:
Code:
gradlew assembleRelease
Results
Conclusion
Adding native ads seems very easy. Stay tuned for more ad-related articles.
1. Intro
If you're looking to develop a customized and cost-effective deep learning model, you'd be remiss not to try the recently released custom model service in HUAWEI ML Kit. This service gives you the tools to manage the size of your model, and provides simple APIs for you to perform inference. The following is a demonstration of how you can run your model on a device at a minimal cost.
The service allows you to pre-train image classification models, and the steps below show you the process for training and using a custom model.
2. Implementation
a. Install HMS Toolkit from Android Studio Marketplace. After the installation, restart Android Studio.
b. Transfer learning by using AI Create.
Basic configuration
* Note: First install a Python IDE.
AI Create uses MindSpore as the training framework and MindSpore Lite as the inference framework. Follow the steps below to complete the basic configuration.
i) In Coding Assistant, go to AI > AI Create.
ii) Select Image or Text, and click on Confirm.
iii) Restart the IDE. Select Image or Text, and click on Confirm. The MindSpore tool will be automatically installed.
HMS Toolkit allows you to generate an API or demo project in one click, enabling you to quickly verify and call the image classification model in your app.
Before using the transfer learning function for image classification, prepare image resources for training as needed.
*Note: The images should be clear and placed in different folders by category.
Model training
Train the image classification model pre-trained in ML Kit to learn hundreds of images in specific fields (such as vehicles and animals) in a matter of minutes. A new model will then be generated, which will be capable of automatically classifying images. Follow the steps below for model training.
i) In Coding Assistant, go to AI > AI Create > Image.
ii) Set the following parameters and click on Confirm.
Operation type: Select New Model.
Model Deployment Location: Select Deployment Cloud.
iii) Drag or add the image folders to the Please select train image folder area.
iv) Set Output model file path and train parameters.
v) Retain the default values of the train parameters as shown below:
Iteration count: 100
Learning rate: 0.01
vi) Click on Create Model to start training and to generate an image classification model.
After the model is generated, view the model learning results (training precision and verification precision), corresponding learning parameters, and training data.
Model verification
After the model training is complete, you can verify the model by adding the image folders in the Please select test image folder area under Add test image. The tool will automatically use the trained model to perform the test and display the test results.
Click on Generate Demo to have HMS Toolkit generate a demo project, which automatically integrates the trained model. You can directly run and build the demo project to generate an APK file, and run the file on a simulator or real device to verify image classification performance.
c. Use the model.
Upload the model
The image classification service classifies elements in images into logical categories, such as people, objects, environments, activities, or artwork, to define image themes and application scenarios. The service supports both on-device and on-cloud recognition modes, and offers the pre-trained model capability.
To upload your model to the cloud, perform the following steps:
i) Sign in to AppGallery Connect and click on My projects.
ii) Go to ML Kit > Custom ML, to have the model upload page display. On this page, you can also upgrade existing models.
Load the remote model
Before loading a remote model, check whether the remote model has been downloaded. Load the local model if the remote model has not been downloaded.
Code:
localModel = new MLCustomLocalModel.Factory("localModelName")
.setAssetPathFile("assetpathname")
.create();
remoteModel =new MLCustomRemoteModel.Factory("yourremotemodelname").create();
MLLocalModelManager.getInstance()
// Check whether the remote model exists.
.isModelExist(remoteModel)
.addOnSuccessListener(new OnSuccessListener<Boolean>() {
@Override
public void onSuccess(Boolean isDownloaded) {
MLModelExecutorSettings settings;
// If the remote model exists, load it first. Otherwise, load the existing local model.
if (isDownloaded) {
settings = new MLModelExecutorSettings.Factory(remoteModel).create();
} else {
settings = new MLModelExecutorSettings.Factory(localModel).create();
}
final MLModelExecutor modelExecutor = MLModelExecutor.getInstance(settings);
executorImpl(modelExecutor, bitmap);
}
})
.addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
// Exception handling.
}
});
Perform inference using the model inference engine
Set the input and output formats, input image data to the inference engine, and use the loaded modelExecutor(MLModelExecutor) to perform the inference.
Code:
private void executorImpl(final MLModelExecutor modelExecutor, Bitmap bitmap){
// Prepare input data.
final int batchNum = 0; // This demo feeds a single image per inference.
final Bitmap inputBitmap = Bitmap.createScaledBitmap(bitmap, 224, 224, true);
final float[][][][] input = new float[1][224][224][3];
for (int i = 0; i < 224; i++) {
for (int j = 0; j < 224; j++) {
int pixel = inputBitmap.getPixel(i, j);
input[batchNum][j][i][0] = (Color.red(pixel) - 127) / 128.0f;
input[batchNum][j][i][1] = (Color.green(pixel) - 127) / 128.0f;
input[batchNum][j][i][2] = (Color.blue(pixel) - 127) / 128.0f;
}
}
MLModelInputs inputs = null;
try {
inputs = new MLModelInputs.Factory().add(input).create();
// If the model requires multiple inputs, you need to call add() for multiple times so that image data can be input to the inference engine at a time.
} catch (MLException e) {
// Handle the input data formatting exception.
}
// Perform inference. You can use addOnSuccessListener to listen for successful inference, which is processed in the onSuccess callback. In addition, you can use addOnFailureListener to listen for inference failure, which is processed in the onFailure callback.
// Note: inOutSettings holds the input and output format settings and is assumed to be created beforehand.
modelExecutor.exec(inputs, inOutSettings).addOnSuccessListener(new OnSuccessListener<MLModelOutputs>() {
@Override
public void onSuccess(MLModelOutputs mlModelOutputs) {
float[][] output = mlModelOutputs.getOutput(0);
// The inference result is in the output array and can be further processed.
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
// Inference exception.
}
});
}
3. Summary
By utilizing Huawei's deep learning framework, you'll be able to create and use a deep learning model for your app by following just a few steps!
Furthermore, the custom model service for ML Kit is compatible with all mainstream model inference platforms and frameworks on the market, including MindSpore, TensorFlow Lite, Caffe, and Onnx. Different models can be converted into .ms format, and run perfectly within the on-device inference framework.
Custom models can be deployed to the device in a smaller size after being quantized and compressed. To further reduce the APK size, you can host your models on the cloud. With this service, even a novice in the field of deep learning is able to quickly develop an AI-driven app which serves a specific purpose.
Learn More
For more information, please visit HUAWEI Developers.
For detailed instructions, please visit Development Guide.
You can join the HMS Core developer discussion on Reddit.
You can download the demo and sample code from GitHub.
To solve integration problems, please go to Stack Overflow.
It seems there are many new functions in ML Kit. Wonderful!
Hi, have you tried searching for public APIs?
Article Introduction
In this article, we will show how to integrate Huawei ML Kit (real-time language detection and real-time language translation) in iOS using the native language (Swift). The use case is a Smart Translator that supports more than 38 languages, built with HMS open capabilities.
Huawei ML Kit (Real-time Language Detection)
The real-time language detection service can detect the language of text. Both single-language text and multi-language text are supported. ML Kit detects languages in text and returns the language codes (the BCP-47 standard is used for Traditional Chinese, and the ISO 639-1 standard is used for other languages) and their respective confidences or the language code with the highest confidence. Currently, the real-time language detection service supports 109 languages.
Huawei ML Kit (Real-time Language Translation)
The real-time translation service can translate text from the source language into the target language through the server on the cloud. Currently, real-time translation supports 38 languages.
For this article, we implemented cloud based real-time Language Detection and real-time Language Translation for iOS with native Swift language.
Pre-Requisites
Before getting started, the following are the requirements:
Xcode (During this tutorial, we used latest version 12.4)
iOS 9.0 or later (ML Kit supports iOS 9.0 and above)
Apple Developer Account
iOS device for testing
Development
Following are the major steps of development for this article:
Step 1: Importing the SDK in Pod Mode.
1.1: Check whether Cocoapods has been installed:
Code:
gem -v
If not, run the following commands to install Cocoapods:
Code:
sudo gem install cocoapods
pod setup
1.2: Run the pod init command in the root directory of the Xcode project and add the current version number to the generated Podfile file.
Code:
pod "ViewAnimator" # ViewAnimator for cool animations
pod 'lottie-ios' # Lottie for Animation
pod 'MLTranslate', '~>2.0.5.300' # Real-time translation
pod 'MLLangDetection', '~>2.0.5.300' # Real-Time Language Detection
1.3: Run the following command in the directory of the Podfile to integrate the ML Kit SDKs:
Code:
pod install
If the pods have already been installed, run the following command to update them:
Code:
pod update
1.4: After the execution is successful, open the project directory, find the .xcworkspace file, and open it.
Step 2: Generating Supported Language JSON.
Since our main goal is Smart Translator, we restricted real-time language detection to 38 languages and generated a JSON file locally to avoid creating and calling an API. In a real-world scenario, an API can be developed, or Huawei ML Kit can be used to get all the supported languages. A sketch of such a file is shown below.
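For illustration only, the locally bundled file could be as simple as an array of code-name pairs like the sketch below; the file name and field names are hypothetical, and the codes should match the BCP-47/ISO 639-1 codes returned by ML Kit.
Code:
[
  { "code": "en", "name": "English" },
  { "code": "ar", "name": "Arabic" },
  { "code": "ur", "name": "Urdu" },
  { "code": "zh", "name": "Chinese" }
]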
Step 3: Building Layout.
We used Auto Layout. Auto Layout defines your user interface using a series of constraints. Constraints typically represent a relationship between two views. Auto Layout then calculates the size and location of each view based on these constraints. This produces layouts that dynamically respond to both internal and external changes.
In this article, we also used a Lottie animation for the splash screen and for the loading animation shown while the user translates something. We also used the ViewAnimator library to animate the History UITableView items.
Code:
func showAppIntro(){
DispatchQueue.main.async {
self.animationView.animation = Animation.named("intro_animation")
self.animationView.contentMode = .scaleAspectFit
self.animationView.play(fromFrame: AnimationFrameTime.init(30), toFrame: AnimationFrameTime.init(256), loopMode: .playOnce) { (completed) in
// Let's open Other Screen once the animation is completed
self.performSegue(withIdentifier: "goToServiceIntro", sender: nil)
}
}
}
func showLoader(){
DispatchQueue.main.async {
self.animationView.isHidden = false
self.animationLoader.play()
}
}
func hideLoader(){
DispatchQueue.main.async {
self.animationView.isHidden = true
self.animationLoader.stop()
}
}
func loadAnimateTableView(){
let fromAnimation = AnimationType.vector(CGVector(dx: 30, dy: 0))
let zoomAnimation = AnimationType.zoom(scale: 0.2)
self.historyList.append(contentsOf: AppUtils.getTranslationHistory())
self.historyTableView.reloadData()
UIView.animate(views: self.historyTableView.visibleCells,
animations: [fromAnimation, zoomAnimation], delay: 0.5)
}
Step 4: Integrating ML Kit
By default, Auto is selected, which detects the entered language using the ML Kit real-time language detection API. The user can also swap the languages if Auto is not selected. Once the user enters the text and presses Enter, the ML Kit real-time language translation API is called and the result is displayed in the other box.
Code:
// This extension is responsible for MLLangDetect and MLTranslate related functions
extension HomeViewController {
func autoDetectLanguage(enteredText: String){
if enteredText.count > 1 {
self.txtLblResult.text = "" // Reset the translated text
self.showLoader()
DispatchQueue.main.asyncAfter(deadline: .now() + 3) {
self.mlRemoteLangDetect?.syncFirstBestDetect(enteredText, addOnSuccessListener: { (lang) in
// Get the Language that user entered, incase unable to identify, please change auto to your language
let detectedLanguage = AppUtils.getSelectedLanguage(langCode: lang)
if detectedLanguage == nil {
self.hideLoader()
self.displayResponse(message: "Oops! We are not able to detect your language 🧐 Please select your language from the list for better results 😉")
return // No Need to run the remaining code
}
self.langFrom = detectedLanguage!
// Once we detect the language, let's add Auto suffix to let user know that it's automatically detected
let langName = "\(String(describing: self.langFrom!.langName)) - Auto"
self.langFrom!.langName = langName
// Let's update the buttons titles
self.setButtonsTitle()
// Let's do the translation now
self.translateText(enteredText: enteredText)
}, addOnFilureListener: { (exception) in
self.hideLoader()
self.displayResponse(message: "Oops! We are unable to process your request at the moment 😞")
})
}
}
}
func translateText(enteredText: String){
// Let's Init the translator with selected languages
self.initLangTranslate()
if enteredText.count > 1 {
self.txtLblResult.text = "" // Reset the translated text
self.showLoader()
DispatchQueue.main.asyncAfter(deadline: .now() + 3) {
MLRemoteTranslator.sharedInstance().syncTranslate(enteredText) { (translatedText) in
self.txtLblResult.text = translatedText
self.saveTranslationHistory() // This function will save translation history
self.hideLoader()
} addOnFailureListener: { (exception) in
self.hideLoader()
self.displayResponse(message: "Oops! We are unable to process your request at the moment 😞")
}
}
} else {
self.hideLoader()
self.displayResponse(message: "Please write something 🧐")
}
}
func saveTranslationHistory(){
AppUtils.saveData(fromText: edtTxtMessage.text!, toText: txtLblResult.text!, fromLang: self.langFrom!.langName, toLang: self.langTo!.langName)
}
}
Step 5: Save the translation history locally on the device.
After getting the result, we call helper functions to save the data and retrieve it using UserDefaults when needed. We also provide an option to delete all data on the History screen.
Code:
static func saveData(fromText: String, toText: String, fromLang: String, toLang: String){
var history = self.getTranslationHistory()
history.insert(TranslationHistoryModel.init(dateTime: getCurrentDateTime(), fromText: fromText, toText: toText, fromLang: fromLang, toLang: toLang), at: 0)
do {
let encodedData = try NSKeyedArchiver.archivedData(withRootObject: history, requiringSecureCoding: false)
UserDefaults.standard.set(encodedData, forKey: "TranslationHistory")
UserDefaults.standard.synchronize()
} catch {
print(error)
}
}
static func clearHistory(){
let history: [TranslationHistoryModel] = []
do {
let encodedData = try NSKeyedArchiver.archivedData(withRootObject: history, requiringSecureCoding: false)
UserDefaults.standard.set(encodedData, forKey: "TranslationHistory")
UserDefaults.standard.synchronize()
} catch {
print(error)
}
}
static func getTranslationHistory() -> [TranslationHistoryModel]{
let decoded = UserDefaults.standard.data(forKey: "TranslationHistory")
if decoded != nil {
do {
let result = try NSKeyedUnarchiver.unarchiveTopLevelObjectWithData(decoded!) as? [TranslationHistoryModel]
if result != nil {
return result!
} else {
return []
}
} catch {
print(error)
return []
}
} else {
return []
}
}
Step 6: Displaying the history in a UITableView.
We then display all the items in the UITableView:
Code:
// This extension is responsible for UITableView related things
extension HistoryViewController: UITableViewDelegate, UITableViewDataSource {
func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
return self.historyList.count
}
func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
let cell = tableView.dequeueReusableCell(withIdentifier: "HistoryTableViewCell", for: indexPath) as! HistoryTableViewCell
let entity: TranslationHistoryModel = self.historyList[indexPath.row]
cell.txtLblFrom.text = entity.fromLang
cell.txtLblFromText.text = entity.fromText
cell.txtLblTo.text = entity.toLang
cell.txtLblToText.text = entity.toText
cell.txtDateTime.text = entity.dateTime
cell.index = indexPath.row
cell.setCellBackground()
return cell
}
}
Step 7: Initialize MLTranslate and MLLangDetect with the API key.
This is a very important step. We have to add the following lines of code in AppDelegate.swift:
Code:
func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
// Setting API key so that we can use ML Kit for translation
MLTranslateApplication.sharedInstance().setApiKey(AppUtils.API_KEY)
MLLangDetectApplication.sharedInstance().setApiKey(AppUtils.API_KEY)
return true
}
Step 8: Run the application.
We have added all the required code. Now, just build the project, run the application, and test it on any iOS phone. In this demo, we used an iPhone 11 Pro Max for testing purposes.
Conclusion
Whenever users travel to a new place, country, or region, they can use this app to translate text from their native language into the language spoken at the visited place. Once they are done with a translation, they can also check the translation history or show it to someone, so that they can communicate with the locals easily and conveniently with Smart Translator.
Using ML Kit, developers can build different iOS applications with an auto-detect option to improve the UI/UX. ML Kit is an on-device and on-cloud open capability offered by Huawei, which can be combined with other functionalities to offer innovative services to end users.
Tips and Tricks
Before calling ML Kit, make sure the required agconnect-services.plist file is added to the project and the ML Kit APIs are enabled in the AppGallery Connect console.
ML Kit must be initialized with the API key in AppDelegate.swift.
There are no special permissions needed for this app. However, make sure that the device is connected to the Internet and has an active connection.
Use animation libraries like Lottie or ViewAnimator to enhance the UI/UX of your application.
References
Huawei ML Kit Official Documentation: click here
Huawei ML Kit FAQs: click here
Lottie iOS Documentation: click here
Github Code Link: click here
Original Source
Audio is the soul of media, and for mobile apps in particular, it engages with users more, adds another level of immersion, and enriches content.
This is a major driver of my obsession for developing audio-related functions. In my recent post that tells how I developed a portrait retouching function for a live-streaming app, I mentioned that I wanted to create a solution that can retouch music. I know that a technology called spatial audio can help with this, and — guess what — I found a synonymous capability in HMS Core Audio Editor Kit, which can be integrated independently, or used together with other capabilities in the UI SDK of this kit.
I chose to integrate the UI SDK into my demo first, which is loaded with not only the kit's capabilities, but also a ready-to-use UI. This allows me to give the spatial audio capability a try and frees me from designing the UI. Now let's dive into the development procedure of the demo.
Development Procedure
Preparations
1. Prepare the development environment, which has requirements on both software and hardware. These are:
Software requirements:
JDK version: 1.8 or later
Android Studio version: 3.X or later
minSdkVersion: 24 or later
targetSdkVersion: 33 (recommended)
compileSdkVersion: 30 (recommended)
Gradle version: 4.6 or later (recommended)
Hardware requirements: a phone running EMUI 5.0 or later, or a phone running Android whose version ranges from Android 7.0 to Android 13.
2. Configure app information in a platform called AppGallery Connect, and go through the process of registering as a developer, creating an app, generating a signing certificate fingerprint, configuring the signing certificate fingerprint, enabling the kit, and managing the default data processing location.
3. Integrate the HMS Core SDK.
4. Add the necessary permissions in the AndroidManifest.xml file, including the vibration permission, microphone permission, storage write permission, storage read permission, Internet permission, network status access permission, and the permission to obtain the changed network connectivity state. A manifest sketch follows the code snippet below.
When the app's Android SDK version is 29 or later, add the following attribute to the application element, which is used for obtaining the external storage permission.
Code:
<application
android:requestLegacyExternalStorage="true"
…… >
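As a reference sketch, the permissions listed in step 4 map to standard Android permission names roughly as follows; double-check the exact set against the kit's official integration guide.
Code:
<uses-permission android:name="android.permission.VIBRATE" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<!-- Assumed mapping for "changed network connectivity state". -->
<uses-permission android:name="android.permission.CHANGE_NETWORK_STATE" />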
SDK Integration
1. Initialize the UI SDK and set the app authentication information. If the information is not set, this may affect some functions of the SDK.
Code:
// Obtain the API key from the agconnect-services.json file.
// It is recommended that the key be stored on cloud, which can be obtained when the app is running.
String api_key = AGConnectInstance.getInstance().getOptions().getString("client/api_key");
// Set the API key.
HAEApplication.getInstance().setApiKey(api_key);
2. Create AudioFilePickerActivity, which is a customized activity used for audio file selection.
Code:
/**
* Customized activity, used for audio file selection.
*/
public class AudioFilePickerActivity extends AppCompatActivity {
@Override
protected void onCreate(@Nullable Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
performFileSearch();
}
private void performFileSearch() {
// Select multiple audio files.
registerForActivityResult(new ActivityResultContracts.GetMultipleContents(), new ActivityResultCallback<List<Uri>>() {
@Override
public void onActivityResult(List<Uri> result) {
handleSelectedAudios(result);
finish();
}
}).launch("audio/*");
}
/**
* Process the selected audio files, turning the URIs into paths as needed.
*
* @param uriList indicates the selected audio files.
*/
private void handleSelectedAudios(List<Uri> uriList) {
// Check whether the audio files exist.
if (uriList == null || uriList.size() == 0) {
return;
}
ArrayList<String> audioList = new ArrayList<>();
for (Uri uri : uriList) {
// Obtain the real path.
String filePath = FileUtils.getRealPath(this, uri);
audioList.add(filePath);
}
// Return the audio file path to the audio editing UI.
Intent intent = new Intent();
// Use HAEConstant.AUDIO_PATH_LIST that is provided by the SDK.
intent.putExtra(HAEConstant.AUDIO_PATH_LIST, audioList);
// Use HAEConstant.RESULT_CODE as the result code.
this.setResult(HAEConstant.RESULT_CODE, intent);
finish();
}
}
The FileUtils utility class is used for obtaining the real path, which is detailed here. Below is the path to this class.
Code:
app/src/main/java/com/huawei/hms/audioeditor/demo/util/FileUtils.java
3. Add the action value to AudioFilePickerActivity in AndroidManifest.xml. The SDK would direct to a screen according to this action.
Code:
<activity
android:name=".AudioFilePickerActivity"
android:exported="false">
<intent-filter>
<action android:name="com.huawei.hms.audioeditor.chooseaudio" />
<category android:name="android.intent.category.DEFAULT" />
</intent-filter>
</activity>
4. Launch the audio editing screen via either:
Mode 1: Launch the screen without input parameters. In this mode, the default configurations of the SDK are used.
Code:
HAEUIManager.getInstance().launchEditorActivity(this);
Audio editing screens
Mode 2: Launch the audio editing screen with input parameters. This mode lets you set the menu list and customize the path for an output file. On top of this, the mode also allows for specifying the input audio file paths, setting the draft mode, and more.
Launch the screen with the menu list and customized output file path:
Code:
// List of level-1 menus. Below are just some examples:
ArrayList<Integer> menuList = new ArrayList<>();
// Add audio.
menuList.add(MenuCommon.MAIN_MENU_ADD_AUDIO_CODE);
// Record audio.
menuList.add(MenuCommon.MAIN_MENU_AUDIO_RECORDER_CODE);
// List of level-2 menus, which are displayed after audio files are input and selected.
ArrayList<Integer> secondMenuList = new ArrayList<>();
// Split audio.
secondMenuList.add(MenuCommon.EDIT_MENU_SPLIT_CODE);
// Delete audio.
secondMenuList.add(MenuCommon.EDIT_MENU_DEL_CODE);
// Adjust the volume.
secondMenuList.add(MenuCommon.EDIT_MENU_VOLUME2_CODE);
// Customize the output file path.
String exportPath = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_MUSIC).getPath() + "/";
AudioEditorLaunchOption.Builder audioEditorLaunch = new AudioEditorLaunchOption.Builder()
// Set the level-1 menus.
.setCustomMenuList(menuList)
// Set the level-2 menus.
.setSecondMenuList(secondMenuList)
// Set the output file path.
.setExportPath(exportPath);
// Launch the audio editing screen with the menu list and customized output file path.
try {
HAEUIManager.getInstance().launchEditorActivity(this, audioEditorLaunch.build(), new LaunchCallback() {
@Override
public void onFailed(int errCode, String errMsg) {
Toast.makeText(mContext, errMsg, Toast.LENGTH_SHORT).show();
}
});
} catch (IOException e) {
e.printStackTrace();
}
Level-1 menus
Level-2 menus
Launch the screen with the specified input audio file paths:
Code:
// Set the input audio file paths.
ArrayList<AudioInfo> audioInfoList = new ArrayList<>();
// Example of an audio file path:
String audioPath = "/storage/emulated/0/Music/Dream_It_Possible.flac";
// Create an instance of AudioInfo and pass the audio file path.
AudioInfo audioInfo = new AudioInfo(audioPath);
// Set the audio name.
audioInfo.setAudioName("Dream_It_Possible");
audioInfoList.add(audioInfo);
AudioEditorLaunchOption.Builder audioEditorLaunch = new AudioEditorLaunchOption.Builder()
// Set the input audio file paths.
.setFilePaths(audioInfoList);
// Launch the audio editing screen with the specified input audio file paths.
try {
HAEUIManager.getInstance().launchEditorActivity(this, audioEditorLaunch.build(), new LaunchCallback() {
@Override
public void onFailed(int errCode, String errMsg) {
Toast.makeText(mContext, errMsg, Toast.LENGTH_SHORT).show();
}
});
} catch (IOException e) {
e.printStackTrace();
}
Launch the screen with drafts:
Code:
// Obtain the draft list. For example:
List<DraftInfo> draftList = HAEUIManager.getInstance().getDraftList();
// Specify the first draft in the draft list.
String draftId = null;
if (!draftList.isEmpty()) {
draftId = draftList.get(0).getDraftId();
}
AudioEditorLaunchOption.Builder audioEditorLaunch = new AudioEditorLaunchOption.Builder()
// Set the draft ID, which can be null.
.setDraftId(draftId)
// Set the draft mode. NOT_SAVE is the default value, which indicates not to save a project as a draft.
.setDraftMode(AudioEditorLaunchOption.DraftMode.SAVE_DRAFT);
// Launch the audio editing screen with drafts.
try {
HAEUIManager.getInstance().launchEditorActivity(this, audioEditorLaunch.build(), new LaunchCallback() {
@Override
public void onFailed(int errCode, String errMsg) {
Toast.makeText(mContext, errMsg, Toast.LENGTH_SHORT).show();
}
});
} catch (IOException e) {
e.printStackTrace();
}
And just like that, SDK integration is complete, and the prototype of the audio editing app I want is ready to use.
Not bad. It has all the necessary functions of an audio editing app, and best of all, it's pretty easy to develop, thanks to the all-in-one and ready-to-use SDK.
Anyway, I tried the spatial audio function preset in the SDK and I found I could effortlessly add more width to a song. However, I also want a customized UI for my app, instead of simply using the one provided by the UI SDK. So my next step is to create a demo with the UI that I have designed and the spatial audio function.
Afterthoughts
Truth be told, the integration process wasn't as smooth as it seemed. I encountered two issues, but luckily, after doing some research of my own and contacting the kit's technical support team, I was able to fix them.
The first issue I came across was that after touching the Add effects and AI dubbing buttons, the UI displayed The token has expired or is invalid, and the Android Studio console printed the HAEApplication: please set your app apiKey log. The reason for this was that the app's authentication information was not configured. There are two ways of configuring this. The first was introduced in the first step of SDK Integration of this post, while the second was to use the app's access token, which had the following code:
Code:
HAEApplication.getInstance().setAccessToken("your access token");
The second issue — which is actually another result of unconfigured app authentication information — is the Something went wrong error displayed on the screen after an operation. To solve it, first make sure that the app authentication information is configured. Once this is done, go to AppGallery Connect to check whether Audio Editor Kit has been enabled for the app. If not, enable it. Note that because of caches (of either the mobile phone or server), it may take a while before the kit works for the app.
Also, in the Preparations part, I skipped the step for configuring obfuscation scripts before adding necessary permissions. This step is, according to technical support, necessary for apps that aim to be officially released. The app I have covered in this post is just a demo, so I just skipped this step.
Takeaway
No app would be complete without audio, and with spatial audio, you can deliver an even more immersive audio experience to your users.
Developing a spatial audio function for a mobile app can be a piece of cake thanks to HMS Core Audio Editor Kit. The spatial audio capability can be integrated either independently or together with other capabilities via the UI SDK, which delivers a ready-to-use UI, so that you can skip the tricky bits and focus more on what matters to users.