Introduction
If you are new to this application, please follow my previous articles:
Pygmy collection application Part 1 (Account kit)
Intermediate: Pygmy Collection Application Part 2 (Ads Kit)
Intermediate: Pygmy Collection Application Part 3 (Crash service)
Intermediate: Pygmy Collection Application Part 4 (Analytics Kit Custom Events)
Intermediate: Pygmy Collection Application Part 5 (Safety Detect)
Intermediate: Pygmy Collection Application Part 6 (Room database)
Intermediate: Pygmy Collection Application Part 7 (Document Skew correction Huawei HiAI)
Intermediate: Pygmy Collection Application Part 8 (Scan kit customized view)
Intermediate: Pygmy Collection Application Part 9 (Cloud Testing)
In this article, I will explain what Huawei Remote Configuration is and how it works in Android. By the end of this tutorial, we will have integrated Huawei Remote Configuration into the Pygmy collection Android application.
In this example, I am enabling/disabling the Export report to PDF feature using Remote Configuration. When Export PDF is enabled, the user can export the report; otherwise, the user cannot export the report to PDF and can only view it in the application. The app also generates the PDF every 12 hours using Android WorkManager.
What is Huawei Remote Configuration?
Huawei Remote Configuration is a cloud service. It changes the behaviour and appearance of your app for all active users without publishing an app update on AppGallery. Basically, Remote Configuration allows you to maintain parameters on the cloud, and based on these parameters we control the behaviour and appearance of the app. For example, in a festival scenario, we can define parameters for the text, colour and images of a theme, which can then be fetched using Remote Configuration.
How does Huawei Remote Configuration work?
Huawei Remote Configuration is a cloud service that allows you to change the behaviour and appearance of your app without requiring users to download an app update. When using Remote Configuration, you create in-app default values that control the behaviour and appearance of your app. Later, you can use the AppGallery Connect console or the Remote Configuration APIs to override these in-app default values for all app users or for segments of your user base. Your app controls when updates are applied; it can check for updates frequently and apply them with negligible impact on performance.
In Remote Configuration, we create in-app default values that control the behaviour and appearance (such as text, colour and images) of the app. Later, with the help of Huawei Remote Configuration, we can fetch parameters from the server and override the default values.
Integration of Remote configuration
1. Configure application on the AGC.
2. Client application development process.
Configure application on the AGC
Follow the steps
Step 1: Register as a developer in AppGallery Connect. If you are already a developer, skip this step.
Step 2: Create an app by referring to Creating a Project and Creating an App in the Project.
Step 3: Set the data storage location based on your current location.
Step 4: Enable Remote Configuration: open AppGallery Connect and choose Grow > Remote Configuration.
Step 5: Generate a signing certificate fingerprint.
Step 6: Configure the signing certificate fingerprint.
Step 7: Download the agconnect-services.json file and place it in the app directory of your project.
Client application development process
Follow the steps
Step 1: Create an Android application in Android Studio (or any IDE of your choice).
Step 2: Add the app-level Gradle dependencies in app > build.gradle.
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
implementation 'com.huawei.agconnect:agconnect-remoteconfig:1.6.1.300'
// For WorkManager.
implementation 'android.arch.work:work-runtime:1.0.1'
Root level gradle dependencies
Code:
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
Add the below permissions in the AndroidManifest.xml file.
XML:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>
To build the Remote Configuration service example, follow these steps.
1. AGC Configuration
2. Build Android application
Step 1: AGC Configuration
1. Sign in to AppGallery Connect and select My apps.
2. Select the app in which you want to integrate Huawei Remote configuration Service.
3. Navigate to Grow > Remote configuration > Enable
Step 2: Build Android application
In this example, I am enabling/disabling the Export report to PDF feature from Remote Configuration. When the export PDF feature is enabled, the user can export the report to PDF; otherwise, the user cannot export the report.
Basically, Huawei Remote Configuration has three different configurations as explained below.
Default Configuration: Default values are defined in your app. If no matching key is found on the Remote Configuration server, the default value is copied into the active configuration and returned to the client.
Java:
AGConnectConfig config = AGConnectConfig.getInstance();
Map<String, Object> map = new HashMap<>();
map.put("test1", "test1");
map.put("test2", "true");
map.put("test3", 123);
map.put("test4", 123.456);
map.put("test5", "test-test");
config.applyDefault(map);
Fetched Configuration: The most recent configuration fetched from the server but not yet activated. You need to activate these fetched parameters, after which all values are copied into the active configuration.
Java:
config.fetch().addOnSuccessListener(new OnSuccessListener<ConfigValues>() {
@Override
public void onSuccess(ConfigValues configValues) {
config.apply(configValues);
// Use the configured values.
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
}
});
Active Configuration: Directly accessible from your app. It contains values from either the default or the fetched configuration.
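For reference, here is a minimal sketch of reading values from the active configuration (the key names are just the examples from the default map above; I am assuming the standard AGConnectConfig getter methods here).
Java:
AGConnectConfig config = AGConnectConfig.getInstance();
// Read individual values from the active configuration by key.
String test1 = config.getValueAsString("test1");
boolean test2 = config.getValueAsBoolean("test2");
// Inspect everything that is currently active (defaults merged with fetched values).
Map<String, Object> activeValues = config.getMergedAll();
Log.d("RemoteConfig", "Active configuration: " + activeValues);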
Fetch Parameter value
After default parameter values are set or parameter values are fetched from Remote Configuration, you can call the getValueAsXxx methods (for example, getValueAsBoolean) to obtain parameter values by key and use them in your app.
Java:
config.fetch().addOnSuccessListener(new OnSuccessListener<ConfigValues>() {
@Override
public void onSuccess(ConfigValues configValues) {
config.apply(configValues);
isPDFGenerateEnabled = configValues.getValueAsBoolean("enable_pdf_generate");
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
}
});
Resetting Parameter Values
You can clear all existing parameters using the function below.
Java:
config.clearAll();
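Note that fetch() caches results; by default, new values are pulled from the server only after the fetch interval expires (12 hours, as far as the Remote Configuration documentation describes). While testing, you can pass a smaller interval in seconds, as in this sketch:
Java:
// Fetch immediately during development/testing by passing an interval of 0 seconds.
// In production, prefer the default interval to avoid unnecessary requests.
config.fetch(0).addOnSuccessListener(new OnSuccessListener<ConfigValues>() {
    @Override
    public void onSuccess(ConfigValues configValues) {
        config.apply(configValues);
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        Log.e("RemoteConfig", "Fetch failed", e);
    }
});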
What can be done using Huawei Remote Configuration
Displaying Different Content to Different Users: Remote Configuration can work with HUAWEI Analytics to personalize content displayed to different audiences. For example, office workers and students will see different products and UI layouts in an app.
Adapting the App Theme by Time: You can set time conditions, different app colors, and various materials in Remote Configuration to change the app theme for specific situations. For example, during the graduation season, you can adapt your app to the graduation theme to attract more users.
Releasing New Functions by User Percentage: Releasing new functions to all users at the same time will be risky. Remote Configuration enables new function release by user percentage for you to slowly increase the target user scope, effectively helping you to improve your app based on the feedback from users already exposed to the new functions.
Features of Remote configuration
1. Add parameters
2. Add conditions
1. Adding parameters: Here you can add as many parameters with values as you want. You can later change a value, and the change is automatically reflected in the app. After adding all the required parameters, release them.
2. Adding conditions: This feature lets the developer add conditions based on the parameters below, and the conditions can then be released.
App Version
OS version
Language
Country/Region
Audience
User Attributes
Predictions
User Percentage
Time
App Version: Conditions can be applied to app versions using four operators: Include, Exclude, Equal, and Include regular expression. Based on these operators you can add conditions.
OS Version: The developer can add conditions based on the Android OS version.
Language: The developer can add conditions based on the language.
Country/Region: The developer can add conditions based on the country or region.
User percentage: The developer can roll out a feature to a percentage of users, from 1% to 100%.
Time: The developer can use a time condition to enable or disable a feature at a specific time, for example, enabling a feature on a particular day.
After adding the required conditions, release them.
Java:
if (isPDFGenerateEnabled) {
exportReport.setVisibility(View.VISIBLE);
} else {
exportReport.setVisibility(View.GONE);
}
exportReport.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
if (isPDFGenerateEnabled) {
try {
List<DailyCollectionEntity> entities = PygmyDatabase.getDatabase(getActivity()).dailyCollectionDao().findCollectionsByDate(formattedDate);
List<String> headers = new ArrayList<>();
headers.add("Shop Name");
headers.add("Account Number");
headers.add("Account Holder Number");
headers.add("Today's Balance");
headers.add("Total Balance");
if (entities != null && entities.size() > 0) {
String path = getStorageDir("Date_" + formattedDate + ".pdf");
PDFUtility.createPdf(getContext(), TodayCollectionFragment.this::onPDFDocumentClose, getTableData(entities, formattedDate), headers, path, true);
}
} catch (Exception e) {
AGConnectCrash.getInstance().recordException(e);
e.printStackTrace();
Log.e("TAG", "Error Creating Pdf");
Toast.makeText(getActivity(), "Error Creating Pdf", Toast.LENGTH_SHORT).show();
}
} else {
Toast.makeText(getActivity(), "PDF Generation is disabled", Toast.LENGTH_SHORT).show();
}
}
});
Now create the Worker class for WorkManager.
Java:
import android.content.Context;
import androidx.annotation.NonNull;
import androidx.work.Worker;
import androidx.work.WorkerParameters;
import com.shea.pygmycollection.utils.FileUtils;
public class GenerateReportToPDFWorkManager extends Worker {
private static final String TAG = GenerateReportToPDFWorkManager.class.getSimpleName();
public GenerateReportToPDFWorkManager(@NonNull Context context, @NonNull WorkerParameters workerParams) {
super(context, workerParams);
}
@NonNull
@Override
public Result doWork() {
FileUtils.generatePDF();
return Result.success();
}
}
Now build the periodic work request and enqueue it.
Java:
private PeriodicWorkRequest mPeriodicWorkRequest;
// Generate pdf every 12 hour
mPeriodicWorkRequest = new PeriodicWorkRequest.Builder(GenerateReportToPDFWorkManager.class,
12, TimeUnit.HOURS)
.addTag("periodicWorkRequest")
.build();
WorkManager.getInstance().enqueue(mPeriodicWorkRequest);
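Keep in mind that calling enqueue() on every app launch schedules a new periodic request each time. A common alternative, sketched below (the unique work name is an arbitrary example), is enqueueUniquePeriodicWork() with ExistingPeriodicWorkPolicy.KEEP so the 12-hour job is scheduled only once.
Java:
// Schedule the PDF report job only once; KEEP leaves any existing schedule untouched.
WorkManager.getInstance().enqueueUniquePeriodicWork(
        "generate_pdf_report", // unique work name (example value)
        ExistingPeriodicWorkPolicy.KEEP,
        mPeriodicWorkRequest);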
Result
Tips and Tricks
Add the dependencies properly.
Add internet permission.
If required, add conditions properly.
Conclusion
In this article, we have learnt how to integrate Huawei Remote Configuration into the Android Pygmy collection application: how to add parameters and conditions, release them, fetch the remote data in the application, and clear the data.
Reference
Huawei Remote Configuration
Happy coding
For more information like this, you can visit the HUAWEI Developer Forum.
Introduction
Online food ordering is the process of delivering food from restaurants. In this article, we will see how to integrate Analytics and App Messaging into a food application.
Steps
1. Create App in Android.
2. Configure App in AGC.
3. Integrate the SDK in our new Android project.
4. Integrate the dependencies.
5. Sync project.
Analytics Module
Huawei Analytics helps you understand how people use your mobile application. The analytics model helps you gain deeper insight into your users, products and content.
1. Collect and report custom events.
2. Set a maximum of 25 user attributes
3. Automate event collection and session calculation with predefined event IDs and parameters.
Use Cases
1. Analyze user behaviour using both predefined and custom events.
2. Use audience building to tailor your marketing activity to your users’ behaviour and preferences.
3. Use dashboards and analytics to measure your marketing activity and identify areas to improve.
Function Restrictions
Device restrictions: The automatically collected events of Analytics Kit depend on HMS Core (APK). Therefore, they are not supported on third-party devices where HMS Core is not installed, including but not limited to Oppo, Vivo, Xiaomi, Samsung and OnePlus devices.
Event quantity restriction: Each app can analyze a maximum of 500 events.
Event parameter restrictions: You can define a maximum of 25 parameters for each event, and a maximum of 100 event parameters for each app.
AGC Configuration
1. Create an app by adding an app to the project.
2. Enable Analytics: choose My Projects > Huawei Analytics.
3. Enable the Analytics API: choose My Projects > Project settings > Manage APIs.
Integration
Create Application in Android Studio.
App level gradle dependencies.
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Gradle dependencies
Code:
implementation 'com.huawei.hms:hianalytics:5.0.3.300'
Root level gradle dependencies
Code:
maven {url 'https://developer.huawei.com/repo/'}
classpath 'com.huawei.agconnect:agcp:1.3.1.300'
1. Create a HiAnalyticsInstance instance in the onCreate method.
Code:
HiAnalyticsTools.enableLog();
// The returned instance is stored in the mInstance field used by reportEvent() below.
mInstance = HiAnalytics.getInstance(this);
2. Create a method to report an event.
Code:
private void reportEvent() {
Bundle bundle = new Bundle();
// signInResult comes from the Account Kit sign-in flow.
bundle.putString("Name", signInResult.getUser().getDisplayName());
mInstance.onEvent("SIGNIN", bundle);
}
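Besides custom events, you can also attach user attributes to enrich your analysis (remember the limit of 25 user attributes mentioned above). A small sketch, where the attribute name is just an example:
Code:
private void setUserProfile() {
    // "favorite_food" is an example attribute name for this food application.
    mInstance.setUserProfile("favorite_food", "pizza");
}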
Output
App Messaging Module
Huawei App Messaging is a powerful tool to promote information to your audience. It helps when you are trying to get more attention for your application. We can target active users to encourage them to use the app.
App Messaging allows you to customize your messages' visuals and the way they are sent, and to define events that trigger message sending to your users at the right moment.
Message Types
1. Pop-up Message: Contains a title, a body, an optional image, and up to two buttons.
2. Banner Message: Displayed at the top of the screen, containing a thumbnail and the message title and body. The user can tap the banner to access a specified page.
3. Image: Contains only an image. An image message can be a well-designed poster promoting an activity. The user can tap the image to access the activity details.
AGC Configuration
1. Enable the App Messaging API: choose My Projects > Project settings > Manage APIs.
2. Enable App Messaging: choose My Projects > Growing > App Messaging and click Enable Now.
3. Click the New button and enter the required information.
4. Enter the required details, and then click Next.
5. In the Select Sending Target step, click New condition to match target users.
Select conditions in the Select drop-down, and then click Next.
6. In the Set Sending Time step, select the Started and Ended drop-downs.
Select the required options for Trigger event and Frequency limit, and then click Next.
7. Once all the required information is filled in, click Save.
8. Navigate to the Operation tab, and then click Test to perform App Messaging publishing.
Integration
Gradle dependencies
Code:
implementation 'com.huawei.agconnect:agconnect-appmessaging:1.4.0.300'
1. Generate the AAID; it is used when sending app messages.
Code:
private void generateAAID(){
HmsInstanceId inst = HmsInstanceId.getInstance(this);
Task<AAIDResult> idResult = inst.getAAID();
idResult.addOnSuccessListener(aaidResult -> Log.d("AAID", "getAAID success:" + aaidResult.getId() ))
.addOnFailureListener(e -> Log.d("AAID", "getAAID failure:" + e));
}
2. To display messages, we need to obtain the AGConnectAppMessaging instance.
3. setFetchMessageEnable(): allows data synchronization from the AppGallery Connect server.
4. setDisplayEnable(): enables message display.
5. setForceFetch(): specifies that the in-app message data must be obtained from the AppGallery Connect server.
Code:
private void initializeInAppMessage(){
AGConnectAppMessaging appMessaging = AGConnectAppMessaging.getInstance();
appMessaging.setFetchMessageEnable(true);
appMessaging.setDisplayEnable(true);
appMessaging.setForceFetch();
}
Result:
Conclusion
In this article, I have explained how to integrate App Messaging and Analytics Kit in a food application.
Reference:
Analytics kit:
https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides/introduction-0000001050745149
App Messaging:
https://developer.huawei.com/consumer/en/doc/development/AppGallery-connect-Guides/agc-appmessage-introduction
Introduction
In this article, we will learn how to add ads to our Unity game. Get paid to show relevant ads from over a million advertisers with HMS Ads Kit in our Unity games. Ads are an effective and easy way to earn revenue from your games. Ads Kit is a smart monetization platform for apps that helps you maximize revenue from ads and in-app purchases. Thousands of apps use HMS Ads Kit to generate a reliable revenue stream.
All you need to do is add the kit to your Unity game to place ads with just a few lines of code.
Implementation Steps
1. Creation of our App in App Gallery Connect
2. Evil Mind plugin Integration
3. Unity Configuration
4. Creation of the scene
5. Coding
6. Configuration for project execution
7. Final result
App Gallery Connect Configuration
Creating a new App in the App Gallery connect console is a fairly simple procedure but requires paying attention to certain important aspects.
Once inside the console, we must create a project and add an app to it.
When creating our app, we will find the following form. It is important to note that the category of the app must be Game.
Once the app is created, we need to activate Account Kit in our APIs; we can also activate Game Service if we wish.
HUAWEI Push Kit establishes a messaging channel from the cloud to devices. By integrating Push Kit, you can send messages to your apps on users' devices in real time.
Once this configuration is completed, we need to add the SHA-256 fingerprint. For this we can use the keytool command, but first we must create a keystore, which we can do from Unity.
Once the keystore is created, open a console, go to the path where the keystore is located and execute the following command. Remember that to use this command you must have the Java JDK installed.
Once inside the path:
Code:
keytool -list -v -keystore yournamekey.keystore
This will give us all the information about our keystore; we take the SHA-256 and add it to the app.
Unity Evil Mind Plugin configuration
We have concluded the creation of the app and the project, and now we know how to create a keystore and obtain the SHA in Unity.
If you have not done so yet, create your project in Unity. Once the project is created, we must obtain the plugin package, which we will use to connect our project with the AGC SDK. First of all, let's download the Evil Mind plugin.
GitHub - EvilMindDevs/hms-unity-plugin
Contribute to EvilMindDevs/hms-unity-plugin development by creating an account on GitHub.
github.com
At the link you can find the package to import into Unity. To import it, follow these steps.
Download the .unitypackage file and then import it into Unity using the package importer.
Once imported, you will have a Huawei option in the toolbar. Click the option and add the data from our AppGallery Connect console.
Once we have imported the plugin, we have to add the necessary data from our AppGallery app and place it in the required fields of the Unity plugin. Now our AppGallery app is connected to the Unity project.
Now we can add the Push Notifications prefab to our scene. Remember that to do this we must create a new scene; for this example we can use the scene that the plugin provides.
Unity Configuration
We have to remember that when using Unity, some configurations need to be done within the engine so that the APK runs correctly. In this section I want to detail some important settings we have to change.
Switch Platform: Unity usually selects PC/Mac as the default platform, so we have to change it to Android, as in this picture.
Scripting Backend: In order to run correctly, the scripting backend must be changed to IL2CPP; by default Unity uses Mono as the scripting backend, so it is important to change this setting.
Minimum API Level: Another setting that needs to be changed is the minimum API level; we have to set it to API level 21, otherwise the build won't work.
Creation of the scene
Within this step we must create a new scene from scratch, where we will have to add the following elements.
AdsManager: the prefab that comes with the plugin.
Canvas: where we will show the buttons that trigger banners.
EventSystem: to add input handling for Android handhelds.
AdsManager: where the script that controls the ads instances lives.
Coding
Let's check the code of the prefab that comes with the plugin. In this article I want to use interstitial ads, so first let's review the characteristics of these ads. Interstitial ads are full-screen ads that cover the interface of an app. Such an ad is displayed when a user starts, pauses, or exits an app, without disrupting the user's experience.
The code of the interstitial ad manager provides a way to get an instance of the object.
Code:
public static InterstitalAdManager GetInstance(string name = "AdsManager") => GameObject.Find(name).GetComponent<InterstitalAdManager>();
We also have a getter/setter to assign the ad slot ID of the test or release ad.
Code:
public string AdId
{
get => mAdId;
set
{
Debug.Log($"[HMS] InterstitalAdManager: Set interstitial ad ID: {value}");
mAdId = value;
LoadNextInterstitialAd();
}
}
This parameter needs to be assigned in the script that controls the ads' behavior. Finally, another important piece of code in this script is the listener of the interstitial ad. Methods that react to ad behaviour such as Click, Close and Fail can be handled in this section.
Code:
private class InterstitialAdListener : IAdListener
{
private readonly InterstitalAdManager mAdsManager;
public InterstitialAdListener(InterstitalAdManager adsManager)
{
mAdsManager = adsManager;
}
public void OnAdClicked()
{
Debug.Log("[HMS] AdsManager OnAdClicked");
mAdsManager.OnAdClicked?.Invoke();
}
public void OnAdClosed()
{
Debug.Log("[HMS] AdsManager OnAdClosed");
mAdsManager.OnAdClosed?.Invoke();
mAdsManager.LoadNextInterstitialAd();
}
public void OnAdFailed(int reason)
{
Debug.Log("[HMS] AdsManager OnAdFailed");
mAdsManager.OnAdFailed?.Invoke(reason);
}
public void OnAdImpression()
{
Debug.Log("[HMS] AdsManager OnAdImpression");
mAdsManager.OnAdImpression?.Invoke();
}
public void OnAdLeave()
{
Debug.Log("[HMS] AdsManager OnAdLeave");
mAdsManager.OnAdLeave?.Invoke();
}
public void OnAdLoaded()
{
Debug.Log("[HMS] AdsManager OnAdLoaded");
mAdsManager.OnAdLoaded?.Invoke();
}
public void OnAdOpened()
{
Debug.Log("[HMS] AdsManager OnAdOpened");
mAdsManager.OnAdOpened?.Invoke();
}
}
For more details, you can check https://forums.developer.huawei.com/forumPortal/en/topic/0201461409178140023
very useful writeup
For some apps, it's necessary to have a function called sound detection that can recognize sounds like knocks on the door, rings of the doorbell, and car horns. Developing such a function can be costly for small- and medium-sized developers, so what should they do in this situation?
There's no need to worry if you have the sound detection service in HUAWEI ML Kit. Integrating its SDK into your app is simple, and you can equip your app with a sound detection function that works well even when the device is not connected to the network.
Introduction to Sound Detection in HUAWEI ML Kit
This service can detect sound events online by real-time recording. The detected sound events can help you perform subsequent actions. Currently, the following types of sound events are supported: laughter, child crying, snoring, sneezing, shouting, cat meowing, dog barking, running water (such as from taps, streams, and ocean waves), car horns, doorbells, knocking on doors, fire alarms (including smoke alarms), and sirens (such as those from fire trucks, ambulances, police cars, and air defenses).
Preparations
Configuring the Development Environment
Create an app in AppGallery Connect.
For details, see Getting Started with Android.
Enable ML Kit.
Click here to get more information.
After the app is created, an agconnect-services.json file will be automatically generated. Download it and copy it to the root directory of your project.
Configure the Huawei Maven repository address.
To learn more, click here.
Integrate the sound detection SDK.
It is recommended to integrate the SDK in full SDK mode. Add build dependencies for the SDK in the app-level build.gradle file.
Code:
// Import the sound detection package.
implementation 'com.huawei.hms:ml-speech-semantics-sounddect-sdk:2.1.0.300'
implementation 'com.huawei.hms:ml-speech-semantics-sounddect-model:2.1.0.300'
Add the AppGallery Connect plugin configuration as needed using either of the following methods:
Method 1: Add the following information under the declaration in the file header:
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Method 2: Add the plugin configuration in the plugins block:
Code:
plugins {
id 'com.android.application'
id 'com.huawei.agconnect'
}
Automatically update the machine learning model.
Add the following statements to the AndroidManifest.xml file. After a user installs your app from HUAWEI AppGallery, the machine learning model is automatically updated to the user's device.
Code:
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value= "sounddect"/>
For details, go to Integrating the Sound Detection SDK.
Development Procedure
Obtain the microphone permission. If the app does not have this permission, error 12203 will be reported.
(Mandatory) Apply for the static permission.
Code:
<uses-permission android:name="android.permission.RECORD_AUDIO" />
(Mandatory) Apply for the dynamic permission.
Code:
ActivityCompat.requestPermissions(
this, new String[]{Manifest.permission.RECORD_AUDIO
}, 1);
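You should also handle the user's response before starting detection. A minimal sketch of the standard Android callback (the request code 1 matches the call above):
Code:
@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
        @NonNull int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    if (requestCode == 1) {
        if (grantResults.length > 0 && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
            // Microphone permission granted; sound detection can be started.
        } else {
            // Permission denied; starting detection would report error 12203.
        }
    }
}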
Create an MLSoundDector object.
Code:
private static final String TAG = "MLSoundDectorDemo";
// Object of sound detection.
private MLSoundDector mlSoundDector;
// Create an MLSoundDector object and configure the callback.
private void initMLSoundDector(){
mlSoundDector = MLSoundDector.createSoundDector();
mlSoundDector.setSoundDectListener(listener);
}
Create a sound detection result callback to obtain the detection result and pass the callback to the sound detection instance.
Code:
private MLSoundDectListener listener = new MLSoundDectListener() {
@Override
public void onSoundSuccessResult(Bundle result) {
// Processing logic when the detection is successful. The detection result ranges from 0 to 12, corresponding to the 13 sound types whose names start with SOUND_EVENT_TYPE. The types are defined in MLSoundDectConstants.java.
int soundType = result.getInt(MLSoundDector.RESULTS_RECOGNIZED);
Log.d(TAG,"Detection success:"+soundType);
}
@Override
public void onSoundFailResult(int errCode) {
// Processing logic for detection failure. The possible cause is that your app does not have the microphone permission (Manifest.permission.RECORD_AUDIO).
Log.d(TAG,"Detection failure"+errCode);
}
};
Note: The code above prints the type of the detected sound as an integer. In a real app, you can convert the integer into a label that users can understand (see the sketch after the string array below).
Definition for the types of detected sounds:
Code:
<string-array name="sound_dect_voice_type">
<item>laughter</item>
<item>baby crying sound</item>
<item>snore</item>
<item>sneeze</item>
<item>shout</item>
<item>cat's meow</item>
<item>dog's bark</item>
<item>running water</item>
<item>car horn sound</item>
<item>doorbell sound</item>
<item>knock</item>
<item>fire alarm sound</item>
<item>alarm sound</item>
</string-array>
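As a simple illustration, the detected type can be mapped to one of the labels above. This sketch assumes the string array is placed in your res/values resources and that the listener is defined inside your activity:
Code:
// Convert the detected sound type into a human-readable label.
String[] soundLabels = getResources().getStringArray(R.array.sound_dect_voice_type);
if (soundType >= 0 && soundType < soundLabels.length) {
    Log.d(TAG, "Detected sound: " + soundLabels[soundType]);
}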
Start and stop sound detection.
Code:
@Override
public void onClick(View v) {
switch (v.getId()){
case R.id.btn_start_detect:
if (mlSoundDector != null){
boolean isStarted = mlSoundDector.start(this); // context: Context.
// If the value of isStarted is true, the detection is successfully started. If the value of isStarted is false, the detection fails to be started. (The possible cause is that the microphone is occupied by the system or another app.)
if (isStarted){
Toast.makeText(this,"The detection is successfully started.", Toast.LENGTH_SHORT).show();
}
}
break;
case R.id.btn_stop_detect:
if (mlSoundDector != null){
mlSoundDector.stop();
}
break;
}
}
Call destroy() to release resources when the sound detection page is closed.
Code:
@Override
protected void onDestroy() {
super.onDestroy();
if (mlSoundDector != null){
mlSoundDector.destroy();
}
}
Testing the App
Using the knock as an example, the output result of sound detection is expected to be 10.
Tap Start detecting and simulate a knock on the door. If you get logs as follows in Android Studio's console, it indicates that the integration of the sound detection SDK is successful.
More Information
Sound detection belongs to one of the six capability categories of ML Kit, which are related to text, language/voice, image, face/body, natural language processing, and custom model.
Sound detection is a service in the language/voice-related category.
Interested in other categories? Feel free to have a look at the HUAWEI ML Kit document.
To learn more, please visit:
HUAWEI Developers official website
Development Guide
Reddit to join developer discussions
GitHub or Gitee to download the demo and sample code
Stack Overflow to solve integration problems
Follow our official account for the latest HMS Core-related news and updates.
Original Source
Emojis are a must-have tool in today's online communications as they help add color to text-based chatting and allow users to better express the emotions behind their words. Since the number of preset emojis is always limited, many apps now allow users to create their own custom emojis to keep things fresh and exciting.
For example, in a social media app, users who do not want to show their faces when making video calls can use an animated character to protect their privacy, with their facial expressions applied to the character; in a live streaming or e-commerce app, virtual streamers with realistic facial expressions are much more likely to attract watchers; in a video or photo shooting app, users can control the facial expressions of an animated character when taking a selfie, and then share the selfie via social media; and in an educational app for kids, a cute animated character with detailed facial expressions will make online classes much more fun and engaging for students.
I myself am developing such a messaging app. When chatting with friends and wanting to express themselves in ways other than words, users of my app can take a photo to create an emoji of themselves, or of an animated character they have selected. The app will then identify users' facial expressions, and apply their facial expressions to the emoji. In this way, users are able to create an endless amount of unique emojis. During the development of my app, I used the capabilities provided by HMS Core AR Engine to track users' facial expressions and convert the facial expressions into parameters, which greatly reduced the development workload. Now I will show you how I managed to do this.
Implementation
AR Engine provides apps with the ability to track and recognize facial expressions in real time, which can then be converted into facial expression parameters and used to accurately control the facial expressions of virtual characters.
Currently, AR Engine provides 64 facial expressions, including eyelid, eyebrow, eyeball, mouth, and tongue movements. It supports 21 eye-related movements, including eyeball movement and opening and closing the eyes; 28 mouth movements, including opening the mouth, puckering, pulling, or licking the lips, and moving the tongue; as well as 5 eyebrow movements, including raising or lowering the eyebrows.
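As a rough sketch of how these expression parameters can be read each frame: I am assuming here that ARFace exposes a getFaceBlendShapes() method returning the blend shape values (please verify the exact method names against the AR Engine API reference), with the face obtained as in the onDrawFrame method shown later.
Code:
// Sketch: read facial expression (blend shape) parameters from a tracked ARFace.
ARFaceBlendShapes blendShapes = face.getFaceBlendShapes();
// Each value describes the strength of one expression (for example, an eye or mouth movement)
// and can be used to drive the matching blend shape of a virtual character.
float[] expressionValues = blendShapes.getBlendShapeData();
Log.d("FaceExpression", "Expression parameter count: " + blendShapes.getBlendShapeCount());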
Demo
Facial expression based emoji
Development Procedure
Requirements on the Development Environment
JDK: 1.8.211 or later
Android Studio: 3.0 or later
minSdkVersion: 26 or later
targetSdkVersion: 29 (recommended)
compileSdkVersion: 29 (recommended)
Gradle version: 6.1.1 or later (recommended)
Make sure that you have downloaded the AR Engine APK from AppGallery and installed it on the device.
Test device: see Software and Hardware Requirements of AR Engine Features
If you need to use multiple HMS Core kits, use the latest versions required for these kits.
Preparations
1. Before getting started, you will need to register as a Huawei developer and complete identity verification on HUAWEI Developers. You can click here to find out the detailed registration and identity verification procedure.
2. Before development, integrate the AR Engine SDK via the Maven repository into your development environment.
3. The procedure for configuring the Maven repository address in Android Studio varies for Gradle plugin earlier than 7.0, Gradle plugin 7.0, and Gradle plugin 7.1 or later. You need to configure it according to the specific Gradle plugin version.
4. Take Gradle plugin 7.0 as an example:
Open the project-level build.gradle file in your Android Studio project and configure the Maven repository address.
Go to buildscript > repositories and configure the Maven repository address for the SDK.
Code:
buildscript {
repositories {
google()
jcenter()
maven {url "https://developer.huawei.com/repo/" }
}
}
Open the project-level settings.gradle file and configure the Maven repository address for the HMS Core SDK.
Code:
dependencyResolutionManagement {
repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS)
repositories {
repositories {
google()
jcenter()
maven {url "https://developer.huawei.com/repo/" }
}
}
}
5. Add the following build dependency in the dependencies block.
Code:
dependencies {
implementation 'com.huawei.hms:arenginesdk:{version}'
}
App Development
1. Check whether AR Engine has been installed on the current device. If yes, your app can run properly. If not, you need to prompt the user to install it, for example, by redirecting the user to AppGallery. The sample code is as follows:
Code:
boolean isInstallArEngineApk =AREnginesApk.isAREngineApkReady(this);
if (!isInstallArEngineApk) {
// ConnectAppMarketActivity.class is the activity for redirecting users to AppGallery.
startActivity(new Intent(this, com.huawei.arengine.demos.common.ConnectAppMarketActivity.class));
isRemindInstall = true;
}
2. Create an AR scene. AR Engine supports five scenes, including motion tracking (ARWorldTrackingConfig), face tracking (ARFaceTrackingConfig), hand recognition (ARHandTrackingConfig), human body tracking (ARBodyTrackingConfig), and image recognition(ARImageTrackingConfig).
The following takes creating a face tracking scene by calling ARFaceTrackingConfig as an example.
Code:
// Create an ARSession object.
mArSession = new ARSession(this);
// Select a specific Config to initialize the ARSession object based on the application scenario.
ARFaceTrackingConfig config = new ARFaceTrackingConfig(mArSession);
Set scene parameters using the config.setXXX method.
Code:
// Set the camera opening mode, which can be external or internal. The external mode can only be used in ARFace. Therefore, you are advised to use the internal mode.
mArConfig.setImageInputMode(ARConfigBase.ImageInputMode.EXTERNAL_INPUT_ALL);
3. Set the AR scene parameters for face tracking and start face tracking.
Code:
mArSession.configure(mArConfig);
mArSession.resume();
4. Initialize the FaceGeometryDisplay class to obtain the facial geometric data and render the data on the screen.
Code:
public class FaceGeometryDisplay {
// Initialize the OpenGL ES rendering related to face geometry, including creating the shader program.
void init(Context context) {...
}
}
5. Initialize the onDrawFrame method in the FaceGeometryDisplay class, and call face.getFaceGeometry() to obtain the face mesh.
Code:
public void onDrawFrame(ARCamera camera, ARFace face) {
ARFaceGeometry faceGeometry = face.getFaceGeometry();
updateFaceGeometryData(faceGeometry);
updateModelViewProjectionData(camera, face);
drawFaceGeometry();
faceGeometry.release();
}
6. Initialize updateFaceGeometryData() in the FaceGeometryDisplay class.
Pass the face mesh data for configuration and set facial expression parameters using OpenGl ES.
Code:
private void updateFaceGeometryData (ARFaceGeometry faceGeometry) {
FloatBuffer faceVertices = faceGeometry.getVertices();
FloatBuffer textureCoordinates =faceGeometry.getTextureCoordinates();
// Obtain an array consisting of face mesh texture coordinates, which is used together with the vertex data returned by getVertices() during rendering.
}
7. Initialize the FaceRenderManager class to manage facial data rendering.
Code:
public class FaceRenderManager implements GLSurfaceView.Renderer {
public FaceRenderManager(Context context, Activity activity) {
mContext = context;
mActivity = activity;
}
// Set ARSession to obtain the latest data.
public void setArSession(ARSession arSession) {
if (arSession == null) {
LogUtil.error(TAG, "Set session error, arSession is null!");
return;
}
mArSession = arSession;
}
// Set ARConfigBase to obtain the configuration mode.
public void setArConfigBase(ARConfigBase arConfig) {
if (arConfig == null) {
LogUtil.error(TAG, "setArFaceTrackingConfig error, arConfig is null.");
return;
}
mArConfigBase = arConfig;
}
// Set the camera opening mode.
public void setOpenCameraOutsideFlag(boolean isOpenCameraOutsideFlag) {
isOpenCameraOutside = isOpenCameraOutsideFlag;
}
...
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
mFaceGeometryDisplay.init(mContext);
}
}
8. Implement the face tracking effect by calling methods like setArSession and setArConfigBase of FaceRenderManager in FaceActivity.
Code:
public class FaceActivity extends BaseActivity {
@Override
protected void onCreate(Bundle savedInstanceState) {
mFaceRenderManager = new FaceRenderManager(this, this);
mFaceRenderManager.setDisplayRotationManage(mDisplayRotationManager);
mFaceRenderManager.setTextView(mTextView);
glSurfaceView.setRenderer(mFaceRenderManager);
glSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_CONTINUOUSLY);
}
}
Conclusion
Emojis allow users to express their moods and excitement in a way words can't. Instead of providing users with a selection of the same old boring preset emojis that have been used a million times, you can now make your app more fun by allowing users to create emojis themselves! Users can easily create an emoji with their own smiles, simply by facing the camera, selecting an animated character they love, and smiling. With such an ability to customize emojis, users will be able to express their feelings in a more personalized and interesting manner. If you have any interest in developing such an app, AR Engine is a great choice for you. With accurate facial tracking capabilities, it is able to identify users' facial expressions in real time, convert the facial expressions into parameters, and then apply them to virtual characters. Integrating the capability can help you considerably streamline your app development process, leaving you with more time to focus on how to provide more interesting features to users and improve your app's user experience.
Reference
AR Engine Sample Code
Face Tracking Capability
Influencers have become increasingly important, as more and more consumers choose to purchase items online – whether on Amazon, Taobao, or one of the many other prominent e-commerce platforms. Brands and merchants have spent a lot of money finding influencers to promote their products through live streams and consumer interactions, and many purchases are made on the recommendation of a trusted influencer.
However, employing a public-facing influencer can be costly and risky. Many brands and merchants have opted instead to host live streams with their own virtual characters. This gives them more freedom to showcase their products, and widens the pool of potential on camera talent. For consumers, virtual characters can add fun and whimsy to the shopping experience.
E-commerce platforms have begun to accommodate the preference for anonymous livestreaming, by offering a range of important capabilities, such as those that allow for automatic identification, skeleton point-based motion tracking in real time (as shown in the gif), facial expression and gesture identification, copying of traits to virtual characters, a range of virtual character models for users to choose from, and natural real-world interactions.
Building these capabilities comes with its share of challenges. For example, after finally building a model that is able to translate the person's every gesture, expression, and movement into real-time parameters and then applying them to the virtual character, you can find out that the virtual character can't be blocked by real bodies during the livestream, which gives it a fake, ghost-like form. This is a problem I encountered when I developed my own e-commerce app, and it occurred because I did not occlude the bodies that appeared behind and in front of the virtual character. Fortunately I was able to find an SDK that helped me solve this problem — HMS Core AR Engine.
This toolkit provides a range of capabilities that make it easy to incorporate AR-powered features into apps. From hit testing and movement tracking, to environment mesh, and image tracking, it's got just about everything you need. The human body occlusion capability was exactly what I needed at the time.
Now I'll show you how I integrated this toolkit into my app, and how helpful it's been for me.
First I registered for an account on the HUAWEI Developers website, downloaded the AR Engine SDK, and followed the step-by-step development guide to integrate the SDK. The integration process was quite simple and did not take too long. Once the integration was successful, I ran the demo on a test phone, and was amazed to see how well it worked. During livestreams my app was able to recognize and track the areas where I was located within the image, with an accuracy of up to 90%, and provided depth-related information about the area. Better yet, it was able to identify and track the profile of up to two people, and output the occlusion information and skeleton points corresponding to the body profiles in real time. With this capability, I was able to implement a lot of engaging features, for example, changing backgrounds, hiding virtual characters behind real people, and even a feature that allows the audience to interact with the virtual character through special effects. All of these features have made my app more immersive and interactive, which makes it more attractive to potential shoppers.
Demo
As shown in the gif below, the person blocks the virtual panda when walking in front of it.
How to Develop
Preparations
Registering as a developer
Before getting started, you will need to register as a Huawei developer and complete identity verification on HUAWEI Developers. You can click here to find out the detailed registration and identity verification procedure.
Creating an app
Create a project and create an app under the project. Pay attention to the following parameter settings:
Platform: Select Android.
Device: Select Mobile phone.
App category: Select App or Game.
Integrating the AR Engine SDK
Before development, integrate the AR Engine SDK via the Maven repository into your development environment.
Configuring the Maven repository address for the AR Engine SDK
The procedure for configuring the Maven repository address in Android Studio is different for Gradle plugin earlier than 7.0, Gradle plugin 7.0, and Gradle plugin 7.1 or later. You need to configure it according to the specific Gradle plugin version.
Adding build dependencies
Open the build.gradle file in the app directory of your project.
Add a build dependency in the dependencies block.
Code:
dependencies {
implementation 'com.huawei.hms:arenginesdk:{version}'
}
Open the modified build.gradle file again. You will find a Sync Now link in the upper right corner of the page. Click Sync Now and wait until synchronization is complete.
Developing Your App
Checking the Availability
Check whether AR Engine has been installed on the current device. If so, the app can run properly. If not, the app prompts the user to install AR Engine, for example, by redirecting the user to AppGallery. The code is as follows:
Code:
boolean isInstallArEngineApk = AREnginesApk.isAREngineApkReady(this);
if (!isInstallArEngineApk) {
// ConnectAppMarketActivity.class is the activity for redirecting users to AppGallery.
startActivity(new Intent(this, com.huawei.arengine.demos.common.ConnectAppMarketActivity.class));
isRemindInstall = true;
}
Create a BodyActivity object to display body bones and output human body features, so that AR Engine can recognize the human body.
Code:
public class BodyActivity extends BaseActivity {
private BodyRendererManager mBodyRendererManager;
protected void onCreate() {
// Initialize surfaceView (view ID omitted in the original sample).
mSurfaceView = findViewById();
// Keep the OpenGL ES context when the activity is paused.
mSurfaceView.setPreserveEGLContextOnPause(true);
// Set the OpenGL ES version.
mSurfaceView.setEGLContextClientVersion(2);
// Set the EGL configuration chooser, including for the number of bits of the color buffer and the number of depth bits.
mSurfaceView.setEGLConfigChooser(……);
mBodyRendererManager = new BodyRendererManager(this);
mSurfaceView.setRenderer(mBodyRendererManager);
mSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_CONTINUOUSLY);
}
protected void onResume() {
// Initialize ARSession to manage the entire running status of AR Engine.
if (mArSession == null) {
mArSession = new ARSession(this.getApplicationContext());
mArConfigBase = new ARBodyTrackingConfig(mArSession);
mArConfigBase.setEnableItem(ARConfigBase.ENABLE_DEPTH | ARConfigBase.ENABLE_MASK);
mArConfigBase.setFocusMode(ARConfigBase.FocusMode.AUTO_FOCUS);
mArSession.configure(mArConfigBase);
}
// Pass the required parameters to setBodyMask.
mBodyRendererManager.setBodyMask(((mArConfigBase.getEnableItem() & ARConfigBase.ENABLE_MASK) != 0) && mIsBodyMaskEnable);
sessionResume(mBodyRendererManager);
}
}
Create a BodyRendererManager object to render the personal data obtained by AR Engine.
Code:
Public class BodyRendererManager extends BaseRendererManager{
Public void drawFrame(){
// Obtain the set of all traceable objects of the specified type.
Collection<ARBody> bodies = mSession.getAllTrackables(ARBody.class);
for (ARBody body : bodies) {
if (body.getTrackingState() != ARTrackable.TrackingState.TRACKING){
continue;
}
mBody = body;
hasBodyTracking = true;
}
// Update the body recognition information displayed on the screen.
StringBuilder sb = new StringBuilder();
updateMessageData(sb, mBody);
Size textureSize = mSession.getCameraConfig().getTextureDimensions();
if (mIsWithMaskData && hasBodyTracking && mBackgroundDisplay instanceof BodyMaskDisplay) {
((BodyMaskDisplay) mBackgroundDisplay).onDrawFrame(mArFrame, mBody.getMaskConfidence(),
textureSize.getWidth(), textureSize.getHeight());
}
// Display the updated body information on the screen.
mTextDisplay.onDrawFrame(sb.toString());
for (BodyRelatedDisplay bodyRelatedDisplay : mBodyRelatedDisplays) {
bodyRelatedDisplay.onDrawFrame(bodies, mProjectionMatrix);
} catch (ArDemoRuntimeException e) {
LogUtil.error(TAG, "Exception on the ArDemoRuntimeException!");
} catch (ARFatalException | IllegalArgumentException | ARDeadlineExceededException |
ARUnavailableServiceApkTooOldException t) {
Log(…);
}
}
// Update gesture-related data for display.
Private void updateMessageData(){
if (body == null) {
return;
}
float fpsResult = doFpsCalculate();
sb.append("FPS=").append(fpsResult).append(System.lineSeparator());
int bodyAction = body.getBodyAction();
sb.append("bodyAction=").append(bodyAction).append(System.lineSeparator());
}
}
Customize the camera preview class, which is used to implement human body drawing based on certain confidence.
Code:
public class BodyMaskDisplay implements BaseBackGroundDisplay {}
Obtain skeleton data and pass the data to the OpenGL ES, which renders the data and displays it on the screen.
Code:
public class BodySkeletonDisplay implements BodyRelatedDisplay {}
Obtain skeleton point connection data and pass it to OpenGL ES, which renders the data and displays it on the screen.
Code:
public class BodySkeletonLineDisplay implements BodyRelatedDisplay {}
Conclusion
True-to-life AR live-streaming is now an essential feature in e-commerce apps, but developing this capability from scratch can be costly and time-consuming. AR Engine SDK is the best and most convenient SDK I've encountered, and it's done wonders for my app, by recognizing individuals within images with accuracy as high as 90%, and providing the detailed information required to support immersive, real-world interactions. Try it out on your own app to add powerful and interactive features that will have your users clamoring to shop more!
References
AR Engine Development Guide
Sample Code
API Reference