Audio is the soul of media. For mobile apps in particular, it deepens user engagement, adds another level of immersion, and enriches content.
This is a major driver of my interest in developing audio-related functions. In a recent post about how I developed a portrait retouching function for a live-streaming app, I mentioned that I wanted to create a solution that could retouch music in a similar way. I knew that a technology called spatial audio could help with this, and, sure enough, I found a capability of the same name in HMS Core Audio Editor Kit. It can be integrated independently, or used together with other capabilities in the UI SDK of this kit.
I chose to integrate the UI SDK into my demo first, since it offers not only the kit's capabilities but also a ready-to-use UI. This let me give the spatial audio capability a try while freeing me from designing the UI. Now let's dive into the development procedure of the demo.
Development Procedure
Preparations
1. Prepare the development environment, which has requirements on both software and hardware. These are:
Software requirements:
JDK version: 1.8 or later
Android Studio version: 3.X or later
minSdkVersion: 24 or later
targetSdkVersion: 33 (recommended)
compileSdkVersion: 30 (recommended)
Gradle version: 4.6 or later (recommended)
Hardware requirements: a phone running EMUI 5.0 or later, or a phone running Android 7.0 to Android 13.
2. Configure app information in a platform called AppGallery Connect, and go through the process of registering as a developer, creating an app, generating a signing certificate fingerprint, configuring the signing certificate fingerprint, enabling the kit, and managing the default data processing location.
3. Integrate the HMS Core SDK.
4. Add the necessary permissions to the AndroidManifest.xml file, including the vibration permission, microphone permission, storage write permission, storage read permission, Internet permission, network status access permission, and the permission to obtain the changed network connectivity state. A sketch of the declarations follows.
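For reference, the permission declarations could look like the following in AndroidManifest.xml. This is a sketch based on the standard Android permission names; check the kit's documentation for the exact set your app needs.
Code:
<uses-permission android:name="android.permission.VIBRATE" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.CHANGE_NETWORK_STATE" />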
When the app's Android SDK version is 29 or later, also add the following attribute to the application element, which retains access to external storage.
Code:
<application
android:requestLegacyExternalStorage="true"
…… >
SDK Integration
1. Initialize the UI SDK and set the app authentication information. If this information is not set, some functions of the SDK may be affected.
Code:
// Obtain the API key from the agconnect-services.json file.
// It is recommended that the key be stored in the cloud and obtained when the app is running.
String api_key = AGConnectInstance.getInstance().getOptions().getString("client/api_key");
// Set the API key.
HAEApplication.getInstance().setApiKey(api_key);
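A convenient place to do this once is a custom Application subclass, so that the key is set before any kit screen is opened. This is just one option, not something the SDK mandates; DemoApplication is a hypothetical class name.
Code:
public class DemoApplication extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        // Read the API key from agconnect-services.json and pass it to the kit
        // before any Audio Editor screen is launched.
        String apiKey = AGConnectInstance.getInstance().getOptions().getString("client/api_key");
        HAEApplication.getInstance().setApiKey(apiKey);
    }
}
Remember to register the class in AndroidManifest.xml via android:name=".DemoApplication".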
2. Create AudioFilePickerActivity, which is a customized activity used for audio file selection.
Code:
/**
* Customized activity, used for audio file selection.
*/
public class AudioFilePickerActivity extends AppCompatActivity {
@Override
protected void onCreate(@Nullable Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
performFileSearch();
}
private void performFileSearch() {
// Select multiple audio files.
registerForActivityResult(new ActivityResultContracts.GetMultipleContents(), new ActivityResultCallback<List<Uri>>() {
@Override
public void onActivityResult(List<Uri> result) {
handleSelectedAudios(result);
finish();
}
}).launch("audio/*");
}
/**
* Process the selected audio files, turning the URIs into paths as needed.
*
* @param uriList indicates the selected audio files.
*/
private void handleSelectedAudios(List<Uri> uriList) {
// Check whether the audio files exist.
if (uriList == null || uriList.size() == 0) {
return;
}
ArrayList<String> audioList = new ArrayList<>();
for (Uri uri : uriList) {
// Obtain the real path.
String filePath = FileUtils.getRealPath(this, uri);
audioList.add(filePath);
}
// Return the audio file path to the audio editing UI.
Intent intent = new Intent();
// Use HAEConstant.AUDIO_PATH_LIST that is provided by the SDK.
intent.putExtra(HAEConstant.AUDIO_PATH_LIST, audioList);
// Use HAEConstant.RESULT_CODE as the result code.
this.setResult(HAEConstant.RESULT_CODE, intent);
finish();
}
}
The FileUtils utility class is used for obtaining the real path. Below is the path to this class in the demo project.
Code:
app/src/main/java/com/huawei/hms/audioeditor/demo/util/FileUtils.java
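The class itself is not reproduced in this post. As a rough idea of what such a helper does, the sketch below copies the content behind the URI into the app cache and returns that file's path; the demo's actual FileUtils is more thorough.
Code:
import android.content.Context;
import android.net.Uri;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class FileUtils {
    // Minimal sketch: copy the URI content into the app cache and return the copy's path.
    public static String getRealPath(Context context, Uri uri) {
        try (InputStream in = context.getContentResolver().openInputStream(uri)) {
            if (in == null) {
                return null;
            }
            File outFile = new File(context.getCacheDir(), System.currentTimeMillis() + ".tmp");
            try (OutputStream out = new FileOutputStream(outFile)) {
                byte[] buffer = new byte[8192];
                int len;
                while ((len = in.read(buffer)) != -1) {
                    out.write(buffer, 0, len);
                }
            }
            return outFile.getAbsolutePath();
        } catch (IOException e) {
            return null;
        }
    }
}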
3. Add the action value to AudioFilePickerActivity in AndroidManifest.xml. The SDK uses this action to determine which screen to open.
Code:
<activity
android:name=".AudioFilePickerActivity"
android:exported="false">
<intent-filter>
<action android:name="com.huawei.hms.audioeditor.chooseaudio" />
<category android:name="android.intent.category.DEFAULT" />
</intent-filter>
</activity>
4. Launch the audio editing screen via either:
Mode 1: Launch the screen without input parameters. In this mode, the default configurations of the SDK are used.
Code:
HAEUIManager.getInstance().launchEditorActivity(this);
Audio editing screens
Mode 2: Launch the audio editing screen with input parameters. This mode lets you set the menu list and customize the path for an output file. On top of this, the mode also allows for specifying the input audio file paths, setting the draft mode, and more.
Launch the screen with the menu list and customized output file path:
Code:
// List of level-1 menus. Below are just some examples:
ArrayList<Integer> menuList = new ArrayList<>();
// Add audio.
menuList.add(MenuCommon.MAIN_MENU_ADD_AUDIO_CODE);
// Record audio.
menuList.add(MenuCommon.MAIN_MENU_AUDIO_RECORDER_CODE);
// List of level-2 menus, which are displayed after audio files are input and selected.
ArrayList<Integer> secondMenuList = new ArrayList<>();
// Split audio.
secondMenuList.add(MenuCommon.EDIT_MENU_SPLIT_CODE);
// Delete audio.
secondMenuList.add(MenuCommon.EDIT_MENU_DEL_CODE);
// Adjust the volume.
secondMenuList.add(MenuCommon.EDIT_MENU_VOLUME2_CODE);
// Customize the output file path.
String exportPath = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_MUSIC).getPath() + "/";
AudioEditorLaunchOption.Builder audioEditorLaunch = new AudioEditorLaunchOption.Builder()
// Set the level-1 menus.
.setCustomMenuList(menuList)
// Set the level-2 menus.
.setSecondMenuList(secondMenuList)
// Set the output file path.
.setExportPath(exportPath);
// Launch the audio editing screen with the menu list and customized output file path.
try {
HAEUIManager.getInstance().launchEditorActivity(this, audioEditorLaunch.build(), new LaunchCallback() {
@Override
public void onFailed(int errCode, String errMsg) {
Toast.makeText(mContext, errMsg, Toast.LENGTH_SHORT).show();
}
});
} catch (IOException e) {
e.printStackTrace();
}
Level-1 menus
Level-2 menus
Launch the screen with the specified input audio file paths:
Code:
// Set the input audio file paths.
ArrayList<AudioInfo> audioInfoList = new ArrayList<>();
// Example of an audio file path:
String audioPath = "/storage/emulated/0/Music/Dream_It_Possible.flac";
// Create an instance of AudioInfo and pass the audio file path.
AudioInfo audioInfo = new AudioInfo(audioPath);
// Set the audio name.
audioInfo.setAudioName("Dream_It_Possible");
audioInfoList.add(audioInfo);
AudioEditorLaunchOption.Builder audioEditorLaunch = new AudioEditorLaunchOption.Builder()
// Set the input audio file paths.
.setFilePaths(audioInfoList);
// Launch the audio editing screen with the specified input audio file paths.
try {
HAEUIManager.getInstance().launchEditorActivity(this, audioEditorLaunch.build(), new LaunchCallback() {
@Override
public void onFailed(int errCode, String errMsg) {
Toast.makeText(mContext, errMsg, Toast.LENGTH_SHORT).show();
}
});
} catch (IOException e) {
e.printStackTrace();
}
Launch the screen with drafts:
Code:
// Obtain the draft list. For example:
List<DraftInfo> draftList = HAEUIManager.getInstance().getDraftList();
// Specify the first draft in the draft list.
String draftId = null;
if (!draftList.isEmpty()) {
draftId = draftList.get(0).getDraftId();
}
AudioEditorLaunchOption.Builder audioEditorLaunch = new AudioEditorLaunchOption.Builder()
// Set the draft ID, which can be null.
.setDraftId(draftId)
// Set the draft mode. NOT_SAVE is the default value, which indicates not to save a project as a draft.
.setDraftMode(AudioEditorLaunchOption.DraftMode.SAVE_DRAFT);
// Launch the audio editing screen with drafts.
try {
HAEUIManager.getInstance().launchEditorActivity(this, audioEditorLaunch.build(), new LaunchCallback() {
@Override
public void onFailed(int errCode, String errMsg) {
Toast.makeText(mContext, errMsg, Toast.LENGTH_SHORT).show();
}
});
} catch (IOException e) {
e.printStackTrace();
}
And just like that, SDK integration is complete, and the prototype of the audio editing app I want is ready to use.
Not bad. It has all the necessary functions of an audio editing app, and best of all, it's pretty easy to develop, thanks to the all-in-one and ready-to-use SDK.
Anyway, I tried the spatial audio function preset in the SDK and I found I could effortlessly add more width to a song. However, I also want a customized UI for my app, instead of simply using the one provided by the UI SDK. So my next step is to create a demo with the UI that I have designed and the spatial audio function.
Afterthoughts
Truth be told, the integration process wasn't as smooth as it may have seemed. I encountered two issues, but luckily, after doing some research of my own and contacting the kit's technical support team, I was able to fix both.
The first issue I came across was that after touching the Add effects and AI dubbing buttons, the UI displayed The token has expired or is invalid, and the Android Studio console printed the HAEApplication: please set your app apiKey log. The reason was that the app's authentication information had not been configured. There are two ways of configuring it. The first was introduced in step 1 of SDK Integration in this post, while the second is to use the app's access token, with the following code:
Code:
HAEApplication.getInstance().setAccessToken("your access token");
The second issue, which is actually another result of unconfigured app authentication information, is the Something went wrong error displayed on the screen after an operation. To solve it, first make sure that the app authentication information is configured. Once this is done, go to AppGallery Connect and check whether Audio Editor Kit has been enabled for the app; if not, enable it. Note that because of caches (on either the phone or the server), it may take a while before the kit works for the app.
Also, in the Preparations part, I skipped the step for configuring obfuscation scripts before adding the necessary permissions. According to technical support, this step is necessary for apps that are to be officially released. Since the app covered in this post is just a demo, I left that step out.
Takeaway
No app would be complete without audio, and with spatial audio, you can deliver an even more immersive audio experience to your users.
Developing a spatial audio function for a mobile app can be a piece of cake thanks to HMS Core Audio Editor Kit. The spatial audio capability can be integrated either independently or together with other capabilities via the UI SDK, which delivers a ready-to-use UI, so that you can skip the tricky bits and focus more on what matters to users.
This article is originally from HUAWEI Developer Forum
Forum link: https://forums.developer.huawei.com/forumPortal/en/home
HMS In-App Messaging is a tool to send relevant messages to target users actively using our app to encourage them to use key app functions. For example, we can send in-app messages to encourage users to subscribe to certain products, provide tips on passing a game level, or recommend activities of a restaurant.
In-App Messaging also allows us to customize our messages and the way they are sent, and to define events that trigger message sending to our users at the right moment.
Today I am going to serve you a recipe to integrate In-App Messaging in your apps within 10 minutes.
Key Ingredients Needed
· 1 Android app project, which supports Android 4.2 and later versions
· 1 SHA Key
· 1 Huawei Developer Account
· 1 Huawei phone with HMS 4.0.0.300 or later
Preparation Needed
· First, we need to create an app or project in Huawei AppGallery Connect.
· Provide the SHA key and app package name of the Android project in the App Information section.
· Provide the storage location in the Convention section under Project settings.
· Enable the App Messaging setting in the Manage APIs section.
· After completing all the above points, we need to download agconnect-services.json from the App Information section. Copy and paste the JSON file into the app folder of the Android project.
· Copy and paste the below Maven URL inside the repositories of buildscript and allprojects respectively (project build.gradle file):
Code:
maven { url 'http://developer.huawei.com/repo/' }
· Copy and paste the below classpath inside the dependencies section of the project build.gradle file:
Code:
classpath 'com.huawei.agconnect:agcp:1.3.1.300'
· Copy and paste the below plugin into the app build.gradle file:
Code:
apply plugin: 'com.huawei.agconnect'
· Copy and paste the below library into the dependencies section of the app build.gradle file:
Code:
implementation 'com.huawei.agconnect:agconnect-appmessaging:1.3.1.300'
· Add the below permissions to the AndroidManifest file (see the sketch after this list).
a) android.permission.INTERNET
b) android.permission.ACCESS_NETWORK_STATE
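As a declaration sketch, these correspond to the following entries in AndroidManifest.xml:
Code:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />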
· Now sync Gradle.
Android Code
1. First, we need the AAID for later use in sending in-app messages. To obtain the AAID, we will use the getAAID() method.
2. Add the following code in your project to obtain the AAID:
Code:
HmsInstanceId inst = HmsInstanceId.getInstance(this);
Task<AAIDResult> idResult = inst.getAAID();
idResult.addOnSuccessListener(new OnSuccessListener<AAIDResult>() {
@Override
public void onSuccess(AAIDResult aaidResult) {
String aaid = aaidResult.getId();
Log.d(TAG, "getAAID success:" + aaid );
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
Log.d(TAG, "getAAID failure:" + e);
}
});
3. To initialize the AGConnectAppMessaging instance, we will use:
Code:
AGConnectAppMessaging appMessaging = AGConnectAppMessaging.getInstance();
4. To allow data synchronization from the AppGallery Connect server, we will use:
Code:
appMessaging.setFetchMessageEnable(true);
5. To enable message display:
Code:
appMessaging.setDisplayEnable(true);
6. To specify that the in-app message data must be obtained from the AppGallery Connect server by force, we will use:
Code:
appMessaging.setForceFetch();
Since we are using a test device to demonstrate In-App Messaging, we use the setForceFetch API, which can be used only for message testing. Also, in-app messages can only be displayed to users who have installed our officially released app version.
So far, we have tossed the libraries and the Java code written in Android Studio into the heated pan, and the result looks like this:
Code:
public class MainActivity extends AppCompatActivity {
private String TAG = "MainActivity";
private AGConnectAppMessaging appMessaging;
private HiAnalyticsInstance instance;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
HmsInstanceId inst = HmsInstanceId.getInstance(this);
Task<AAIDResult> idResult = inst.getAAID();
idResult.addOnSuccessListener(new OnSuccessListener<AAIDResult>() {
@Override
public void onSuccess(AAIDResult aaidResult) {
String aaid = aaidResult.getId();
Log.d(TAG, "getAAID success:" + aaid );
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
Log.d(TAG, "getAAID failure:" + e);
}
});
HiAnalyticsTools.enableLog();
instance = HiAnalytics.getInstance(this);
instance.setAnalyticsEnabled(true);
instance.regHmsSvcEvent();
inAppMessaging();
}
private void inAppMessaging() {
appMessaging = AGConnectAppMessaging.getInstance();
appMessaging.setFetchMessageEnable(true);
appMessaging.setDisplayEnable(true);
appMessaging.setForceFetch();
}
}
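Optionally, you can observe message events, for example to log when a message is shown. The listener below is a sketch; verify the listener class and method names against the App Messaging SDK version you integrate.
Code:
// Optional sketch: log when an in-app message is displayed.
appMessaging.addOnDisplayListener(new AGConnectAppMessagingOnDisplayListener() {
    @Override
    public void onMessageDisplay(AppMessage appMessage) {
        Log.d(TAG, "In-app message displayed");
    }
});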
App Gallery Connect
Before we start using one of our key ingredients, i.e. Huawei AppGallery Connect with a Huawei Developer Account, to serve in-app messages, I would like to inform you that we can serve the main dish in three different flavours or types:
A. Pop-up Message Flavour
B. Banner Message Flavour
C. Image Message Flavour
In-App Message Using Pop-up Flavour
Let’s serve the main dish using Pop-up Message Flavour.
Step 1: Go to Huawei AppGallery Connect.
Step 2: Select My projects.
Step 3: After selecting My projects, select the project which we have created earlier. It will look like this:
Step 4: After selecting the project, we will select App Messaging from the menu. It will look like this:
Step 5: Select New button to create new In-App Message to send to the device.
Step 6: Provide the Name and Description. It will look like this:
Step 7: In the Set style and content section, we have to select the type (which is Pop-up), provide a title and a message body, and choose colours for the title text, body text, and message background. It will look like this:
Step 8: After providing the details in the Set style and content section, we will move on to the Image section. We will provide two images here, for the portrait and landscape modes of the app. Remember that for portrait the image aspect ratio should be 3:2 (e.g. 300x200), and for landscape it should be either 1:1 (e.g. 100x100) or 3:2 (e.g. 300x200). It will look like this:
Step 9: We can also provide a button in the pop-up message using the Button section. The button contains an action, which has two options: we can let the user disable the message, or redirect the user to a particular URL. The section will look like this:
Step 10: We will now move to the next section, i.e. the Select Sending Target section. Here we can click New condition to add a condition for matching target users. Condition types include app version, OS version, language, country/region, audience, user attributes, last interaction, and initial startup.
Note: In this article, we didn't use any condition. But you are free to use any condition to send targeted in-app messages.
The section will look like this:
Step 11: The next section is the Set Sending Time section.
a) We can schedule a date and time to send the message.
b) We can also provide an end date and time to stop sending the message.
c) We can also display the message on an event by using the trigger event functionality. For example, we can display a discount on an item in a shopping app. A trigger event is required for each in-app message.
d) Also, we can flexibly set the frequency for displaying the message.
This is how it will look:
Step 12: The last section is the Set Conversion Event section. As of now, we will keep it as none. It will look like this:
Step 13: Click Save in the upper right corner to complete message creation. We can also click Preview to preview the display effect of the message on a mobile phone or tablet.
Note: Do not hit the publish button yet. Just save it.
Step 14: In-app messages can only be displayed to users who have installed our officially released app version. App Messaging allows us to test an in-app message while our app is still under test. To do that, we need to find the message we want to test and click Test in the Operation column, as shown below:
Step 15: Click Add test user and enter the AAID of the test device in the text box. Run the app to find the AAID of the test device in the Logcat window of Android Studio.
Step 16: Finally, we will publish the message by selecting Publish in the Operation column. It will look like this:
The Dish
For more information like this, you can visit the HUAWEI Developer Forum.
Introduction
In-App Messaging helps you engage your app's active users by sending targeted, contextual messages that encourage them to use key app features. For example, you could send an in-app message to get users to subscribe, watch a video, complete a level, or buy an item. You can customize messages as cards, banners, modals, or images, and set up triggers so that they appear exactly when they benefit your users most.
Use Case
1. Add the following image on the Huawei AppGallery Connect console and the Google Firebase console, to be shown as an in-app message in our app.
2. The app installed on a Huawei device will fetch the in-app message (image) from Huawei AppGallery Connect.
3. The app installed on GMS devices will fetch the in-app message (image) from the Google Firebase console.
Huawei In-App Messaging
Prerequisite
1. You must have a Huawei developer account.
2. A Huawei phone with HMS 4.0.0.300 or later.
3. Android Studio, JDK 1.8, SDK platform 26, and Gradle 4.6 installed.
Setup:
1. Create a project in android studio.
2. Get the SHA-256 Key.
3. Create an app in Huawei AppGallery Connect.
4. Enable the App Messaging setting in the Manage APIs section.
5. Provide the SHA-256 Key in App Information Section.
6. Provide storage location.
7. After completing all the above points we need to download the agconnect-services.json from App Information Section. Place the json file in the app folder of the android project.
8. Enter the below Maven URL inside the repositories of buildscript and allprojects respectively (project build.gradle file):
Code:
maven { url 'http://developer.huawei.com/repo/' }
9. Enter the below plugin and dependency in the app build.gradle file:
Code:
apply plugin: 'com.huawei.agconnect'
dependencies {
implementation 'com.huawei.agconnect:agconnect-appmessaging:1.4.0.300'
}
10. Add the below permissions to the AndroidManifest file.
a) android.permission.INTERNET
b) android.permission.ACCESS_NETWORK_STATE
11. Now sync Gradle.
Implementation
1. We need the AAID for later use in sending in-app messages. To obtain the AAID, we will use the getAAID() method.
2. Add the following code in your project to obtain the AAID:
Code:
HmsInstanceId inst = HmsInstanceId.getInstance(this);
Task<AAIDResult> idResult = inst.getAAID();
idResult.addOnSuccessListener(new OnSuccessListener<AAIDResult>() {
@Override
public void onSuccess(AAIDResult aaidResult) {
String aaid = aaidResult.getId();
Log.d(TAG, "getAAID success:" + aaid );
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
Log.d(TAG, "getAAID failure:" + e);
}
});
3. To initialize the AGConnectAppMessaging instance, we will use:
Code:
AGConnectAppMessaging appMessaging = AGConnectAppMessaging.getInstance();
4. To allow data synchronization from the AppGallery Connect server, we will use:
Code:
appMessaging.setFetchMessageEnable(true);
5. To enable message display:
Code:
appMessaging.setDisplayEnable(true);
6. To specify that the in-app message data must be obtained from the AppGallery Connect server by force, we will use:
Code:
appMessaging.setForceFetch();
Since we are using a test device to demonstrate In-App Messaging, we use the setForceFetch API, which can be used only for message testing. Also, in-app messages can only be displayed to users who have installed our officially released app version.
Creating an Image In-App Message in AppGallery Connect
1. Sign in to AppGallery Connect and select My projects.
2. Select your project from the project list.
3. Navigate to Growing > App Messaging and click New.
4. Set Name and Description for your in-app message.
5. Provide the in-app message type. For an image in-app message, select Image as the type.
6. Provide the image URL.
7. Provide the action for image tapping.
8. Click Next to move on to the Select Sending Target section. Here we can define conditions for matching target users. In this article, we did not use any condition.
9. The next section is the Set Sending Time section.
We can schedule a date and time to send the in-app message.
We can also provide an end date and time to stop sending the message.
We can also display the message on an event by using the trigger event functionality. For example, we can display a discount on an item in a shopping app. A trigger event is required for each in-app message.
Also, we can flexibly set the frequency for displaying the in-app message.
10. Click Next and find Set conversion events. This associates the display or tap of the app message with a conversion event; the conversion relationship will be displayed in statistics. As of now, we will keep it as none.
11. Click Save in the upper right corner to complete message creation.
12. In-app messages can only be displayed to users who have installed our officially released app version. App Messaging allows us to test an in-app message while our app is still under test. Find the message that you need to test, and click Test in the Operation column as shown below:
13. Provide the AAID of the test device in the text box. Click Save.
14. Click Publish.
Google firebase In-App Messaging
Steps:
1. To enable Firebase in your app, add the following lines to your root-level build.gradle file:
Code:
buildscript {
repositories {
// Check that you have the following line (if not, add it):
google() // Google's Maven repository
}
dependencies {
// ...
// Add the following line:
classpath 'com.google.gms:google-services:4.3.3' // Google Services plugin
}
}
allprojects {
// ...
repositories {
// Check that you have the following line (if not, add it):
google() // Google's Maven repository
// ...
}
}
2. In your module (app-level) Gradle file (usually app/build.gradle), apply the Google Services Gradle plugin:
Code:
apply plugin: 'com.android.application'
// Add the following line:
apply plugin: 'com.google.gms.google-services' // Google Services plugin
android {
// ...
}
3. To your module (app-level) Gradle file (usually app/build.gradle), add the dependencies.
Code:
dependencies {
// Add the Firebase SDK for Google Analytics
implementation 'com.google.firebase:firebase-analytics:17.4.4'
implementation 'com.google.firebase:firebase-auth:19.3.2'
}
4. The testing device is identified by a Firebase installation ID (FID). Find your test app's FID by checking Logcat in Android Studio for the following Info-level log:
Code:
I/FIAM.Headless: Starting InAppMessaging runtime with Installation ID YOUR_INSTALLATION_ID
We will add this FID while composing the in-app message in the Firebase console.
5. Set up your new campaign on the Firebase console's In-App Messaging page.
6. Select your project and click Create a new campaign.
7. In this article, we will use an image in-app message. Select the Image only message layout, provide the image URL, and click Next.
8. Provide campaign name and description and define your target user. Here we have not defined any target user.
9. Click Next to define campaign start time, end time, trigger event and frequency limit of in-app message.
10. Click Next and provide conditional events and additional options. These are optional. Click Save as draft.
11. Select Test on Device, and add the FID generated earlier.
12. Run the application on a GMS device, and you will see the Google in-app message.
To check whether the device has HMS or GMS installed:
Code:
public class Utils {
public static boolean isSignedIn = false;
public static boolean isGooglePlayServicesAvailable(Activity activity) {
GoogleApiAvailability googleApiAvailability = GoogleApiAvailability.getInstance();
int status = googleApiAvailability.isGooglePlayServicesAvailable(activity);
if(status != ConnectionResult.SUCCESS) {
if(googleApiAvailability.isUserResolvableError(status)) {
googleApiAvailability.getErrorDialog(activity, status, 2404).show();
}
return false;
}
return true;
}
public static boolean isHuaweiMobileServicesAvailable(Context context) {
if (HuaweiApiAvailability.getInstance().isHuaweiMobileServicesAvailable(context) == ConnectionResult.SUCCESS){
return true;
}
return false;
}
public static boolean isDeviceHuaweiManufacturer () {
String manufacturer = Build.MANUFACTURER;
Log.d("Device : " , manufacturer);
if (manufacturer.toLowerCase().contains("huawei")) {
return true;
}
return false;
}
}
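With these helpers in place, a simple branching sketch (hypothetical wiring, not part of the original post) could decide which messaging backend to rely on at startup:
Code:
// Hypothetical wiring: prefer HMS In-App Messaging on devices with HMS,
// and fall back to Firebase In-App Messaging elsewhere.
if (Utils.isHuaweiMobileServicesAvailable(this)) {
    AGConnectAppMessaging appMessaging = AGConnectAppMessaging.getInstance();
    appMessaging.setFetchMessageEnable(true);
    appMessaging.setDisplayEnable(true);
} else if (Utils.isGooglePlayServicesAvailable(this)) {
    // Firebase In-App Messaging starts automatically once its SDK is integrated;
    // no explicit initialization call is needed here.
}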
References:
https://developer.huawei.com/consumer/en/doc/development/AppGallery-connect-Guides/agc-appmessage-introduction
https://firebase.google.com/docs/in-app-messaging/get-started?platform=android
More information like this, you can visit HUAWEI Developer Forum
Original link: https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0201327829880650040&fid=0101187876626530001
"Native advertising" is a term first coined by Fred Wilson at the Online Media, Marketing, and Advertising Conference in 2011. Native advertising is a form of paid media where the ad experience follows the natural form and function of the user experience in which it is placed.
Huawei Ads Kit offers a variety of ad formats to help you reach your marketing goals in today's competitive world.
This article demonstrates an easy and efficient way to implement native ads within your application using Huawei Ads Kit.
Prerequisite
1. Must have a Huawei Developer account.
2. Must have a Huawei phone with HMS 4.0.0.300 or later.
3. React Native environment with Android Studio, Node.js, and Visual Studio Code.
Major Dependencies
1. React Native CLI: 2.0.1
2. Gradle version: 6.0.1
3. Gradle plugin version: 3.5.2
4. React Native Ads Kit SDK: 4.0.4
5. react-native-hms-ads Gradle dependency
6. AGCP Gradle dependency
Development Overview
Preparation
1. Create an app or project in Huawei AppGallery Connect.
2. Provide the SHA key and app package name of the project in the App Information section, and enable the required API.
3. Create a React Native project using the below command:
"react-native init project name"
4. Download the React Native Ads Kit SDK and paste it under the node_modules directory of the React Native project.
Tips
1. agconnect-services.json is not required for integrating the hms-ads-sdk.
2. Run the below commands in the project directory using the CLI if you cannot find the node_modules directory.
Code:
“npm install” & “npm link”
Integration
1. Configure android level build.gradle
Add to buildscript/repositories and allprojects/repositories
Code:
maven {url 'http://developer.huawei.com/repo/'}
2. Configure app level build.gradle. (Add to dependencies)
Code:
implementation project(':react-native-hms-ads')
3. Linking the HMS Ads Kit Sdk.
Run below command in the project directory
Code:
react-native link react-native-hms-ads
Development Process
Once the SDK is integrated and ready to use, add the code below to your App.js file, which imports the required APIs. The development process covers the following steps:
Adding a Native Ad
Configuring Properties
Executing Commands
Testing
HMS native ads align with the application and device layout seamlessly; however, they can be customized as well.
Adding a Native Ad
The HMSNative module is added to the application to work with native ad components. The height can be customized with the help of the style prop.
Code:
import {
  HMSNative,
} from 'react-native-hms-ads';

<HMSNative
  style={{height: 322}}/>
Configuring Properties
Native ad properties can be configured in various ways:
1. For handling different media types, create a different ad slot ID for each media type.
2. Custom listeners can also be added to listen for different events.
3. Specific native ads can be implemented and requested to target a specific audience.
4. The position of the ad component can be customized and adjusted.
5. The view options for the texts on the native ad components can also be changed.
Import the below modules for customizing the native ads as required.
Code:
import {
HMSNative,
NativeMediaTypes,
ContentClassification,
Gender,
NonPersonalizedAd,
TagForChild,
UnderAge,
ChoicesPosition,
Direction,
AudioFocusType,
ScaleType
} from 'react-native-hms-ads';
Note: Developers can use publisher services to create the ad slot IDs. Refer to this article to learn the process for creating the slot IDs.
Create a function in the App.js file as below and add the ad slot IDs.
Code:
const Native = () => {
let nativeAdIds = {};
nativeAdIds[NativeMediaTypes.VIDEO] = 'testy63txaom86';
nativeAdIds[NativeMediaTypes.IMAGE_SMALL] = 'testb65czjivt9';
nativeAdIds[NativeMediaTypes.IMAGE_LARGE] = 'testu7m3hc4gvm';
Code:
//Setting up the media type
const [displayForm, setDisplayForm] = useState({
mediaType: NativeMediaTypes.VIDEO,
adId: nativeAdIds.video,
});
Code:
//Add below code for different configuration
nativeConfig={{
choicesPosition: ChoicesPosition.BOTTOM_RIGHT,
mediaDirection: Direction.ANY,
// mediaAspect: 2,
// requestCustomDislikeThisAd: false,
// requestMultiImages: false,
// returnUrlsForImages: false,
// adSize: {
// height: 100,
// width: 100,
// },
videoConfiguration: {
audioFocusType: AudioFocusType.NOT_GAIN_AUDIO_FOCUS_ALL,
// clickToFullScreenRequested: true,
// customizeOperateRequested: true,
startMuted: true,
},
}}
viewOptions={{
showMediaContent: false,
mediaImageScaleType: ScaleType.FIT_CENTER,
// adSourceTextStyle: {color: 'red'},
// adFlagTextStyle: {backgroundColor: 'red', fontSize: 10},
// titleTextStyle: {color: 'red'},
descriptionTextStyle: {visibility: false},
callToActionStyle: {color: 'black', fontSize: 12},
}}
Executing Commands
To load the ad on the button click, the loadAd() API is used.
Code:
title="Load"
onPress={() => {
if (adNativeElement !== null) {
adNativeElement.loadAd();
}
}}
To dislike the ad on the button click, the dislikeAd() API is used.
Code:
title="Dislike"
color="orange"
onPress={() => {
if (adNativeElement !== null) {
adNativeElement.dislikeAd('Because I dont like it');
}
}}
To allow custom clicks on the ad, the setAllowCustomClick() API is used on the button click.
Code:
title="Allow custom click"
onPress={() => {
if (adNativeElement !== null) {
adNativeElement.setAllowCustomClick();
}
}}
To report an ad impression on the button click:
Code:
title="Record impression"
color="red"
onPress={() => {
if (adNativeElement !== null) {
adNativeElement.recordImpressionEvent({
impressed: true,
isUseful: 'nope',
});
}
}}
To navigate to the "Why this ad" page, the gotoWhyThisAdPage() API is used.
Code:
title="Go to Why Page"
color="purple"
onPress={() => {
if (adNativeElement !== null) {
adNativeElement.gotoWhyThisAdPage();
}
}}
To record a click event on the button click, the recordClickEvent() API is used.
Code:
title="Record click event"
color="green"
onPress={() => {
if (adNativeElement !== null) {
adNativeElement.recordClickEvent();
}
}}
The below listeners are implemented to handle the start, play, and end events of video ads.
Code:
onVideoStart={(e) => toast('HMSNative onVideoStart')}
onVideoPlay={(e) => toast('HMSNative onVideoPlay')}
onVideoEnd={(e) => toast('HMSNative onVideoEnd')}
Testing
Run the below command to build the project
Code:
react-native run-android
After a successful build, run the below command in the android directory of the project to create the signed APK.
Code:
gradlew assembleRelease
Results
Conclusion
Adding native ads seems very easy. Stay tuned for more ads activities.
Article Introduction
In this article, we will show how to integrate Huawei ML Kit (real-time language detection and real-time language translation) in iOS using the native language (Swift). The use case has been created to build a Smart Translator supporting more than 38 languages with HMS open capabilities.
Huawei ML Kit (Real-time Language Detection)
The real-time language detection service can detect the language of text. Both single-language text and multi-language text are supported. ML Kit detects languages in text and returns the language codes (the BCP-47 standard is used for Traditional Chinese, and the ISO 639-1 standard is used for other languages) and their respective confidences or the language code with the highest confidence. Currently, the real-time language detection service supports 109 languages.
Huawei ML Kit (Real-time Language Translation)
The real-time translation service can translate text from the source language into the target language through the server on the cloud. Currently, real-time translation supports 38 languages.
For this article, we implemented cloud based real-time Language Detection and real-time Language Translation for iOS with native Swift language.
Pre-Requisites
Before getting started, following are the requirements:
Xcode (during this tutorial, we used the latest version, 12.4)
iOS 9.0 or later (ML Kit supports iOS 9.0 and above)
Apple Developer Account
iOS device for testing
Development
Following are the major steps of development for this article:
Step 1: Importing the SDK in Pod Mode.
1.1: Check whether CocoaPods has been installed:
Code:
gem -v
If not, run the following commands to install CocoaPods:
Code:
sudo gem install cocoapods
pod setup
1.2: Run the pod init command in the root directory of the Xcode project and add the pods with the current version numbers to the generated Podfile.
Code:
pod "ViewAnimator" # ViewAnimator for cool animations
pod 'lottie-ios' # Lottie for Animation
pod 'MLTranslate', '~>2.0.5.300' # Real-time translation
pod 'MLLangDetection', '~>2.0.5.300' # Real-Time Language Detection
1.3: Run the following command in the directory of the Podfile to integrate the SDKs:
Code:
pod install
If you have used CocoaPods before, run the following command to update the pods:
Code:
pod update
1.4: After the execution is successful, open the project directory, find the .xcworkspace file, and open it.
Step 2: Generating Supported Language JSON.
Since our main goal is Smart Translator, we restricted real-time language detection to 38 languages and generated a JSON file locally to avoid creating and calling an API. In a real-world scenario, an API can be developed, or Huawei ML Kit can be used to get all the supported languages.
Step 3: Building Layout.
We used Auto Layout. Auto Layout defines your user interface using a series of constraints. Constraints typically represent a relationship between two views. Auto Layout then calculates the size and location of each view based on these constraints. This produces layouts that dynamically respond to both internal and external changes.
In this article, we also used a Lottie animation for the splash screen and for the loading animation shown while the user translates anything. We also used the ViewAnimator library to animate the history UITableView items.
Code:
func showAppIntro(){
DispatchQueue.main.async {
self.animationView.animation = Animation.named("intro_animation")
self.animationView.contentMode = .scaleAspectFit
self.animationView.play(fromFrame: AnimationFrameTime.init(30), toFrame: AnimationFrameTime.init(256), loopMode: .playOnce) { (completed) in
// Let's open Other Screen once the animation is completed
self.performSegue(withIdentifier: "goToServiceIntro", sender: nil)
}
}
}
func showLoader(){
DispatchQueue.main.async {
self.animationView.isHidden = false
self.animationLoader.play()
}
}
func hideLoader(){
DispatchQueue.main.async {
self.animationView.isHidden = true
self.animationLoader.stop()
}
}
func loadAnimateTableView(){
let fromAnimation = AnimationType.vector(CGVector(dx: 30, dy: 0))
let zoomAnimation = AnimationType.zoom(scale: 0.2)
self.historyList.append(contentsOf: AppUtils.getTranslationHistory())
self.historyTableView.reloadData()
UIView.animate(views: self.historyTableView.visibleCells,
animations: [fromAnimation, zoomAnimation], delay: 0.5)
}
Step 4: Integrating ML Kit
By default, Auto is selected, which detects the entered language using the ML Kit real-time language detection APIs. The user can also swap the languages if Auto is not selected. Once the user enters the text and presses Enter, the ML Kit real-time language translation APIs are called and the result is displayed in the other box.
Code:
// This extension is responsible for MLLangDetect and MLTranslate related functions
extension HomeViewController {
func autoDetectLanguage(enteredText: String){
if enteredText.count > 1 {
self.txtLblResult.text = "" // Reset the translated text
self.showLoader()
DispatchQueue.main.asyncAfter(deadline: .now() + 3) {
self.mlRemoteLangDetect?.syncFirstBestDetect(enteredText, addOnSuccessListener: { (lang) in
// Get the Language that user entered, incase unable to identify, please change auto to your language
let detectedLanguage = AppUtils.getSelectedLanguage(langCode: lang)
if detectedLanguage == nil {
self.hideLoader()
self.displayResponse(message: "Oops! We are not able to detect your language 🧐 Please select your language from the list for better results 😉")
return // No Need to run the remaining code
}
self.langFrom = detectedLanguage!
// Once we detect the language, let's add Auto suffix to let user know that it's automatically detected
let langName = "\(String(describing: self.langFrom!.langName)) - Auto"
self.langFrom!.langName = langName
// Let's update the buttons titles
self.setButtonsTitle()
// Let's do the translation now
self.translateText(enteredText: enteredText)
}, addOnFilureListener: { (exception) in
self.hideLoader()
self.displayResponse(message: "Oops! We are unable to process your request at the moment 😞")
})
}
}
}
func translateText(enteredText: String){
// Let's Init the translator with selected languages
self.initLangTranslate()
if enteredText.count > 1 {
self.txtLblResult.text = "" // Reset the translated text
self.showLoader()
DispatchQueue.main.asyncAfter(deadline: .now() + 3) {
MLRemoteTranslator.sharedInstance().syncTranslate(enteredText) { (translatedText) in
self.txtLblResult.text = translatedText
self.saveTranslationHistory() // This function will save translation history
self.hideLoader()
} addOnFailureListener: { (exception) in
self.hideLoader()
self.displayResponse(message: "Oops! We are unable to process your request at the moment 😞")
}
}
} else {
self.hideLoader()
self.displayResponse(message: "Please write something 🧐")
}
}
func saveTranslationHistory(){
AppUtils.saveData(fromText: edtTxtMessage.text!, toText: txtLblResult.text!, fromLang: self.langFrom!.langName, toLang: self.langTo!.langName)
}
}
Step 5: Save the translation history locally on the device.
After getting the result, we call helper functions to save the data and retrieve it via UserDefaults when needed. We also provide an option to delete all data on the History screen.
Code:
static func saveData(fromText: String, toText: String, fromLang: String, toLang: String){
var history = self.getTranslationHistory()
history.insert(TranslationHistoryModel.init(dateTime: getCurrentDateTime(), fromText: fromText, toText: toText, fromLang: fromLang, toLang: toLang), at: 0)
do {
let encodedData = try NSKeyedArchiver.archivedData(withRootObject: history, requiringSecureCoding: false)
UserDefaults.standard.set(encodedData, forKey: "TranslationHistory")
UserDefaults.standard.synchronize()
} catch {
print(error)
}
}
static func clearHistory(){
let history: [TranslationHistoryModel] = []
do {
let encodedData = try NSKeyedArchiver.archivedData(withRootObject: history, requiringSecureCoding: false)
UserDefaults.standard.set(encodedData, forKey: "TranslationHistory")
UserDefaults.standard.synchronize()
} catch {
print(error)
}
}
static func getTranslationHistory() -> [TranslationHistoryModel]{
let decoded = UserDefaults.standard.data(forKey: "TranslationHistory")
if decoded != nil {
do {
let result = try NSKeyedUnarchiver.unarchiveTopLevelObjectWithData(decoded!) as? [TranslationHistoryModel]
if result != nil {
return result!
} else {
return []
}
} catch {
print(error)
return []
}
} else {
return []
}
}
Step 6: Displaying History in UITableView.
We then display all the items in a UITableView:
Code:
// This extension is responsible for UITableView related things
extension HistoryViewController: UITableViewDelegate, UITableViewDataSource {
func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
return self.historyList.count
}
func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
let cell = tableView.dequeueReusableCell(withIdentifier: "HistoryTableViewCell", for: indexPath) as! HistoryTableViewCell
let entity: TranslationHistoryModel = self.historyList[indexPath.row]
cell.txtLblFrom.text = entity.fromLang
cell.txtLblFromText.text = entity.fromText
cell.txtLblTo.text = entity.toLang
cell.txtLblToText.text = entity.toText
cell.txtDateTime.text = entity.dateTime
cell.index = indexPath.row
cell.setCellBackground()
return cell
}
}
Step 7: Initialize MLTranslate and MLLangDetect with the API key.
This is a very important step. We have to add the following lines of code in AppDelegate.swift:
Code:
func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
// Setting API key so that we can use ML Kit for translation
MLTranslateApplication.sharedInstance().setApiKey(AppUtils.API_KEY)
MLLangDetectApplication.sharedInstance().setApiKey(AppUtils.API_KEY)
return true
}
Step 8: Run the application.
We have added all the required code. Now just build the project, run the application, and test it on any iOS phone. In this demo, we used an iPhone 11 Pro Max for testing purposes.
Conclusion
Whenever users travel to a new place, country, or region, they can use this app to translate text from their native language into the language spoken there. Once they are done with a translation, they can also check the translation history or show it to someone, so that they can communicate with the locals easily and conveniently with Smart Translator.
Using ML Kit, developers can build different iOS applications with an auto-detect option to improve the UI/UX. ML Kit is an on-device and on-cloud open capability offered by Huawei, which can be combined with other functionalities to offer innovative services to end users.
Tips and Tricks
Before calling the ML Kit, make sure the required agconnect-services.plist is added to the project and ML Kit APIs are enabled from the AGConnect console.
ML Kit must be initiated with the API Key in the AppDelegate.swift.
There are no special permissions needed for this app. However, make sure that the device is connected to the Internet and has an active connection.
Always use animation libraries like Lottie or ViewAnimator to enhance UI/UX in your application.
References
Huawei ML Kit Official Documentation: click here
Huawei ML Kit FAQs: click here
Lottie iOS Documentation: click here
Github Code Link: click here
Original Source
Personalized health records and visual tools have been a godsend for digital health management, giving users the tools to conveniently track their health on their mobile phones. From diet to weight and fitness and beyond, storing, managing, and sharing health data has never been easier. Users can track their health over a specific period of time, like a week or a month, to identify potential diseases in a timely manner, and to lead a healthy lifestyle. Moreover, with personalized health records in hand, trips to the doctor now lead to quicker and more accurate diagnoses. Health Kit takes this new paradigm into overdrive, opening up a wealth of capabilities that can endow your health app with nimble, user-friendly features.
With the basic capabilities of Health Kit integrated, your app will be able to obtain users' health data on the cloud from the Huawei Health app, after obtaining users' authorization, and then display the data to users.
Effects
This demo is modified based on the sample code of Health Kit's basic capabilities. You can download the demo and try it out to build your own health app.
Preparations
Registering an Account and Applying for the HUAWEI ID Service
Health Kit uses the HUAWEI ID service, so you need to apply for the HUAWEI ID service first. Skip this step if you have already done so for your app.
Applying for the Health Kit Service
Apply for the data read and write scopes for your app. Find the Health Kit service in the Development section on HUAWEI Developers, apply for it, and select the data scopes required by your app. In the demo, the height and weight data are applied for, which are unrestricted data and will be quickly approved after your application is submitted. If you want to apply for restricted data scopes such as heart rate, blood pressure, blood glucose, and blood oxygen saturation, your application will be manually reviewed.
Integrating the HMS Core SDK
Before getting started, integrate the Health SDK of the basic capabilities into the development environment.
Use Android Studio to open the project, and find and open the build.gradle file in the root directory of the project. Go to allprojects > repositories and buildscript > repositories to add the Maven repository address for the SDK.
Code:
maven {url 'https://developer.huawei.com/repo/'}
Open the app-level build.gradle file and add the following build dependency to the dependencies block.
Code:
implementation 'com.huawei.hms:health:{version}'
Open the modified build.gradle file again. You will find a Sync Now link in the upper right corner of the page. Click Sync Now and wait until the synchronization is complete.
Configuring the Obfuscation Configuration File
Before building the APK, configure the obfuscation configuration file to prevent the HMS Core SDK from being obfuscated.
Open the obfuscation configuration file proguard-rules.pro in the app's root directory of the project, and add configurations to exclude the HMS Core SDK from obfuscation.
Code:
-ignorewarnings
-keepattributes *Annotation*
-keepattributes Exceptions
-keepattributes InnerClasses
-keepattributes Signature
-keepattributes SourceFile,LineNumberTable
-keep class com.huawei.hianalytics.**{*;}
-keep class com.huawei.updatesdk.**{*;}
-keep class com.huawei.hms.**{*;}
Importing the Certificate Fingerprint, Changing the Package Name, and Configuring the JDK Build Version
Import the keystore file generated when the app is created. After the import, open the app-level build.gradle file to view the import result.
Change the app package name to the one you set when applying for the HUAWEI ID service.
Open the app-level build.gradle file and add the compileOptions configuration to the android block as follows:
Code:
compileOptions {
sourceCompatibility = '1.8'
targetCompatibility = '1.8'
}
Main Implementation Code
1. Start the screen for login and authorization.
Code:
/**
* Add scopes that you are going to apply for and obtain the authorization intent.
*/
private void requestAuth() {
// Add scopes that you are going to apply for. The following is only an example.
// You need to add scopes for your app according to your service needs.
String[] allScopes = Scopes.getAllScopes();
// Obtain the authorization intent.
// True indicates that the Huawei Health app authorization process is enabled; False otherwise.
Intent intent = mSettingController.requestAuthorizationIntent(allScopes, true);
// The authorization screen is displayed.
startActivityForResult(intent, REQUEST_AUTH);
}
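The authorization result then comes back through onActivityResult. A handling sketch, based on the Health Kit sample code (verify the method names against your SDK version), might look like this:
Code:
@Override
protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == REQUEST_AUTH) {
        // Parse the authorization result from the returned intent.
        HealthKitAuthResult result = mSettingController.parseHealthKitAuthResultFromIntent(data);
        if (result != null && result.isSuccess()) {
            logger("Authorization succeeded");
        } else {
            logger("Authorization failed or was canceled");
        }
    }
}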
2. Call readLatestData() of the DataController class in the com.huawei.hms.hihealth package to read the latest health-related data, including height, weight, heart rate, blood pressure, blood glucose, and blood oxygen saturation.
Code:
/**
* Read the latest data according to the data type.
*
* @param view (indicating a UI object)
*/
public void readLatestData(View view) {
// 1. Call the data controller using the specified data type (DT_INSTANTANEOUS_HEIGHT) to query data.
// Query the latest data of this data type.
List<DataType> dataTypes = new ArrayList<>();
dataTypes.add(DataType.DT_INSTANTANEOUS_HEIGHT);
dataTypes.add(DataType.DT_INSTANTANEOUS_BODY_WEIGHT);
dataTypes.add(DataType.DT_INSTANTANEOUS_HEART_RATE);
dataTypes.add(DataType.DT_INSTANTANEOUS_STRESS);
dataTypes.add(HealthDataTypes.DT_INSTANTANEOUS_BLOOD_PRESSURE);
dataTypes.add(HealthDataTypes.DT_INSTANTANEOUS_BLOOD_GLUCOSE);
dataTypes.add(HealthDataTypes.DT_INSTANTANEOUS_SPO2);
Task<Map<DataType, SamplePoint>> readLatestDatas = dataController.readLatestData(dataTypes);
// 2. Calling the data controller to query the latest data is an asynchronous operation.
// Therefore, a listener needs to be registered to monitor whether the data query is successful or not.
readLatestDatas.addOnSuccessListener(new OnSuccessListener<Map<DataType, SamplePoint>>() {
@Override
public void onSuccess(Map<DataType, SamplePoint> samplePointMap) {
logger("Success read latest data from HMS core");
if (samplePointMap != null) {
for (DataType dataType : dataTypes) {
if (samplePointMap.containsKey(dataType)) {
showSamplePoint(samplePointMap.get(dataType));
handleData(dataType);
} else {
logger("The DataType " + dataType.getName() + " has no latest data");
}
}
}
}
});
readLatestDatas.addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
String errorCode = e.getMessage();
String errorMsg = HiHealthStatusCodes.getStatusCodeMessage(Integer.parseInt(errorCode));
logger(errorCode + ": " + errorMsg);
}
});
}
Each returned SamplePoint carries the specific data type and data value. You can obtain the corresponding data by parsing the object, as sketched below.
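The showSamplePoint() helper referenced in the code above is not shown in the original post; a sketch of it, assuming the Field accessors used in the Health Kit sample code, could iterate over the fields of each data type:
Code:
// Sketch: print every field value carried by a SamplePoint.
private void showSamplePoint(SamplePoint samplePoint) {
    if (samplePoint == null) {
        return;
    }
    for (Field field : samplePoint.getDataType().getFields()) {
        logger("Field: " + field.getName() + " Value: " + samplePoint.getFieldValue(field));
    }
}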
Conclusion
Personal health records make it much easier for users to stay informed about their health. They help track health data over specific periods of time, such as week by week or month by month, providing invaluable insight and making proactive health a day-to-day reality. When developing a health app, integrating data-related capabilities can help streamline the process, allowing you to focus your energy on app design and user features, to bring users a smart, handy health assistant.
References
HUAWEI Developers
HMS Core Health Kit Development Guide
Integrating the HMS Core SDK