What is Huawei Image Kit?
Nowadays, image editing is a must in image-related applications such as social media and networking apps. For editing images, Huawei offers Image Kit. Huawei Image Kit provides the Image Vision service, with 24 unique color filters, and the Image Render service, with five basic animations and nine advanced animations. Huawei describes the kit as follows: “HUAWEI Image Kit incorporates powerful scene-specific smart design and animation production functions into your app, giving it the power of efficient image content reproduction while providing a better image editing experience for your users.” All APIs provided by Image Kit are also free of charge.
Restrictions
Image Kit supports Huawei devices with HMS Core 4.0.2 or later running Android 8.0 or later. Image Kit 1.0.3 can also be used on non-Huawei mobile devices if you add the fallback-SDK dependency.
Image Vision service
Filter: The image size is not greater than 15 MB, the image resolution is not greater than 8000 x 8000, and the aspect ratio is between 1:3 and 3:1.
Smart layout: The aspect ratio is 9:16, and the image size is not greater than 10 MB. If the aspect ratio is not 9:16, the image will be cropped to the aspect ratio of 9:16.
Image tagging: The recommended image resolution is 224 x 224 and the aspect ratio is 1:1.
Image cropping: The recommended image resolution is greater than 800 x 800.
A larger image size can lead to longer parsing and response time as well as higher memory and CPU usage and power consumption.
Image Render service
The recommended image size is not greater than 10 MB. A larger image size can lead to longer parsing and response time as well as higher memory and CPU usage and power consumption, and can even result in frame freezing and black screen. To ensure interactive effects, only full-screen display is supported for animation views; that is, the animation view returned by the Image Render service can only be displayed in full screen. If the performance of a mobile phone is not sufficient for animation recording, set the GIF compression rate to 0.3 or lower and the frame rate to 20 or lower.
For detailed explanations and more information on restrictions, refer to the official Image Kit documentation.
Development Process
To start developing our image editing app, we first need to integrate HMS Core; it is mandatory for using HMS kits and services. You can use these guides to complete the HMS Core integration.
After integrating HMS Core into our project, we need to declare the required permissions in the AndroidManifest.xml file.
Code:
<!-- Required permissions for the Image Vision service -->
<uses-permission android:name="android.permission.INTERNET"/>
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE"/>
<!-- Required permissions for the Image Render service -->
<uses-permission android:name="android.permission.INTERNET"/>
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE"/>
And finally, we need to add the dependencies to our app-level Gradle file. You can take a look at the version changes for Image Kit here.
Code:
dependencies {
    implementation 'com.huawei.hms:image-render:{version}'
    implementation 'com.huawei.hms:image-vision:{version}'
}
//Replace {version} with the actual SDK version of the service you use, for example:
//Image Vision: com.huawei.hms:image-vision:1.0.3.301 - Current version of Image Vision
//Image Render: com.huawei.hms:image-render:1.0.3.301 - Current version of Image Render
Once the project is synced with the Gradle files, we are ready to start coding a simple photo editing application.
Step 1: Base Activity
First, we will create a base activity that both the Image Render and Image Vision activities extend. The base activity contains a function that creates a JSONObject, which we simply call authJson. This object holds the parameters required for authentication.
Code:
private var projectDetailsString = "{\"projectId\":\"projectIdTest\",\"appId\":\"appIdTest\",\"authApiKey\":\"authApiKeyTest\",\"clientSecret\":\"clientSecretTest\",\"clientId\":\"clientIdTest\",\"token\":\"tokenTest\"}"
private var authJson: JSONObject? = null

fun initAuthJson() {
    try {
        // Parse the project details string into the authentication JSON object.
        authJson = JSONObject(projectDetailsString)
    } catch (e: JSONException) {
        Log.e(Constants.logTag, e.toString())
    }
}
Step 2: Image Render
We will develop the Image Render activity first. For that, we need to create a path for our resources, ask the user for permissions, initialize the view, and initialize authJson by calling the initAuthJson function (a sample initPermission implementation is sketched after the code below).
Code:
var sourcePath: String? = null
private val sourcePathName = "sources"

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_render)
    sourcePath = filesDir.path + File.separator + sourcePathName
    initView()
    initAuthJson()
    initPermission()
}
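The initPermission() call above is not shown in the article. One possible Java implementation using the standard Android runtime-permission APIs is sketched below; the request code and the decision to continue with initImageRender() once the permission is granted are assumptions.
Code:
// One possible implementation of initPermission(), using standard runtime-permission APIs.
// REQUEST_CODE_STORAGE and the follow-up call to initImageRender() are assumptions.
private static final int REQUEST_CODE_STORAGE = 1;

private void initPermission() {
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.READ_EXTERNAL_STORAGE)
            != PackageManager.PERMISSION_GRANTED) {
        ActivityCompat.requestPermissions(this,
                new String[]{Manifest.permission.READ_EXTERNAL_STORAGE}, REQUEST_CODE_STORAGE);
    } else {
        // The permission is already granted; continue with Image Render initialization.
        initImageRender();
    }
}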
To use the Image Render capabilities, we need to initialize the service by obtaining an ImageRender instance. If the instance is obtained successfully, the onSuccess method is called and we call initRenderView to initialize our views for rendering. If anything goes wrong, the onFailure method is called and we can log the error.
Code:
var imageRenderAPI: ImageRenderImpl? = null

private fun initImageRender() {
    ImageRender.getInstance(this, object : ImageRender.RenderCallBack {
        override fun onSuccess(imageRender: ImageRenderImpl?) {
            imageRenderAPI = imageRender
            initRenderView()
        }

        override fun onFailure(errorCode: Int) {
            Log.e(Constants.logTag, "Error Code: $errorCode")
        }
    })
}
After a successful attempt, our variable holds the ImageRender instance and we can start initializing the render view. In the initRenderView function, we null-check the imageRenderAPI variable and, if everything is in order, obtain the initialization result. A successful initialization returns “0”. The Huawei documentation explains the render view process as follows: “After the getRenderView() API is called, the Image Render service parses the image and script in sourcePath and returns the rendered views to the app. User interaction is supported for advanced animation views, such as particles and ripples. For better interaction effects, it is recommended that the obtained rendered view be displayed in full screen.” There is also a second method for obtaining views; for an explanation of both methods, you can take a look at this documentation.
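As a minimal Java sketch, initRenderView could look like the following. The doInit(), getRenderView(), and ResultCode.SUCCEED names are taken from the Image Render sample code as I recall it, and contentView is a hypothetical full-screen container in the activity layout, so verify the names against the SDK version you use.
Code:
// Minimal sketch of initRenderView(), based on the documented Image Render flow.
// contentView is a hypothetical full-screen container; API names should be verified against the current SDK.
private void initRenderView() {
    if (imageRenderAPI == null) {
        Log.e("RenderActivity", "Image Render is not initialized");
        return;
    }
    // Bind the image and script resources placed in sourcePath; 0 indicates a successful initialization.
    int initResult = imageRenderAPI.doInit(sourcePath, authJson);
    if (initResult == 0) {
        RenderView renderView = imageRenderAPI.getRenderView();
        if (renderView.getResultCode() == ResultCode.SUCCEED && renderView.getView() != null) {
            // Display the returned animation view in full screen.
            contentView.addView(renderView.getView());
        }
    } else {
        Log.e("RenderActivity", "Init result: " + initResult);
    }
}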
For more details, you can check https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0202387480145110483&fid=0101187876626530001&channelname=HuoDong59&ha_source=xda
Is there any restriction on image size? What is the minimum SDK version that supports this service?
Related
1. Intro
If you're looking to develop a customized and cost-effective deep learning model, you'd be remiss not to try the recently released custom model service in HUAWEI ML Kit. This service gives you the tools to manage the size of your model, and provides simple APIs for you to perform inference. The following is a demonstration of how you can run your model on a device at a minimal cost.
The service allows you to pre-train image classification models, and the steps below show you the process for training and using a custom model.
2. Implementation
a. Install HMS Toolkit from Android Studio Marketplace. After the installation, restart Android Studio.
b. Transfer learning by using AI Create.
Basic configuration
* Note: First install a Python IDE.
AI Create uses MindSpore as the training framework and MindSpore Lite as the inference framework. Follow the steps below to complete the basic configuration.
i) In Coding Assistant, go to AI > AI Create.
ii) Select Image or Text, and click on Confirm.
iii) Restart the IDE. Select Image or Text, and click on Confirm. The MindSpore tool will be automatically installed.
HMS Toolkit allows you to generate an API or demo project in one click, enabling you to quickly verify and call the image classification model in your app.
Before using the transfer learning function for image classification, prepare image resources for training as needed.
*Note: The images should be clear and placed in different folders by category.
Model training
Train the image classification model pre-trained in ML Kit to learn hundreds of images in specific fields (such as vehicles and animals) in a matter of minutes. A new model will then be generated, which will be capable of automatically classifying images. Follow the steps below for model training.
i) In Coding Assistant, go to AI > AI Create > Image.
ii) Set the following parameters and click on Confirm.
Operation type: Select New Model.
Model Deployment Location: Select Deployment Cloud.
iii) Drag or add the image folders to the Please select train image folder area.
iv) Set Output model file path and train parameters.
v) Retain the default values of the train parameters as shown below:
Iteration count: 100
Learning rate: 0.01
vi) Click on Create Model to start training and to generate an image classification model.
After the model is generated, view the model learning results (training precision and verification precision), corresponding learning parameters, and training data.
Model verification
After the model training is complete, you can verify the model by adding the image folders in the Please select test image folder area under Add test image. The tool will automatically use the trained model to perform the test and display the test results.
Click on Generate Demo to have HMS Toolkit generate a demo project, which automatically integrates the trained model. You can directly run and build the demo project to generate an APK file, and run the file on a simulator or real device to verify image classification performance.
c. Use the model.
Upload the model
The image classification service classifies elements in images into logical categories, such as people, objects, environments, activities, or artwork, to define image themes and application scenarios. The service supports both on-device and on-cloud recognition modes, and offers the pre-trained model capability.
To upload your model to the cloud, perform the following steps:
i) Sign in to AppGallery Connect and click on My projects.
ii) Go to ML Kit > Custom ML to display the model upload page. On this page, you can also upgrade existing models.
Load the remote model
Before loading a remote model, check whether the remote model has been downloaded. Load the local model if the remote model has not been downloaded.
Code:
localModel = new MLCustomLocalModel.Factory("localModelName")
.setAssetPathFile("assetpathname")
.create();
remoteModel = new MLCustomRemoteModel.Factory("yourremotemodelname").create();
MLLocalModelManager.getInstance()
// Check whether the remote model exists.
.isModelExist(remoteModel)
.addOnSuccessListener(new OnSuccessListener<Boolean>() {
@Override
public void onSuccess(Boolean isDownloaded) {
MLModelExecutorSettings settings;
// If the remote model exists, load it first. Otherwise, load the existing local model.
if (isDownloaded) {
settings = new MLModelExecutorSettings.Factory(remoteModel).create();
} else {
settings = new MLModelExecutorSettings.Factory(localModel).create();
}
final MLModelExecutor modelExecutor = MLModelExecutor.getInstance(settings);
executorImpl(modelExecutor, bitmap);
}
})
.addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
// Exception handling.
}
});
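If the remote model has not been downloaded yet, it can be fetched before it is loaded. The following is a hedged sketch; MLModelDownloadStrategy and downloadModel() are assumed from the custom model SDK and should be checked against the SDK version you integrate.
Code:
// Hedged sketch: download the remote model so that isModelExist() can succeed later.
// MLModelDownloadStrategy and downloadModel() are assumed from the custom model SDK.
MLModelDownloadStrategy strategy = new MLModelDownloadStrategy.Factory()
        .needWifi() // Restrict the download to Wi-Fi.
        .create();
MLLocalModelManager.getInstance()
        .downloadModel(remoteModel, strategy)
        .addOnSuccessListener(new OnSuccessListener<Void>() {
            @Override
            public void onSuccess(Void result) {
                // The remote model is now available on the device.
            }
        })
        .addOnFailureListener(new OnFailureListener() {
            @Override
            public void onFailure(Exception e) {
                // Download failed; fall back to the local model.
            }
        });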
Perform inference using the model inference engine
Set the input and output formats, feed the image data to the inference engine, and use the loaded modelExecutor (MLModelExecutor) to perform the inference.
Code:
private void executorImpl(final MLModelExecutor modelExecutor, Bitmap bitmap) {
    // Prepare input data.
    final Bitmap inputBitmap = Bitmap.createScaledBitmap(bitmap, 224, 224, true);
    final float[][][][] input = new float[1][224][224][3];
    for (int i = 0; i < 224; i++) {
        for (int j = 0; j < 224; j++) {
            int pixel = inputBitmap.getPixel(i, j);
            input[0][j][i][0] = (Color.red(pixel) - 127) / 128.0f;
            input[0][j][i][1] = (Color.green(pixel) - 127) / 128.0f;
            input[0][j][i][2] = (Color.blue(pixel) - 127) / 128.0f;
        }
    }
    MLModelInputs inputs = null;
    try {
        inputs = new MLModelInputs.Factory().add(input).create();
        // If the model requires multiple inputs, call add() multiple times so that all image data can be passed to the inference engine at once.
    } catch (MLException e) {
        // Handle the input data formatting exception.
    }
    // Perform inference. Use addOnSuccessListener to listen for successful inference, which is processed in the onSuccess callback.
    // Use addOnFailureListener to listen for inference failure, which is processed in the onFailure callback.
    // inOutSettings is the MLModelInputOutputSettings instance that defines the input and output formats (see the sketch below).
    modelExecutor.exec(inputs, inOutSettings).addOnSuccessListener(new OnSuccessListener<MLModelOutputs>() {
        @Override
        public void onSuccess(MLModelOutputs mlModelOutputs) {
            float[][] output = mlModelOutputs.getOutput(0);
            // The inference result is in the output array and can be further processed.
        }
    }).addOnFailureListener(new OnFailureListener() {
        @Override
        public void onFailure(Exception e) {
            // Inference exception.
        }
    });
}
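The exec() call above expects an MLModelInputOutputSettings instance (inOutSettings) that describes the tensor shapes. Below is a hedged sketch of how it might be created for a 224 x 224 RGB input; the output size of 100 is a placeholder and must match the number of categories of your trained model.
Code:
// Hedged sketch: define the input and output formats used by the inference engine.
// The output size (100) is a placeholder for the number of categories of your model.
MLModelInputOutputSettings inOutSettings = null;
try {
    inOutSettings = new MLModelInputOutputSettings.Factory()
            // Input: one 224 x 224 RGB image with float values.
            .setInputFormat(0, MLModelDataType.FLOAT32, new int[]{1, 224, 224, 3})
            // Output: one probability vector with one entry per category.
            .setOutputFormat(0, MLModelDataType.FLOAT32, new int[]{1, 100})
            .create();
} catch (MLException e) {
    // Handle the format setting exception.
}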
3. Summary
By utilizing Huawei's deep learning framework, you'll be able to create and use a deep learning model for your app by following just a few steps!
Furthermore, the custom model service for ML Kit is compatible with all mainstream model inference platforms and frameworks on the market, including MindSpore, TensorFlow Lite, Caffe, and Onnx. Different models can be converted into .ms format, and run perfectly within the on-device inference framework.
Custom models can be deployed to the device in a smaller size after being quantized and compressed. To further reduce the APK size, you can host your models on the cloud. With this service, even a novice in the field of deep learning is able to quickly develop an AI-driven app which serves a specific purpose.
Learn More
For more information, please visit HUAWEI Developers.
For detailed instructions, please visit Development Guide.
You can join the HMS Core developer discussion on Reddit.
You can download the demo and sample code from GitHub.
To solve integration problems, please go to Stack Overflow.
It seems there are many new functions in ML Kit. Wonderful!
Hi, have you tried searching for public APIs?
1. Background
Cordova is an open-source cross-platform development framework that allows you to use HTML and JavaScript to develop apps across multiple platforms, such as Android and iOS. So how exactly does Cordova enable apps to run on different platforms and implement their functions? The abundant plugins in Cordova are the main reason, and they free you to focus solely on app functions, without having to interact with the APIs at the OS level.
HMS Core provides a set of Cordova-related plugins, which enable you to integrate kits with greater ease and efficiency.
2. Introduction
Here, I'll use the Cordova plugin in HUAWEI Push Kit as an example to demonstrate how to call Java APIs in JavaScript through JavaScript-Java messaging.
The following implementation principles can be applied to all other kits, except for Map Kit and Ads Kit (which will be detailed later), and help you master troubleshooting solutions.
3. Basic Structure of Cordova
When you call loadUrl in MainActivity, CordovaWebView will be initialized and Cordova starts up. In this case, CordovaWebView will create PluginManager, NativeToJsMessageQueue, as well as ExposedJsApi of JavascriptInterface. ExposedJsApi and NativeToJsMessageQueue will play a role in the subsequent communication.
During the plugin loading, all plugins in the configuration file will be read when the PluginManager object is created, and plugin mappings will be created. When the plugin is called for the first time, instantiation is conducted and related functions are executed.
A message can be returned from Java to JavaScript in synchronous or asynchronous mode. In Cordova, set async in the method to distinguish the two modes.
In synchronous mode, Cordova obtains data from the header of the NativeToJsMessageQueue queue, finds the message request based on callbackID, and returns the data to the success method of the request.
In asynchronous mode, Cordova calls the loop method to continuously obtain data from the NativeToJsMessageQueue queue, finds the message request, and returns the data to the success method of the request.
In the Cordova plugin of Push Kit, the synchronization mode is used.
4. Plugin Call
You may still be unclear on how the process works, based on the description above, so I've provided the following procedure:
1. Install the plugin.
Run the cordova plugin add @hmscore/cordova-plugin-hms-push command to install the latest plugin. After the command is executed, the plugin information is added to the plugins directory.
The plugin.xml file records all of the information to be used, such as the JavaScript and Android classes. During plugin initialization, these classes are loaded into Cordova. If a method or API is not configured in this file, it cannot be used.
2. Create a message mapping.
The plugin provides the methods for creating mappings for the following messages:
1) HmsMessaging
In the HmsPush.js file, call the runHmsMessaging API in asynchronous mode to transfer the message to the Android platform. The Android platform returns the result through Promise.
The message will be transferred to the HmsPushMessaging class. The execute method in HmsPushMessaging can transfer the message to a method for processing based on the action type in the message.
Code:
public void execute(String action, final JSONArray args, final CallbackContext callbackContext) throws JSONException {
    hmsLogger.startMethodExecutionTimer(action);
    switch (action) {
        case "isAutoInitEnabled":
            isAutoInitEnabled(callbackContext);
            break;
        case "setAutoInitEnabled":
            setAutoInitEnabled(args.getBoolean(1), callbackContext);
            break;
        case "turnOffPush":
            turnOffPush(callbackContext);
            break;
        case "turnOnPush":
            turnOnPush(callbackContext);
            break;
        case "subscribe":
            subscribe(args.getString(1), callbackContext);
            break;
        // Remaining actions omitted in the original snippet.
The processing method returns the result to JavaScript. The result will be written to the nativeToJsMessageQueue queue.
Code:
callBack.sendPluginResult(new PluginResult(PluginResult.Status.OK,autoInit));
2) HmsInstanceId
In the HmsPush.js file, call the runHmsInstance API in asynchronous mode to transfer the message to the Android platform. The Android platform returns the result through Promise.
The message will be transferred to the HmsPushInstanceId class. The execute method in HmsPushInstanceId can transfer the message to a method for processing based on the action type in the message.
Code:
public void execute(String action, final JSONArray args, final CallbackContext callbackContext) throws JSONException {
    if (!action.equals("init")) hmsLogger.startMethodExecutionTimer(action);
    switch (action) {
        case "init":
            Log.i("HMSPush", "HMSPush initialized ");
            break;
        case "enableLogger":
            enableLogger(callbackContext);
            break;
        case "disableLogger":
            disableLogger(callbackContext);
            break;
        case "getToken":
            getToken(args.length() > 1 ? args.getString(1) : Core.HCM, callbackContext);
            break;
        case "getAAID":
            getAAID(callbackContext);
            break;
        case "getCreationTime":
            getCreationTime(callbackContext);
            break;
        // Remaining actions omitted in the original snippet.
Similarly, the processing method returns the result to JavaScript. The result will be written to the nativeToJsMessageQueue queue.
Code:
callBack.sendPluginResult(new PluginResult(PluginResult.Status.OK,autoInit));
This process is similar to that for HmsPushMessaging. The main difference is that HmsInstanceId is used for HmsPushInstanceId-related APIs, and HmsMessaging is used for HmsPushMessaging-related APIs.
3) localNotification
In the HmsLocalNotification.js file, call the run API in asynchronous mode to transfer the message to the Android platform. The Android platform returns the result through Promise.
The message will be transferred to the HmsLocalNotification class. The execute method in HmsLocalNotification can transfer the message to a method for processing based on the action type in the message.
Code:
public void execute(String action, final JSONArray args, final CallbackContext callbackContext) throws JSONException {
    switch (action) {
        case "localNotification":
            localNotification(args, callbackContext);
            break;
        case "localNotificationSchedule":
            localNotificationSchedule(args.getJSONObject(1), callbackContext);
            break;
        case "cancelAllNotifications":
            cancelAllNotifications(callbackContext);
            break;
        case "cancelNotifications":
            cancelNotifications(callbackContext);
            break;
        case "cancelScheduledNotifications":
            cancelScheduledNotifications(callbackContext);
            break;
        case "cancelNotificationsWithId":
            cancelNotificationsWithId(args.getJSONArray(1), callbackContext);
            break;
        // Remaining actions omitted in the original snippet.
Call sendPluginResult to return the result. However, for localNotification, the result will be returned after the notification is sent.
3. Perform message push event callback.
In addition to the method calling, message push involves listening for many events, for example, receiving common messages, data messages, and tokens.
The callback process starts from Android.
In Android, the callback method is defined in HmsPushMessageService.java.
Based on the SDK requirements, you can opt to redefine certain callback methods, such as onMessageReceived, onDeletedMessages, and onNewToken.
When an event is triggered, an event notification is sent to JavaScript.
Code:
public static void runJS(final CordovaPlugin plugin, final String jsCode) {
    if (plugin == null) return;
    Log.d(TAG, "runJS()");
    plugin.cordova.getActivity().runOnUiThread(() -> {
        CordovaWebViewEngine engine = plugin.webView.getEngine();
        if (engine == null) {
            plugin.webView.loadUrl("javascript:" + jsCode);
        } else {
            engine.evaluateJavascript(jsCode, (result) -> {
            });
        }
    });
}
Each event is defined and registered in HmsPushEvent.js.
Code:
exports.REMOTE_DATA_MESSAGE_RECEIVED = "REMOTE_DATA_MESSAGE_RECEIVED";
exports.TOKEN_RECEIVED_EVENT = "TOKEN_RECEIVED_EVENT";
exports.ON_TOKEN_ERROR_EVENT = "ON_TOKEN_ERROR_EVENT";
exports.NOTIFICATION_OPENED_EVENT = "NOTIFICATION_OPENED_EVENT";
exports.LOCAL_NOTIFICATION_ACTION_EVENT = "LOCAL_NOTIFICATION_ACTION_EVENT";
exports.ON_PUSH_MESSAGE_SENT = "ON_PUSH_MESSAGE_SENT";
exports.ON_PUSH_MESSAGE_SENT_ERROR = "ON_PUSH_MESSAGE_SENT_ERROR";
exports.ON_PUSH_MESSAGE_SENT_DELIVERED = "ON_PUSH_MESSAGE_SENT_DELIVERED";
Code:
function onPushMessageSentDelivered(result) {
    window.registerHMSEvent(exports.ON_PUSH_MESSAGE_SENT_DELIVERED, result);
}
exports.onPushMessageSentDelivered = onPushMessageSentDelivered;
Please note that the event initialization needs to be performed during app development. Otherwise, the event listening will fail. For more details, please refer to eventListeners.js in the demo.
If the callback has been triggered in Java, but is not received in JavaScript, check whether the event initialization is performed.
In doing so, when an event is triggered in Android, JavaScript will be able to receive and process the message. You can also refer to this process to add an event.
5. Summary
The description above illustrates how the plugin implements the JavaScript-Java communications. The methods of most kits can be called in a similar manner. However, Map Kit, Ads Kit, and other kits that need to display images or videos (such as maps and native ads) require a different method, which will be introduced in a later article.
To learn more, please visit:
HUAWEI Developers official website
Development Guide
Reddit to join developer discussions
GitHub or Gitee to download the demo and sample code
Stack Overflow to solve integration problems
Follow our official account for the latest HMS Core-related news and updates.
Original Source
Introduction
In this article, we will learn how to integrate Huawei image category labeling. We will build an application with a smart search feature that uses an image from the gallery or camera to find similar products.
What is Huawei Image category labeling?
Image category labeling identifies image elements such as objects, scenes, and actions, including flowers, birds, fish, insects, vehicles, and buildings, based on deep learning methods. It identifies 100 different categories of objects, scenes, and actions, tags the information, and applies cutting-edge intelligent image recognition algorithms to achieve a high degree of accuracy.
Features
Abundant labels: Supports recognition of 280 types of common objects, scenes, and actions.
Multiple-label support: Adds multiple labels with different confidence scores to a single image.
High accuracy: Identifies categories accurately by utilizing industry-leading device-side intelligent image recognition algorithms.
How to integrate Image category labeling
Configure the application on the AGC.
Apply for HiAI Engine Library
Client application development process.
Configure application on the AGC
Follow the steps
Step 1: We need to register as a developer account in AppGallery Connect. If you are already a developer ignore this step.
Step 2: Create an app by referring to Creating a Project and Creating an App in the Project
Step 3: Set the data storage location based on the current location.
Step 4: Generating a Signing Certificate Fingerprint.
Step 5: Configuring the Signing Certificate Fingerprint.
Step 6: Download your agconnect-services.json file, paste it into the app root directory.
Apply for HiAI Engine Library
What is Huawei HiAI ?
HUAWEI HiAI is Huawei's mobile terminal–oriented artificial intelligence (AI) computing platform. It constructs three layers of ecology: service capability openness, application capability openness, and chip capability openness. This three-layer open platform, integrating terminals, chips, and the cloud, brings a more extraordinary experience to users and developers.
How to apply for HiAI Engine?
Follow the steps
Step 1: Navigate to this URL, choose App Service > Development, and click HUAWEI HiAI.
{
"lightbox_close": "Close",
"lightbox_next": "Next",
"lightbox_previous": "Previous",
"lightbox_error": "The requested content cannot be loaded. Please try again later.",
"lightbox_start_slideshow": "Start slideshow",
"lightbox_stop_slideshow": "Stop slideshow",
"lightbox_full_screen": "Full screen",
"lightbox_thumbnails": "Thumbnails",
"lightbox_download": "Download",
"lightbox_share": "Share",
"lightbox_zoom": "Zoom",
"lightbox_new_window": "New window",
"lightbox_toggle_sidebar": "Toggle sidebar"
}
Step 2: Click Apply for HUAWEI HiAI kit.
Step 3: Enter required information like Product name and Package name, click Next button.
Step 4: Verify the application details and click Submit button.
Step 5: Click the Download SDK button to open the SDK list.
Step 6: Unzip downloaded SDK and add it to your android project under the libs folder.
Step 7: Add the JAR file dependencies to the app-level build.gradle file.
implementation fileTree(include: ['*.aar', '*.jar'], dir: 'libs')
implementation 'com.google.code.gson:gson:2.8.6'
repositories {
    flatDir {
        dirs 'libs'
    }
}
Client application development process
Follow the steps.
Step 1: Create an Android application in the Android studio (Any IDE which is your favorite).
Step 2: Add the app-level Gradle dependencies. In the project, choose Android > app > build.gradle.
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Root level gradle dependencies.
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
Step 3: Add permission in AndroidManifest.xml
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.CAMERA"/>
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.hardware.camera"/>
<uses-permission android:name="android.permission.HARDWARE_TEST.camera.autofocus"/>Copy code
Step 4: Build application.
Perform initialization using the VisionBase static class to asynchronously obtain a connection to the service.
private void initHuaweiHiAI(Context context) {
    VisionBase.init(context, new ConnectionCallback() {
        @Override
        public void onServiceConnect() {
            Log.i(TAG, "onServiceConnect");
        }

        @Override
        public void onServiceDisconnect() {
            Log.i(TAG, "onServiceDisconnect");
        }
    });
}
Define the labelDetector instance with context as parameter
LabelDetector labelDetector = new LabelDetector(getApplicationContext());
Place the Bitmap image to be processed in VisionImage
bitmap = ((BitmapDrawable)imageView.getDrawable()).getBitmap();
VisionImage image = (bitmap == null) ? null : VisionImage.fromBitmap(bitmap);
Define Label, the image label result class.
Label label = new Label();
Call the detect method to obtain the label detection result.
int resultCode = labelDetector.detect(image, label, null);
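Putting these pieces together, the following is a minimal Java sketch of how the calls shown above might be combined once the HiAI service connection is established. Treating a result code of 0 as success is an assumption; consult the HiAI result codes for your SDK version.
Code:
// Minimal sketch combining the calls shown above. Run it after onServiceConnect has fired.
private void detectLabel(Bitmap bitmap) {
    if (bitmap == null) {
        return;
    }
    LabelDetector labelDetector = new LabelDetector(getApplicationContext());
    VisionImage image = VisionImage.fromBitmap(bitmap);
    Label label = new Label();
    // detect() fills the Label object and returns a result code; 0 is assumed to mean success.
    int resultCode = labelDetector.detect(image, label, null);
    if (resultCode == 0) {
        Log.i(TAG, "Label detection succeeded");
    } else {
        Log.e(TAG, "Label detection failed, result code: " + resultCode);
    }
}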
Category definitions
Find all categories definition here.
Result
Tips and Tricks
Check dependencies downloaded properly.
Latest HMS Core APK is required.
Min SDK is 21
If you are taking images from the camera or gallery, make sure your app has the camera and storage permissions.
Conclusion
In this article, we have learnt the following concepts.
What is Image category labeling?
Features of Image category labeling
How to integrate Image category labeling using Huawei HiAI
How to Apply Huawei HiAI
Search product by image label
Reference
Image Category Labeling
Find all categories definition here.
Apply for Huawei HiAI
Happy coding
can i upload my own product?
ProManojKumar said:
can i upload my own product?
Yes you can
Personalized health records and visual tools have been a godsend for digital health management, giving users the tools to conveniently track their health on their mobile phones. From diet to weight and fitness and beyond, storing, managing, and sharing health data has never been easier. Users can track their health over a specific period of time, like a week or a month, to identify potential diseases in a timely manner, and to lead a healthy lifestyle. Moreover, with personalized health records in hand, trips to the doctor now lead to quicker and more accurate diagnoses. Health Kit takes this new paradigm into overdrive, opening up a wealth of capabilities that can endow your health app with nimble, user-friendly features.
With the basic capabilities of Health Kit integrated, your app will be able to obtain users' health data on the cloud from the Huawei Health app, after obtaining users' authorization, and then display the data to users.
Effects
This demo is modified based on the sample code of Health Kit's basic capabilities. You can download the demo and try it out to build your own health app.
Preparations
Registering an Account and Applying for the HUAWEI ID Service
Health Kit uses the HUAWEI ID service and therefore, you need to apply for the HUAWEI ID service first. Skip this step if you have done so for your app.
Applying for the Health Kit Service
Apply for the data read and write scopes for your app. Find the Health Kit service in the Development section on HUAWEI Developers, and apply for the Health Kit service. Select the data scopes required by your app. In the demo, the height and weight data are applied for, which are unrestricted data and will be quickly approved after your application is submitted. If you want to apply for restricted data scopes such as heart rate, blood pressure, blood glucose, and blood oxygen saturation, your application will be manually reviewed.
Integrating the HMS Core SDK
Before getting started, integrate the Health SDK of the basic capabilities into the development environment.
Use Android Studio to open the project, and find and open the build.gradle file in the root directory of the project. Go to allprojects > repositories and buildscript > repositories to add the Maven repository address for the SDK.
Code:
maven {url 'https://developer.huawei.com/repo/'}
Open the app-level build.gradle file and add the following build dependency to the dependencies block.
Code:
implementation 'com.huawei.hms:health:{version}'
Open the modified build.gradle file again. You will find a Sync Now link in the upper right corner of the page. Click Sync Now and wait until the synchronization is complete.
Configuring the Obfuscation Configuration File
Before building the APK, configure the obfuscation configuration file to prevent the HMS Core SDK from being obfuscated.
Open the obfuscation configuration file proguard-rules.pro in the app's root directory of the project, and add configurations to exclude the HMS Core SDK from obfuscation.
Code:
-ignorewarnings
-keepattributes *Annotation*
-keepattributes Exceptions
-keepattributes InnerClasses
-keepattributes Signature
-keepattributes SourceFile,LineNumberTable
-keep class com.huawei.hianalytics.**{*;}
-keep class com.huawei.updatesdk.**{*;}
-keep class com.huawei.hms.**{*;}
Importing the Certificate Fingerprint, Changing the Package Name, and Configuring the JDK Build Version
Import the keystore file generated when the app is created. After the import, open the app-level build.gradle file to view the import result.
Change the app package name to the one you set in applying for the HUAWEI ID Service.
Open the app-level build.gradle file and add the compileOptions configuration to the android block as follows:
Code:
compileOptions {
sourceCompatibility = '1.8'
targetCompatibility = '1.8'
}
Main Implementation Code
1. Start the screen for login and authorization.
Code:
/**
* Add scopes that you are going to apply for and obtain the authorization intent.
*/
private void requestAuth() {
// Add scopes that you are going to apply for. The following is only an example.
// You need to add scopes for your app according to your service needs.
String[] allScopes = Scopes.getAllScopes();
// Obtain the authorization intent.
// True indicates that the Huawei Health app authorization process is enabled; False otherwise.
Intent intent = mSettingController.requestAuthorizationIntent(allScopes, true);
// The authorization screen is displayed.
startActivityForResult(intent, REQUEST_AUTH);
}
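The requestAuth() method above relies on mSettingController, and the data query in the next step uses dataController. A hedged sketch of how these controllers might be obtained is shown below; the single-argument HuaweiHiHealth factory methods are assumed from recent Health Kit SDK versions and should be checked against the version you integrate.
Code:
// Hedged sketch: obtain the setting and data controllers from the Health Kit SDK.
// The single-argument HuaweiHiHealth factory methods are assumed here.
private SettingController mSettingController;
private DataController dataController;

private void initControllers() {
    mSettingController = HuaweiHiHealth.getSettingController(this);
    dataController = HuaweiHiHealth.getDataController(this);
}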
2. Use the DataController class in com.huawei.hms.hihealth and call its readLatestData() method to read the latest health-related data, including height, weight, heart rate, blood pressure, blood glucose, and blood oxygen.
Code:
/**
* Read the latest data according to the data type.
*
* @param view (indicating a UI object)
*/
public void readLatestData(View view) {
// 1. Call the data controller using the specified data type (DT_INSTANTANEOUS_HEIGHT) to query data.
// Query the latest data of this data type.
List<DataType> dataTypes = new ArrayList<>();
dataTypes.add(DataType.DT_INSTANTANEOUS_HEIGHT);
dataTypes.add(DataType.DT_INSTANTANEOUS_BODY_WEIGHT);
dataTypes.add(DataType.DT_INSTANTANEOUS_HEART_RATE);
dataTypes.add(DataType.DT_INSTANTANEOUS_STRESS);
dataTypes.add(HealthDataTypes.DT_INSTANTANEOUS_BLOOD_PRESSURE);
dataTypes.add(HealthDataTypes.DT_INSTANTANEOUS_BLOOD_GLUCOSE);
dataTypes.add(HealthDataTypes.DT_INSTANTANEOUS_SPO2);
Task<Map<DataType, SamplePoint>> readLatestDatas = dataController.readLatestData(dataTypes);
// 2. Calling the data controller to query the latest data is an asynchronous operation.
// Therefore, a listener needs to be registered to monitor whether the data query is successful or not.
readLatestDatas.addOnSuccessListener(new OnSuccessListener<Map<DataType, SamplePoint>>() {
@Override
public void onSuccess(Map<DataType, SamplePoint> samplePointMap) {
logger("Success read latest data from HMS core");
if (samplePointMap != null) {
for (DataType dataType : dataTypes) {
if (samplePointMap.containsKey(dataType)) {
showSamplePoint(samplePointMap.get(dataType));
handleData(dataType);
} else {
logger("The DataType " + dataType.getName() + " has no latest data");
}
}
}
}
});
readLatestDatas.addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
String errorCode = e.getMessage();
String errorMsg = HiHealthStatusCodes.getStatusCodeMessage(Integer.parseInt(errorCode));
logger(errorCode + ": " + errorMsg);
}
});
}
Each returned SamplePoint carries the specific data type and data value. You can obtain the corresponding data by parsing the object, as sketched below.
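As a hedged sketch of that parsing, one possible body for the showSamplePoint() helper used above iterates over the fields of a sample point; getFields() and getFieldValue() are assumed from the Health Kit data model.
Code:
// Hedged sketch: log every field carried by a sample point.
private void showSamplePoint(SamplePoint samplePoint) {
    for (Field field : samplePoint.getDataType().getFields()) {
        logger("Field: " + field.getName() + " Value: " + samplePoint.getFieldValue(field));
    }
}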
Conclusion
Personal health records make it much easier for users to stay informed about their health. The health records help track health data over specific periods of time, such as week-by-week or month-by-month, providing invaluable insight, to make proactive health a day-to-day reality. When developing a health app, integrating data-related capabilities can help streamline the process, allowing you to focus your energy on app design and user features, to bring users a smart, handy health assistant.
Reference
HUAWEI Developers
HMS Core Health Kit Development Guide
Integrating the HMS Core SDK
Audio is the soul of media, and for mobile apps in particular, it engages with users more, adds another level of immersion, and enriches content.
This is a major driver of my obsession for developing audio-related functions. In my recent post that tells how I developed a portrait retouching function for a live-streaming app, I mentioned that I wanted to create a solution that can retouch music. I know that a technology called spatial audio can help with this, and — guess what — I found a synonymous capability in HMS Core Audio Editor Kit, which can be integrated independently, or used together with other capabilities in the UI SDK of this kit.
I chose to integrate the UI SDK into my demo first, which is loaded with not only the kit's capabilities, but also a ready-to-use UI. This allows me to give the spatial audio capability a try and frees me from designing the UI. Now let's dive into the development procedure of the demo.
Development Procedure
Preparations
1. Prepare the development environment, which has requirements on both software and hardware. These are:
Software requirements:
JDK version: 1.8 or later
Android Studio version: 3.X or later
minSdkVersion: 24 or later
targetSdkVersion: 33 (recommended)
compileSdkVersion: 30 (recommended)
Gradle version: 4.6 or later (recommended)
Hardware requirements: a phone running EMUI 5.0 or later, or a phone running Android whose version ranges from Android 7.0 to Android 13.
2. Configure app information in a platform called AppGallery Connect, and go through the process of registering as a developer, creating an app, generating a signing certificate fingerprint, configuring the signing certificate fingerprint, enabling the kit, and managing the default data processing location.
3. Integrate the HMS Core SDK.
4. Add the necessary permissions in the AndroidManifest.xml file, including the vibration permission, microphone permission, storage write permission, storage read permission, Internet permission, network status access permission, and the permission to obtain the changed network connectivity state.
When the app's Android SDK version is 29 or later, add the following attribute to the application element, which is used for obtaining the external storage permission.
Code:
<application
android:requestLegacyExternalStorage="true"
…… >
SDK Integration
1. Initialize the UI SDK and set the app authentication information. If the information is not set, this may affect some functions of the SDK.
Code:
// Obtain the API key from the agconnect-services.json file.
// It is recommended that the key be stored on cloud, which can be obtained when the app is running.
String api_key = AGConnectInstance.getInstance().getOptions().getString("client/api_key");
// Set the API key.
HAEApplication.getInstance().setApiKey(api_key);
2. Create AudioFilePickerActivity, which is a customized activity used for audio file selection.
Code:
/**
* Customized activity, used for audio file selection.
*/
public class AudioFilePickerActivity extends AppCompatActivity {
@Override
protected void onCreate(@Nullable Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
performFileSearch();
}
private void performFileSearch() {
// Select multiple audio files.
registerForActivityResult(new ActivityResultContracts.GetMultipleContents(), new ActivityResultCallback<List<Uri>>() {
@Override
public void onActivityResult(List<Uri> result) {
handleSelectedAudios(result);
finish();
}
}).launch("audio/*");
}
/**
* Process the selected audio files, turning the URIs into paths as needed.
*
* @param uriList indicates the selected audio files.
*/
private void handleSelectedAudios(List<Uri> uriList) {
// Check whether the audio files exist.
if (uriList == null || uriList.size() == 0) {
return;
}
ArrayList<String> audioList = new ArrayList<>();
for (Uri uri : uriList) {
// Obtain the real path.
String filePath = FileUtils.getRealPath(this, uri);
audioList.add(filePath);
}
// Return the audio file path to the audio editing UI.
Intent intent = new Intent();
// Use HAEConstant.AUDIO_PATH_LIST that is provided by the SDK.
intent.putExtra(HAEConstant.AUDIO_PATH_LIST, audioList);
// Use HAEConstant.RESULT_CODE as the result code.
this.setResult(HAEConstant.RESULT_CODE, intent);
finish();
}
}
The FileUtils utility class is used for obtaining the real path, which is detailed here. Below is the path to this class.
Code:
app/src/main/java/com/huawei/hms/audioeditor/demo/util/FileUtils.java
3. Add the action value to AudioFilePickerActivity in AndroidManifest.xml. The SDK would direct to a screen according to this action.
Code:
<activity
android:name=".AudioFilePickerActivity"
android:exported="false">
<intent-filter>
<action android:name="com.huawei.hms.audioeditor.chooseaudio" />
<category android:name="android.intent.category.DEFAULT" />
</intent-filter>
</activity>
4. Launch the audio editing screen via either:
Mode 1: Launch the screen without input parameters. In this mode, the default configurations of the SDK are used.
Code:
HAEUIManager.getInstance().launchEditorActivity(this);
Audio editing screens
Mode 2: Launch the audio editing screen with input parameters. This mode lets you set the menu list and customize the path for an output file. On top of this, the mode also allows for specifying the input audio file paths, setting the draft mode, and more.
Launch the screen with the menu list and customized output file path:
Code:
// List of level-1 menus. Below are just some examples:
ArrayList<Integer> menuList = new ArrayList<>();
// Add audio.
menuList.add(MenuCommon.MAIN_MENU_ADD_AUDIO_CODE);
// Record audio.
menuList.add(MenuCommon.MAIN_MENU_AUDIO_RECORDER_CODE);
// List of level-2 menus, which are displayed after audio files are input and selected.
ArrayList<Integer> secondMenuList = new ArrayList<>();
// Split audio.
secondMenuList.add(MenuCommon.EDIT_MENU_SPLIT_CODE);
// Delete audio.
secondMenuList.add(MenuCommon.EDIT_MENU_DEL_CODE);
// Adjust the volume.
secondMenuList.add(MenuCommon.EDIT_MENU_VOLUME2_CODE);
// Customize the output file path.
String exportPath = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_MUSIC).getPath() + "/";
AudioEditorLaunchOption.Builder audioEditorLaunch = new AudioEditorLaunchOption.Builder()
// Set the level-1 menus.
.setCustomMenuList(menuList)
// Set the level-2 menus.
.setSecondMenuList(secondMenuList)
// Set the output file path.
.setExportPath(exportPath);
// Launch the audio editing screen with the menu list and customized output file path.
try {
HAEUIManager.getInstance().launchEditorActivity(this, audioEditorLaunch.build(), new LaunchCallback() {
@Override
public void onFailed(int errCode, String errMsg) {
Toast.makeText(mContext, errMsg, Toast.LENGTH_SHORT).show();
}
});
} catch (IOException e) {
e.printStackTrace();
}
Level-1 menus
Level-2 menus
Launch the screen with the specified input audio file paths:
Code:
// Set the input audio file paths.
ArrayList<AudioInfo> audioInfoList = new ArrayList<>();
// Example of an audio file path:
String audioPath = "/storage/emulated/0/Music/Dream_It_Possible.flac";
// Create an instance of AudioInfo and pass the audio file path.
AudioInfo audioInfo = new AudioInfo(audioPath);
// Set the audio name.
audioInfo.setAudioName("Dream_It_Possible");
audioInfoList.add(audioInfo);
AudioEditorLaunchOption.Builder audioEditorLaunch = new AudioEditorLaunchOption.Builder()
// Set the input audio file paths.
.setFilePaths(audioInfoList);
// Launch the audio editing screen with the specified input audio file paths.
try {
HAEUIManager.getInstance().launchEditorActivity(this, audioEditorLaunch.build(), new LaunchCallback() {
@Override
public void onFailed(int errCode, String errMsg) {
Toast.makeText(mContext, errMsg, Toast.LENGTH_SHORT).show();
}
});
} catch (IOException e) {
e.printStackTrace();
}
Launch the screen with drafts:
Code:
// Obtain the draft list. For example:
List<DraftInfo> draftList = HAEUIManager.getInstance().getDraftList();
// Specify the first draft in the draft list.
String draftId = null;
if (!draftList.isEmpty()) {
draftId = draftList.get(0).getDraftId();
}
AudioEditorLaunchOption.Builder audioEditorLaunch = new AudioEditorLaunchOption.Builder()
// Set the draft ID, which can be null.
.setDraftId(draftId)
// Set the draft mode. NOT_SAVE is the default value, which indicates not to save a project as a draft.
.setDraftMode(AudioEditorLaunchOption.DraftMode.SAVE_DRAFT);
// Launch the audio editing screen with drafts.
try {
HAEUIManager.getInstance().launchEditorActivity(this, audioEditorLaunch.build(), new LaunchCallback() {
@Override
public void onFailed(int errCode, String errMsg) {
Toast.makeText(mContext, errMsg, Toast.LENGTH_SHORT).show();
}
});
} catch (IOException e) {
e.printStackTrace();
}
And just like that, SDK integration is complete, and the prototype of the audio editing app I want is ready to use.
Not bad. It has all the necessary functions of an audio editing app, and best of all, it's pretty easy to develop, thanks to the all-in-one and ready-to-use SDK.
Anyway, I tried the spatial audio function preset in the SDK and I found I could effortlessly add more width to a song. However, I also want a customized UI for my app, instead of simply using the one provided by the UI SDK. So my next step is to create a demo with the UI that I have designed and the spatial audio function.
Afterthoughts
Truth be told, the integration process wasn't as smooth as it seemed. I encountered two issues, but luckily, after doing some research of my own and contacting the kit's technical support team, I was able to fix both.
The first issue I came across was that after touching the Add effects and AI dubbing buttons, the UI displayed The token has expired or is invalid, and the Android Studio console printed the HAEApplication: please set your app apiKey log. The reason for this was that the app's authentication information was not configured. There are two ways of configuring this. The first was introduced in the first step of SDK Integration of this post, while the second was to use the app's access token, which had the following code:
Code:
HAEApplication.getInstance().setAccessToken("your access token");
The second issue — which is actually another result of unconfigured app authentication information — is the Something went wrong error displayed on the screen after an operation. To solve it, first make sure that the app authentication information is configured. Once this is done, go to AppGallery Connect to check whether Audio Editor Kit has been enabled for the app. If not, enable it. Note that because of caches (of either the mobile phone or server), it may take a while before the kit works for the app.
Also, in the Preparations part, I skipped the step for configuring obfuscation scripts before adding necessary permissions. This step is, according to technical support, necessary for apps that aim to be officially released. The app I have covered in this post is just a demo, so I just skipped this step.
Takeaway
No app would be complete without audio, and with spatial audio, you can deliver an even more immersive audio experience to your users.
Developing a spatial audio function for a mobile app can be a piece of cake thanks to HMS Core Audio Editor Kit. The spatial audio capability can be integrated either independently or together with other capabilities via the UI SDK, which delivers a ready-to-use UI, so that you can skip the tricky bits and focus more on what matters to users.