How to Build an Editing App with HMS Image Kit Vision Service?

For more information like this, you can visit the HUAWEI Developer Forum.
The Image Kit Vision Service of HMS (Huawei Mobile Services) offers stylish filters for building a photo editor app. In this article, we will design a nice filtering screen using the Vision Service. It is easy to develop with and lets you build elegant beauty apps.
The Image Vision service provides 24 color filters for image optimization. It renders the images you provide and returns them as filtered bitmap objects.
Requirements:
A Huawei phone (non-Huawei phones are not supported)
EMUI 8.1 or later (minimum Android SDK version 26)
Restrictions:
When using the filter function of Image Vision, make sure that the size of the image to be parsed is not greater than 15 MB, the image resolution is not greater than 8000 x 8000, and the aspect ratio is between 1:3 and 3:1. If the image resolution is greater than 8000 x 8000 after the image is decompressed by adjusting the compression settings or the aspect ratio is not between 1:3 and 3:1, a result code indicating parameter error will be returned. In addition, a larger image size can lead to longer parsing and response time as well as higher memory and CPU usage and power consumption.
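Before submitting a bitmap to the filter, it can be worth checking these limits yourself. The helper below is only a minimal sketch (it is not part of the Image Kit API) and assumes you know the size of the source file in bytes:
Code:
// Minimal pre-check sketch (not part of the Image Kit API): validates the restrictions above.
private fun isBitmapSupported(bitmap: Bitmap, sourceSizeInBytes: Long): Boolean {
    val maxSide = 8000                               // resolution must not exceed 8000 x 8000
    val maxBytes = 15L * 1024 * 1024                 // source image must not exceed 15 MB
    val ratio = bitmap.width.toFloat() / bitmap.height
    val ratioOk = ratio >= 1f / 3f && ratio <= 3f    // aspect ratio between 1:3 and 3:1
    return ratioOk && bitmap.width <= maxSide && bitmap.height <= maxSide && sourceSizeInBytes <= maxBytes
}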
Let’s start to build a nice filtering screen. First of all, please follow these steps to create a regular app on App Gallery.
Then we need to add the dependency to the app-level build.gradle file: implementation 'com.huawei.hms:image-vision:1.0.2.301'
Don't forget to add the AGConnect plugin: apply plugin: 'com.huawei.agconnect'
Code:
apply plugin: 'com.android.application'
apply plugin: 'kotlin-android'
apply plugin: 'kotlin-android-extensions'
apply plugin: 'kotlin-kapt'
apply plugin: 'com.huawei.agconnect'
android {
compileSdkVersion 29
buildToolsVersion "29.0.3"
defaultConfig {
applicationId "com.huawei.hmsimagekitdemo"
minSdkVersion 26
targetSdkVersion 29
versionCode 1
versionName "1.0"
testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
}
}
lintOptions {
abortOnError false
}
}
repositories {
flatDir {
dirs 'libs'
}
}
dependencies {
implementation fileTree(dir: 'libs', include: ['*.aar'])
implementation "org.jetbrains.kotlin:kotlin-stdlib:$kotlin_version"
implementation 'androidx.core:core-ktx:1.3.0'
implementation 'androidx.appcompat:appcompat:1.1.0'
implementation 'androidx.constraintlayout:constraintlayout:1.1.3'
implementation "org.jetbrains.kotlin:kotlin-stdlib:$kotlin_version"
implementation 'androidx.lifecycle:lifecycle-viewmodel-ktx:2.1.0'
implementation 'androidx.lifecycle:lifecycle-extensions:2.2.0'
implementation 'com.google.android.material:material:1.0.0'
implementation 'androidx.legacy:legacy-support-v4:1.0.0'
testImplementation 'junit:junit:4.12'
androidTestImplementation 'androidx.test.ext:junit:1.1.1'
androidTestImplementation 'androidx.test.espresso:espresso-core:3.2.0'
implementation 'com.huawei.hms:image-vision:1.0.2.301'
}
Add maven repo url and agconnect dependency to the project level gradle file.
Code:
// Top-level build file where you can add configuration options common to all sub-projects/modules.
buildscript {
ext.kotlin_version = "1.3.72"
repositories {
google()
jcenter()
maven { url 'https://developer.huawei.com/repo/' }
}
dependencies {
classpath "com.android.tools.build:gradle:4.0.0"
classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
classpath 'com.huawei.agconnect:agcp:1.3.1.300'
// NOTE: Do not place your application dependencies here; they belong
// in the individual module build.gradle files
}
}
allprojects {
repositories {
google()
jcenter()
maven { url 'https://developer.huawei.com/repo/' }
}
}
task clean(type: Delete) {
delete rootProject.buildDir
}
After adding the dependencies, we need to create an ImageVision instance to perform related operations such as obtaining the filter effect. Now we can initialize the service.
Code:
private fun initFilter() {
coroutineScope.launch { // CoroutineScope is used for the async calls
// Create an ImageVision instance and initialize the service.
imageVisionAPI = ImageVision.getInstance(baseContext)
imageVisionAPI.setVisionCallBack(object : VisionCallBack {
override fun onSuccess(successCode: Int) {
val initCode = imageVisionAPI.init(baseContext, authJson)
// initCode must be 0 if the initialization is successful.
if (initCode == 0)
Log.d(TAG, "getImageVisionAPI rendered image successfully")
}
override fun onFailure(errorCode: Int) {
Log.e(TAG, "getImageVisionAPI failure, errorCode = $errorCode")
Toast.makeText(this@MainActivity, "initFailed", Toast.LENGTH_SHORT).show()
}
})
}
}
Our app is allowed to use the Image Vision service only after successful verification, so we must provide an authJson object containing the app credentials. The value of initCode must be 0, indicating that the initialization is successful.
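The authJson object itself is not shown above; a minimal sketch of how it might be assembled is given below. The credential values come from your AppGallery Connect project, and the field names here are assumptions to be verified against the Image Vision authentication documentation.
Code:
// Sketch only: build authJson from your own AppGallery Connect credentials.
// Field names are assumptions; check them against the official Image Vision guide.
val authJson = JSONObject().apply {
    put("projectId", "your_project_id")
    put("appId", "your_app_id")
    put("authApiKey", "your_api_key")
    put("clientSecret", "your_client_secret")
    put("clientId", "your_client_id")
    put("token", "your_token")
}
The filter itself is then started as follows: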
Code:
private fun startFilter(filterType: String, intensity: String, compress: String) {
coroutineScope.launch { // CoroutineScope is used for the async calls
val jsonObject = JSONObject()
val taskJson = JSONObject()
try {
taskJson.put("intensity", intensity) //Filter strength. Generally, set this parameter to 1.
taskJson.put("filterType", filterType) // 24 different filterType code
taskJson.put("compressRate", compress) // Compression ratio.
jsonObject.put("requestId", "1")
jsonObject.put("taskJson", taskJson)
jsonObject.put("authJson", authJson) // App can use the service only after it is successfully authenticated.
coroutineScope.launch {
var deferred: Deferred<ImageVisionResult?> = async(Dispatchers.IO) {
imageVisionAPI?.getColorFilter(jsonObject, bitmapFromGallery)
// Obtain the rendering result from visionResult
}
visionResult = deferred.await() // wait till obtain ImageVisionResult object
val image = visionResult?.image
if (image == null)
Log.e(TAG, "FilterException: Couldn't render the image. Check the restrictions while rendering an image by Image Vision Service")
channel.send(image)
// Send the bitmap through the channel so another coroutine can receive it
}
} catch (e: JSONException) {
Log.e(TAG, "JSONException: " + e.message)
}
}
}
Select an image from the gallery, call the init filter method, and then filter the images in the RecyclerView one by one.
Code:
override fun onActivityResult(requestCode: Int, resultCode: Int, intent: Intent?) {
super.onActivityResult(requestCode, resultCode, intent)
if (resultCode == RESULT_OK) {
when (requestCode) {
PICK_REQUEST ->
if (intent != null) {
coroutineScope.launch {
var deferred: Deferred<Uri?> =
async(Dispatchers.IO) {
intent.data
}
imageUri = deferred.await()
imgView.setImageURI(imageUri)
bitmapFromGallery = (imgView.getDrawable() as BitmapDrawable).bitmap
initFilter()
startFilterForSubImages()
}
}
}
}
}
In our scenario, the user taps a filter to render the selected image, and the ImageVisionResult object returns the filtered bitmap. So we need to implement the interface's onSelected method in our activity, which receives the FilterItem object of the clicked item from the adapter.
Code:
// Initialize and start a filter operation when a filter item is selected
override fun onSelected(item: BaseInterface) {
if (!channelIsFetching)
{
if (bitmapFromGallery == null)
Toast.makeText(baseContext, getString(R.string.select_photo_from_gallery), Toast.LENGTH_SHORT).show()
else
{
var filterItem = item as FilterItem
initFilter() // initialize the vision service
startFilter(filterItem.filterId, "1", "1") // intensity and compress are 1
coroutineScope.launch {
withContext(Dispatchers.Main)
{
imgView.setImageBitmap(channel.receive()) // receive the filtered bitmap result from another channel
stopFilter() // stop the vision service
}
}
}
}
else
Toast.makeText(baseContext, getString(R.string.wait_to_complete_filtering), Toast.LENGTH_SHORT).show()
}
The filterType codes of the 24 filters are listed in the Image Kit Vision Service documentation referenced at the end of this article.
When the user opens the gallery and selects an image from a directory, we produce 24 filtered versions of that image. I used coroutine channels to render the images in a first-in, first-out manner, so we obtain the filtered images one by one. Using the Image Vision Service with Kotlin coroutines is fast and performance-friendly.
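For completeness, the coroutine scope, the channel and the flag used in the snippets above could be declared roughly as follows. This is a sketch, assuming the kotlinx-coroutines-android dependency has been added; the names simply match the ones used earlier.
Code:
// Sketch of the members referenced in the snippets above.
private val job = Job()
private val coroutineScope = CoroutineScope(Dispatchers.Main + job) // scope for the async calls
private val channel = Channel<Bitmap?>()                            // FIFO channel carrying filtered bitmaps
private var channelIsFetching = false                               // set while a batch of filters is being produced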
To turn off hardware acceleration, configure the AndroidManifest.xml file as follows:
Code:
<application
android:allowBackup="true"
android:hardwareAccelerated="false"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity
android:name=".ui.MainActivity">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
If you do not need to use filters any longer, call the imageVisionAPI.stop() API to stop the Image Vision service. If the returned stopCode is 0, the service is successfully stopped.
Code:
private fun stopFilter() {
if(imageVisionAPI != null)
imageVisionAPI.stop() // Stop the service when it is no longer needed
}
We have designed an elegant filtering screen quite easily. Preparing a filter page no longer takes much of your time, and you can develop quickly without knowing OpenGL. Give the Image Kit Vision Service a try.
And the result:
For more information about the HMS Image Kit Vision Service, please refer to:
HMS Image Kit Vision Service Documentation

Related

React Native HMS ML Kit | Installation and Example

For more information like this, you can visit the HUAWEI Developer Forum.
Introduction
This article covers how to integrate the React Native HMS ML Kit into a React Native application.
React Native HMS ML Kit supports the services listed below:
· Text Related Services
· Language Related Services
· Image Related Services
· Face/Body Related Services
There are a number of use cases for these services; you can combine them or use them individually to create different functionalities in your app. For a basic understanding, please read the use cases here.
Github: https://github.com/HMS-Core/hms-react-native-plugin/tree/master/react-native-hms-ml
Prerequisites
Step 1
Prepare your development environment using this guide.
After reading this guide you should have the React Native development environment set up, HMS Core (APK) installed, and the Android SDK installed.
Step 2
Configure your app information in App Gallery by following this guide.
After reading this guide you should have a Huawei developer account, an AppGallery app, a keystore file, and the ML Kit service enabled in AppGallery.
Integrating React Native HMS ML Kit
Warning: Please make sure that the prerequisites are successfully completed.
Step 1
Code:
npm i @hmscore/react-native-hms-ml
Step 2
Open build.gradle file in project-dir > android folder.
Go to buildscript > repositories and allprojects > repositories, and configure the Maven repository address.
Code:
buildscript {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
}
allprojects {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
}
Go to buildscript > dependencies and add dependency configurations.
Code:
buildscript {
dependencies {
classpath 'com.huawei.agconnect:agcp:1.2.1.301'
}
}
Step 3
Open build.gradle file which is located under project.dir > android > app directory.
Add the AppGallery Connect plug-in dependency to the file header.
Code:
apply plugin: 'com.huawei.agconnect'
The apply plugin: ‘com.huawei.agconnect’ configuration must be added after the apply plugin: ‘com.android.application’ configuration.
The minimum Android API level (minSdkVersion) required for ML Kit is 19.
Configure build dependencies of your project.
Code:
dependencies {
...
implementation 'com.huawei.agconnect:agconnect-core:1.0.0.301'
}
Now you can use React Native HMS ML and import its modules as shown below.
Code:
import {<module_name>} from '@hmscore/react-native-hms-ml';
Let's Create an Application
We have already created an application in the prerequisites section.
Our app will be about recognizing text in images and converting it to speech. So, we will use HmsTextRecognitionLocal, HmsFrame and HmsTextToSpeech modules. We will select images by using react-native-image-picker, so don’t forget to install it.
Note: Before running this code snippet, please check your app permissions.
Step 1
We need to add some settings to AndroidManifest.xml before using react-native-image-picker.
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<application
...
android:requestLegacyExternalStorage="true">
Step 2
Code:
import React from 'react';
import { Text, View, ScrollView, TextInput, TouchableOpacity, StyleSheet } from 'react-native';
import { HmsTextRecognitionLocal, HmsFrame, HmsTextToSpeech, NativeEventEmitter } from '@hmscore/react-native-hms-ml';
import ImagePicker from 'react-native-image-picker';
const options = {
title: 'Choose Method',
storageOptions: {
skipBackup: true,
path: 'images',
},
};
const styles = StyleSheet.create({
bg: { backgroundColor: '#eee' },
customEditBox: {
height: 450,
borderColor: 'gray',
borderWidth: 2,
width: "95%",
alignSelf: "center",
marginTop: 10,
backgroundColor: "#fff",
color: "#000"
},
buttonTts: {
width: '95%',
height: 70,
alignSelf: "center",
marginTop: 35,
},
startButton: {
paddingTop: 10,
paddingBottom: 10,
backgroundColor: 'white',
borderRadius: 10,
borderWidth: 1,
borderColor: '#888',
backgroundColor: '#42aaf5',
},
startButtonLabel: {
fontWeight: 'bold',
color: '#fff',
textAlign: 'center',
paddingLeft: 10,
paddingRight: 10,
},
});
export default class App extends React.Component {
// create your states
// for keeping imageUri and recognition result
constructor(props) {
super(props);
this.state = {
imageUri: '',
result: '',
};
}
// this is a key function in Ml Kit
// It sets the frame for you and keeps it until you set a new one
async setMLFrame() {
try {
var result = await HmsFrame.fromBitmap(this.state.imageUri);
console.log(result);
} catch (e) {
console.error(e);
}
}
// this creates text recognition settings by default options given below
// languageCode : default is "rm"
// OCRMode : default is OCR_DETECT_MODE
async createTextSettings() {
try {
var result = await HmsTextRecognitionLocal.create({});
console.log(result);
} catch (e) {
console.error(e);
}
}
// this function calls analyze function and sets the results to state
// The parameter false means we don't want a block result
// If you want to see results as blocks you can set it to true
async analyze() {
try {
var result = await HmsTextRecognitionLocal.analyze(false);
this.setState({ result: result });
} catch (e) {
console.error(e);
}
}
// this function calls close function to stop recognizer
async close() {
try {
var result = await HmsTextRecognitionLocal.close();
console.log(result);
} catch (e) {
console.error(e);
}
}
// standard image picker operation
// sets imageUri to state
// calls startAnalyze function
showImagePicker() {
ImagePicker.showImagePicker(options, (response) => {
if (response.didCancel) {
console.log('User cancelled image picker');
} else if (response.error) {
console.log('ImagePicker Error: ', response.error);
} else {
this.setState({
imageUri: response.uri,
});
this.startAnalyze();
}
});
}
// configure tts engine by giving custom parameters
async configuration() {
try {
var result = await HmsTextToSpeech.configure({
"volume": 1.0,
"speed": 1.0,
"language": HmsTextToSpeech.TTS_EN_US,
"person": HmsTextToSpeech.TTS_SPEAKER_FEMALE_EN
});
console.log(result);
} catch (e) {
console.error(e);
}
}
// create Tts engine by call
async engineCreation() {
try {
var result = await HmsTextToSpeech.createEngine();
console.log(result);
} catch (e) {
console.error(e);
}
}
// set Tts callback
async callback() {
try {
var result = await HmsTextToSpeech.setTtsCallback();
console.log(result);
} catch (e) {
console.error(e);
}
}
// start speech
async speak(word) {
try {
var result = await HmsTextToSpeech.speak(word, HmsTextToSpeech.QUEUE_FLUSH);
console.log(result);
} catch (e) {
console.error(e);
}
}
// stop engine
async stop() {
try {
var result = await HmsTextToSpeech.stop();
console.log(result);
} catch (e) {
console.error(e);
}
}
// manage functions in order
startAnalyze() {
this.setState({
result: 'processing...',
}, () => {
this.createTextSettings()
.then(() => this.setMLFrame())
.then(() => this.analyze())
.then(() => this.close())
.then(() => this.configuration())
.then(() => this.engineCreation())
.then(() => this.callback())
.then(() => this.speak(this.state.result));
});
}
render() {
return (
<ScrollView style={styles.bg}>
<TextInput
style={styles.customEditBox}
value={this.state.result}
placeholder="Text Recognition Result"
multiline={true}
editable={false}
/>
<View style={styles.buttonTts}>
<TouchableOpacity
style={styles.startButton}
onPress={this.showImagePicker.bind(this)}
underlayColor="#fff">
<Text style={styles.startButtonLabel}> Start Analyze </Text>
</TouchableOpacity>
</View>
</ScrollView>
);
}
}
Test the App
· First write “Hello World” on a blank paper.
· Then run the application.
· Press the “Start Analyze” button and take a photo of your paper.
· Wait for the result.
· Here it comes. You will see “Hello World” on screen and you will hear it from your phone.
Conclusion
In this article, we integrated and used React Native HMS ML Kit in our application.
From: https://medium.com/huawei-developers/react-native-hms-ml-kit-installation-and-example-242dc83e0941
Is there any advantage in Huawei ML Kit compared to others?

Hand Keypoint Detection with HMS ML Kit Explained (with a Demo Project)

Hand keypoint detection is the process of finding fingertips, knuckles and wrists in an image. Hand keypoint detection and hand gesture recognition are still challenging problems in the computer vision domain. It is tough to build your own model for hand keypoint detection, as it is hard to collect a large enough hand dataset and it requires expertise in this domain.
Hand keypoint detection can be used in a variety of scenarios. For example, during artistic creation users can convert the detected hand keypoints into a 2D model and synchronize it with a character's model to produce a vivid 2D animation. You can create a puppet animation game using this idea. Another example is a rock paper scissors game. Or, if you take it further, you can create a sign-language-to-text conversion application. As you can see, the possible usage scenarios are abundant and there is no limit to ideas.
The hand keypoint detection service is a brand-new feature in the Huawei Machine Learning Kit family. It has recently been released and it is making developers and computer vision geeks really excited! It detects 21 points of a hand and can detect up to ten hands in an image. It can detect hands in a static image or in a camera stream. Currently, it does not support scenarios where your hand is blocked by more than 50% or where you wear gloves. You don't need an internet connection, as this is a device-side capability, and what is more: it is completely free!
It wouldn't be good practice to only read the related documents and forget about them after a few days. So I created a simple demo application that counts fingers and tells us the number we show by hand. I strongly advise you to develop your own hand keypoint detection application along with me. I developed the application in Android Studio in Kotlin. Now I am going to explain how to build this application step by step. Don't hesitate to ask questions in the comments if you face any issues.
1. First, let's create our project in Android Studio. I named my project HandKeyPointDetectionDemo; I am sure you can find better names for your application. We can create our project by selecting the Empty Activity option and then follow the steps described in this post to create and sign our project in AppGallery Connect.
2. In HUAWEI Developer AppGallery Connect, go to Develop > Manage APIs. Make sure ML Kit is activated.
3. Now we have integrated Huawei Mobile Services (HMS) into our project. Let's follow the documentation on developer.huawei.com and find the packages to add to our project. On the website, click Developer / HMS Core / AI / ML Kit. Here you will find introductory information about the services, references, SDKs to download, and more. Under the ML Kit tab follow Android / Getting Started / Integrating HMS Core SDK / Adding Build Dependencies / Integrating the Hand Keypoint Detection SDK. We can follow the guide there to add the hand detection capability to our project. We also have one meta-data tag to add to our AndroidManifest.xml file. After the integration your app-level build.gradle file will look like this.
Code:
apply plugin: 'com.android.application'
apply plugin: 'kotlin-android'
apply plugin: 'kotlin-android-extensions'
apply plugin: 'com.huawei.agconnect'
android {
compileSdkVersion 30
buildToolsVersion "30.0.2"
defaultConfig {
applicationId "com.demo.handkeypointdetection"
minSdkVersion 21
targetSdkVersion 30
versionCode 1
versionName "1.0"
testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
}
}
}
dependencies {
implementation fileTree(dir: "libs", include: ["*.jar"])
implementation "org.jetbrains.kotlin:kotlin-stdlib:$kotlin_version"
implementation 'androidx.core:core-ktx:1.3.1'
implementation 'androidx.appcompat:appcompat:1.2.0'
implementation 'androidx.constraintlayout:constraintlayout:2.0.1'
testImplementation 'junit:junit:4.12'
androidTestImplementation 'androidx.test.ext:junit:1.1.2'
androidTestImplementation 'androidx.test.espresso:espresso-core:3.3.0'
//AppGalleryConnect Core
implementation 'com.huawei.agconnect:agconnect-core:1.3.1.300'
// Import the base SDK.
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint:2.0.2.300'
// Import the hand keypoint detection model package.
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint-model:2.0.2.300'
}
Our project-level build.gradle file:
Code:
buildscript {
ext.kotlin_version = "1.4.0"
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
dependencies {
classpath "com.android.tools.build:gradle:4.0.1"
classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
classpath 'com.huawei.agconnect:agcp:1.3.1.300'
}
}
allprojects {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
}
task clean(type: Delete) {
delete rootProject.buildDir
}
And don’t forget to add related meta-data tags into your AndroidManifest.xml.
Code:
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.demo.handkeypointdetection">
<uses-permission android:name="android.permission.CAMERA" />
<application
...
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value= "handkeypoint"/>
</application>
</manifest>
4. I created a class named HandKeyPointDetector. This class will be called from our activity or fragment. Its init method has two parameters: a context and a ViewGroup. We will add our views to rootLayout.
Code:
fun init(context: Context, rootLayout: ViewGroup) {
mContext = context
mRootLayout = rootLayout
addSurfaceViews()
}
5. We are going to detect hand keypoints in a camera stream, so we create a SurfaceView for the camera preview and another SurfaceView to draw on. The SurfaceView used as the overlay should be transparent. Then we add our views to the rootLayout passed as a parameter from our activity. Lastly, we add a SurfaceHolder.Callback to our surfaceHolder to know when it is ready.
Code:
private fun addSurfaceViews() {
val surfaceViewCamera = SurfaceView(mContext).also {
it.layoutParams = LinearLayout.LayoutParams(LinearLayout.LayoutParams.MATCH_PARENT, LinearLayout.LayoutParams.MATCH_PARENT)
mSurfaceHolderCamera = it.holder
}
val surfaceViewOverlay = SurfaceView(mContext).also {
it.layoutParams = LinearLayout.LayoutParams(LinearLayout.LayoutParams.MATCH_PARENT, LinearLayout.LayoutParams.MATCH_PARENT)
mSurfaceHolderOverlay = it.holder
mSurfaceHolderOverlay.setFormat(PixelFormat.TRANSPARENT)
mHandKeyPointTransactor.setOverlay(mSurfaceHolderOverlay)
}
mRootLayout.addView(surfaceViewCamera)
mRootLayout.addView(surfaceViewOverlay)
mSurfaceHolderCamera.addCallback(surfaceHolderCallback)
}
6. Inside our surfaceHolderCallback we override three methods: surfaceCreated, surfaceChanged and surfaceDestroyed.
Code:
private val surfaceHolderCallback = object : SurfaceHolder.Callback {
override fun surfaceCreated(holder: SurfaceHolder) {
createAnalyzer()
}
override fun surfaceChanged(holder: SurfaceHolder, format: Int, width: Int, height: Int) {
prepareLensEngine(width, height)
mLensEngine.run(holder)
}
override fun surfaceDestroyed(holder: SurfaceHolder) {
mLensEngine.release()
}
}
7. The createAnalyzer method creates an MLHandKeypointAnalyzer with settings. If you want, you can also use the default settings. The scene type can be keypoints only, rectangles around hands, or TYPE_ALL for both. The maximum number of hand results can be up to MLHandKeypointAnalyzerSetting.MAX_HANDS_NUM, which is currently 10. As we will count the fingers of two hands, I set it to 2.
Code:
private fun createAnalyzer() {
val settings = MLHandKeypointAnalyzerSetting.Factory()
.setSceneType(MLHandKeypointAnalyzerSetting.TYPE_ALL)
.setMaxHandResults(2)
.create()
mAnalyzer = MLHandKeypointAnalyzerFactory.getInstance().getHandKeypointAnalyzer(settings)
mAnalyzer.setTransactor(mHandKeyPointTransactor)
}
8. LensEngine is responsible for handling camera frames for us. All we need to do is prepare it with the right dimensions according to the orientation, choose the camera we want to work with, apply the fps, and so on.
Code:
private fun prepareLensEngine(width: Int, height: Int) {
val dimen1: Int
val dimen2: Int
if (mContext.resources.configuration.orientation == Configuration.ORIENTATION_LANDSCAPE) {
dimen1 = width
dimen2 = height
} else {
dimen1 = height
dimen2 = width
}
mLensEngine = LensEngine.Creator(mContext, mAnalyzer)
.setLensType(LensEngine.BACK_LENS)
.applyDisplayDimension(dimen1, dimen2)
.applyFps(5F)
.enableAutomaticFocus(true)
.create()
}
9. When you no longer need the analyzer, stop it and release its resources.
Code:
fun stopAnalyzer() {
mAnalyzer.stop()
}
10. As you can see in step 7, we used mHandKeyPointTransactor. It is a custom class we created named HandKeyPointTransactor, which implements MLAnalyzer.MLTransactor<MLHandKeypoints>. It has two overridden methods: transactResult and destroy. Detected results arrive in the transactResult method, and there we will try to find the number.
Code:
override fun transactResult(result: MLAnalyzer.Result<MLHandKeypoints>?) {
if (result == null)
return
val canvas = mOverlay?.lockCanvas() ?: return
//Clear canvas.
canvas.drawColor(0, PorterDuff.Mode.CLEAR)
//Find the number shown by our hands.
val numberString = analyzeHandsAndGetNumber(result)
//Find the middle of the canvas
val centerX = canvas.width / 2F
val centerY = canvas.height / 2F
//Draw a text that writes the number we found.
canvas.drawText(numberString, centerX, centerY, Paint().also {
it.style = Paint.Style.FILL
it.textSize = 100F
it.color = Color.GREEN
})
mOverlay?.unlockCanvasAndPost(canvas)
}
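For context, the remaining members of HandKeyPointTransactor might look like the sketch below. The setOverlay helper is an assumption inferred from its use in addSurfaceViews(), and destroy simply releases the overlay reference.
Code:
// Sketch: the rest of HandKeyPointTransactor (an MLAnalyzer.MLTransactor<MLHandKeypoints>).
// transactResult is shown above; setOverlay is assumed from its use in addSurfaceViews().
private var mOverlay: SurfaceHolder? = null

fun setOverlay(surfaceHolder: SurfaceHolder) {
    mOverlay = surfaceHolder   // overlay surface that transactResult draws on
}

override fun destroy() {
    mOverlay = null            // release the overlay reference when the analyzer is destroyed
}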
11. We will go hand by hand, and then finger by finger, to find which fingers are up and determine the number.
Code:
private fun analyzeHandsAndGetNumber(result: MLAnalyzer.Result<MLHandKeypoints>): String {
val hands = ArrayList<Hand>()
var number = 0
for (key in result.analyseList.keyIterator()) {
hands.add(Hand())
for (value in result.analyseList.valueIterator()) {
number += hands.last().createHand(value.handKeypoints).getNumber()
}
}
return number.toString()
}
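The Hand helper class is not included in the original post. Below is a purely illustrative sketch of what it might do, assuming the MLHandKeypoint type constants for the wrist and fingertips and a very rough heuristic (a fingertip that sits above the wrist on screen counts as a raised finger); treat both as assumptions rather than the author's implementation.
Code:
// Hypothetical helper for illustration only. The constant names and the
// "fingertip above wrist" heuristic are assumptions, not the original implementation.
class Hand {
    private var keypoints: List<MLHandKeypoint> = emptyList()

    fun createHand(handKeypoints: List<MLHandKeypoint>): Hand {
        keypoints = handKeypoints
        return this
    }

    fun getNumber(): Int {
        val wrist = keypoints.find { it.type == MLHandKeypoint.TYPE_WRIST } ?: return 0
        val tipTypes = setOf(
            MLHandKeypoint.TYPE_THUMB_FOURTH,
            MLHandKeypoint.TYPE_FOREFINGER_FOURTH,
            MLHandKeypoint.TYPE_MIDDLE_FINGER_FOURTH,
            MLHandKeypoint.TYPE_RING_FINGER_FOURTH,
            MLHandKeypoint.TYPE_LITTLE_FINGER_FOURTH
        )
        // Screen y grows downwards, so a raised fingertip has a smaller y than the wrist.
        return keypoints.count { it.type in tipTypes && it.pointY < wrist.pointY }
    }
}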
For more information, you can visit https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0202369245767250343&fid=0101187876626530001

Intermediate: Easy fix of application crash using Huawei Crash Service and Remote Configuration

Introduction
Whether you are tracking down weird behavior in your app or chasing a crash that is frustrating users, getting precise, real-time information is important. Huawei Crash Analytics is a primary crash reporting solution for mobile. It monitors and captures your crashes, intelligently analyses them, and then groups them into manageable issues. And it does this through a lightweight SDK that won't bloat your app. You can integrate the Huawei Crash Analytics SDK with a single line of code before you publish.
In this article, we will change the app theme using Huawei Remote Configuration, and if something goes wrong while fetching data from Remote Config, we will report the crash/exception using the Huawei Crash Service.
To learn how to change the app theme using the Huawei Dark Mode Awareness service, refer to this.
Prerequisite
If you want to use Huawei Remote Configuration and the Crash Service, you must have a developer account on AppGallery Connect. You need to create an application from your developer account and then integrate the HMS SDK into your project. I will not describe these steps here so that the article doesn't lose its focus, and I will assume that the SDK is already integrated into your project. You can find the guide at the link below.
HMS Integration Guide
Integration
1. Enable Remote Configuration and Crash Service in Manage APIs. Refer to Service Enabling.
2. Add AGC connect plugin in app-level build.gradle.
Code:
apply plugin: 'com.huawei.agconnect'
3. Integrate Crash Service and Remote configuration SDK by adding following code in app-level build.gradle.
Code:
implementation 'com.huawei.agconnect:agconnect-remoteconfig:1.5.2.300'
implementation 'com.huawei.agconnect:agconnect-crash:1.5.2.300'
4. Add the following code in the root-level build.gradle.
Code:
// Top-level build file where you can add configuration options common to all sub-projects/modules.
buildscript {
repositories {
// Configure the Maven repository address for the HMS Core SDK.
maven {url 'https://developer.huawei.com/repo/'}
}
dependencies {
classpath "com.android.tools.build:gradle:4.0.1"
// Add AppGallery Connect plugin configurations.
classpath 'com.huawei.agconnect:agcp:1.4.2.300'
}
}
allprojects {
repositories {
// Configure the Maven repository address for the HMS Core SDK.
maven {url 'https://developer.huawei.com/repo/'}
}
}
5. Declare the following permissions in AndroidManifest.xml:
Code:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
Development
We will define a JSON object which has a mode value of 0 or 1.
1. If the value of mode is 0, we will use the system setting to change the app theme. For example, if the device has dark mode enabled in the system settings, our app theme will be dark.
2. If the value of mode is 1, we will force our app to use the day theme.
Code:
{
"jsonmode": [{
"mode": 0,
"details": "system_settings_mode"
}]
}
Open AGC, select your project. Choose Growing > Remote Config and enable Remote Config service. Once the remote config is enabled, define the key-value parameters.
Key : “mode_status”
Value : {
"jsonmode": [{
"mode": "0",
"details": "system_settings_mode"
}]
}
Note: The mode value should be an int; however, we are intentionally adding the value as a String so that our app throws a JSONException, which we can monitor on the AGC dashboard.
Implementation
Let's create an instance of AGConnectConfig and add the default value to a hashmap before connecting to the Remote Configuration service.
Java:
private void initializeRemoteConfig() {
agConnectConfig = AGConnectConfig.getInstance();
Map<String, Object> map = new HashMap<>();
map.put("mode_status", "NA");
agConnectConfig.applyDefault(map);
}
Fetch the parameter values from Remote Configuration:
Java:
agConnectConfig.fetch(5).addOnSuccessListener(new OnSuccessListener<ConfigValues>() {
@Override
public void onSuccess(ConfigValues configValues) {
agConnectConfig.apply(configValues);
String value = agConnectConfig.getValueAsString("mode_status");
Log.d(TAG, "remoteconfig value : " + value);
try {
int mode = parseMode(value);
Log.d(TAG, "mode value : " + mode);
if(mode == 0) {
initilizeDarkModeListner();
}
else if(mode == 1) {
AppCompatDelegate.setDefaultNightMode(AppCompatDelegate.MODE_NIGHT_NO);
}
} catch (JSONException e) {
Log.e(TAG,"JSONException : " +e.getMessage());
AGConnectCrash.getInstance().recordException(e);
}
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
Log.e(TAG, " error: " + e.getMessage());
}
});
Parse the JSON received from Remote Config:
Code:
private int parseMode(String json) throws JSONException {
if(json != null) {
JSONObject jsonObj = new JSONObject(json);
JSONArray jsonArrayMenu = jsonObj.getJSONArray("jsonmode");
for (int i = 0; i < jsonArrayMenu.length(); i++) {
JSONObject modeJsonObj = jsonArrayMenu.getJSONObject(i);
return modeJsonObj.getInt("mode");
}
}
return -1;
}
If parsing is successful, we will be able to retrieve the mode value as 0 or 1.
However, if parsing is unsuccessful, a JSONException will be thrown and we will log this exception to AGC using the Huawei Crash Service.
Java:
catch (JSONException e) {
Log.e(TAG,"JSONException : " +e.getMessage());
AGConnectCrash.getInstance().recordException(e);
}
Now when the app encounters a crash, the Crash Service reports it on the dashboard in AppGallery Connect. To monitor the crash, do as follows:
1. Sign in to AppGallery Connect and select your project.
2. Choose the app.
3. Select Quality > Crash on the left panel of the screen.
If you look at the JSON parsing implementation, the expected mode value should be an integer:
"mode": 0
But mistakenly, we have added the mode value as a string in Remote Config:
Code:
{
"jsonmode": [{
"mode": "0",
"details": "system_settings_mode"
}]
}
Now when we run our app, it will throw a JSONException, since we expect the mode value from Remote Config to be an int. This exception will be reported to the AGC dashboard by the Huawei Crash Service.
As a developer, when I go to the AGC dashboard to monitor my app's crash report, I realize my mistake and update the value in AGC Remote Config as follows:
Code:
{
"jsonmode": [{
"mode": 0,
"details": "system_settings_mode"
}]
}
Now our app will change its theme based on whether dark mode is enabled in the system settings.
Code snippet of MainActivity.java
Java:
public class MainActivity extends AppCompatActivity {
private static final String TAG = "MainActivity";
private AGConnectConfig agConnectConfig;
TextView tv;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
initializeRemoteConfig();
ConfigValues last = agConnectConfig.loadLastFetched();
agConnectConfig.apply(last);
agConnectConfig.fetch(5).addOnSuccessListener(new OnSuccessListener<ConfigValues>() {
@Override
public void onSuccess(ConfigValues configValues) {
agConnectConfig.apply(configValues);
String value = agConnectConfig.getValueAsString("mode_status");
Log.d(TAG, "remoteconfig value : " + value);
try {
int mode = parseMode(value);
Log.d(TAG, "mode value : " + mode);
if(mode == 0) {
initilizeDarkModeListner();
}
else if(mode == 1) {
AppCompatDelegate.setDefaultNightMode(AppCompatDelegate.MODE_NIGHT_NO);
}
} catch (JSONException e) {
Log.e(TAG,"JSONException : " +e.getMessage());
AGConnectCrash.getInstance().recordException(e);
}
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
Log.e(TAG, " error: " + e.getMessage());
}
});
}
private void initializeRemoteConfig() {
agConnectConfig = AGConnectConfig.getInstance();
Map<String, Object> map = new HashMap<>();
map.put("mode_status", "NA");
agConnectConfig.applyDefault(map);
}
private void initilizeDarkModeListner() {
Awareness.getCaptureClient(this).getDarkModeStatus()
// Callback listener for execution success.
.addOnSuccessListener(new OnSuccessListener<DarkModeStatusResponse>() {
@Override
public void onSuccess(DarkModeStatusResponse darkModeStatusResponse) {
DarkModeStatus darkModeStatus = darkModeStatusResponse.getDarkModeStatus();
if (darkModeStatus.isDarkModeOn()) {
Log.i(TAG, "dark mode is on");
AppCompatDelegate.setDefaultNightMode(AppCompatDelegate.MODE_NIGHT_YES);
} else {
Log.i(TAG, "dark mode is off");
AppCompatDelegate.setDefaultNightMode(AppCompatDelegate.MODE_NIGHT_NO);
}
}
})
// Callback listener for execution failure.
.addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
Log.e(TAG, "get darkMode status failed " + e.getMessage());
}
});
}
private int parseMode(String json) throws JSONException {
if(json != null) {
JSONObject jsonObj = new JSONObject(json);
JSONArray jsonArrayMenu = jsonObj.getJSONArray("jsonmode");
for (int i = 0; i < jsonArrayMenu.length(); i++) {
JSONObject modeJsonObj = jsonArrayMenu.getJSONObject(i);
return modeJsonObj.getInt("mode");
}
}
return -1;
}
}
Tips and Tricks
1. The Huawei Crash Service works on non-Huawei devices.
2. AGConnectCrash.getInstance().testIt(mContext) triggers app crash. Make sure to comment or remove it before releasing your app.
3. Crash Service takes around 1 to 3 minutes to post the crash logs on App Gallery connect dashboard/console.
4. Crash SDK collects App and system data.
System data:
AAID, Android ID (obtained when AAID is empty), system type, system version, ROM version, device brand, system language, device model, whether the device is rooted, screen orientation, screen height, screen width, available memory space, available disk space, and network connection status.
App data:
APK name, app version, crashed stack, and thread stack.
5. The Crash SDK collects data locally and reports data to the collection server through HTTPS after encrypting the data.
Conclusion
In this article, we have learnt how the Huawei Crash Service can help developers monitor crash/exception reports on AGC and fix them.
We uploaded wrong JSON data to Remote Configuration, which caused our app to throw a JSONException. Using the Huawei Crash Service, we monitored the exception on the AGC dashboard. After finding the issue in the JSON data, we added the correct data to Remote Config and fixed our app.
References
Huawei Crash Service
Huawei Remote Configuration
Original Source

Intermediate: How to extract the data from Image using Huawei HiAI Text Recognition service in Android

Introduction
In this article, we will learn how to integrate the Huawei HiAI Text Recognition service into an Android application. This service helps us extract text from screenshots and photos.
Nowadays nobody wants to type content by hand, and there are many reasons to integrate this service into our apps. The user can capture an image or pick one from the gallery to retrieve the text, so that the content can be edited easily.
Use case: Using this HiAI kit, the user can extract otherwise unreadable image content and make it useful. Let's start.
Requirements
1. Any operating system (MacOS, Linux and Windows).
2. Any IDE with Android SDK installed (IntelliJ, Android Studio).
3. HiAI SDK.
4. Minimum API Level 23 is required.
5. EMUI 9.0.0 or later is required.
6. A Kirin processor is required: 990/985/980/970/825Full/820Full/810Full/720Full/710Full.
How to integrate HMS Dependencies
1. First of all, we need to create an app on AppGallery Connect and add the related details about HMS Core to our project. For more information, check this link.
2. Download the agconnect-services.json file from AGC and add it to the app's root directory.
3. Add the required dependencies to the build.gradle file under the root folder.
Code:
maven {url 'https://developer.huawei.com/repo/'}
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
4. Add the app-level dependencies to the build.gradle file under the app folder.
Code:
apply plugin: 'com.huawei.agconnect'
5. Add the required permissions to the AndroidManifest.xml file.
Code:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.CAMERA"/>
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.hardware.camera"/>
<uses-permission android:name="android.permission.HARDWARE_TEST.camera.autofocus"/>
6. Now, sync your project.
How to apply for HiAI Engine Library
1. Navigate to this URL, choose App Service > Development and click HUAWEI HiAI.
2. Click Apply for HUAWEI HiAI kit.
3. Enter required information like product name and Package name, click Next button.
4. Verify the application details and click Submit button.
5. Click the Download SDK button to open the SDK list.
6. Unzip the downloaded SDK and add it to your Android project under the libs folder.
7. Add the JAR/AAR file dependencies to the app build.gradle file.
Code:
implementation fileTree(include: ['*.aar', '*.jar'], dir: 'libs')
implementation 'com.google.code.gson:gson:2.8.6'
repositories {
flatDir {
dirs 'libs'
}
}
8. After completing the above setup, sync your Gradle files.
Let's do the code
I have created a project in Android Studio with an empty activity; let's start coding.
In MainActivity.java we can create the business logic.
Java:
public class MainActivity extends AppCompatActivity {
private boolean isConnection = false;
private int REQUEST_CODE = 101;
private int REQUEST_PHOTO = 100;
private Bitmap bitmap;
private Bitmap resultBitmap;
private Button btnImage;
private ImageView originalImage;
private ImageView conversionImage;
private TextView textView;
private TextView contentText;
private final String[] permission = {
Manifest.permission.CAMERA,
Manifest.permission.WRITE_EXTERNAL_STORAGE,
Manifest.permission.READ_EXTERNAL_STORAGE};
private ImageSuperResolution resolution;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
requestPermissions(permission, REQUEST_CODE);
initHiAI();
originalImage = findViewById(R.id.super_origin);
conversionImage = findViewById(R.id.super_image);
textView = findViewById(R.id.text);
contentText = findViewById(R.id.content_text);
btnImage = findViewById(R.id.btn_album);
btnImage.setOnClickListener(v -> {
selectImage();
});
}
private void initHiAI() {
VisionBase.init(this, new ConnectionCallback() {
@Override
public void onServiceConnect() {
isConnection = true;
DeviceCompatibility();
}
@Override
public void onServiceDisconnect() {
}
});
}
private void DeviceCompatibility() {
resolution = new ImageSuperResolution(this);
int support = resolution.getAvailability();
if (support == 0) {
Toast.makeText(this, "Device supports HiAI Image super resolution service", Toast.LENGTH_SHORT).show();
} else {
Toast.makeText(this, "Device doesn't supports HiAI Image super resolution service", Toast.LENGTH_SHORT).show();
}
}
public void selectImage() {
Intent intent = new Intent(Intent.ACTION_PICK);
intent.setType("image/*");
startActivityForResult(intent, REQUEST_PHOTO);
}
@Override
protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
super.onActivityResult(requestCode, resultCode, data);
if (resultCode == RESULT_OK) {
if (data != null && requestCode == REQUEST_PHOTO) {
try {
bitmap = MediaStore.Images.Media.getBitmap(getContentResolver(), data.getData());
setBitmap();
} catch (Exception e) {
e.printStackTrace();
}
}
}
}
private void setBitmap() {
int height = bitmap.getHeight();
int width = bitmap.getWidth();
if (width <= 1440 && height <= 15210) {
originalImage.setImageBitmap(bitmap);
setTextHiAI();
} else {
Toast.makeText(this, "Image size should be below 1440*15210 pixels", Toast.LENGTH_SHORT).show();
}
}
private void setTextHiAI() {
textView.setText("Extraction Text");
contentText.setVisibility(View.VISIBLE);
TextDetector detector = new TextDetector(this);
VisionImage image = VisionImage.fromBitmap(bitmap);
TextConfiguration config = new TextConfiguration();
config.setEngineType(TextConfiguration.AUTO);
config.setEngineType(TextDetectType.TYPE_TEXT_DETECT_FOCUS_SHOOT_EF);
detector.setTextConfiguration(config);
Text result = new Text();
int statusCode = detector.detect(image, result, null);
if (statusCode != 0) {
Log.e("TAG", "Failed to start engine, try restart app,");
}
if (result.getValue() != null) {
contentText.setText(result.getValue());
Log.d("TAG", result.getValue());
} else {
Log.e("TAG", "Result test value is null!");
}
}
}
Demo
Tips and Tricks
1. Download the latest Huawei HiAI SDK.
2. Set the minSdkVersion to 23 or later.
3. Do not forget to add the JAR files to the Gradle file.
4. Screenshot size should not exceed 1440 x 15210 pixels.
5. The recommended photo size is 720p.
6. Refer to this URL for the list of supported countries/regions.
Conclusion
In this article, we have learned how to implement the HiAI Text Recognition service in an Android application to extract content from screenshots and photos.
Thanks for reading! If you enjoyed this story, please click the Like button and Follow. Feel free to leave a Comment below.
Reference
Huawei HiAI Kit URL
Original Source

Share Educational Training Video Summary on Social Media by Huawei Video Summarization using Huawei HiAI in Android

Introduction
In this article, we will learn how to integrate Huawei Video Summarization using Huawei HiAI. We will build a video preview maker application that you can share on social media to increase your video views.
What is Video summarization?
In general, video summarization is the process of distilling a raw video into a more compact form without losing much information.
This service can generate a 10-second, 15-second, or 30-second video summary of a single video or multiple videos containing the original voice.
Note: The total video length should not exceed 10 minutes.
Implementing an advanced multi-dimensional scoring framework, the aesthetic engine assists with shooting, photo selection, video editing, and video splitting, by comprehending complex subjective aspects in images, and making high-level judgments related to the attractiveness, memorability and engaging nature of images.
Features
Fast: This algorithm is currently developed based on the deep neural network, to fully utilize the neural processing unit (NPU) of Huawei mobile phones to accelerate the neural network, achieving an acceleration of over 10 times.
Lightweight: This API greatly reduces the computing time and ROM space the algorithm model takes up, making your app more lightweight.
Comprehensive scoring: The aesthetic engine provides scoring to measure image quality from objective dimensions (image quality), subjective dimensions (sensory evaluation), and photographic dimensions (rule evaluation).
Portrait aesthetics scoring: An industry-leading portrait aesthetics scoring feature obtains semantic information about human bodies in the image, including the number of people, individual body builds, positions, postures, facial positions and angles, eye movements, mouth movements, and facial expressions. Aesthetic scores of the portrait are given according to the various types of the body semantic information.
How to integrate Video Summarization
Configure the application on the AGC.
Apply for HiAI Engine Library
Client application development process.
Configure application on the AGC
Follow the steps
Step 1: Register a developer account in AppGallery Connect. If you already have a developer account, ignore this step.
Step 2: Create an app by referring to Creating a Project and Creating an App in the Project
Step 3: Set the data storage location based on the current location.
Step 4: Generating a Signing Certificate Fingerprint.
Step 5: Configuring the Signing Certificate Fingerprint.
Step 6: Download your agconnect-services.json file, paste it into the app root directory.
Apply for HiAI Engine Library
What is Huawei HiAI?
HiAI is Huawei’s AI computing platform. HUAWEI HiAI is a mobile terminal–oriented artificial intelligence (AI) computing platform that constructs three layers of ecology: service capability openness, application capability openness, and chip capability openness. The three-layer open platform that integrates terminals, chips, and the cloud brings more extraordinary experience for users and developers.
How to apply for HiAI Engine?
Follow the steps
Step 1: Navigate to this URL, choose App Service > Development and click HUAWEI HiAI.
Step 2: Click Apply for HUAWEI HiAI kit.
Step 3: Enter required information like Product name and Package name, click Next button.
Step 4: Verify the application details and click Submit button.
Step 5: Click the Download SDK button to open the SDK list.
Step 6: Unzip the downloaded SDK and add it to your Android project under the libs folder.
Step 7: Add the JAR/AAR file dependencies to the app build.gradle file.
Code:
implementation fileTree(include: ['*.aar', '*.jar'], dir: 'libs')
implementation 'com.google.code.gson:gson:2.8.6'
repositories {
flatDir {
dirs 'libs'
}
}
Client application development process
Follow the steps
Step 1: Create an Android application in Android Studio (or any IDE you prefer).
Step 2: Add the app-level Gradle dependencies. Open Android > app > build.gradle inside the project.
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Root level gradle dependencies.
Code:
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
Step 3: Add the permissions in AndroidManifest.xml.
Code:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<!-- CAMERA -->
<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
Step 4: Build the application.
First, request the runtime permissions.
Code:
private void requestPermissions() {
try {
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
int permission = ActivityCompat.checkSelfPermission(this,
Manifest.permission.WRITE_EXTERNAL_STORAGE);
if (permission != PackageManager.PERMISSION_GRANTED) {
ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE,
Manifest.permission.READ_EXTERNAL_STORAGE, Manifest.permission.CAMERA}, 0x0010);
}
}
} catch (Exception e) {
e.printStackTrace();
}
}
Initialize vision base
Code:
private void initVisionBase() {
VisionBase.init(VideoSummaryActivity.this, new ConnectionCallback() {
@Override
public void onServiceConnect() {
Log.i(LOG, "onServiceConnect ");
Toast.makeText(VideoSummaryActivity.this, "Service Connected", Toast.LENGTH_SHORT).show();
}
@Override
public void onServiceDisconnect() {
Log.i(LOG, "onServiceDisconnect");
Toast.makeText(VideoSummaryActivity.this, "Service Disconnected", Toast.LENGTH_SHORT).show();
}
});
}
Create the video AsyncTask class
Code:
public class VideoAsyncTask extends AsyncTask<String, Void, String> {
private static final String LOG = VideoAsyncTask.class.getSimpleName();
private Context context;
private VideoCoverListener listener;
private AestheticsScoreDetector aestheticsScoreDetector;
public VideoAsyncTask(Context context, VideoCoverListener listener) {
this.context = context;
this.listener = listener;
}
@Override
protected String doInBackground(String... paths) {
Log.i(LOG, "init VisionBase");
VisionBase.init(context, ConnectManager.getInstance().getmConnectionCallback()); //try to start AIEngine
if (!ConnectManager.getInstance().isConnected()) { //wait for AIEngine service
ConnectManager.getInstance().waitConnect();
}
Log.i(LOG, "init videoCover");
aestheticsScoreDetector = new AestheticsScoreDetector(context);
AEModelConfiguration aeModelConfiguration;
aeModelConfiguration = new AEModelConfiguration();
aeModelConfiguration.getSummerizationConf().setSummerizationMaxLen(10);
aeModelConfiguration.getSummerizationConf().setSummerizationMinLen(2);
aestheticsScoreDetector.setAEModelConfiguration(aeModelConfiguration);
String videoResult = null;
if (listener.isAsync()) {
videoCoverAsync(paths);
videoResult = "-10000";
} else {
videoResult = videoCover(paths);
aestheticsScoreDetector.release();
}
//release engine after detect finished
return videoResult;
}
@Override
protected void onPostExecute(String resultScore) {
if (!resultScore.equals("-10000")) {
listener.onTaskCompleted(resultScore, false);
}
super.onPostExecute(String.valueOf(resultScore));
}
private String videoCover(String[] videoPaths) {
if (videoPaths == null) {
Log.e(LOG, "uri is null ");
return null;
}
JSONObject jsonObject = new JSONObject();
int position = 0;
Video[] videos = new Video[videoPaths.length];
for (String path : videoPaths) {
Video video = new Video();
video.setPath(path);
videos[position++] = video;
}
jsonObject = aestheticsScoreDetector.getVideoSummerization(videos, null);
if (jsonObject == null) {
Log.e(LOG, "return JSONObject is null");
return "return JSONObject is null";
}
if (!jsonObject.optString("resultCode").equals("0")) {
Log.e(LOG, "return JSONObject resultCode is not 0");
return jsonObject.optString("resultCode");
}
Log.d(LOG, "videoCover get score end");
AEVideoResult aeVideoResult = aestheticsScoreDetector.convertVideoSummaryResult(jsonObject);
if (null == aeVideoResult) {
Log.e(LOG, "aeVideoResult is null ");
return null;
}
String result = new Gson().toJson(aeVideoResult, AEVideoResult.class);
return result;
}
private void videoCoverAsync(String[] videoPaths) {
if (videoPaths == null) {
Log.e(LOG, "uri is null ");
return;
}
Log.d(LOG, "runVisionService " + "start get score");
CVisionCallback callback = new CVisionCallback();
int position = 0;
Video[] videos = new Video[videoPaths.length];
for (String path : videoPaths) {
Video video = new Video();
video.setPath(path);
videos[position++] = video;
}
aestheticsScoreDetector.getVideoSummerization(videos, callback);
}
public class CVisionCallback extends VisionCallback {
@Override
public void setRequestID(String requestID) {
super.setRequestID(requestID);
}
@Override
public void onDetectedResult(AnnotateResult annotateResult) throws RemoteException {
if (annotateResult != null) {
Log.e("Visioncallback", annotateResult.toString());
}
Log.e("Visioncallback", annotateResult.getResult().toString());
JSONObject jsonObject = null;
try {
jsonObject = new JSONObject(annotateResult.getResult().toString());
} catch (JSONException e) {
e.printStackTrace();
}
if (jsonObject == null) {
Log.e(LOG, "return JSONObject is null");
aestheticsScoreDetector.release();
return;
}
if (!jsonObject.optString("resultCode").equals("0")) {
Log.e(LOG, "return JSONObject resultCode is not 0");
aestheticsScoreDetector.release();
return;
}
AEVideoResult aeVideoResult = aestheticsScoreDetector.convertVideoSummaryResult(jsonObject);
if (aeVideoResult == null) {
aestheticsScoreDetector.release();
return;
}
String result = new Gson().toJson(aeVideoResult, AEVideoResult.class);
aestheticsScoreDetector.release();
listener.onTaskCompleted(result, true);
}
@Override
public void onDetectedInfo(InfoResult infoResult) throws RemoteException {
JSONObject jsonObject = null;
try {
jsonObject = new JSONObject(infoResult.getInfoResult().toString());
AEDetectVideoStatus aeDetectVideoStatus = aestheticsScoreDetector.convertDetectVideoStatusResult(jsonObject);
if (aeDetectVideoStatus != null) {
listener.updateProcessProgress(aeDetectVideoStatus);
} else {
Log.d(LOG, "[ASTaskPlus onDetectedInfo]aeDetectVideoStatus result is null!");
}
} catch (JSONException e) {
e.printStackTrace();
}
}
@Override
public void onDetectedError(ErrorResult errorResult) throws RemoteException {
Log.e(LOG, errorResult.getResultCode() + "");
aestheticsScoreDetector.release();
listener.onTaskCompleted(String.valueOf(errorResult.getResultCode()), true);
}
@Override
public String getRequestID() throws RemoteException {
return null;
}
}
}
Result
Tips and Tricks
Check that the dependencies are added properly.
The latest HMS Core APK is required.
The minimum SDK is 21; otherwise you will get a manifest merge issue.
If you are taking a video from the camera or gallery, make sure your app has camera and storage permissions.
Add the downloaded huawei-hiai-vision-ove-10.0.4.307.aar and huawei-hiai-pdk-1.0.0.aar files to the libs folder.
The maximum video length is 10 minutes.
The resolution should be between 144p and 2160p.
Conclusion
In this article, we have learnt the following concepts.
What is Video summarization?
Features of Video summarization
How to integrate Video summarization using Huawei HiAI
How to apply for Huawei HiAI
How to build the application
Reference
Video summarization
Apply for Huawei HiAI
Happy coding
