React Native HMS ML Kit | Installation and Example - Huawei Developers

Introduction
This article covers how to integrate React Native HMS ML Kit into a React Native application.
React Native HMS ML Kit supports the services listed below:
· Text Related Services
· Language Related Services
· Image Related Services
· Face/Body Related Services
These services have many use cases; you can combine them or use them individually to build different features in your app. For a basic understanding, please read the use cases here.
Github: https://github.com/HMS-Core/hms-react-native-plugin/tree/master/react-native-hms-ml
Prerequisites
Step 1
Prepare your development environment using this guide.
After reading this guide you should have the React Native development environment set up, HMS Core (APK) installed, and the Android SDK installed.
Step 2
Configure your app information in App Gallery by following this guide.
After reading this guide you should have a Huawei developer account, an AppGallery app, a keystore file, and the ML Kit service enabled in AppGallery Connect.
Integrating React Native HMS ML Kit
Warning: Please make sure the prerequisites are completed successfully.
Step 1
Code:
npm i @hmscore/react-native-hms-ml
Step 2
Open the build.gradle file in the project-dir > android folder.
Go to buildscript > repositories and allprojects > repositories, and configure the Maven repository address.
Code:
buildscript {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}

allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
Go to buildscript > dependencies and add dependency configurations.
Code:
buildscript {
    dependencies {
        classpath 'com.huawei.agconnect:agcp:1.2.1.301'
    }
}
Step 3
Open the build.gradle file located under the project-dir > android > app directory.
Add the AppGallery Connect plug-in dependency to the file header.
Code:
apply plugin: 'com.huawei.agconnect'
The apply plugin: ‘com.huawei.agconnect’ configuration must be added after the apply plugin: ‘com.android.application’ configuration.
The minimum Android API level (minSdkVersion) required for ML Kit is 19.
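For reference, this is where that minimum level lives in the app-level build.gradle (a sketch; your other defaultConfig values will differ):
Code:
android {
    defaultConfig {
        minSdkVersion 19
    }
}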
Configure build dependencies of your project.
Code:
dependencies {
    ...
    implementation 'com.huawei.agconnect:agconnect-core:1.0.0.301'
}
Now you can use React Native HMS ML Kit and import modules as shown below.
Code:
import {<module_name>} from '@hmscore/react-native-hms-ml';
Let's Create an Application
We have already created an application in prerequisites section.
Our app will be about recognizing text in images and converting it to speech. So, we will use the HmsTextRecognitionLocal, HmsFrame and HmsTextToSpeech modules. We will select images using react-native-image-picker, so don't forget to install it (see the command below).
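If you have not installed the image picker yet, add it with npm (this example assumes the 0.x versions of react-native-image-picker that expose the showImagePicker API):
Code:
npm i react-native-image-picker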
Note: Before running this code snippet, please check your app permissions.
Step 1
We need to add some settings to AndroidManifest.xml before using react-native-image-picker.
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<application
    ...
    android:requestLegacyExternalStorage="true">
Step 2
Code:
import React from 'react';
import { Text, View, ScrollView, TextInput, TouchableOpacity, StyleSheet } from 'react-native';
import { HmsTextRecognitionLocal, HmsFrame, HmsTextToSpeech, NativeEventEmitter } from '@hmscore/react-native-hms-ml';
import ImagePicker from 'react-native-image-picker';

const options = {
  title: 'Choose Method',
  storageOptions: {
    skipBackup: true,
    path: 'images',
  },
};

const styles = StyleSheet.create({
  bg: { backgroundColor: '#eee' },
  customEditBox: {
    height: 450,
    borderColor: 'gray',
    borderWidth: 2,
    width: '95%',
    alignSelf: 'center',
    marginTop: 10,
    backgroundColor: '#fff',
    color: '#000',
  },
  buttonTts: {
    width: '95%',
    height: 70,
    alignSelf: 'center',
    marginTop: 35,
  },
  startButton: {
    paddingTop: 10,
    paddingBottom: 10,
    borderRadius: 10,
    borderWidth: 1,
    borderColor: '#888',
    backgroundColor: '#42aaf5',
  },
  startButtonLabel: {
    fontWeight: 'bold',
    color: '#fff',
    textAlign: 'center',
    paddingLeft: 10,
    paddingRight: 10,
  },
});
export default class App extends React.Component {
  // create your states
  // for keeping imageUri and recognition result
  constructor(props) {
    super(props);
    this.state = {
      imageUri: '',
      result: '',
    };
  }

  // this is a key function in ML Kit
  // it sets the frame for you and keeps it until you set a new one
  async setMLFrame() {
    try {
      var result = await HmsFrame.fromBitmap(this.state.imageUri);
      console.log(result);
    } catch (e) {
      console.error(e);
    }
  }

  // this creates text recognition settings with the default options given below
  // languageCode: default is "rm"
  // OCRMode: default is OCR_DETECT_MODE
  async createTextSettings() {
    try {
      var result = await HmsTextRecognitionLocal.create({});
      console.log(result);
    } catch (e) {
      console.error(e);
    }
  }

  // this function calls the analyze function and sets the results to state
  // the parameter false means we don't want a block result
  // if you want to see results as blocks you can set it to true
  async analyze() {
    try {
      var result = await HmsTextRecognitionLocal.analyze(false);
      this.setState({ result: result });
    } catch (e) {
      console.error(e);
    }
  }

  // this function calls the close function to stop the recognizer
  async close() {
    try {
      var result = await HmsTextRecognitionLocal.close();
      console.log(result);
    } catch (e) {
      console.error(e);
    }
  }

  // standard image picker operation
  // sets imageUri to state
  // calls startAnalyze function
  showImagePicker() {
    ImagePicker.showImagePicker(options, (response) => {
      if (response.didCancel) {
        console.log('User cancelled image picker');
      } else if (response.error) {
        console.log('ImagePicker Error: ', response.error);
      } else {
        this.setState({
          imageUri: response.uri,
        });
        this.startAnalyze();
      }
    });
  }
  // configure the TTS engine with custom parameters
  async configuration() {
    try {
      var result = await HmsTextToSpeech.configure({
        "volume": 1.0,
        "speed": 1.0,
        "language": HmsTextToSpeech.TTS_EN_US,
        "person": HmsTextToSpeech.TTS_SPEAKER_FEMALE_EN
      });
      console.log(result);
    } catch (e) {
      console.error(e);
    }
  }

  // create the TTS engine
  async engineCreation() {
    try {
      var result = await HmsTextToSpeech.createEngine();
      console.log(result);
    } catch (e) {
      console.error(e);
    }
  }

  // set the TTS callback
  async callback() {
    try {
      var result = await HmsTextToSpeech.setTtsCallback();
      console.log(result);
    } catch (e) {
      console.error(e);
    }
  }

  // start speech
  async speak(word) {
    try {
      var result = await HmsTextToSpeech.speak(word, HmsTextToSpeech.QUEUE_FLUSH);
      console.log(result);
    } catch (e) {
      console.error(e);
    }
  }

  // stop the engine
  async stop() {
    try {
      var result = await HmsTextToSpeech.stop();
      console.log(result);
    } catch (e) {
      console.error(e);
    }
  }

  // manage the functions in order
  startAnalyze() {
    this.setState({
      result: 'processing...',
    }, () => {
      this.createTextSettings()
        .then(() => this.setMLFrame())
        .then(() => this.analyze())
        .then(() => this.close())
        .then(() => this.configuration())
        .then(() => this.engineCreation())
        .then(() => this.callback())
        .then(() => this.speak(this.state.result));
    });
  }
  render() {
    return (
      <ScrollView style={styles.bg}>
        <TextInput
          style={styles.customEditBox}
          value={this.state.result}
          placeholder="Text Recognition Result"
          multiline={true}
          editable={false}
        />
        <View style={styles.buttonTts}>
          <TouchableOpacity
            style={styles.startButton}
            onPress={this.showImagePicker.bind(this)}
            underlayColor="#fff">
            <Text style={styles.startButtonLabel}> Start Analyze </Text>
          </TouchableOpacity>
        </View>
      </ScrollView>
    );
  }
}
Test the App
· First, write “Hello World” on a blank sheet of paper.
· Then run the application.
· Press the “Start Analyze” button and take a photo of your paper.
· Wait for the result.
· Here it comes: you will see “Hello World” on the screen and hear it from your phone.
Conclusion
In this article, we integrated React Native HMS ML Kit into our application and used it.
From: https://medium.com/huawei-developers/react-native-hms-ml-kit-installation-and-example-242dc83e0941

Is there any advantage in Huawei ML Kit compared to others?

Related

Flutter HMS/GMS Availability Check

This guide describes how to write custom platform-specific code. Some platform-specific functionality is available through existing packages.
Flutter uses a flexible system that allows you to call platform-specific APIs whether available in Kotlin or Java code on Android, or in Swift or Objective-C code on iOS.
Flutter’s platform-specific API support does not rely on code generation, but rather on a flexible message passing style:
The Flutter portion of the app sends messages to its host, the iOS or Android portion of the app, over a platform channel.
The host listens on the platform channel, and receives the message. It then calls into any number of platform-specific APIs—using the native programming language—and sends a response back to the client, the Flutter portion of the app.
Architectural overview: platform channels
Messages are passed between the client (UI) and host (platform) using platform channels.
Messages and responses are passed asynchronously, to ensure the user interface remains responsive.
Step 1: Create a new app project
Start by creating a new app:
In a terminal run: flutter create flutterhmsgmscheck
Step 2: Create the Flutter platform client
The app's State class holds the current app state. Extend it to hold the current HMS/GMS availability state.
First, construct the channel. Use a MethodChannel with platform methods that return the availability status.
The client and host sides of a channel are connected through a channel name passed in the channel constructor. All channel names used in a single app must be unique; prefix the channel name with a unique ‘domain prefix’, for example: com.salman.flutter.hmsgmscheck/isHmsGmsAvailable.
Code:
import 'package:flutter/material.dart';
import 'package:flutter/services.dart';

class HmsGmsCheck extends StatelessWidget {
  HmsGmsCheck();

  @override
  Widget build(BuildContext context) {
    return HmsGmsCheckStateful(
      title: "HMS/GMS Check",
    );
  }
}

class HmsGmsCheckStateful extends StatefulWidget {
  HmsGmsCheckStateful({Key key, this.title}) : super(key: key);

  final String title;

  @override
  _HmsGmsCheckState createState() => _HmsGmsCheckState();
}

class _HmsGmsCheckState extends State<HmsGmsCheckStateful> {
  // Must match the channel name used on the Android side.
  static const MethodChannel methodChannel =
      MethodChannel('com.salman.flutter.hmsgmscheck/isHmsGmsAvailable');

  bool _isHmsAvailable;
  bool _isGmsAvailable;

  @override
  void initState() {
    super.initState();
    checkHmsGms();
  }

  void checkHmsGms() async {
    await _isHMS();
    await _isGMS();
  }

  Future<void> _isHMS() async {
    bool status;
    try {
      bool result = await methodChannel.invokeMethod('isHmsAvailable');
      status = result;
      print('status : ${status.toString()}');
    } on PlatformException {
      print('Failed to get _isHmsAvailable.');
    }
    setState(() {
      _isHmsAvailable = status;
    });
  }

  Future<void> _isGMS() async {
    bool status;
    try {
      bool result = await methodChannel.invokeMethod('isGmsAvailable');
      status = result;
      print('status : ${status.toString()}');
    } on PlatformException {
      print('Failed to get _isGmsAvailable.');
    }
    setState(() {
      _isGmsAvailable = status;
    });
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text(widget.title),
      ),
      body: Column(
        children: <Widget>[
          Container(
            padding: EdgeInsets.all(20),
            child: Column(
              children: <Widget>[
                Text(
                  "HMS Available: $_isHmsAvailable",
                  style: Theme.of(context).textTheme.headline6,
                ),
                Text(
                  "GMS Available: $_isGmsAvailable",
                  style: Theme.of(context).textTheme.headline6,
                ),
              ],
            ),
          ),
        ],
      ),
    );
  }
}
Step 3: Update your gradle
Open your Gradle files in Android Studio and add the Huawei Maven repository:
Project-level build.gradle
Code:
buildscript {
    ext.kotlin_version = '1.3.50'
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}

allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
App-level build.gradle
Code:
dependencies {
    implementation "org.jetbrains.kotlin:kotlin-stdlib-jdk7:$kotlin_version"
    implementation "com.huawei.hms:hwid:4.0.0.300"
    implementation "com.google.android.gms:play-services-base:17.3.0"
}
Step 4: Add an Android platform-specific implementation
Start by opening the Android host portion of your Flutter app in Android Studio:
Start Android Studio
Select the menu item File > Open…
Navigate to the directory holding your Flutter app, and select the android folder inside it. Click OK.
Open the file MainActivity.kt located in the kotlin folder in the Project view. (Note: If editing with Android Studio 2.3, note that the kotlin folder is shown as if named java.)
Inside the configureFlutterEngine() method, create a MethodChannel and call setMethodCallHandler(). Make sure to use the same channel name as was used on the Flutter client side.
Code:
import android.content.Context
import android.util.Log
import androidx.annotation.NonNull
import com.google.android.gms.common.GoogleApiAvailability
import com.huawei.hms.api.ConnectionResult
import com.huawei.hms.api.HuaweiApiAvailability
import io.flutter.embedding.android.FlutterActivity
import io.flutter.embedding.engine.FlutterEngine
import io.flutter.plugin.common.MethodChannel

class MainActivity : FlutterActivity() {
    private val CHANNEL = "com.salman.flutter.hmsgmscheck/isHmsGmsAvailable"
    var concurrentContext = this@MainActivity.context

    override fun configureFlutterEngine(@NonNull flutterEngine: FlutterEngine) {
        super.configureFlutterEngine(flutterEngine)
        MethodChannel(flutterEngine.dartExecutor.binaryMessenger, CHANNEL).setMethodCallHandler { call, result ->
            // Note: this method is invoked on the main thread.
            if (call.method.equals("isHmsAvailable")) {
                result.success(isHmsAvailable())
            } else if (call.method.equals("isGmsAvailable")) {
                result.success(isGmsAvailable())
            } else {
                result.notImplemented()
            }
        }
    }

    private fun isHmsAvailable(): Boolean {
        var isAvailable = false
        val context: Context = concurrentContext
        if (null != context) {
            val result = HuaweiApiAvailability.getInstance().isHuaweiMobileServicesAvailable(context)
            isAvailable = ConnectionResult.SUCCESS == result
        }
        Log.i("MainActivity", "isHmsAvailable: $isAvailable")
        return isAvailable
    }

    private fun isGmsAvailable(): Boolean {
        var isAvailable = false
        val context: Context = concurrentContext
        if (null != context) {
            val result: Int = GoogleApiAvailability.getInstance().isGooglePlayServicesAvailable(context)
            isAvailable = com.google.android.gms.common.ConnectionResult.SUCCESS == result
        }
        Log.i("MainActivity", "isGmsAvailable: $isAvailable")
        return isAvailable
    }
}
After completing all the steps above, compile your project and you will get the following output.
Conclusion:
With the help of this article, we are able to access platform-specific native code from our Flutter application. For more details, you can check the official Flutter platform channels guide.

How to build an editing app with the HMS Image Kit Vision service?

Image Kit Vision Service of HMS (Huawei Mobile Services) offers us very stylish filters to build a photo editor app. In this article, we will design a nice filtering screen using Vision Service. Moreover, it will be very easy to develop and it will allow you to make elegant beauty apps.
The Image Vision service provides 24 color filters for image optimization. It renders the images you provide and returns them as filtered bitmap objects.
Requirements :
Huawei Phone (It doesn’t support non-Huawei Phones)
EMUI 8.1 or later (Min Android SDK Version 26)
Restrictions :
When using the filter function of Image Vision, make sure that the size of the image to be parsed is not greater than 15 MB, the image resolution is not greater than 8000 x 8000, and the aspect ratio is between 1:3 and 3:1. If the image resolution is greater than 8000 x 8000 after the image is decompressed by adjusting the compression settings or the aspect ratio is not between 1:3 and 3:1, a result code indicating parameter error will be returned. In addition, a larger image size can lead to longer parsing and response time as well as higher memory and CPU usage and power consumption.
Let’s start to build a nice filtering screen. First of all, please follow these steps to create a regular app on App Gallery.
Then we need to add the dependency to the app-level gradle file. (implementation 'com.huawei.hms:image-vision:1.0.2.301')
Don't forget to add the agconnect plugin. (apply plugin: 'com.huawei.agconnect')
Code:
apply plugin: 'com.android.application'
apply plugin: 'kotlin-android'
apply plugin: 'kotlin-android-extensions'
apply plugin: 'kotlin-kapt'
apply plugin: 'com.huawei.agconnect'

android {
    compileSdkVersion 29
    buildToolsVersion "29.0.3"

    defaultConfig {
        applicationId "com.huawei.hmsimagekitdemo"
        minSdkVersion 26
        targetSdkVersion 29
        versionCode 1
        versionName "1.0"
        testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
    }

    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
        }
    }

    lintOptions {
        abortOnError false
    }
}

repositories {
    flatDir {
        dirs 'libs'
    }
}

dependencies {
    implementation fileTree(dir: 'libs', include: ['*.aar'])
    implementation "org.jetbrains.kotlin:kotlin-stdlib:$kotlin_version"
    implementation 'androidx.core:core-ktx:1.3.0'
    implementation 'androidx.appcompat:appcompat:1.1.0'
    implementation 'androidx.constraintlayout:constraintlayout:1.1.3'
    implementation 'androidx.lifecycle:lifecycle-viewmodel-ktx:2.1.0'
    implementation 'androidx.lifecycle:lifecycle-extensions:2.2.0'
    implementation 'com.google.android.material:material:1.0.0'
    implementation 'androidx.legacy:legacy-support-v4:1.0.0'
    testImplementation 'junit:junit:4.12'
    androidTestImplementation 'androidx.test.ext:junit:1.1.1'
    androidTestImplementation 'androidx.test.espresso:espresso-core:3.2.0'
    implementation 'com.huawei.hms:image-vision:1.0.2.301'
}
Add the Maven repo URL and the agconnect dependency to the project-level gradle file.
Code:
// Top-level build file where you can add configuration options common to all sub-projects/modules.
buildscript {
    ext.kotlin_version = "1.3.72"
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        classpath "com.android.tools.build:gradle:4.0.0"
        classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
        classpath 'com.huawei.agconnect:agcp:1.3.1.300'
        // NOTE: Do not place your application dependencies here; they belong
        // in the individual module build.gradle files
    }
}

allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}

task clean(type: Delete) {
    delete rootProject.buildDir
}
After adding the dependencies, we need to create an ImageVision instance to perform related operations such as obtaining the filter effect. From now on, you can initialize the service.
Code:
private fun initFilter() {
    coroutineScope.launch { // CoroutineScope is used for the async calls
        // Create an ImageVision instance and initialize the service.
        imageVisionAPI = ImageVision.getInstance(baseContext)
        imageVisionAPI.setVisionCallBack(object : VisionCallBack {
            override fun onSuccess(successCode: Int) {
                val initCode = imageVisionAPI.init(baseContext, authJson)
                // initCode must be 0 if the initialization is successful.
                if (initCode == 0)
                    Log.d(TAG, "getImageVisionAPI rendered image successfully")
            }

            override fun onFailure(errorCode: Int) {
                Log.e(TAG, "getImageVisionAPI failure, errorCode = $errorCode")
                Toast.makeText(this@MainActivity, "initFailed", Toast.LENGTH_SHORT).show()
            }
        })
    }
}
Our app is allowed to use the Image Vision service only after successful verification, so we should provide an authJson object with the app credentials. The value of initCode must be 0, indicating that the initialization is successful. A sketch of such an object follows.
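The article does not show how authJson is built. As a sketch, the credentials JSON generally carries the fields below, with values taken from your agconnect-services.json; treat the exact field set as an assumption to verify against the Image Kit documentation:
Code:
{
  "projectId": "<project_id>",
  "appId": "<app_id>",
  "authApiKey": "<api_key>",
  "clientSecret": "<client_secret>",
  "clientId": "<client_id>",
  "token": "<token>"
}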
Code:
private fun startFilter(filterType: String, intensity: String, compress: String) {
    coroutineScope.launch { // CoroutineScope is used for the async calls
        val jsonObject = JSONObject()
        val taskJson = JSONObject()
        try {
            taskJson.put("intensity", intensity) // Filter strength. Generally, set this parameter to 1.
            taskJson.put("filterType", filterType) // 24 different filterType codes
            taskJson.put("compressRate", compress) // Compression ratio.
            jsonObject.put("requestId", "1")
            jsonObject.put("taskJson", taskJson)
            jsonObject.put("authJson", authJson) // The app can use the service only after it is successfully authenticated.
            coroutineScope.launch {
                var deferred: Deferred<ImageVisionResult?> = async(Dispatchers.IO) {
                    // Obtain the rendering result from visionResult
                    imageVisionAPI?.getColorFilter(jsonObject, bitmapFromGallery)
                }
                visionResult = deferred.await() // wait until the ImageVisionResult object is obtained
                val image = visionResult?.image
                if (image == null)
                    Log.e(TAG, "FilterException: Couldn't render the image. Check the restrictions while rendering an image by Image Vision Service")
                channel.send(image)
                // Send the image bitmap through an async channel so it can be received on another channel
            }
        } catch (e: JSONException) {
            Log.e(TAG, "JSONException: " + e.message)
        }
    }
}
Select an image from the gallery, call the init filter method, and then start filtering the images in the RecyclerView one by one.
Code:
override fun onActivityResult(requestCode: Int, resultCode: Int, intent: Intent?) {
    super.onActivityResult(requestCode, resultCode, intent)
    if (resultCode == RESULT_OK) {
        when (requestCode) {
            PICK_REQUEST ->
                if (intent != null) {
                    coroutineScope.launch {
                        var deferred: Deferred<Uri?> =
                            async(Dispatchers.IO) {
                                intent.data
                            }
                        imageUri = deferred.await()
                        imgView.setImageURI(imageUri)
                        bitmapFromGallery = (imgView.getDrawable() as BitmapDrawable).bitmap
                        initFilter()
                        startFilterForSubImages()
                    }
                }
        }
    }
}
In our scenario, the user taps a filter to render the selected image, and the ImageVisionResult object returns the filtered bitmap. So we need to implement the interface's onSelected method in our activity, which receives the FilterItem object of the clicked item from the adapter.
Code:
// Initialize and start a filter operation when a filter item is selected
override fun onSelected(item: BaseInterface) {
    if (!channelIsFetching) {
        if (bitmapFromGallery == null)
            Toast.makeText(baseContext, getString(R.string.select_photo_from_gallery), Toast.LENGTH_SHORT).show()
        else {
            var filterItem = item as FilterItem
            initFilter() // initialize the vision service
            startFilter(filterItem.filterId, "1", "1") // intensity and compress are 1
            coroutineScope.launch {
                withContext(Dispatchers.Main) {
                    imgView.setImageBitmap(channel.receive()) // receive the filtered bitmap result from another channel
                    stopFilter() // stop the vision service
                }
            }
        }
    } else
        Toast.makeText(baseContext, getString(R.string.wait_to_complete_filtering), Toast.LENGTH_SHORT).show()
}
The filter type codes of the 24 different filters are listed in the Image Vision documentation.
When the user opens the gallery and selects an image from a directory, we produce 24 different filtered versions of the image. I used async coroutine channels to render the images in a first-in, first-out manner, so we can obtain the filtered images one by one. Using the Image Vision Service with Kotlin coroutines is fast and performance-friendly.
To turn off hardware acceleration, configure the AndroidManifest.xml file as follows:
Code:
<application
    android:allowBackup="true"
    android:hardwareAccelerated="false"
    android:icon="@mipmap/ic_launcher"
    android:label="@string/app_name"
    android:roundIcon="@mipmap/ic_launcher_round"
    android:supportsRtl="true"
    android:theme="@style/AppTheme">
    <activity android:name=".ui.MainActivity">
        <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <category android:name="android.intent.category.LAUNCHER" />
        </intent-filter>
    </activity>
</application>
If you do not need to use filters any longer, call the imageVisionAPI.stop() API to stop the Image Vision service. If the returned stopCode is 0, the service is successfully stopped.
Code:
private fun stopFilter() {
    if (imageVisionAPI != null)
        imageVisionAPI.stop() // Stop the service if you don't need it anymore
}
We have designed an elegant filtering screen quite easily. Preparing a filter page will no longer take much of your time, and you will be able to develop quickly without having to know OpenGL. You should try the Image Kit Vision Service as soon as possible.
And the result:
For more information about HMS Image Kit Vision Service please refer to :
HMS Image Kit Vision Service Documentation

Intermediate: How to Integrate Location Kit into Hotel booking application

Introduction
This article is based on an application that uses multiple HMS services. I have created a hotel booking application using HMS kits; we need a mobile app to reserve hotels when traveling from one place to another.
In this article, I am going to implement HMS Location Kit & Shared Preferences.
Flutter setup
Refer to this URL to set up Flutter.
Software Requirements
1. Android Studio 3.X
2. JDK 1.8 and later
3. SDK Platform 19 and later
4. Gradle 4.6 and later
Steps to integrate service
1. We need to register a developer account in AppGallery Connect.
2. Create an app by referring to Creating a Project and Creating an App in the Project.
3. Set the data storage location based on the current location.
4. Enable the required service: Location Kit.
5. Generate a signing certificate fingerprint.
6. Configure the signing certificate fingerprint.
7. Add your agconnect-services.json file to the app root directory.
Important: While adding the app, the package name you enter should be the same as your Flutter project's package name.
Note: Before you download agconnect-services.json file, make sure the required kits are enabled.
Development Process
Create Application in Android Studio.
1. Create Flutter project.
2. Add the app-level gradle dependencies. Choose inside project: android > app > build.gradle.
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Root level gradle dependencies
Code:
maven {url 'https://developer.huawei.com/repo/'}
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
Add the below permissions in Android Manifest file.
Code:
<manifest xmlns:android...>
    ...
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
    <uses-permission android:name="android.permission.ACCESS_BACKGROUND_LOCATION" />
    <uses-permission android:name="android.permission.ACTIVITY_RECOGNITION" />
    <uses-permission android:name="com.huawei.hms.permission.ACTIVITY_RECOGNITION" />
    <application ...
</manifest>
3. Refer to the URL below for the cross-platform plugins and download the required ones.
https://developer.huawei.com/consum...y-V1/flutter-sdk-download-0000001050304074-V1
4. After completing all the steps above, you need to add the required kits’ Flutter plugins as dependencies to pubspec.yaml file. You can find all the plugins in pub.dev with the latest versions.
Code:
dependencies:
  flutter:
    sdk: flutter
  shared_preferences: ^0.5.12+4
  bottom_navy_bar: ^5.6.0
  cupertino_icons: ^1.0.0
  provider: ^4.3.3
  huawei_location:
    path: ../huawei_location/

flutter:
  uses-material-design: true
  assets:
    - assets/images/
5. After adding them, run flutter pub get command. Now all the plugins are ready to use.
6. Open the main.dart file to create the UI and business logic.
Location kit
HUAWEI Location Kit assists developers in enabling their apps to get quick and accurate user locations and expand global positioning capabilities using GPS, Wi-Fi, and base station locations.
Fused location: Provides a set of simple and easy-to-use APIs for you to quickly obtain the device location based on the GPS, Wi-Fi, and base station location data.
Activity identification: Identifies user motion status through the acceleration sensor, cellular network information, and magnetometer, helping you adjust your app based on user behaviour.
Geofence: Allows you to set an area of interest through an API so that your app can receive a notification when a specified action (such as leaving, entering, or lingering in the area) occurs; a rough sketch of this follows below.
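For illustration only, here is a rough Dart sketch of registering a geofence with the huawei_location plugin. The class and member names used here (Geofence, GeofenceService, createGeofenceList, onGeofenceData) are assumptions based on the plugin's samples; verify them against the plugin's API reference before use.
Code:
// Import paths may differ by plugin version; check the huawei_location docs.
final GeofenceService geofenceService = GeofenceService();

Future<void> addHotelGeofence() async {
  final Geofence geofence = Geofence(
    uniqueId: 'hotel-area', // your own identifier (hypothetical)
    latitude: 41.0082,
    longitude: 28.9784,
    radius: 200, // meters
    conversions: Geofence.ENTER_GEOFENCE_CONVERSION |
        Geofence.EXIT_GEOFENCE_CONVERSION,
  );
  // Register the geofence and listen for enter/exit notifications.
  await geofenceService.createGeofenceList(
      <Geofence>[geofence], Geofence.ENTER_GEOFENCE_CONVERSION);
  geofenceService.onGeofenceData.listen((data) => print(data));
}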
Integration
Permissions
First of all we need permissions to access location and physical data.
Create a PermissionHandler instance and initialize it in initState().
Code:
PermissionHandler permissionHandler;

@override
void initState() {
  permissionHandler = PermissionHandler();
  super.initState();
}
Check Permissions
We need to check whether the device has permission using the hasLocationPermission() method.
Code:
void hasPermission() async {
  try {
    final bool status = await permissionHandler.hasLocationPermission();
    if (status == true) {
      showToast("Has permission: $status");
    } else {
      requestPermission();
    }
  } on PlatformException catch (e) {
    showToast(e.toString());
  }
}
If device don’t have permission,then request for Permission to use requestLocationPermission() method.
Code:
void requestPermission() async {
  try {
    final bool status = await permissionHandler.requestLocationPermission();
    showToast("Is permission granted: $status");
  } on PlatformException catch (e) {
    showToast(e.toString());
  }
}
Fused Location
Create a FusedLocationProviderClient instance using the init() method and use the instance to call location APIs.
Code:
FusedLocationProviderClient locationService;

@override
void initState() {
  locationService = FusedLocationProviderClient();
  super.initState();
}
getLastLocation()
Code:
void getLastLocation() async {
  try {
    Location location = await locationService.getLastLocation();
    setState(() {
      lastlocation = location.toString();
      print("print: " + lastlocation);
    });
  } catch (e) {
    setState(() {
      print("error: " + e.toString());
    });
  }
}
getLastLocationWithAddress()
Create a LocationRequest instance and set the required parameters.
Code:
LocationRequest locationRequest = LocationRequest()
  ..needAddress = true
  ..interval = 5000;

void _getLastLocationWithAddress() async {
  try {
    HWLocation location =
        await locationService.getLastLocationWithAddress(locationRequest);
    setState(() {
      String street = location.street;
      String city = location.city;
      String countryname = location.countryName;
      currentAddress = '$street, $city, $countryname';
      print("res: $location");
    });
    showToast(currentAddress);
  } on PlatformException catch (e) {
    showToast(e.toString());
  }
}
Location Updates Using a Callback
Create a LocationCallback instance and create the callback functions in initState().
Code:
LocationCallback locationCallback;

@override
void initState() {
  locationCallback = LocationCallback(
    onLocationResult: _onCallbackResult,
    onLocationAvailability: _onCallbackResult,
  );
  super.initState();
}

void requestLocationUpdatesCallback() async {
  if (_callbackId == null) {
    try {
      final int callbackId = await locationService.requestLocationUpdatesExCb(
          locationRequest, locationCallback);
      _callbackId = callbackId;
    } on PlatformException catch (e) {
      showToast(e.toString());
    }
  } else {
    showToast("Already requested location updates.");
  }
}

void _onCallbackResult(result) {
  print(result.toString());
  showToast(result.toString());
}
I have created a helper class to store user login information locally using the SharedPreferences class.
Code:
class StorageUtil {
  static StorageUtil _storageUtil;
  static SharedPreferences _preferences;

  static Future<StorageUtil> getInstance() async {
    if (_storageUtil == null) {
      var secureStorage = StorageUtil._();
      await secureStorage._init();
      _storageUtil = secureStorage;
    }
    return _storageUtil;
  }

  StorageUtil._();

  Future _init() async {
    _preferences = await SharedPreferences.getInstance();
  }

  // get string
  static String getString(String key) {
    if (_preferences == null) return null;
    String result = _preferences.getString(key);
    print('result,$result');
    return result;
  }

  // put string
  static Future<void> putString(String key, String value) {
    if (_preferences == null) return null;
    print('result $value');
    return _preferences.setString(key, value);
  }
}
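A minimal usage sketch of this helper (the key name and value here are just examples):
Code:
// Initialize the preferences once, for example during app startup.
await StorageUtil.getInstance();
// Store and read back the logged-in user's name.
await StorageUtil.putString('userName', 'John');
String name = StorageUtil.getString('userName'); // 'John'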
Result
Tips & Tricks
1. Download the latest HMS Flutter plugins.
2. To work with mock locations, we need to add the permission in AndroidManifest.xml.
3. Whenever you update plugins, run flutter pub get.
Conclusion
In this article, we implemented a simple hotel booking application using Location Kit. We learned how to get the last location, get the location with address, use the location callback, and store data in SharedPreferences in Flutter.
Thank you for reading, and if you have enjoyed this article, I suggest you implement it yourself and share your experience.
Reference
Location Kit URL
Shared Preferences URL
Goodjob
Thank you
I thought huawei doesn't support flutter. I guess it should as it is Android only.
good
Wow
Nice.
I thought its not doable
Interesting.
Like

Intermediate: How to extract the data from Image using Huawei HiAI Text Recognition service in Android

Introduction
In this article, we will learn how to implement the Huawei HiAI kit's Text Recognition service in an Android application; this service helps us extract text from screenshots and photos.
Nowadays everybody is reluctant to type content, and there are many reasons to integrate this service into our apps. A user can capture a photo or pick an image from the gallery to retrieve the text, so that the content can be edited easily.
Use case: With this HiAI kit, the user can extract otherwise unreadable image content and make it useful. Let's start.
Requirements
1. Any operating system (MacOS, Linux and Windows).
2. Any IDE with Android SDK installed (IntelliJ, Android Studio).
3. HiAI SDK.
4. Minimum API Level 23 is required.
5. Required EMUI 9.0.0 and later version devices.
6. Requires a Kirin 990/985/980/970/825 Full/820 Full/810 Full/720 Full/710 Full processor.
How to integrate HMS Dependencies
1. First of all, we need to create an app on AppGallery Connect and add related details about HMS Core to our project. For more information check this link
2. Download agconnect-services.json file from AGC and add into app’s root directory.
3. Add the required dependencies to the build.gradle file under the root folder.
Code:
maven {url 'https://developer.huawei.com/repo/'}
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
4. Add the App level dependencies to the build.gradle file under app folder.
Code:
apply plugin: 'com.huawei.agconnect'
5. Add the required permissions to the AndroidManifest.xml file.
Code:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.CAMERA"/>
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.hardware.camera"/>
<uses-permission android:name="android.permission.HARDWARE_TEST.camera.autofocus"/>
6. Now, sync your project.
How to apply for HiAI Engine Library
1. Navigate to this URL, choose App Service > Development and click HUAWEI HiAI.
2. Click Apply for HUAWEI HiAI kit.
3. Enter required information like product name and Package name, click Next button.
4. Verify the application details and click Submit button.
5. Click the Download SDK button to open the SDK list.
6. Unzip downloaded SDK and add into your android project under lib folder.
7. Add jar files dependences into app build.gradle file.
Code:
implementation fileTree(include: ['*.aar', '*.jar'], dir: 'libs')
implementation 'com.google.code.gson:gson:2.8.6'

repositories {
    flatDir {
        dirs 'libs'
    }
}
8. After completing the setup above, sync your Gradle file.
Let’s do code
I have created a project in Android Studio with an empty activity. Let's start coding.
In MainActivity.java we create the business logic.
Java:
public class MainActivity extends AppCompatActivity {

    private boolean isConnection = false;
    private int REQUEST_CODE = 101;
    private int REQUEST_PHOTO = 100;
    private Bitmap bitmap;
    private Bitmap resultBitmap;

    private Button btnImage;
    private ImageView originalImage;
    private ImageView conversionImage;
    private TextView textView;
    private TextView contentText;
    private final String[] permission = {
            Manifest.permission.CAMERA,
            Manifest.permission.WRITE_EXTERNAL_STORAGE,
            Manifest.permission.READ_EXTERNAL_STORAGE};
    private ImageSuperResolution resolution;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        requestPermissions(permission, REQUEST_CODE);
        initHiAI();
        originalImage = findViewById(R.id.super_origin);
        conversionImage = findViewById(R.id.super_image);
        textView = findViewById(R.id.text);
        contentText = findViewById(R.id.content_text);
        btnImage = findViewById(R.id.btn_album);
        btnImage.setOnClickListener(v -> {
            selectImage();
        });
    }

    private void initHiAI() {
        VisionBase.init(this, new ConnectionCallback() {
            @Override
            public void onServiceConnect() {
                isConnection = true;
                DeviceCompatibility();
            }

            @Override
            public void onServiceDisconnect() {
            }
        });
    }

    private void DeviceCompatibility() {
        resolution = new ImageSuperResolution(this);
        int support = resolution.getAvailability();
        if (support == 0) {
            Toast.makeText(this, "Device supports HiAI Image super resolution service", Toast.LENGTH_SHORT).show();
        } else {
            Toast.makeText(this, "Device doesn't support HiAI Image super resolution service", Toast.LENGTH_SHORT).show();
        }
    }

    public void selectImage() {
        Intent intent = new Intent(Intent.ACTION_PICK);
        intent.setType("image/*");
        startActivityForResult(intent, REQUEST_PHOTO);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (resultCode == RESULT_OK) {
            if (data != null && requestCode == REQUEST_PHOTO) {
                try {
                    bitmap = MediaStore.Images.Media.getBitmap(getContentResolver(), data.getData());
                    setBitmap();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    }

    private void setBitmap() {
        int height = bitmap.getHeight();
        int width = bitmap.getWidth();
        if (width <= 1440 && height <= 15210) {
            originalImage.setImageBitmap(bitmap);
            setTextHiAI();
        } else {
            Toast.makeText(this, "Image size should be below 1440*15210 pixels", Toast.LENGTH_SHORT).show();
        }
    }

    private void setTextHiAI() {
        textView.setText("Extraction Text");
        contentText.setVisibility(View.VISIBLE);
        TextDetector detector = new TextDetector(this);
        VisionImage image = VisionImage.fromBitmap(bitmap);
        TextConfiguration config = new TextConfiguration();
        config.setEngineType(TextConfiguration.AUTO);
        config.setEngineType(TextDetectType.TYPE_TEXT_DETECT_FOCUS_SHOOT_EF);
        detector.setTextConfiguration(config);
        Text result = new Text();
        int statusCode = detector.detect(image, result, null);
        if (statusCode != 0) {
            Log.e("TAG", "Failed to start engine, try restarting the app");
        }
        if (result.getValue() != null) {
            contentText.setText(result.getValue());
            Log.d("TAG", result.getValue());
        } else {
            Log.e("TAG", "Result text value is null!");
        }
    }
}
Demo
Tips and Tricks
1. Download latest Huawei HiAI SDK.
2. Set minSDK version to 23 or later.
3. Do not forget to add jar files into gradle file.
4. Screenshot size should be below 1440 x 15210 pixels.
5. The recommended photo size is 720p.
6. Refer this URL for supported Countries/Regions list.
Conclusion
In this article, we have learned how to implement the HiAI Text Recognition service in an Android application to extract content from screenshots and photos.
Thanks for reading! If you enjoyed this story, please click the Like button and Follow. Feel free to leave a Comment below.
Reference
Huawei HiAI Kit URL
Original Source

Using Huawei Cloud Functions as Chatbot Service in Flutter ChatBotApp Part-1

Introduction
In this article, we will learn how to use the Huawei Cloud Functions service as a chatbot service in a Flutter ChatBotApp. Cloud Functions enables serverless computing. It provides the Function as a Service (FaaS) capabilities to simplify app development and O&M by splitting service logic into functions, and offers the Cloud Functions SDK that works with Cloud DB and Cloud Storage so that your app functions can be implemented more easily. Cloud Functions automatically scales functions in or out based on actual traffic, freeing you from server resource management and helping you reduce costs.
Key Functions
Key Concepts
How the Service Works
To use Cloud Functions, you need to develop cloud functions that implement certain service functions in AppGallery Connect and add triggers for them, for example, HTTP triggers for HTTP requests, and Cloud DB triggers for data deletion or insertion requests after Cloud DB is integrated. After your app that integrates the Cloud Functions SDK meets the conditions of specific function triggers, it can call the cloud functions, which greatly facilitates service function building.
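To make the trigger flow concrete, here is a small Dart sketch of calling an HTTP-triggered cloud function from Flutter with the agconnect_cloudfunctions plugin. The trigger identifier 'chatbot-$latest' and the message payload are placeholders for illustration:
Code:
import 'package:agconnect_cloudfunctions/agconnect_cloudfunctions.dart';

Future<String> askChatbot(String message) async {
  // The identifier of the HTTP trigger configured in AppGallery Connect (hypothetical name).
  final FunctionCallable callable = FunctionCallable('chatbot-\$latest');
  // Call the cloud function with a JSON-serializable payload.
  final FunctionResult result =
      await callable.call(<String, String>{'question': message});
  return result.getValue(); // response body returned by the function
}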
Platform Support
Development Overview
You need to install the Flutter and Dart plugins in your IDE, and I assume that you have prior knowledge of Flutter and Dart.
Hardware Requirements
A computer (desktop or laptop) running Windows 10.
Android phone (with the USB cable), which is used for debugging.
Software Requirements
Java JDK 1.7 or later.
Android Studio or Visual Studio Code installed.
HMS Core (APK) 4.X or later.
Integration process
Step 1: Create Flutter project.
Step 2: Add the app-level gradle dependencies. Choose inside project: android > app > build.gradle.
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Root level gradle dependencies
Code:
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.5.2.300'
Step 3: Add the below permission in the Android Manifest file.
Code:
<uses-permission android:name="android.permission.INTERNET" />
Step 5: Add the downloaded file into the parent directory of the project. Declare the plugin path in the pubspec.yaml file under dependencies.
Add the path location for the asset images; a sketch follows below.
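A minimal pubspec.yaml sketch for this setup (the plugin folder names and versions are assumptions; use the ones you actually downloaded):
Code:
dependencies:
  flutter:
    sdk: flutter
  fluttertoast: ^8.0.8
  huawei_account:
    path: ../huawei_account/
  huawei_analytics:
    path: ../huawei_analytics/

flutter:
  uses-material-design: true
  assets:
    - assets/images/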
Let's start coding
main.dart
Code:
import 'package:flutter/material.dart';
import 'package:fluttertoast/fluttertoast.dart';
import 'package:huawei_account/huawei_account.dart';
import 'package:huawei_analytics/huawei_analytics.dart';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({Key? key}) : super(key: key);

  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'ChatBotService',
      theme: ThemeData(
        primarySwatch: Colors.blue,
      ),
      home: const MyHomePage(title: 'ChatBotService'),
    );
  }
}

class MyHomePage extends StatefulWidget {
  const MyHomePage({Key? key, required this.title}) : super(key: key);

  final String title;

  @override
  State<MyHomePage> createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  bool isLoggedIn = false;
  String str = 'Login required';
  final HMSAnalytics _hmsAnalytics = HMSAnalytics();
  List<String> gridItems = ['Email Service', 'Call Center', 'FAQ', 'Chat Now'];

  @override
  void initState() {
    _enableLog();
    super.initState();
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text(widget.title),
      ),
      body: Center(
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            Visibility(
              visible: true,
              child: Card(
                child: Padding(
                  padding: EdgeInsets.all(20),
                  child: Text(
                    str,
                    style: const TextStyle(color: Colors.teal, fontSize: 22),
                  ),
                ),
              ),
            ),
          ],
        ),
      ),
      floatingActionButton: FloatingActionButton(
        onPressed: () {
          if (!isLoggedIn) {
            setState(() {
              isLoggedIn = true;
              signInWithHuaweiID();
            });
            print('$isLoggedIn');
          } else {
            setState(() {
              isLoggedIn = false;
              signOutWithID();
            });
            print('$isLoggedIn');
          }
        },
        tooltip: 'Login/Logout',
        child: isLoggedIn ? const Icon(Icons.logout) : const Icon(Icons.login),
      ), // This trailing comma makes auto-formatting nicer for build methods.
    );
  }

  void signInWithHuaweiID() async {
    try {
      // The sign-in is successful, and the user's ID information and authorization code are obtained.
      Future<AuthAccount> account = AccountAuthService.signIn();
      account.then(
        (value) => setLoginSuccess(value),
      );
    } on Exception catch (e) {
      print(e.toString());
    }
  }

  Future<void> _enableLog() async {
    _hmsAnalytics.setUserId("ChatBotServiceApp");
    await _hmsAnalytics.enableLog();
  }

  void setLoginSuccess(AuthAccount value) {
    setState(() {
      str = 'Welcome ' + value.displayName.toString();
    });
    showToast(value.displayName.toString());
    print('Login Success');
  }

  Future<void> signOutWithID() async {
    try {
      final bool result = await AccountAuthService.signOut();
      if (result) {
        setState(() {
          str = 'Login required';
          showToast('You are logged out.');
        });
      }
    } on Exception catch (e) {
      print(e.toString());
    }
  }

  Future<void> showToast(String name) async {
    Fluttertoast.showToast(
        msg: "$name",
        toastLength: Toast.LENGTH_SHORT,
        gravity: ToastGravity.CENTER,
        timeInSecForIosWeb: 1,
        backgroundColor: Colors.lightBlue,
        textColor: Colors.white,
        fontSize: 16.0);
  }
}
Result
Tricks and Tips
Make sure that the agconnect-services.json file is added.
Make sure the dependencies are added in the pubspec.yaml file.
Run flutter pub get after adding dependencies.
Make sure that the service is enabled in AGC.
Make sure the images are defined in the pubspec.yaml file.
Conclusion
In this article, we have learned how to integrate Huawei Account Kit and Analytics Kit in the Flutter ChatBotApp. Once Account Kit is integrated, users can quickly and conveniently sign in to apps with their Huawei IDs after granting initial access permission. In part 2 we will cover the actual Cloud Functions chatbot service.
Thank you so much for reading. I hope this article helps you understand the integration of Huawei Account Kit and Analytics Kit in the Flutter ChatBotApp.
Reference
Cloud Functions
