For more articles like this, you can visit the HUAWEI Developer Forum.
Forum link: https://forums.developer.huawei.com/forumPortal/en/home
Introduction
React Native is a convenient tool for cross-platform development, and though it has become more and more powerful through the updates, there are limits to it, for example its capability to interact with and use native components. Bridging native code with JavaScript is one of the most popular and effective ways to solve this problem. Best of both worlds!
Currently, not all HMS kits have official RN support yet. This article will walk you through how to create an Android native bridge to connect your RN app with HMS kits, using Scan Kit as the example.
The tutorial is based on https://github.com/clementf-hw/rn_integration_demo/tree/4b2262aa2110041f80cb41ebd7caa1590a48528a; you can find more details about the sample project in this article:
https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0201230857831870061&fid=0101187876626530001
Prerequisites
Basic Android development
Basic React Native development
These areas have already been covered extensively on RN's official site, this forum, and other sources
HMS properly configured
You can also reference the above article for this matter
Major dependencies
RN Version: 0.62.2 (released on 9th April, 2020)
Gradle Version: 5.6.4
Gradle Plugin Version: 3.6.1
agcp: 1.2.1.301
This tutorial is broken into 3 parts:
Pt. 1: Create a simple native UI component as intro and warm up
Pt. 2: Bridging HMS Scan Kit into React Native
Pt. 3: Make Scan Kit into a stand-alone React Native module that you can import into other projects or even upload to npm.
Warm up
Let’s get to it. Before everything, the app looks like this:
Let's make a simple UI component for warm-up! We will add a native Android button whose text changes when it is clicked.
First, in our android folder, let us create ReactNativeWarmUpModule.java. For simplicity, you can create this file in the same folder as MainApplication.java.
Code:
package com.cfdemo.d001rn;

import android.widget.Button;
import androidx.annotation.NonNull;
import com.facebook.react.bridge.ReactApplicationContext;
import com.facebook.react.uimanager.SimpleViewManager;
import com.facebook.react.uimanager.ThemedReactContext;

public class ReactNativeWarmUpModule extends SimpleViewManager<Button> {
    private static final String REACT_CLASS = "RCTWarmUpView";
    private static ReactApplicationContext reactContext;

    public ReactNativeWarmUpModule(ReactApplicationContext context) {
        reactContext = context;
    }

    @NonNull
    @Override
    public String getName() {
        return null;
    }

    @NonNull
    @Override
    protected Button createViewInstance(@NonNull ThemedReactContext reactContext) {
        return null;
    }
}
This part is easier to do in Android Studio thanks to its auto-import feature.
For the name (the REACT_CLASS) of the module, it's mostly up to you. RCT is an abbreviation of ReaCT and is a popular prefix.
Modify createViewInstance to do something
Code:
@NonNull
@Override
protected Button createViewInstance(@NonNull ThemedReactContext reactContext) {
    Button button = new Button(reactContext);
    button.setOnClickListener(v -> ((Button) v).setText("Button Clicked"));
    return button;
}
To make it more interactive with the React Native side, we can give it a ReactProp
Code:
@ReactProp(name = "text")
public void setText(Button button, String text) {
    button.setText(text);
}
We are done with ReactNativeWarmUpModule for now. This is how it looks:
Code:
package com.cfdemo.d001rn;

import android.widget.Button;
import androidx.annotation.NonNull;
import com.facebook.react.bridge.ReactApplicationContext;
import com.facebook.react.uimanager.SimpleViewManager;
import com.facebook.react.uimanager.ThemedReactContext;
import com.facebook.react.uimanager.annotations.ReactProp;

public class ReactNativeWarmUpModule extends SimpleViewManager<Button> {
    private static final String REACT_CLASS = "RCTWarmUpView";
    private static ReactApplicationContext reactContext;

    public ReactNativeWarmUpModule(ReactApplicationContext context) {
        reactContext = context;
    }

    @NonNull
    @Override
    public String getName() {
        return REACT_CLASS;
    }

    @NonNull
    @Override
    protected Button createViewInstance(@NonNull ThemedReactContext reactContext) {
        Button button = new Button(reactContext);
        button.setOnClickListener(v -> ((Button) v).setText("Button Clicked"));
        return button;
    }

    @ReactProp(name = "text")
    public void setText(Button button, String text) {
        button.setText(text);
    }
}
Now, to register the ViewManager, we'll have to create a ReactPackage. You can put it in different places, but for simplicity, you can put it in the same folder as ReactNativeWarmUpModule.
Code:
package com.cfdemo.d001rn;

import androidx.annotation.NonNull;
import com.facebook.react.ReactPackage;
import com.facebook.react.bridge.NativeModule;
import com.facebook.react.bridge.ReactApplicationContext;
import com.facebook.react.uimanager.ViewManager;
import java.util.Collections;
import java.util.List;

public class ReactNativeWarmUpPackage implements ReactPackage {
    @NonNull
    @Override
    public List<NativeModule> createNativeModules(@NonNull ReactApplicationContext reactContext) {
        return null;
    }

    @Override
    public List<ViewManager> createViewManagers(ReactApplicationContext reactContext) {
        return Collections.emptyList();
    }
}
Put ReactNativeWarmUpModule into createViewManagers
Code:
package com.cfdemo.d001rn;

import androidx.annotation.NonNull;
import com.facebook.react.ReactPackage;
import com.facebook.react.bridge.NativeModule;
import com.facebook.react.bridge.ReactApplicationContext;
import com.facebook.react.uimanager.ViewManager;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class ReactNativeWarmUpPackage implements ReactPackage {
    @NonNull
    @Override
    public List<NativeModule> createNativeModules(@NonNull ReactApplicationContext reactContext) {
        return Collections.emptyList();
    }

    @Override
    public List<ViewManager> createViewManagers(ReactApplicationContext reactContext) {
        return Arrays.<ViewManager>asList(
                new ReactNativeWarmUpModule(reactContext)
        );
    }
}
Since we will not register any other Native Modules for now, let's also give it an empty list
Code:
Collections.emptyList();
Next, we'll add the package to our MainApplication.java: import the package, then add it in the getPackages() function.
Code:
import com.cfdemo.d001rn.ReactNativeWarmUpPackage;
…
@Override
protected List<ReactPackage> getPackages() {
    @SuppressWarnings("UnnecessaryLocalVariable")
    List<ReactPackage> packages = new PackageList(this).getPackages();
    // Packages that cannot be autolinked yet can be added manually here, for example:
    // packages.add(new MyReactNativePackage());
    packages.add(new ReactNativeWarmUpPackage());
    return packages;
}
At this point, our native UI component is ready! So, how do we actually use it in React Native? There are a couple of ways to invoke it; the simplest is to add this in the file where you want to use it:
Code:
import { requireNativeComponent } from 'react-native'

const RNWarmUpView = requireNativeComponent('RCTWarmUpView')
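Alternatively, you can wrap the native component in its own module file and export it, so other screens simply import it. A minimal sketch (the file name WarmUpView.js is hypothetical):
Code:
// WarmUpView.js (hypothetical file name)
import { requireNativeComponent } from 'react-native'

// 'RCTWarmUpView' must match the name returned by getName() in the ViewManager
export default requireNativeComponent('RCTWarmUpView')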
In the sample project it looks like this
Code:
//…
import { RNRemoteMessage } from 'react-native-hwpush';
import { getLastLocation, parseLocation, getLocationPermission } from '../../Helpers/LocationHelper'
import * as R from 'ramda'

const RNWarmUpView = requireNativeComponent('RCTWarmUpView')

export default class App extends Component {
  constructor(props) {
    //…
Then we render it
Code:
render() {
  const { displayText, region } = this.state
  return (
    <View style={styles.container}>
      <Text style={styles.textBox}>
        {displayText}
      </Text>
      <RNWarmUpView style={styles.nativeModule}/>
      <MapView
        style={styles.map}
        region={region}
        showCompass={true}
        showsUserLocation={true}
        showsMyLocationButton={true}
      >
      </MapView>
    </View>
  );
}
Now the app looks like this; you can see the big grey area, which is actually our button.
Now we set the ReactProp we gave it earlier
Code:
<RNWarmUpView
  style={styles.nativeModule}
  text={"Render in Javascript"}
/>
Reload, and it looks like this.
Click on the button:
And we have finished the warm up!
Pt. 1 of the tutorial is done; please feel free to ask questions. You can also check out the repo of the sample project on GitHub: https://github.com/clementf-hw/rn_integration_demo, and raise an issue if you have any questions or updates.
For more articles like this, you can visit the HUAWEI Developer Forum and Medium.
Background
I believe we all did dictation when learning a language. Today, dictating the new words from the textbook is an important piece of after-school homework for primary school students, and many parents are familiar with the routine. However, on the one hand, a parent's pronunciation may not be standard; on the other hand, parents' time is very precious. There are many dictation recordings on the market, in which broadcasters record the words from the language textbooks for parents to download. However, such recordings are not flexible enough: if the teacher assigns a few extra words today that are not part of the textbook's word list, the recordings won't meet the needs of parents and children. This document describes how to use the general text recognition and speech synthesis functions of ML Kit to implement an automatic reading app. You only need to take a photo of the dictation words or text, and the text in the photo is then played automatically; the timbre and tone of the voice can be adjusted.
Development Preparations
Open the project-level build.gradle file
Choose allprojects > repositories and configure the Maven repository address of HMS SDK.
Code:
allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'http://developer.huawei.com/repo/' }
    }
}
Choose buildscript > repositories and configure the Maven repository address of the HMS SDK.
Code:
buildscript {
    repositories {
        google()
        jcenter()
        maven { url 'http://developer.huawei.com/repo/' }
    }
}
Choose buildscript > dependencies and configure the AGC plug-in.
Code:
dependencies {
    classpath 'com.huawei.agconnect:agcp:1.2.1.301'
}
Adding Compilation Dependencies
Open the application-level build.gradle file.
SDK integration
Code:
dependencies {
    implementation 'com.huawei.hms:ml-computer-voice-tts:1.0.4.300'
    implementation 'com.huawei.hms:ml-computer-vision-ocr:1.0.4.300'
    implementation 'com.huawei.hms:ml-computer-vision-ocr-cn-model:1.0.4.300'
}
Add the AGC plug-in to the file header.
Code:
apply plugin: 'com.huawei.agconnect'
Specify permissions and features: Declare them in AndroidManifest.xml.
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
Key Development Steps
There are two main functions: one recognizes the homework text, and the other reads it aloud. The OCR + TTS mode is used: after taking a photo, click the play button to have the recognized text read out.
1. Apply for dynamic permissions.
Code:
private static final int PERMISSION_REQUESTS = 1;

@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // Checking camera permission
    if (!allPermissionsGranted()) {
        getRuntimePermissions();
    }
}
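The helpers allPermissionsGranted() and getRuntimePermissions() are referenced but not shown in the original post. A minimal sketch of what they might look like (the permission list mirrors the manifest declarations above; ContextCompat comes from androidx.core.content):
Code:
// Hypothetical implementations of the two helpers referenced above.
private String[] getRequiredPermissions() {
    return new String[]{
            Manifest.permission.CAMERA,
            Manifest.permission.READ_EXTERNAL_STORAGE,
            Manifest.permission.WRITE_EXTERNAL_STORAGE};
}

private boolean allPermissionsGranted() {
    for (String permission : getRequiredPermissions()) {
        if (ContextCompat.checkSelfPermission(this, permission)
                != PackageManager.PERMISSION_GRANTED) {
            return false;
        }
    }
    return true;
}

private void getRuntimePermissions() {
    ActivityCompat.requestPermissions(this, getRequiredPermissions(), PERMISSION_REQUESTS);
}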
2. Start the reading interface.
Code:
public void takePhoto(View view) {
    Intent intent = new Intent(MainActivity.this, ReadPhotoActivity.class);
    startActivity(intent);
}
3. Invoke createLocalTextAnalyzer() in the onCreate() method to create a device-side text recognizer.
Code:
private void createLocalTextAnalyzer() {
    MLLocalTextSetting setting = new MLLocalTextSetting.Factory()
            .setOCRMode(MLLocalTextSetting.OCR_DETECT_MODE)
            .setLanguage("zh")
            .create();
    this.textAnalyzer = MLAnalyzerFactory.getInstance().getLocalTextAnalyzer(setting);
}
4. Invoke createTtsEngine() in the onCreate() method to create a TTS engine, and set the playback callback.
Code:
private void createTtsEngine() {
    MLTtsConfig mlConfigs = new MLTtsConfig()
            .setLanguage(MLTtsConstants.TTS_ZH_HANS)
            .setPerson(MLTtsConstants.TTS_SPEAKER_FEMALE_ZH)
            .setSpeed(0.2f)
            .setVolume(1.0f);
    this.mlTtsEngine = new MLTtsEngine(mlConfigs);
    MLTtsCallback callback = new MLTtsCallback() {
        @Override
        public void onError(String taskId, MLTtsError err) {
        }

        @Override
        public void onWarn(String taskId, MLTtsWarn warn) {
        }

        @Override
        public void onRangeStart(String taskId, int start, int end) {
        }

        @Override
        public void onEvent(String taskId, int eventName, Bundle bundle) {
            if (eventName == MLTtsConstants.EVENT_PLAY_STOP) {
                if (!bundle.getBoolean(MLTtsConstants.EVENT_PLAY_STOP_INTERRUPTED)) {
                    Toast.makeText(ReadPhotoActivity.this.getApplicationContext(), R.string.read_finish, Toast.LENGTH_SHORT).show();
                }
            }
        }
    };
    mlTtsEngine.setTtsCallback(callback);
}
5. Set the buttons for loading photos, taking photos, and reading aloud.
Code:
this.relativeLayoutLoadPhoto.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        ReadPhotoActivity.this.selectLocalImage(ReadPhotoActivity.this.REQUEST_CHOOSE_ORIGINPIC);
    }
});

this.relativeLayoutTakePhoto.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        ReadPhotoActivity.this.takePhoto(ReadPhotoActivity.this.REQUEST_TAKE_PHOTO);
    }
});
6. Call startTextAnalyzer() in the callbacks of taking and loading photos.
Code:
private void startTextAnalyzer() {
    if (this.isChosen(this.originBitmap)) {
        MLFrame mlFrame = new MLFrame.Creator().setBitmap(this.originBitmap).create();
        Task<MLText> task = this.textAnalyzer.asyncAnalyseFrame(mlFrame);
        task.addOnSuccessListener(new OnSuccessListener<MLText>() {
            @Override
            public void onSuccess(MLText mlText) {
                // Processing logic for recognition success.
                if (mlText != null) {
                    ReadPhotoActivity.this.remoteDetectSuccess(mlText);
                } else {
                    ReadPhotoActivity.this.displayFailure();
                }
            }
        }).addOnFailureListener(new OnFailureListener() {
            @Override
            public void onFailure(Exception e) {
                // Processing logic for recognition failure.
                ReadPhotoActivity.this.displayFailure();
            }
        });
    } else {
        Toast.makeText(this.getApplicationContext(), R.string.please_select_picture, Toast.LENGTH_SHORT).show();
    }
}
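remoteDetectSuccess() is not shown in the original post. A minimal sketch, assuming it concatenates the recognized text blocks into the sourceText field that the play button reads from:
Code:
// A sketch; the original implementation is not shown.
private void remoteDetectSuccess(MLText mlText) {
    StringBuilder result = new StringBuilder();
    for (MLText.Block block : mlText.getBlocks()) {
        for (MLText.TextLine line : block.getContents()) {
            result.append(line.getStringValue()).append("\n");
        }
    }
    this.sourceText = result.toString();
}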
7. After the recognition is successful, click the play button to start the playback.
Code:
this.relativeLayoutRead.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        if (ReadPhotoActivity.this.sourceText == null) {
            Toast.makeText(ReadPhotoActivity.this.getApplicationContext(), R.string.please_select_picture, Toast.LENGTH_SHORT).show();
        } else {
            ReadPhotoActivity.this.mlTtsEngine.speak(sourceText, MLTtsEngine.QUEUE_APPEND);
            Toast.makeText(ReadPhotoActivity.this.getApplicationContext(), R.string.read_start, Toast.LENGTH_SHORT).show();
        }
    }
});
Demo
If you have any questions about this process, you can visit the HUAWEI Developer Forum.
Seems quite simple and useful. I will try.
sanghati said:
Hi,
Nice article. Can you use ML kit for scanning product and find that product online to buy.
Thanks
Hi, if you want to scan products that you want to buy, you can use Scan Kit. Refer to the documentation and get help from the HUAWEI Developer Forum.
Preface
Free Translation is a real-time translation app that provides a range of services, including speech recognition, text translation, and text-to-speech (TTS).
Developing an AI app like Free Translation tends to require complex machine learning know-how, but integrating ML Kit makes the development quick and easy.
Use Scenarios
Free Translation is equipped to handle a wide range of user needs, for example translating content at work, assisting during travel in a foreign country, helping users communicate with foreign friends, or supporting learning a new language.
Development Preparations
1. Configure the Huawei Maven repository address.
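The repository configuration is the same as in the OCR + TTS article above; assuming the standard setup, the project-level build.gradle contains:
Code:
buildscript {
    repositories {
        google()
        jcenter()
        maven { url 'http://developer.huawei.com/repo/' }
    }
}

allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'http://developer.huawei.com/repo/' }
    }
}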
2. Add build dependencies for the ML SDK.
Open the build.gradle file in the app directory of your project.
Code:
dependencies {
    // Import the automatic speech recognition (ASR) plug-in.
    implementation 'com.huawei.hms:ml-computer-voice-asr-plugin:2.0.3.300'
    // Import the text translation SDK.
    implementation 'com.huawei.hms:ml-computer-translate:2.0.4.300'
    // Import the text translation algorithm package.
    implementation 'com.huawei.hms:ml-computer-translate-model:2.0.4.300'
    // Import the TTS SDK.
    implementation 'com.huawei.hms:ml-computer-voice-tts:2.0.4.300'
    // Import the bee voice package of on-device TTS.
    implementation 'com.huawei.hms:ml-computer-voice-tts-model-bee:2.0.4.300'
}
For more details, please refer to Preparations.
Open the AndroidManifest.xml file in the main directory, and add the relevant permissions above the <application/> line.
Code:
<uses-permission android:name="android.permission.INTERNET" /> <!-- Accessing the Internet. -->
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" /> <!-- Obtaining the network status. -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" /><!-- Upgrading the algorithm version. -->
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" /><!-- Obtaining the Wi-Fi status. -->
<uses-permission android:name="android.permission.RECORD_AUDIO" /><!-- Recording audio data by using the recorder. -->
Development Procedure
UI Design
Customize the app UI according to your needs, based on the layout file activity_main.xml.
Tap on START RECOGNITION to load the ASR module, which recognizes what the user says.
Tap on SYNTHETIC VOICE to load the TTS module, which reads out the resulting translation.
Function Development
You can use the ASR plug-in to quickly integrate the ASR service.
Code:
public void startAsr(View view) {
    // Use Intent for recognition settings.
    Intent intent = new Intent(this, MLAsrCaptureActivity.class)
            // Set the language that can be recognized to English. If this parameter is not set, English is recognized by default. Languages supported include the following: "zh-CN": Chinese; "en-US": English; "fr-FR": French; "de-DE": German; "it-IT": Italian.
            .putExtra(MLAsrCaptureConstants.LANGUAGE, Constants.ASR_SOURCE[spinnerInput.getSelectedItemPosition()])
            // Set whether to display the recognition result on the speech pickup UI. MLAsrCaptureConstants.FEATURE_ALLINONE: no; MLAsrCaptureConstants.FEATURE_WORDFLUX: yes.
            .putExtra(MLAsrCaptureConstants.FEATURE, MLAsrCaptureConstants.FEATURE_WORDFLUX);
    // 100: request code between the current activity and speech pickup UI activity. You can use this code to obtain the processing result of the speech pickup UI.
    startActivityForResult(intent, 100);
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    String text;
    // 100: request code between the current activity and speech pickup UI activity, which is defined above.
    if (requestCode == 100) {
        switch (resultCode) {
            // MLAsrCaptureConstants.ASR_SUCCESS: Recognition is successful.
            case MLAsrCaptureConstants.ASR_SUCCESS:
                if (data != null) {
                    Bundle bundle = data.getExtras();
                    // Obtain the text information recognized from speech.
                    if (bundle != null && bundle.containsKey(MLAsrCaptureConstants.ASR_RESULT)) {
                        text = bundle.getString(MLAsrCaptureConstants.ASR_RESULT);
                        // Process the recognized text information.
                        textViewInput.setText(text);
                        Translation.run(this, textViewOutput, spinnerInput.getSelectedItemPosition(),
                                spinnerOutput.getSelectedItemPosition(), text);
                    }
                }
                break;
        }
    }
}
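The snippet above handles only the success case. Based on the plug-in's result constants, a failure branch could be added to the same switch; this is a sketch, not code from the original post:
Code:
// Inside the switch on resultCode:
case MLAsrCaptureConstants.ASR_FAILURE:
    if (data != null && data.getExtras() != null
            && data.getExtras().containsKey(MLAsrCaptureConstants.ASR_ERROR_CODE)) {
        int errorCode = data.getExtras().getInt(MLAsrCaptureConstants.ASR_ERROR_CODE);
        Log.e("ASR", "Recognition failed, error code: " + errorCode);
    }
    break;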
Create the Translation class to use the text translation service.
Step 1 Define the public method, which decides whether to use real-time or on-device translation.
Code:
public static void run(Activity activity, TextView textView, int sourcePosition, int targetPosition, String sourceText) {
    Log.d(TAG, Constants.TRANSLATE[sourcePosition] + ", " + Constants.TRANSLATE[targetPosition] + ", " + sourceText);
    if (isOffline) {
        onDeviceTranslation(activity, textView, sourcePosition, targetPosition, sourceText);
    } else {
        realTimeTranslation(textView, sourcePosition, targetPosition, sourceText);
    }
}
Step 2 Call the real-time or on-device translation method.
Code:
private static void realTimeTranslation(final TextView textView, int sourcePosition, final int targetPosition, String sourceText) {
    Log.d(TAG, "realTimeTranslation");
}

private static void onDeviceTranslation(final Activity activity, final TextView textView, final int sourcePosition, final int targetPosition, final String sourceText) {
    Set<String> result = MLTranslateLanguage.syncGetLocalAllLanguages();
    Log.d(TAG, "Languages supported by on-device translation: " + Arrays.toString(result.toArray()));
}
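The bodies of these two methods are omitted in the original post. A minimal sketch of what realTimeTranslation might do with the ML Kit cloud translation API (mapping positions through Constants.TRANSLATE is an assumption carried over from run() above):
Code:
// A sketch, not the original implementation.
private static void realTimeTranslation(final TextView textView, int sourcePosition,
        final int targetPosition, String sourceText) {
    MLRemoteTranslateSetting setting = new MLRemoteTranslateSetting.Factory()
            .setSourceLangCode(Constants.TRANSLATE[sourcePosition])
            .setTargetLangCode(Constants.TRANSLATE[targetPosition])
            .create();
    MLRemoteTranslator translator = MLTranslatorFactory.getInstance().getRemoteTranslator(setting);
    translator.asyncTranslate(sourceText)
            .addOnSuccessListener(translated -> textView.setText(translated))
            .addOnFailureListener(e -> Log.e(TAG, "translate failed", e));
}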
Create the TTS class to use the text-to-speech service.
Step 1 Just as with Step 1 in Translation, define the public method, which decides whether to use real-time or on-device TTS.
Code:
public static void run(Activity activity, int targetPosition, String sourceText) {
    Log.d(TAG, sourceText);
    if (isNotAuto || sourceText.isEmpty()) {
        return;
    }
    if (isOffline) {
        if (0 == targetPosition) {
            // The toast message string is missing in the original post; a placeholder is used here.
            Toast.makeText(activity, "On-device TTS is not available for this language",
                    Toast.LENGTH_SHORT).show();
            return;
        }
        offlineTts(activity, Constants.TTS_TARGET[targetPosition],
                Constants.TTS_TARGET_SPEAKER_OFFLINE[targetPosition], sourceText);
    } else {
        onlineTts(Constants.TTS_TARGET[targetPosition], Constants.TTS_TARGET_SPEAKER[targetPosition], sourceText);
    }
}
Step 2 Call the real-time or on-device TTS method.
Code:
private static void onlineTts(String language, String person, String sourceText) {
    Log.d(TAG, language + ", " + person + ", " + sourceText);
}

private static void offlineTts(final Activity activity, String language, final String person, final String sourceText) {
    // Use customized parameter settings to create a TTS engine.
    // For details about the speaker names, please refer to the Timbres section.
    final MLTtsConfig mlTtsConfig = new MLTtsConfig().setLanguage(language)
            .setPerson(person)
            // Set the TTS mode to on-device mode. The default mode is real-time mode.
            .setSynthesizeMode(MLTtsConstants.TTS_OFFLINE_MODE);
}
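The original snippet stops after building the config. Presumably the engine is then created and asked to speak; a sketch of the missing tail (an assumption, not the original code):
Code:
MLTtsEngine mlTtsEngine = new MLTtsEngine(mlTtsConfig);
// QUEUE_APPEND appends the text to the current playback queue.
mlTtsEngine.speak(sourceText, MLTtsEngine.QUEUE_APPEND);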
Final Effects
More Information
To join in on developer discussion forums, go to Reddit.
To download the demo app and sample code, go to GitHub.
For solutions to integration-related issues, go to Stack Overflow.
Introduction
When you need to run a time-consuming operation, for example a download task, without blocking the current thread, try out the EventHandler to achieve efficient communication between different threads in HarmonyOS. You can use an EventRunner to create another thread to run time-consuming tasks, so that all types of tasks run more smoothly and efficiently across threads. For example, you can use an EventHandler in the main thread to create a child thread that downloads an image; the EventHandler notifies the main thread when the image has been downloaded in the child thread, and the main thread can then update the UI.
{
"lightbox_close": "Close",
"lightbox_next": "Next",
"lightbox_previous": "Previous",
"lightbox_error": "The requested content cannot be loaded. Please try again later.",
"lightbox_start_slideshow": "Start slideshow",
"lightbox_stop_slideshow": "Stop slideshow",
"lightbox_full_screen": "Full screen",
"lightbox_thumbnails": "Thumbnails",
"lightbox_download": "Download",
"lightbox_share": "Share",
"lightbox_zoom": "Zoom",
"lightbox_new_window": "New window",
"lightbox_toggle_sidebar": "Toggle sidebar"
}
Limitations and Constraints
During inter-thread communication, an EventHandler can be bound only to threads created by an EventRunner. When creating an EventRunner instance, ensure that the creation is successful: an EventHandler can be bound to the threads created by an EventRunner instance only when that instance is not null.
An EventHandler can be bound to only one EventRunner at a time, whereas an EventRunner can be bound to multiple EventHandler instances at the same time.
When to Use
Development Scenarios of EventHandler
The EventHandler dispatches InnerEvent instances or Runnable tasks to other threads for processing, including the following situations:
An InnerEvent instance needs to be dispatched to a new thread and to be processed based on the priority and delay time. The priority of an InnerEvent can be IMMEDIATE, HIGH, LOW, or IDLE, and the delay time can be specified.
A Runnable task needs to be dispatched to a new thread and to be processed based on the priority and delay time. The priority of a Runnable task can be IMMEDIATE, HIGH, LOW, or IDLE, and the delay time can be specified.
An event needs to be dispatched from a new thread to the original thread for processing.
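For example, dispatching an event with a payload, a one-second delay, and HIGH priority could look like the sketch below, where handler is an EventHandler bound to an EventRunner and CODE_DOWNLOAD_FILE1 is an event ID you define:
Code:
// eventId, a long param, and an arbitrary payload object
InnerEvent event = InnerEvent.get(CODE_DOWNLOAD_FILE1, 0, "https://example.com/image.png");
// Deliver the event after 1000 ms with HIGH priority.
handler.sendEvent(event, 1000, EventHandler.Priority.HIGH);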
Development Overview
You need to install the DevEco Studio IDE, and I assume that you have prior knowledge of HarmonyOS and Java.
Hardware Requirements
A computer (desktop or laptop) running Windows 10.
A Huawei phone (with the USB cable), which is used for debugging.
Software Requirements
Java JDK installation package.
DevEco Studio installed.
HMS Core (APK) 4.X or later.
Follow the steps below.
1. Create a HarmonyOS project.
Open DevEco Studio.
Click New Project and select a project template.
Select the ability template and click Next, as shown in the image below.
Enter the project and package names and click Finish.
2. Once you have created the project, DevEco Studio will automatically sync it with the Gradle files. Wait until the synchronization is successful.
3. Update permissions and the app version in the config.json file as per your requirements; otherwise, retain the default values.
4. Create a new Ability as follows.
5. Development Procedure.
Add the below code in MainAbilitySlice.java
Code:
package com.hms.interthreadcom.slice;

import com.hms.interthreadcom.ResourceTable;
import ohos.aafwk.ability.AbilitySlice;
import ohos.aafwk.content.Intent;
import ohos.agp.components.Button;
import ohos.agp.components.Text;
import ohos.eventhandler.EventHandler;
import ohos.eventhandler.EventRunner;
import ohos.eventhandler.InnerEvent;
import ohos.hiviewdfx.HiLog;
import ohos.hiviewdfx.HiLogLabel;
import java.util.stream.IntStream;

public class MainAbilitySlice extends AbilitySlice {
    public static final int CODE_DOWNLOAD_FILE1 = 1;
    public static final int CODE_DOWNLOAD_FILE2 = 2;
    public static final int CODE_DOWNLOAD_FILE3 = 3;
    HiLogLabel LABEL_LOG;
    Text handlerTV;
    Button interThreadBtn, bankingSystemBtn, delayInterThreadBtn;
    EventRunner runnerA;
    MyEventHandler handler;
    MyEventHandler handler2;
    Customer customer;

    // 1. Create a class that inherits from EventHandler.
    @Override
    public void onStart(Intent intent) {
        super.onStart(intent);
        super.setUIContent(ResourceTable.Layout_ability_main);
        handlerTV = (Text) findComponentById(ResourceTable.Id_text_helloworld);
        interThreadBtn = (Button) findComponentById(ResourceTable.Id_inter_thread);
        delayInterThreadBtn = (Button) findComponentById(ResourceTable.Id_delay_inter_thread);
        bankingSystemBtn = (Button) findComponentById(ResourceTable.Id_inter_thread_banking_system);
        LABEL_LOG = new HiLogLabel(HiLog.LOG_APP, 0x00201, "MY_TAG");
        interThreadBtn.setClickedListener(component -> executeTask());
        delayInterThreadBtn.setClickedListener(component -> executeTaskWithDelay());
        bankingSystemBtn.setClickedListener(component -> interThreadInBankingSystem());
        // Thread A:
        runnerA = EventRunner.create("downloadRunner");
        handler = new MyEventHandler(runnerA);
        handler2 = new MyEventHandler(runnerA);
    }

    private void interThreadInBankingSystem() {
        customer = new Customer();
        new Thread(() -> customer.withdraw(15000)).start();
        new Thread(() -> customer.deposit(10000)).start();
    }

    private void executeTaskWithDelay() {
        handler.sendEvent(CODE_DOWNLOAD_FILE1, 2, EventHandler.Priority.HIGH);
        handler.sendEvent(CODE_DOWNLOAD_FILE2);
        handler.sendEvent(CODE_DOWNLOAD_FILE3);
    }

    private void executeTask() {
        // 3. Send events to thread A.
        handler.sendEvent(CODE_DOWNLOAD_FILE1);
        handler.sendEvent(CODE_DOWNLOAD_FILE2);
        handler.sendEvent(CODE_DOWNLOAD_FILE3);
    }

    @Override
    public void onActive() {
        super.onActive();
    }

    @Override
    public void onForeground(Intent intent) {
        super.onForeground(intent);
    }

    public class MyEventHandler extends EventHandler {
        MyEventHandler(EventRunner runner) {
            super(runner);
        }

        @Override
        public void processEvent(InnerEvent event) {
            super.processEvent(event);
            if (event == null) {
                return;
            }
            int eventId = event.eventId;
            switch (eventId) {
                case CODE_DOWNLOAD_FILE1: {
                    handlerTV.setText("Please wait, image downloading...");
                    HiLog.info(LABEL_LOG, "The Image 1 is downloading, please wait");
                    break;
                }
                case CODE_DOWNLOAD_FILE2: {
                    IntStream.rangeClosed(0, 100).forEach(i -> HiLog.info(LABEL_LOG, "The Image " + i + " is downloading, please wait"));
                    break;
                }
                case CODE_DOWNLOAD_FILE3: {
                    handlerTV.setText("Image downloaded successfully");
                    HiLog.info(LABEL_LOG, "The Image done");
                    break;
                }
                default:
                    break;
            }
        }
    }

    public class Customer {
        int amount = 10000;

        synchronized void withdraw(int amount) {
            System.out.println("going to withdraw...");
            if (this.amount < amount) {
                System.out.println("Less balance; waiting for deposit...");
                try {
                    wait();
                } catch (Exception e) {
                }
            }
            this.amount -= amount;
            System.out.println("withdraw completed...");
        }

        synchronized void deposit(int amount) {
            System.out.println("going to deposit...");
            this.amount += amount;
            System.out.println("deposit completed...");
            notify();
        }
    }
}
Add the below code in ability_main.xml
Code:
<?xml version="1.0" encoding="utf-8"?>
<DirectionalLayout
    xmlns:ohos="http://schemas.huawei.com/res/ohos"
    ohos:height="match_parent"
    ohos:width="match_parent"
    ohos:alignment="center"
    ohos:orientation="vertical">

    <Button
        ohos:id="$+id:inter_thread"
        ohos:height="match_content"
        ohos:width="match_content"
        ohos:text="$string:mainability_interthread"
        ohos:text_size="22fp"
        ohos:padding="10vp"
        ohos:margin="20vp"
        ohos:background_element="$graphic:background_button"/>

    <Button
        ohos:id="$+id:delay_inter_thread"
        ohos:height="match_content"
        ohos:width="match_content"
        ohos:text="$string:mainability_delay_interthread"
        ohos:text_size="22fp"
        ohos:padding="10vp"
        ohos:margin="20vp"
        ohos:background_element="$graphic:background_button"/>

    <Button
        ohos:id="$+id:inter_thread_banking_system"
        ohos:height="match_content"
        ohos:width="match_content"
        ohos:text="$string:mainability_banking_system"
        ohos:text_size="22fp"
        ohos:padding="10vp"
        ohos:margin="20vp"
        ohos:background_element="$graphic:background_button"/>

    <Text
        ohos:id="$+id:text_helloworld"
        ohos:height="match_content"
        ohos:width="match_content"
        ohos:background_element="$graphic:background_ability_main"
        ohos:layout_alignment="horizontal_center"
        ohos:text_size="40vp"/>

</DirectionalLayout>
6. To build the app and run it on a device, choose Build > Generate Key and CSR, then Build > Build Hap(s)/APP(s), or run the app directly on a connected device.
Result
1. Click the 'UI Inter Thread Communication' button. It executes a time-consuming image download task in a separate thread without blocking the current thread.
Pros: You can perform a time-consuming operation without blocking the current thread.
Cons: If you execute a time-consuming operation without dispatching it to another thread, the current thread will be blocked.
2. Click the 'Banking System' button. It demonstrates communication between two threads.
Pros: Two threads can communicate, and only a single thread accesses the shared resource at a time.
Cons: It increases the waiting time of threads and can create performance problems.
Tips and Tricks
Always use the latest version of DevEco Studio.
Use the HarmonyOS device simulator from the HVD section.
Conclusion
In this article, we have learned about inter-thread communication in HarmonyOS. EventHandler is mainly used to run time-consuming tasks without blocking the current thread; try it out to achieve efficient communication between different threads in HarmonyOS.
Thanks for reading the article; please like it and comment with your queries or suggestions.
References
Harmony OS: https://www.harmonyos.com/en/develop/
Harmony OS Thread Management:
https://developer.harmonyos.com/en/docs/documentation/doc-guides/inter-thread-guidelines-0000000000038955
Original Source
very good sharing, thanks
Overview
In this article, I will create a Doctor Consult demo app along with the integration of Huawei ID, Crash, Analytics, and Identity Kit, which together provide an easy interface to consult with a doctor. Users can choose a specific doctor, and the app obtains the user's details through Huawei User Address.
By reading this article, you'll get an overview of HMS Core Identity, Analytics, Crash and Account Kit, including its functions, open capabilities and business value.
HMS Core Identity Service Introduction
HMS Core Identity provides an easy interface to add, edit, or delete user details, and enables users to authorize apps to access their addresses through a single tap on the screen. That is, the app can obtain user addresses in a more convenient way.
Prerequisite
Huawei Phone EMUI 3.0 or later.
Non-Huawei phones Android 4.4 or later (API level 19 or higher).
Android Studio
AppGallery Account.
App Gallery Integration process
Sign In and Create or Choose a project on AppGallery Connect portal.
Navigate to Project settings and download the configuration file.
Navigate to General Information, and then provide Data Storage location.
App Development
Create A New Project.
Configure Project Gradle.
Configure App Gradle.
Configure AndroidManifest.xml.
Create Activity class with XML UI.
AppModule:
Code:
package com.hms.doctorconsultdemo.di;

import android.app.Application;
import javax.inject.Singleton;
import dagger.Module;
import dagger.Provides;

@Module
public class AppModule {
    private Application mApplication;

    public AppModule(Application mApplication) {
        this.mApplication = mApplication;
    }

    @Provides
    @Singleton
    Application provideApplication() {
        return mApplication;
    }
}
ApiModule:
Code:
package com.hms.doctorconsultdemo.di;

import android.app.Application;
import com.google.gson.FieldNamingPolicy;
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
import javax.inject.Singleton;
import dagger.Module;
import dagger.Provides;
import okhttp3.Cache;
import okhttp3.OkHttpClient;
import retrofit2.Retrofit;
import retrofit2.converter.gson.GsonConverterFactory;

@Module
public class ApiModule {
    String mBaseUrl;

    public ApiModule(String mBaseUrl) {
        this.mBaseUrl = mBaseUrl;
    }

    @Provides
    @Singleton
    Cache provideHttpCache(Application application) {
        int cacheSize = 10 * 1024 * 1024;
        Cache cache = new Cache(application.getCacheDir(), cacheSize);
        return cache;
    }

    @Provides
    @Singleton
    Gson provideGson() {
        GsonBuilder gsonBuilder = new GsonBuilder();
        gsonBuilder.setFieldNamingPolicy(FieldNamingPolicy.LOWER_CASE_WITH_UNDERSCORES);
        return gsonBuilder.create();
    }

    @Provides
    @Singleton
    OkHttpClient provideOkhttpClient(Cache cache) {
        OkHttpClient.Builder client = new OkHttpClient.Builder();
        client.cache(cache);
        return client.build();
    }

    @Provides
    @Singleton
    Retrofit provideRetrofit(Gson gson, OkHttpClient okHttpClient) {
        return new Retrofit.Builder()
                .addConverterFactory(GsonConverterFactory.create(gson))
                .baseUrl(mBaseUrl)
                .client(okHttpClient)
                .build();
    }
}
ApiComponent:
Code:
package com.hms.doctorconsultdemo.di;

import com.hms.doctorconsultdemo.MainActivity;
import javax.inject.Singleton;
import dagger.Component;

@Singleton
@Component(modules = {AppModule.class, ApiModule.class})
public interface ApiComponent {
    void inject(MainActivity activity);
}
MyApplication:
Code:
package com.hms.doctorconsultdemo;

import android.app.Application;
import com.hms.doctorconsultdemo.di.ApiComponent;
import com.hms.doctorconsultdemo.di.ApiModule;
import com.hms.doctorconsultdemo.di.AppModule;
import com.hms.doctorconsultdemo.di.DaggerApiComponent;

public class MyApplication extends Application {
    private ApiComponent mApiComponent;

    @Override
    public void onCreate() {
        super.onCreate();
        mApiComponent = DaggerApiComponent.builder()
                .appModule(new AppModule(this))
                .apiModule(new ApiModule("REST_API_URL"))
                .build();
    }

    public ApiComponent getNetComponent() {
        return mApiComponent;
    }
}
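The call that performs the injection is not shown above; a sketch of how MainActivity could obtain its dependencies through the component, matching the inject(MainActivity) method declared in ApiComponent:
Code:
// Inside MainActivity.onCreate(), after setContentView (a sketch; the original wiring is not shown):
((MyApplication) getApplication()).getNetComponent().inject(this);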
MainActivity:
Code:
package com.hms.doctorconsultdemo;

import android.content.Intent;
import android.os.Bundle;
import android.util.Log;
import android.view.View;
import android.widget.Button;
import androidx.appcompat.app.AppCompatActivity;
import com.huawei.hmf.tasks.Task;
import com.huawei.hms.common.ApiException;
import com.huawei.hms.support.hwid.HuaweiIdAuthManager;
import com.huawei.hms.support.hwid.request.HuaweiIdAuthParams;
import com.huawei.hms.support.hwid.request.HuaweiIdAuthParamsHelper;
import com.huawei.hms.support.hwid.result.AuthHuaweiId;
import com.huawei.hms.support.hwid.service.HuaweiIdAuthService;

public class MainActivity extends AppCompatActivity implements View.OnClickListener {
    private static final int REQUEST_SIGN_IN_LOGIN = 1002;
    private static String TAG = MainActivity.class.getName();
    private HuaweiIdAuthService mAuthManager;
    private HuaweiIdAuthParams mAuthParam;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        Button view = findViewById(R.id.btn_sign);
        view.setOnClickListener(this);
    }

    private void signIn() {
        mAuthParam = new HuaweiIdAuthParamsHelper(HuaweiIdAuthParams.DEFAULT_AUTH_REQUEST_PARAM)
                .setIdToken()
                .setAccessToken()
                .createParams();
        mAuthManager = HuaweiIdAuthManager.getService(this, mAuthParam);
        startActivityForResult(mAuthManager.getSignInIntent(), REQUEST_SIGN_IN_LOGIN);
    }

    @Override
    public void onClick(View v) {
        switch (v.getId()) {
            case R.id.btn_sign:
                signIn();
                break;
        }
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == REQUEST_SIGN_IN_LOGIN) {
            Task<AuthHuaweiId> authHuaweiIdTask = HuaweiIdAuthManager.parseAuthResultFromIntent(data);
            if (authHuaweiIdTask.isSuccessful()) {
                AuthHuaweiId huaweiAccount = authHuaweiIdTask.getResult();
                Log.i(TAG, huaweiAccount.getDisplayName() + " signIn success ");
                Log.i(TAG, "AccessToken: " + huaweiAccount.getAccessToken());
                Intent intent = new Intent(this, HomeActivity.class);
                intent.putExtra("user", huaweiAccount.getDisplayName());
                startActivity(intent);
                this.finish();
            } else {
                Log.i(TAG, "signIn failed: " + ((ApiException) authHuaweiIdTask.getException()).getStatusCode());
            }
        }
    }
}
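The Identity Kit call that actually obtains the user's address is not shown above. A minimal sketch based on the HMS Identity SDK classes (the request code, error handling, and method placement are my own assumptions):
Code:
private static final int GET_ADDRESS = 1003; // hypothetical request code

private void getUserAddress() {
    UserAddressRequest req = new UserAddressRequest();
    Task<GetUserAddressResult> task = Address.getAddressClient(this).getUserAddress(req);
    task.addOnSuccessListener(result -> {
        try {
            // Opens the address-selection page provided by Identity Kit.
            result.getStatus().startResolutionForResult(this, GET_ADDRESS);
        } catch (IntentSender.SendIntentException e) {
            Log.e(TAG, "startResolutionForResult failed", e);
        }
    }).addOnFailureListener(e -> Log.e(TAG, "getUserAddress failed", e));
}

// In onActivityResult, the selected address can then be parsed:
// UserAddress userAddress = UserAddress.parseIntent(data);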
App Build Result
Tips and Tricks
Identity Kit displays the HUAWEI ID registration or sign-in page first. The user can use the functions provided by Identity Kit only after signing in using a registered HUAWEI ID.
A maximum of 10 user addresses are allowed.
If HMS Core (APK) is installed on a mobile phone, check the version. If the version is earlier than 4.0.0, upgrade it to 4.0.0 or later. If the version is 4.0.0 or later, you can call the HMS Core Identity SDK to use the capabilities.
Conclusion
In this article, we have learned how to integrate HMS Core Identity in an Android application. After reading this article, you can easily implement the Huawei User Address APIs provided by HMS Core Identity, so that users can consult a doctor using their Huawei user address.
Thanks for reading this article. Be sure to like and comment on this article if you found it helpful. It means a lot to me.
References
HMS Identity Docs: https://developer.huawei.com/consumer/en/hms/huawei-identitykit/
Identity Kit - https://developer.huawei.com/consumer/en/training/course/video/101582966949059136
Overview
In this article, I will create a Doctor Consult demo app along with the integration of the ML Kit Product Visual Search API, which provides an easy interface to consult with a doctor. Users can scan their prescriptions using the Product Visual Search API.
Previous Articles Link:
https://forums.developer.huawei.com/forumPortal/en/topic/0201829733289720014?fid=0101187876626530001
https://forums.developer.huawei.com/forumPortal/en/topic/0201817617825540005?fid=0101187876626530001
https://forums.developer.huawei.com/forumPortal/en/topic/0201811543541800017?fid=0101187876626530001
HMS Core ML Service Introduction
ML Kit allows your apps to easily leverage Huawei's long-term proven expertise in machine learning to support diverse artificial intelligence (AI) applications throughout a wide range of industries. Thanks to Huawei's technology accumulation, ML Kit provides diversified leading machine learning capabilities that are easy to use, helping you develop various AI apps.
Product Visual Search: This service searches for the same or similar products in the pre-established product image library based on a product photo taken by a user, and returns the IDs of those products and related information. In addition, to better manage products in real time, this service supports offline product import, online product adding, deletion, modification, and query, and product distribution.
Prerequisite
Huawei Phone EMUI 3.0 or later.
Non-Huawei phones Android 4.4 or later (API level 19 or higher).
Android Studio.
AppGallery Account.
App Gallery Integration process
Sign In and Create or Choose a project on AppGallery Connect portal.
Navigate to Project settings and download the configuration file.
Navigate to General Information, and then provide Data Storage location.
App Development
Create A New Project.
Configure Project Gradle.
Configure App Gradle.
Configure AndroidManifest.xml.
Create Activity class with XML UI.
Java:
package com.hms.doctorconsultdemo.ml;

import android.Manifest;
import android.content.Intent;
import android.content.pm.PackageManager;
import android.graphics.Bitmap;
import android.os.Bundle;
import android.provider.MediaStore;
import android.util.Log;
import androidx.annotation.Nullable;
import androidx.core.app.ActivityCompat;
import com.huawei.hmf.tasks.Task;
import com.huawei.hms.mlsdk.MLAnalyzerFactory;
import com.huawei.hms.mlsdk.common.MLException;
import com.huawei.hms.mlsdk.common.MLFrame;
import com.huawei.hms.mlsdk.productvisionsearch.MLProductVisionSearch;
import com.huawei.hms.mlsdk.productvisionsearch.MLVisionSearchProduct;
import com.huawei.hms.mlsdk.productvisionsearch.MLVisionSearchProductImage;
import com.huawei.hms.mlsdk.productvisionsearch.cloud.MLRemoteProductVisionSearchAnalyzer;
import com.huawei.hms.mlsdk.productvisionsearch.cloud.MLRemoteProductVisionSearchAnalyzerSetting;
import java.util.ArrayList;
import java.util.List;

public class ScanMainActivity extends BaseActivity {
    private static final String TAG = ScanMainActivity.class.getName();
    private static final int CAMERA_PERMISSION_CODE = 100;
    MLRemoteProductVisionSearchAnalyzer analyzer;

    @Override
    protected void onCreate(@Nullable Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        init();
        initializeProductVisionSearch();
    }

    private void init() {
        if (!(ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
                == PackageManager.PERMISSION_GRANTED)) {
            this.requestCameraPermission();
        }
        initializeProductVisionSearch();
        Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
        startActivityForResult(intent, 101);
    }

    private void requestCameraPermission() {
        final String[] permissions = new String[]{Manifest.permission.CAMERA};
        if (!ActivityCompat.shouldShowRequestPermissionRationale(this, Manifest.permission.CAMERA)) {
            ActivityCompat.requestPermissions(this, permissions, CAMERA_PERMISSION_CODE);
        }
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == 101) {
            if (resultCode == RESULT_OK) {
                Bitmap bitmap = (Bitmap) data.getExtras().get("data");
                if (bitmap != null) {
                    MLFrame mlFrame = new MLFrame.Creator().setBitmap(bitmap).create();
                    mlImageDetection(mlFrame);
                }
            }
        }
    }

    private void mlImageDetection(MLFrame mlFrame) {
        Task<List<MLProductVisionSearch>> task = analyzer.asyncAnalyseFrame(mlFrame);
        task.addOnSuccessListener(products -> {
            Log.d(TAG, "success");
            displaySuccess(products);
        }).addOnFailureListener(e -> {
            try {
                MLException mlException = (MLException) e;
                int errorCode = mlException.getErrCode();
                String errorMessage = mlException.getMessage();
                Log.e(TAG, "error code: " + errorCode + ", message: " + errorMessage);
            } catch (Exception error) {
                // Handle the conversion error.
            }
        });
    }

    private void initializeProductVisionSearch() {
        MLRemoteProductVisionSearchAnalyzerSetting settings = new MLRemoteProductVisionSearchAnalyzerSetting.Factory()
                .setLargestNumOfReturns(16)
                .setRegion(MLRemoteProductVisionSearchAnalyzerSetting.REGION_DR_CHINA)
                .create();
        analyzer = MLAnalyzerFactory.getInstance().getRemoteProductVisionSearchAnalyzer(settings);
    }

    private void displaySuccess(List<MLProductVisionSearch> productVisionSearchList) {
        List<MLVisionSearchProductImage> productImageList = new ArrayList<>();
        String productType = "";
        for (MLProductVisionSearch productVisionSearch : productVisionSearchList) {
            Log.d(TAG, "type: " + productVisionSearch.getType());
            productType = productVisionSearch.getType();
            for (MLVisionSearchProduct product : productVisionSearch.getProductList()) {
                productImageList.addAll(product.getImageList());
                Log.d(TAG, "custom content: " + product.getCustomContent());
            }
        }
        StringBuffer buffer = new StringBuffer();
        for (MLVisionSearchProductImage productImage : productImageList) {
            String str = "ProductID: " + productImage.getProductId() + "\nImageID: " + productImage.getImageId() + "\nPossibility: " + productImage.getPossibility();
            buffer.append(str);
            buffer.append("\n");
        }
        Log.d(TAG, "display success: " + buffer.toString());
        ScanDataActivity.start(this, productImageList);
    }
}
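ScanDataActivity.start(...) is not shown in the article. A hypothetical launcher that passes the product IDs to the results screen might look like this (the extra key and the screen's contract are assumptions):
Code:
// A sketch; the original ScanDataActivity is not shown.
public static void start(Context context, List<MLVisionSearchProductImage> images) {
    ArrayList<String> productIds = new ArrayList<>();
    for (MLVisionSearchProductImage image : images) {
        productIds.add(image.getProductId());
    }
    Intent intent = new Intent(context, ScanDataActivity.class);
    intent.putStringArrayListExtra("productIds", productIds);
    context.startActivity(intent);
}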
App Build Result
Tips and Tricks
Images in PNG, JPG, JPEG, and BMP formats are supported. GIF images are not supported.
ML Kit complies with GDPR requirements for data processing.
Face detection requires Android phones with the Arm architecture.
Conclusion
In this article, we have learned how to integrate the HMS ML Kit Product Visual Search API in an Android application. After reading this article, you can easily implement the Product Visual Search API in your own app, so that users can scan their prescriptions with it.
Thanks for reading this article. Be sure to like and comment on this article if you found it helpful. It means a lot to me.
References
HMS ML Docs:
https://developer.huawei.com/consum...-Guides/service-introduction-0000001050040017
HMS Training Videos -
https://developer.huawei.com/consumer/en/training/