Preface
Free Translation is a real-time translation app that provides a range of services, including speech recognition, text translation, and text-to-speech (TTS).
Developing an AI app like Free Translation tends to require complex machine learning know-how, but integrating ML Kit makes the development quick and easy.
Use Scenarios
Free Translation handles a wide range of user needs, for example translating content at work, assisting during travel in a foreign country, communicating with friends who speak another language, or learning a new one.
Development Preparations
1. Configure the Huawei Maven repository address.
2. Add build dependencies for the ML SDK.
Open the build.gradle file in the app directory of your project.
Code:
dependencies {
// Import the automatic speech recognition (ASR) plug-in.
implementation 'com.huawei.hms:ml-computer-voice-asr-plugin:2.0.3.300'
// Import the text translation SDK.
implementation 'com.huawei.hms:ml-computer-translate:2.0.4.300'
// Import the text translation algorithm package.
implementation 'com.huawei.hms:ml-computer-translate-model:2.0.4.300'
// Import the TTS SDK.
implementation 'com.huawei.hms:ml-computer-voice-tts:2.0.4.300'
// Import the bee voice package of on-device TTS.
implementation 'com.huawei.hms:ml-computer-voice-tts-model-bee:2.0.4.300'
}
For more details, please refer to Preparations.
Open the AndroidManifest.xml file in the main directory, and add the relevant permissions above the <application/> line.
Code:
<uses-permission android:name="android.permission.INTERNET" /> <!-- Accessing the Internet. -->
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" /> <!-- Obtaining the network status. -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" /><!-- Upgrading the algorithm version. -->
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" /><!-- Obtaining the Wi-Fi status. -->
<uses-permission android:name="android.permission.RECORD_AUDIO" /><!-- Recording audio data by using the recorder. -->
Development Procedure
UI Design
Customize the app UI to your needs, using the layout file activity_main.xml as the basis.
Tap on START RECOGNITION to load the ASR module, which recognizes what the user says.
Tap on SYNTHETIC VOICE to load the TTS module, which reads out the resulting translation.
Function Development
Integrate the ASR plug-in to quickly add the ASR service.
Code:
public void startAsr(View view) {
// Use Intent for recognition settings.
Intent intent = new Intent(this, MLAsrCaptureActivity.class)
// Set the language that can be recognized to English. If this parameter is not set, English is recognized by default. Languages supported include the following: "zh-CN": Chinese; "en-US": English; "fr-FR": French; "de-DE": German; "it-IT": Italian.
.putExtra(MLAsrCaptureConstants.LANGUAGE, Constants.ASR_SOURCE[spinnerInput.getSelectedItemPosition()])
// Set whether to display the recognition result on the speech pickup UI. MLAsrCaptureConstants.FEATURE_ALLINONE: no; MLAsrCaptureConstants.FEATURE_WORDFLUX: yes.
.putExtra(MLAsrCaptureConstants.FEATURE, MLAsrCaptureConstants.FEATURE_WORDFLUX);
// 100: request code between the current activity and speech pickup UI activity. You can use this code to obtain the processing result of the speech pickup UI.
startActivityForResult(intent, 100);
}
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
super.onActivityResult(requestCode, resultCode, data);
String text;
// 100: request code between the current activity and speech pickup UI activity, which is defined above.
if (requestCode == 100) {
switch (resultCode) {
// MLAsrCaptureConstants.ASR_SUCCESS: Recognition is successful.
case MLAsrCaptureConstants.ASR_SUCCESS:
if (data != null) {
Bundle bundle = data.getExtras();
// Obtain the text information recognized from speech.
if (bundle != null && bundle.containsKey(MLAsrCaptureConstants.ASR_RESULT)) {
text = bundle.getString(MLAsrCaptureConstants.ASR_RESULT);
// Process the recognized text information.
textViewInput.setText(text);
Translation.run(this, textViewOutput, spinnerInput.getSelectedItemPosition(),
spinnerOutput.getSelectedItemPosition(), text);
}
}
break;
}
}
}
Create the Translation class to use the text translation service.
Step 1 Define the public method, which decides whether to use real-time or on-device translation.
Code:
public static void run(Activity activity, TextView textView, int sourcePosition, int targetPosition, String sourceText) {
Log.d(TAG, Constants.TRANSLATE[sourcePosition] + ", " + Constants.TRANSLATE[targetPosition] + ", " + sourceText);
if (isOffline) {
onDeviceTranslation(activity, textView, sourcePosition, targetPosition, sourceText);
} else {
realTimeTranslation(textView, sourcePosition, targetPosition, sourceText);
}
}
Step 2 Call the real-time or on-device translation method.
Code:
private static void realTimeTranslation(final TextView textView, int sourcePosition, final int targetPosition, String sourceText) {
Log.d(TAG, "realTimeTranslation");
}
private static void onDeviceTranslation(final Activity activity, final TextView textView, final int sourcePosition, final int targetPosition, final String sourceText) {
Set<String> result = MLTranslateLanguage.syncGetLocalAllLanguages();
Log.d(TAG, "Languages supported by on-device translation: " +Arrays.toString(result.toArray()));
}
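The two methods above are only stubs. Below is a minimal sketch of how realTimeTranslation() could be completed with the ML Kit cloud translator; it assumes Constants.TRANSLATE holds ML Kit language codes (for example "en" and "zh") and simply displays the result in the passed TextView, so adapt it to your own app logic. onDeviceTranslation() would work the same way with MLLocalTranslateSetting and getLocalTranslator(), after making sure the required language model has been downloaded.
Code:
private static void realTimeTranslation(final TextView textView, int sourcePosition, final int targetPosition, String sourceText) {
    Log.d(TAG, "realTimeTranslation");
    // Sketch only: build a cloud translator from the selected source and target languages.
    MLRemoteTranslateSetting setting = new MLRemoteTranslateSetting.Factory()
            .setSourceLangCode(Constants.TRANSLATE[sourcePosition])
            .setTargetLangCode(Constants.TRANSLATE[targetPosition])
            .create();
    MLRemoteTranslator translator = MLTranslatorFactory.getInstance().getRemoteTranslator(setting);
    // Translate asynchronously and show the result; errors are only logged here.
    translator.asyncTranslate(sourceText)
            .addOnSuccessListener(translated -> textView.setText(translated))
            .addOnFailureListener(e -> Log.e(TAG, "Real-time translation failed", e));
}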
Create the TTS class to use the text-to-speech service.
Step 1 Just as with Step 1 in Translation, define the public method, which decides whether to use real-time or on-device TTS.
Code:
public static void run(Activity activity, int targetPosition, String sourceText) {
Log.d(TAG, sourceText);
if (isNotAuto || sourceText.isEmpty()) {
return;
}
if (isOffline) {
if (0 == targetPosition) {
// The message resource was omitted in the original snippet; R.string.not_supported is a hypothetical placeholder.
Toast.makeText(activity, R.string.not_supported,
Toast.LENGTH_SHORT).show();
return;
}
offlineTts(activity, Constants.TTS_TARGET[targetPosition],
Constants.TTS_TARGET_SPEAKER_OFFLINE[targetPosition], sourceText);
} else {
onlineTts(Constants.TTS_TARGET[targetPosition], Constants.TTS_TARGET_SPEAKER[targetPosition], sourceText);
}
}
Step 2 Call the real-time or on-device TTS method.
Code:
private static void onlineTts(String language, String person, String sourceText) {
Log.d(TAG, language + ", " + person + ", " + sourceText);
}
private static void offlineTts(final Activity activity, String language, final String person, final String sourceText) {
// Use customized parameter settings to create a TTS engine.
// For details about the speaker names, please refer to the Timbres section.
final MLTtsConfig mlTtsConfig = new MLTtsConfig().setLanguage(language)
.setPerson(person)
// Set the TTS mode to on-device mode. The default mode is real-time mode.
.setSynthesizeMode(MLTtsConstants.TTS_OFFLINE_MODE);
}
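Neither TTS stub actually speaks yet. The missing piece is creating an MLTtsEngine from the prepared config and calling speak(). A minimal sketch, with a hypothetical helper name, is shown below; for onlineTts(), a config that does not call setSynthesizeMode() keeps the default real-time mode.
Code:
// Hypothetical helper: create an engine from the prepared config and speak the text.
private static void speakWithConfig(MLTtsConfig config, String sourceText) {
    MLTtsEngine ttsEngine = new MLTtsEngine(config);
    // QUEUE_APPEND queues this text after any utterance that is already playing.
    ttsEngine.speak(sourceText, MLTtsEngine.QUEUE_APPEND);
}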
Final Effects
More Information
To join in on developer discussion forums, go to Reddit.
To download the demo app and sample code, go to GitHub.
For solutions to integration-related issues, go to Stack Overflow.
For more articles like this, you can visit the HUAWEI Developer Forum and Medium.
Background
Most of us did dictation exercises when we first learned a language. For primary school students today, an important piece of homework is dictating the new words from each lesson, and many parents are familiar with the routine. Reading the words aloud is simple enough, but parents' time is precious. There are already many dictation recordings on the market, in which narrators read out the words from the language textbooks for parents to download. However, such recordings are not flexible: if the teacher assigns a few extra words that are not part of the textbook exercises, the recordings no longer meet the needs of parents and children. This document describes how to use the general text recognition and speech synthesis capabilities of ML Kit to build an app that reads text aloud automatically. You only need to photograph the dictation words or text, and the text in the photo is played back automatically; the timbre and tone of the voice can be adjusted.
Development Preparations
Open the project-level build.gradle file.
Choose allprojects > repositories and configure the Maven repository address of the HMS SDK.
Code:
allprojects {
repositories {
google()
jcenter()
maven {url 'http://developer.huawei.com/repo/'}
}
}
Configure the Maven repository address of the HMS SDK under buildscript > repositories.
Code:
buildscript {
repositories {
google()
jcenter()
maven {url 'http://developer.huawei.com/repo/'}
}
}
Choose buildscript > dependencies and configure the AGC plug-in.
Code:
dependencies {
classpath 'com.huawei.agconnect:agcp:1.2.1.301'
}
Adding Compilation Dependencies
Open the application-level build.gradle file.
SDK integration
Code:
dependencies{
implementation 'com.huawei.hms:ml-computer-voice-tts:1.0.4.300'
implementation 'com.huawei.hms:ml-computer-vision-ocr:1.0.4.300'
implementation 'com.huawei.hms:ml-computer-vision-ocr-cn-model:1.0.4.300'
}
Add the AGC plug-in to the file header.
Code:
apply plugin: 'com.huawei.agconnect'
Specify permissions and features: Declare them in AndroidManifest.xml.
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
Key Development Steps
There are two main functions: one recognizes the homework text, and the other reads it aloud. OCR plus TTS is used for the reading: after you take a photo, tap the play button to have the recognized text read out.
1. Dynamic permission application
Code:
private static final int PERMISSION_REQUESTS = 1;
@Override
public void onCreate(Bundle savedInstanceState) {
// Checking camera permission
if (!allPermissionsGranted()) {
getRuntimePermissions();
}
}
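The snippet above calls allPermissionsGranted() and getRuntimePermissions(), which are not shown. A possible implementation, assuming the permission list mirrors the entries declared in AndroidManifest.xml, is sketched below.
Code:
// Assumed permission list, based on the permissions declared in the manifest above.
private static final String[] REQUIRED_PERMISSIONS = {
        Manifest.permission.CAMERA,
        Manifest.permission.READ_EXTERNAL_STORAGE,
        Manifest.permission.WRITE_EXTERNAL_STORAGE};

private boolean allPermissionsGranted() {
    // Return true only if every required permission has already been granted.
    for (String permission : REQUIRED_PERMISSIONS) {
        if (ContextCompat.checkSelfPermission(this, permission) != PackageManager.PERMISSION_GRANTED) {
            return false;
        }
    }
    return true;
}

private void getRuntimePermissions() {
    // Request everything in one flow; results arrive in onRequestPermissionsResult().
    ActivityCompat.requestPermissions(this, REQUIRED_PERMISSIONS, PERMISSION_REQUESTS);
}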
2. Start the reading interface.
Code:
public void takePhoto(View view) {
Intent intent = new Intent(MainActivity.this, ReadPhotoActivity.class);
startActivity(intent);
}
3. Invoke createLocalTextAnalyzer() in the onCreate() method to create an on-device text recognizer.
Code:
private void createLocalTextAnalyzer() {
MLLocalTextSetting setting = new MLLocalTextSetting.Factory()
.setOCRMode(MLLocalTextSetting.OCR_DETECT_MODE)
.setLanguage("zh")
.create();
this.textAnalyzer = MLAnalyzerFactory.getInstance().getLocalTextAnalyzer(setting);
}
4. Invoke createTtsEngine() in the onCreate() method to create a TTS engine.
Code:
private void createTtsEngine() {
MLTtsConfig mlConfigs = new MLTtsConfig()
.setLanguage(MLTtsConstants.TTS_ZH_HANS)
.setPerson(MLTtsConstants.TTS_SPEAKER_FEMALE_ZH)
.setSpeed(0.2f)
.setVolume(1.0f);
this.mlTtsEngine = new MLTtsEngine(mlConfigs);
MLTtsCallback callback = new MLTtsCallback() {
@Override
public void onError(String taskId, MLTtsError err) {
}
@Override
public void onWarn(String taskId, MLTtsWarn warn) {
}
@Override
public void onRangeStart(String taskId, int start, int end) {
}
@Override
public void onEvent(String taskId, int eventName, Bundle bundle) {
if (eventName == MLTtsConstants.EVENT_PLAY_STOP) {
if (!bundle.getBoolean(MLTtsConstants.EVENT_PLAY_STOP_INTERRUPTED)) {
Toast.makeText(ReadPhotoActivity.this.getApplicationContext(), R.string.read_finish, Toast.LENGTH_SHORT).show();
}
}
}
};
mlTtsEngine.setTtsCallback(callback);
}
5. Set up the buttons for loading photos, taking photos, and reading aloud.
Code:
this.relativeLayoutLoadPhoto.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
ReadPhotoActivity.this.selectLocalImage(ReadPhotoActivity.this.REQUEST_CHOOSE_ORIGINPIC);
}
});
this.relativeLayoutTakePhoto.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
ReadPhotoActivity.this.takePhoto(ReadPhotoActivity.this.REQUEST_TAKE_PHOTO);
}
});
6. Call startTextAnalyzer() in the callbacks for taking and loading photos.
Code:
private void startTextAnalyzer() {
if (this.isChosen(this.originBitmap)) {
MLFrame mlFrame = new MLFrame.Creator().setBitmap(this.originBitmap).create();
Task<MLText> task = this.textAnalyzer.asyncAnalyseFrame(mlFrame);
task.addOnSuccessListener(new OnSuccessListener<MLText>() {
@Override
public void onSuccess(MLText mlText) {
// Transacting logic for segment success.
if (mlText != null) {
ReadPhotoActivity.this.remoteDetectSuccess(mlText);
} else {
ReadPhotoActivity.this.displayFailure();
}
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
// Transacting logic for segment failure.
ReadPhotoActivity.this.displayFailure();
return;
}
});
} else {
Toast.makeText(this.getApplicationContext(), R.string.please_select_picture, Toast.LENGTH_SHORT).show();
return;
}
}
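remoteDetectSuccess() is referenced above but not shown. A simple version that joins the recognized blocks into sourceText (the field later passed to the TTS engine) could look like the sketch below; the TextView used to display the result is an assumption.
Code:
private void remoteDetectSuccess(MLText mlText) {
    StringBuilder builder = new StringBuilder();
    // Concatenate every recognized block of text, one block per line.
    for (MLText.Block block : mlText.getBlocks()) {
        builder.append(block.getStringValue()).append("\n");
    }
    this.sourceText = builder.toString();
    this.textViewResult.setText(this.sourceText); // hypothetical TextView showing the recognized text
}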
7. After recognition succeeds, tap the play button to start playback.
Code:
this.relativeLayoutRead.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
if (ReadPhotoActivity.this.sourceText == null) {
Toast.makeText(ReadPhotoActivity.this.getApplicationContext(), R.string.please_select_picture, Toast.LENGTH_SHORT).show();
} else {
ReadPhotoActivity.this.mlTtsEngine.speak(sourceText, MLTtsEngine.QUEUE_APPEND);
Toast.makeText(ReadPhotoActivity.this.getApplicationContext(), R.string.read_start, Toast.LENGTH_SHORT).show();
}
}
});
Demo
If you have any questions about this process, you can visit the HUAWEI Developer Forum.
Seems quite simple and useful. I will try.
sanghati said:
Hi,
Nice article. Can you use ML Kit to scan a product and find that product online to buy?
Thanks
Hi, if you want to scan products that you want to buy, you can use Scan Kit. Refer to the documentation and get help from the HUAWEI Developer Forum.
For more information like this, you can visit the HUAWEI Developer Forum.
Introduction
Customers of Huawei Maps include businesses that use the Maps API, such as e-commerce sites, real estate portals, and travel portals, as well as end users who use the Huawei Maps application on different devices. End users also experience the maps interface through different search scenarios, such as local business searches, hotel searches, and flight searches.
Let’s start with how to integrate Map Kit:
Step 1: Create a new project in Android Studio.
Step 2: Configure your app in AGC.
Step 3: Enable the required APIs and add the SHA-256 fingerprint.
Step 4: Download agconnect-services.json from AGC and paste it into the app directory.
Step 5: Add the below dependencies in the app-level build.gradle file.
Code:
implementation 'com.huawei.hms:maps:4.0.0.301'
implementation 'com.huawei.hms:site:4.0.3.300'
Step 6: Add the below repository in the project-level (root) build.gradle file.
Code:
maven { url 'http://developer.huawei.com/repo/' }
Step 7: Add the app ID and permissions in the AndroidManifest.xml file.
Code:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<meta-data
android:name="com.huawei.hms.client.appid"
android:value="appid=*********" />
Step 8: Sync your project.
Use Cases:
Huawei Site Kit supports the following capabilities.
· Keyword Search: Returns a place list based on the keyword entered by the user.
· Nearby Place Search: Searches for nearby places based on the current location of the user's device.
· Place Detail Search: Searches for details about a place.
· Place Search Suggestion: Returns a list of suggested places.
Let’s discuss how to integrate Site Kit:
Declare a SearchService and create an instance of it.
Code:
// Replace "API_KEY" with your actual API key; URLEncoder.encode declares UnsupportedEncodingException, which must be handled or declared.
SearchService searchService = SearchServiceFactory.create(this, URLEncoder.encode("API_KEY", "utf-8"));
Create a TextSearchRequest, which is the request object for a place search. The available parameters are listed below, and a minimal configured request is sketched after the list.
query: search keyword.
location: longitude and latitude to which search results need to be biased.
radius: search radius, in meters. The value ranges from 1 to 50000. The default value is 50000.
poiType: POI type. The value range is the same as that of LocationType.
HwPoiType: Huawei POI type. This parameter is recommended. The value range is the same as that of HwLocationType.
countrycode: code of the country where places are searched. The country code must comply with the ISO 3166-1 alpha-2 standard.
language: language in which search results are displayed. For details about the value range, please refer to language codes in LanguageMapping
If this parameter is not passed, the language of the query field is used in priority. If the field language is unavailable, the local language will be used.
pageSize: number of records on each page. The value ranges from 1 to 20. The default value is 20.
pageIndex: current page number. The value ranges from 1 to 60. The default value is 1.
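Putting several of these parameters together, a minimal configured request might look like the sketch below (the coordinate, radius, and paging values are placeholder examples). The full search flow used in this demo follows.
Code:
TextSearchRequest request = new TextSearchRequest();
request.setQuery("hospital");
request.setLocation(new Coordinate(12.9716, 77.5946)); // bias results toward this point
request.setRadius(5000);                               // search radius in meters (1-50000)
request.setCountryCode("IN");                          // ISO 3166-1 alpha-2 country code
request.setLanguage("en");
request.setPageSize(10);                               // 1-20 records per page
request.setPageIndex(1);                               // 1-60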
Code:
autoCompleteTextView.addTextChangedListener(new TextWatcher() {
@Override
public void beforeTextChanged(CharSequence charSequence, int i, int i1, int i2) {
}
@Override
public void onTextChanged(CharSequence charSequence, int i, int i1, int i2) {
if (timer != null) {
timer.cancel();
}
}
@Override
public void afterTextChanged(final Editable text) {
timer = new Timer();
timer.schedule(new TimerTask() {
@Override
public void run() {
if (text.length() > 3) {
list.clear();
final TextSearchRequest textSearchRequest = new TextSearchRequest();
textSearchRequest.setQuery(text.toString());
searchService.textSearch(textSearchRequest, new SearchResultListener<TextSearchResponse>() {
@Override
public void onSearchResult(TextSearchResponse response) {
for (Site site : response.getSites()) {
LatLng latLng = new LatLng(site.getLocation().getLat(), site.getLocation().getLng());
SearchResult searchResult = new SearchResult(latLng, site.getAddress().getLocality(), site.getName());
String result = site.getName() + "," + site.getAddress().getSubAdminArea() + "\n" +
site.getAddress().getAdminArea() + "," +
site.getAddress().getLocality() + "\n" +
site.getAddress().getCountry() + "\n" +
site.getAddress().getPostalCode() + "\n";
list.add(result);
searchList.add(searchResult);
}
mAutoCompleteAdapter.clear();
mAutoCompleteAdapter.addAll(list);
mAutoCompleteAdapter.notifyDataSetChanged();
autoCompleteTextView.setAdapter(mAutoCompleteAdapter);
Toast.makeText(MainActivity.this, String.valueOf(response.getTotalCount()), Toast.LENGTH_SHORT).show();
}
@Override
public void onSearchError(SearchStatus searchStatus) {
Toast.makeText(MainActivity.this, searchStatus.getErrorCode(), Toast.LENGTH_SHORT).show();
}
});
}
}
}, 200);
autoCompleteTextView.setOnItemClickListener(new AdapterView.OnItemClickListener() {
@Override
public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
mAutoCompleteAdapter.getItem(position);
selectedPostion = searchList.get(position);
try {
createMarker(new LatLng(selectedPostion.latLng.latitude, selectedPostion.latLng.longitude));
} catch (IOException e) {
e.printStackTrace();
}
}
});
}
});
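The onItemClick handler above calls createMarker(), which is not included in the snippet. A possible implementation with Map Kit, assuming an initialized HuaweiMap instance named hMap obtained from onMapReady(), is sketched below; it declares IOException only to match the caller's try/catch (for example, if an address lookup were added here).
Code:
private void createMarker(LatLng latLng) throws IOException {
    hMap.clear(); // remove any previously added marker
    hMap.addMarker(new MarkerOptions().position(latLng));
    hMap.animateCamera(CameraUpdateFactory.newLatLngZoom(latLng, 14f));
}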
Output:
Reference:
https://developer.huawei.com/consumer/en/doc/development/HMS-References/hms-map-cameraupdate
https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides/android-sdk-introduction-0000001050158571
For more information like this, you can visit the HUAWEI Developer Forum.
Original link: https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0201257887466590240&fid=0101187876626530001
Introduction
Richard Yu introduced Huawei HMS Core 4.0 at the launch event a while ago. For details about the launch, see:
What does the global release of HMS Core 4.0 mean?
Machine Learning Kit (MLKIT) is one of the most important of these services.
What can MLKIT do, and which problems can it help solve during application development?
Today, let’s take face detection as an example to show you the powerful functions of MLKIT and the convenience it provides for developers.
1.1 Capabilities Provided by MLKIT Face Detection
First, let’s look at the face detection capability of Huawei Machine Learning Service (MLKIT).
As shown in the animation, face detection can recognize the face direction, detect facial expressions (such as happy, disgusted, surprised, sad, and angry), detect facial attributes (such as gender, age, and wearables), detect whether eyes are open or closed, and return the coordinates of features such as the face contour, nose, eyes, lips, and eyebrows. In addition, multiple faces can be detected at the same time.
Tips: This function is free of charge and covers all Android models.
2 Development of the Multi-Face Smile Photographing Function
Today, I will use the multi-face detection and expression detection capabilities of MLKIT to write a small demo that automatically takes a photo when people smile, and walk through the practice.
To download the Github demo source code, click here (the project directory is Smile-Camera).
2.1 Development Preparations
The preparations are similar for every Huawei HMS kit; the only differences are the Maven dependency and the SDK that is introduced.
1. Add the Huawei Maven repository to the project-level gradle.
Incrementally add the following Maven addresses:
Code:
buildscript {
repositories {
maven {url 'http://developer.huawei.com/repo/'}
}
}
allprojects {
repositories {
maven {url 'http://developer.huawei.com/repo/'}
}
}
2. Add the SDK dependency to the build.gradle file at the application level.
Introduce the facial recognition SDK and basic SDK.
Code:
dependencies{
// Introduce the basic SDK.
implementation 'com.huawei.hms:ml-computer-vision:1.0.2.300'
// Introduce the face detection capability package.
implementation 'com.huawei.hms:ml-computer-vision-face-recognition-model:1.0.2.300'
}
3. Add the model declaration to the AndroidManifest.xml file so that the model can be downloaded automatically.
This is mainly used for model updates: after the algorithm is optimized, the updated model can be automatically downloaded to the phone.
Code:
<manifest ...>
<application ...>
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value="face" />
</application>
</manifest>
4. Apply for camera and storage permissions in the AndroidManifest.xml file.
Code:
<!-- Camera permission -->
<uses-permission android:name="android.permission.CAMERA" />
<!--Use the storage permission.-->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
2.2 Code development
1. Create a face analyzer and take photos when a smile is detected.
Photos are taken once a smile is detected. The steps are as follows:
1) Configure the analyzer parameters.
2) Pass the parameter settings to the analyzer.
3) In analyzer.setTransactor(), override transactResult() to process the detection result. Face detection returns the smiling probability as a confidence value, so you only need to compare it against a threshold.
Code:
private MLFaceAnalyzer analyzer;
private void createFaceAnalyzer() {
MLFaceAnalyzerSetting setting =
new MLFaceAnalyzerSetting.Factory()
.setFeatureType(MLFaceAnalyzerSetting.TYPE_FEATURES)
.setKeyPointType(MLFaceAnalyzerSetting.TYPE_UNSUPPORT_KEYPOINTS)
.setMinFaceProportion(0.1f)
.setTracingAllowed(true)
.create();
this.analyzer = MLAnalyzerFactory.getInstance().getFaceAnalyzer(setting);
this.analyzer.setTransactor(new MLAnalyzer.MLTransactor<MLFace>() {
@Override
public void destroy() {
}
@Override
public void transactResult(MLAnalyzer.Result<MLFace> result) {
SparseArray<MLFace> faceSparseArray = result.getAnalyseList();
int flag = 0;
for (int i = 0; i < faceSparseArray.size(); i++) {
MLFaceEmotion emotion = faceSparseArray.valueAt(i).getEmotions();
if (emotion.getSmilingProbability() > smilingPossibility) {
flag++;
}
}
if (flag > faceSparseArray.size() * smilingRate && safeToTakePicture) {
safeToTakePicture = false;
mHandler.sendEmptyMessage(TAKE_PHOTO);
}
}
});
}
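The transactResult() logic above references a few fields that are not declared in the snippet. Reasonable declarations, with threshold values chosen only as examples, could be:
Code:
// Assumed field declarations for the snippet above; the threshold values are examples only.
private static final int TAKE_PHOTO = 1;
private static final int STOP_PREVIEW = 2;
private final float smilingPossibility = 0.95f; // per-face smile confidence threshold
private final float smilingRate = 0.8f;         // fraction of detected faces that must be smiling
private boolean safeToTakePicture = true;       // reset to true after the photo has been saved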
Photographing and storage:
Code:
private void takePhoto() {
this.mLensEngine.photograph(null,
new LensEngine.PhotographListener() {
@Override
public void takenPhotograph(byte[] bytes) {
mHandler.sendEmptyMessage(STOP_PREVIEW);
Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
saveBitmapToDisk(bitmap);
}
});
}
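saveBitmapToDisk() is not shown above. A simple version that writes the captured frame as a JPEG into the app-specific pictures directory (file naming and error handling are simplified assumptions) might be:
Code:
private void saveBitmapToDisk(Bitmap bitmap) {
    File dir = getExternalFilesDir(Environment.DIRECTORY_PICTURES);
    File file = new File(dir, "smile_" + System.currentTimeMillis() + ".jpg");
    try (FileOutputStream out = new FileOutputStream(file)) {
        bitmap.compress(Bitmap.CompressFormat.JPEG, 100, out);
    } catch (IOException e) {
        Log.e("SmileCamera", "Failed to save photo", e);
    }
    safeToTakePicture = true; // allow the next smile to trigger another photo
}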
2. Create a visual engine to capture dynamic video streams from cameras and send the streams to the analyzer.
Code:
private void createLensEngine() {
Context context = this.getApplicationContext();
// Create LensEngine
this.mLensEngine = new LensEngine.Creator(context, this.analyzer).setLensType(this.lensType)
.applyDisplayDimension(640, 480)
.applyFps(25.0f)
.enableAutomaticFocus(true)
.create();
}
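After the engine is created, it still has to be started with a preview surface and released when the activity is destroyed. Assuming that LensEngine exposes run(SurfaceHolder) and release(), and that mPreview provides a SurfaceHolder (for example, if it is a SurfaceView), this could look like:
Code:
private void startLensEngine() {
    if (this.mLensEngine != null) {
        try {
            this.mLensEngine.run(this.mPreview.getHolder());
        } catch (IOException e) {
            Log.e("SmileCamera", "Failed to start lens engine", e);
            this.mLensEngine.release();
            this.mLensEngine = null;
        }
    }
}

@Override
protected void onDestroy() {
    super.onDestroy();
    if (this.mLensEngine != null) {
        this.mLensEngine.release();
    }
}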
3. Apply for permissions dynamically, and attach the analyzer and lens engine creation code.
Code:
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
this.setContentView(R.layout.activity_live_face_analyse);
if (savedInstanceState != null) {
this.lensType = savedInstanceState.getInt("lensType");
}
this.mPreview = this.findViewById(R.id.preview);
this.createFaceAnalyzer();
this.findViewById(R.id.facingSwitch).setOnClickListener(this);
// Checking Camera Permissions
if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED) {
this.createLensEngine();
} else {
this.requestCameraPermission();
}
}
Code:
private void requestCameraPermission() {
final String[] permissions = new String[]{Manifest.permission.CAMERA, Manifest.permission.WRITE_EXTERNAL_STORAGE};
if (!ActivityCompat.shouldShowRequestPermissionRationale(this, Manifest.permission.CAMERA)) {
ActivityCompat.requestPermissions(this, permissions, LiveFaceAnalyseActivity.CAMERA_PERMISSION_CODE);
return;
}
}
Code:
@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
@NonNull int[] grantResults) {
if (requestCode != LiveFaceAnalyseActivity.CAMERA_PERMISSION_CODE) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults);
return;
}
if (grantResults.length != 0 && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
this.createLensEngine();
return;
}
}
3 Conclusion
Isn't the development process simple? A new feature can be developed in about 30 minutes. Now let's see the multi-face smile capture in action.
Multi-person smiling face snapshot:
What other functions can be built on the face detection capability? Use your imagination! Here are a few hints:
1. Add interesting decorative effects by identifying the locations of facial features such as ears, eyes, nose, mouth, and eyebrows.
2. Identify facial contours and stretch the contours to generate interesting portraits or develop facial beautification functions for contour areas.
3. Develop parental control functions based on age detection, given how attached children can be to electronic devices.
4. Develop the eye comfort feature by detecting the duration of eyes staring at the screen.
5. Implement liveness detection through random commands (such as shaking the head, blinking, and opening the mouth).
6. Recommend offerings to users based on their age and gender.
For details, see the development guide on HUAWEI Developers.
Introduction
Huawei provides the Remote Configuration service to manage parameters online. With this service you can control or change the behavior and appearance of your app without requiring user interaction or an app update. By integrating the SDK, you can fetch the parameter values delivered on the AG console and change the app's behavior and appearance accordingly.
Functional features
1. Parameter management: Add new parameters, delete or update existing parameters, and set conditional values.
2. Condition management: Add, delete, and modify conditions, and copy and modify existing ones. Currently, you can set the following conditions: version, country/region, audience, user attribute, user percentage, time, and language. More conditions can be expected in the future.
3. Version management: Manage and roll back up to 300 historical versions of parameters and conditions for up to 90 days.
4. Permission management: By default, the account holder, app administrator, R&D personnel, and administration and operations personnel can access Remote Configuration.
Service use cases
Change app language by Country/Region
Show Different Content to Different Users
Change the App Theme by Time
Development Overview
You need to install the Unity software, and I assume that you have prior knowledge of Unity and C#.
Hardware Requirements
A computer (desktop or laptop) running Windows 10.
A Huawei phone (with the USB cable), which is used for debugging.
Software Requirements
Java JDK 1.7 or later.
Unity software installed.
Visual Studio/Code installed.
HMS Core (APK) 4.X or later.
Integration Preparations
1. Create a project in AppGallery Connect.
2. Create Unity project.
3. Add Huawei HMS AGC services to the project.
4. Download and save the configuration file.
Add the agconnect-services.json file to the following directory: Assets > Plugins > Android.
5. Add the following plugin and dependencies in LauncherTemplate.
Code:
apply plugin:'com.huawei.agconnect'
Code:
implementation 'com.huawei.agconnect:agconnect-remoteconfig:1.4.1.300'
implementation 'com.huawei.agconnect:agconnect-core:1.4.2.301'
6. Add the following dependencies in MainTemplate.
Code:
apply plugin: 'com.huawei.agconnect'
Code:
implementation 'com.huawei.agconnect:agconnect-remoteconfig:1.4.1.300'
implementation 'com.huawei.agconnect:agconnect-core:1.4.2.301'
7. Add the repository in the buildscript and allprojects repositories, and the class path, in BaseProjectTemplate.
Code:
maven { url 'https://developer.huawei.com/repo/' }
8. Configure the project in AGC.
9. Create an empty game object, rename it RemoteConfigManager, add UI canvas texts and a button, and assign onClick events to the respective text and button as shown below.
RemoteConfigManager.cs
C#:
using UnityEngine;
using HuaweiService.RemoteConfig;
using HuaweiService;
using Exception = HuaweiService.Exception;
using System;
public class RemoteConfigManager : MonoBehaviour
{
public static bool develporMode;
public delegate void SuccessCallBack<T>(T o);
public delegate void SuccessCallBack(AndroidJavaObject o);
public delegate void FailureCallBack(Exception e);
public void SetDeveloperMode()
{
AGConnectConfig config;
config = AGConnectConfig.getInstance();
develporMode = !develporMode;
config.setDeveloperMode(develporMode);
Debug.Log($"set developer mode to {develporMode}");
}
public void showAllValues()
{
AGConnectConfig config = AGConnectConfig.getInstance();
if(config!=null)
{
Map map = config.getMergedAll();
var keySet = map.keySet();
var keyArray = keySet.toArray();
foreach (var key in keyArray)
{
Debug.Log($"{key}: {map.getOrDefault(key, "default")}");
}
}else
{
Debug.Log(" No data ");
}
config.clearAll();
}
void Start()
{
SetDeveloperMode();
SetXmlValue();
}
public void SetXmlValue()
{
var config = AGConnectConfig.getInstance();
// get res id
int configId = AndroidUtil.GetId(new Context(), "xml", "remote_config");
config.applyDefault(configId);
// get variable
Map map = config.getMergedAll();
var keySet = map.keySet();
var keyArray = keySet.toArray();
config.applyDefault(map);
foreach (var key in keyArray)
{
var value = config.getSource(key);
//Use the key and value ...
Debug.Log($"{key}: {config.getSource(key)}");
}
}
public void GetCloudSettings()
{
AGConnectConfig config = AGConnectConfig.getInstance();
config.fetch().addOnSuccessListener(new HmsSuccessListener<ConfigValues>((ConfigValues configValues) =>
{
config.apply(configValues);
Debug.Log("===== ** Success ** ====");
showAllValues();
config.clearAll();
}))
.addOnFailureListener(new HmsFailureListener((Exception e) =>
{
Debug.Log("activity failure " + e.toString());
}));
}
public class HmsFailureListener:OnFailureListener
{
public FailureCallBack CallBack;
public HmsFailureListener(FailureCallBack c)
{
CallBack = c;
}
public override void onFailure(Exception arg0)
{
if(CallBack !=null)
{
CallBack.Invoke(arg0);
}
}
}
public class HmsSuccessListener<T>:OnSuccessListener
{
public SuccessCallBack<T> CallBack;
public HmsSuccessListener(SuccessCallBack<T> c)
{
CallBack = c;
}
public void onSuccess(T arg0)
{
if(CallBack != null)
{
CallBack.Invoke(arg0);
}
}
public override void onSuccess(AndroidJavaObject arg0)
{
if(CallBack !=null)
{
Type type = typeof(T);
IHmsBase ret = (IHmsBase)Activator.CreateInstance(type);
ret.obj = arg0;
CallBack.Invoke((T)ret);
}
}
}
}
10. To build the APK, choose File > Build Settings > Build. To build and run, choose File > Build Settings > Build And Run.
Result
Tips and Tricks
Add the agconnect-services.json file without fail.
Make sure the dependencies are added in the build files.
Make sure that you release the configuration after parameters are added or updated.
Conclusion
We have learned how to integrate the Huawei Remote Configuration service into a Unity game. The Remote Configuration service lets you fetch configuration data from a local XML file and online from the AG console, and changes take effect as soon as you release them. In short, the service lets you change your app's behavior and appearance without an app update or user interaction.
Thank you so much for reading this article; I hope it helps you.
Reference
Unity Manual
GitHub Sample Android
Huawei Remote Configuration service
Read this in the HUAWEI Developer Forum.
Overview
In this article, I will create a Doctor Consult demo app that integrates the ML Kit Product Visual Search API, which provides an easy interface for consulting a doctor. Users can scan their prescriptions using the Product Visual Search API.
Previous Articles Link:
https://forums.developer.huawei.com/forumPortal/en/topic/0201829733289720014?fid=0101187876626530001
https://forums.developer.huawei.com/forumPortal/en/topic/0201817617825540005?fid=0101187876626530001
https://forums.developer.huawei.com/forumPortal/en/topic/0201811543541800017?fid=0101187876626530001
HMS Core ML Kit Introduction
ML Kit allows your apps to easily leverage Huawei's long-term proven expertise in machine learning to support diverse artificial intelligence (AI) applications throughout a wide range of industries. Thanks to Huawei's technology accumulation, ML Kit provides diversified leading machine learning capabilities that are easy to use, helping you develop various AI apps.
Product Visual Search: This service searches for the same or similar products in the pre-established product image library based on a product photo taken by a user, and returns the IDs of those products and related information. In addition, to better manage products in real time, this service supports offline product import, online product adding, deletion, modification, and query, and product distribution.
Prerequisite
Huawei phone with EMUI 3.0 or later.
Non-Huawei phone with Android 4.4 or later (API level 19 or higher).
Android Studio.
AppGallery Account.
AppGallery Integration Process
Sign in and create or choose a project on the AppGallery Connect portal.
Navigate to Project settings and download the configuration file.
Navigate to General Information, and then provide Data Storage location.
App Development
Create A New Project.
Configure Project Gradle.
Configure App Gradle.
Configure AndroidManifest.xml.
Create Activity class with XML UI.
Java:
package com.hms.doctorconsultdemo.ml;
import android.Manifest;
import android.content.Intent;
import android.content.pm.PackageManager;
import android.graphics.Bitmap;
import android.os.Bundle;
import android.provider.MediaStore;
import android.util.Log;
import androidx.annotation.Nullable;
import androidx.core.app.ActivityCompat;
import com.huawei.hmf.tasks.Task;
import com.huawei.hms.mlsdk.MLAnalyzerFactory;
import com.huawei.hms.mlsdk.common.MLException;
import com.huawei.hms.mlsdk.common.MLFrame;
import com.huawei.hms.mlsdk.productvisionsearch.MLProductVisionSearch;
import com.huawei.hms.mlsdk.productvisionsearch.MLVisionSearchProduct;
import com.huawei.hms.mlsdk.productvisionsearch.MLVisionSearchProductImage;
import com.huawei.hms.mlsdk.productvisionsearch.cloud.MLRemoteProductVisionSearchAnalyzer;
import com.huawei.hms.mlsdk.productvisionsearch.cloud.MLRemoteProductVisionSearchAnalyzerSetting;
import java.util.ArrayList;
import java.util.List;
public class ScanMainActivity extends BaseActivity {
private static final String TAG = ScanMainActivity.class.getName();
private static final int CAMERA_PERMISSION_CODE = 100;
MLRemoteProductVisionSearchAnalyzer analyzer;
@Override
protected void onCreate(@Nullable Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
init();
initializeProductVisionSearch();
}
private void init() {
if (!(ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
== PackageManager.PERMISSION_GRANTED)) {
this.requestCameraPermission();
}
initializeProductVisionSearch();
Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
startActivityForResult(intent, 101);
}
private void requestCameraPermission() {
final String[] permissions = new String[]{Manifest.permission.CAMERA};
if (!ActivityCompat.shouldShowRequestPermissionRationale(this, Manifest.permission.CAMERA)) {
ActivityCompat.requestPermissions(this, permissions, this.CAMERA_PERMISSION_CODE);
return;
}
}
@Override
protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
super.onActivityResult(requestCode, resultCode, data);
if (requestCode == 101) {
if (resultCode == RESULT_OK) {
Bitmap bitmap = (Bitmap) data.getExtras().get("data");
if (bitmap != null) {
MLFrame mlFrame = new MLFrame.Creator().setBitmap(bitmap).create();
mlImageDetection(mlFrame);
}
}
}
}
private void mlImageDetection(MLFrame mlFrame) {
Task<List<MLProductVisionSearch>> task = analyzer.asyncAnalyseFrame(mlFrame);
task.addOnSuccessListener(products -> {
Log.d(TAG, "success");
displaySuccess(products);
})
.addOnFailureListener(e -> {
try {
MLException mlException = (MLException) e;
int errorCode = mlException.getErrCode();
String errorMessage = mlException.getMessage();
} catch (Exception error) {
// Handle the conversion error.
}
});
}
private void initializeProductVisionSearch() {
MLRemoteProductVisionSearchAnalyzerSetting settings = new MLRemoteProductVisionSearchAnalyzerSetting.Factory()
.setLargestNumOfReturns(16)
.setRegion(MLRemoteProductVisionSearchAnalyzerSetting.REGION_DR_CHINA)
.create();
analyzer
= MLAnalyzerFactory.getInstance().getRemoteProductVisionSearchAnalyzer(settings);
}
private void displaySuccess(List<MLProductVisionSearch> productVisionSearchList) {
List<MLVisionSearchProductImage> productImageList = new ArrayList<>();
String productType = "";
for (MLProductVisionSearch productVisionSearch : productVisionSearchList) {
Log.d(TAG, "type: " + productVisionSearch.getType());
productType = productVisionSearch.getType();
for (MLVisionSearchProduct product : productVisionSearch.getProductList()) {
productImageList.addAll(product.getImageList());
Log.d(TAG, "custom content: " + product.getCustomContent());
}
}
StringBuffer buffer = new StringBuffer();
for (MLVisionSearchProductImage productImage : productImageList) {
String str = "ProductID: " + productImage.getProductId() + "\nImageID: " + productImage.getImageId() + "\nPossibility: " + productImage.getPossibility();
buffer.append(str);
buffer.append("\n");
}
Log.d(TAG, "display success: " + buffer.toString());
ScanDataActivity.start(this, productImageList);
}
}
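When the activity is destroyed, the cloud analyzer should be released. A small addition to ScanMainActivity, assuming MLRemoteProductVisionSearchAnalyzer exposes a stop() method, would be:
Java:
// Sketch: release the cloud analyzer when the activity is destroyed.
@Override
protected void onDestroy() {
    super.onDestroy();
    if (analyzer != null) {
        analyzer.stop();
    }
}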
App Build Result
Tips and Tricks
Images in PNG, JPG, JPEG, and BMP formats are supported. GIF images are not supported.
ML Kit complies with GDPR requirements for data processing.
Face detection requires Android phones with the Arm architecture.
Conclusion
In this article, we have learned how to integrate the HMS ML Kit Product Visual Search API in an Android application. After reading this article, you can easily implement the API so that users can scan their prescriptions.
Thanks for reading this article. If you found it helpful, please like and comment; it means a lot to me.
References
HMS ML Docs:
https://developer.huawei.com/consum...-Guides/service-introduction-0000001050040017
HMS Training Videos -
https://developer.huawei.com/consumer/en/training/