Implement Language Detection — Thought and Practice

Background
Quick question: How many languages are there in the world? Before you rush off to search for the answer, read on.
There are over 7,000 — astonishing, right? Such diversity highlights the importance of translation, which is valuable to us on so many levels because it opens us up to a rich range of cultures. Psycholinguist Frank Smith said, "One language sets you in a corridor for life. Two languages open every door along the way."
These days, anyone can pick up their phone, download a translation app, and start communicating in another language without a sound understanding of it. Mastering a foreign language is no longer a prerequisite. AI technologies such as natural language processing (NLP) not only simplify translation, but also open up opportunities for people to learn and use a foreign language.
Modern translation apps can translate text into another language with just a tap. That's not to say that building tap-to-translate functionality is as easy as it sounds. An essential first step is language detection, which tells the software which language the input text is in.
Below is a walkthrough of how I implemented language detection in my demo app, using the language detection service from HMS Core ML Kit. The service automatically detects the language of input text and then returns either the codes and confidence levels of all detected languages, or only the code of the language with the highest confidence level. This makes it ideal for a translation app.
Implementation Procedure
Preparations
1. Configure the Maven repository address.
Code:
repositories {
    maven {
        url 'https://cmc.centralrepo.rnd.huawei.com/artifactory/product_maven/'
    }
}
2. Integrate the SDK of the language detection capability.
Code:
dependencies {
    implementation 'com.huawei.hms:ml-computer-language-detection:3.4.0.301'
}
Project Configuration
1. Set the app authentication information by setting either an access token or an API key.
Call the setAccessToken method to set an access token. Note that this needs to be set only once during app initialization.
Code:
MLApplication.getInstance().setAccessToken("your access token");
Or, call the setApiKey method to set an API key, which is also required only once during app initialization.
Code:
MLApplication.getInstance().setApiKey("your ApiKey");
2. Create a language detector using either of these two methods.
Code:
// Method 1: Use the default parameter settings.
MLRemoteLangDetector mlRemoteLangDetector = MLLangDetectorFactory.getInstance()
        .getRemoteLangDetector();

// Method 2: Use customized parameter settings.
MLRemoteLangDetectorSetting setting = new MLRemoteLangDetectorSetting.Factory()
        // Set the minimum confidence level for language detection.
        .setTrustedThreshold(0.01f)
        .create();
MLRemoteLangDetector mlRemoteLangDetector = MLLangDetectorFactory.getInstance()
        .getRemoteLangDetector(setting);
3. Detect the text language.
Asynchronous method
Code:
// Method 1: Return detection results that contain the language codes and confidence levels of multiple languages.
// sourceText indicates the text whose language is to be detected. The text can contain a maximum of 5000 characters.
Task<List<MLDetectedLang>> probabilityDetectTask = mlRemoteLangDetector.probabilityDetect(sourceText);
probabilityDetectTask.addOnSuccessListener(new OnSuccessListener<List<MLDetectedLang>>() {
    @Override
    public void onSuccess(List<MLDetectedLang> result) {
        // Callback when the detection is successful.
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Callback when the detection fails.
        try {
            MLException mlException = (MLException) e;
            // Result code for the failure. The result code can be mapped to different pop-ups on the UI.
            int errorCode = mlException.getErrCode();
            // Description of the failure. Used together with the result code, it facilitates troubleshooting.
            String errorMessage = mlException.getMessage();
        } catch (Exception error) {
            // Handle the conversion error.
        }
    }
});

// Method 2: Return only the code of the language with the highest confidence level.
// sourceText indicates the text whose language is to be detected. The text can contain a maximum of 5000 characters.
Task<String> firstBestDetectTask = mlRemoteLangDetector.firstBestDetect(sourceText);
firstBestDetectTask.addOnSuccessListener(new OnSuccessListener<String>() {
    @Override
    public void onSuccess(String s) {
        // Callback when the detection is successful.
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Callback when the detection fails.
        try {
            MLException mlException = (MLException) e;
            // Result code for the failure. The result code can be mapped to different pop-ups on the UI.
            int errorCode = mlException.getErrCode();
            // Description of the failure. Used together with the result code, it facilitates troubleshooting.
            String errorMessage = mlException.getMessage();
        } catch (Exception error) {
            // Handle the conversion error.
        }
    }
});
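The onSuccess callbacks above are left empty. As a minimal sketch, not part of the original demo, of how the probability-based results could be consumed, the helper below logs each candidate language; it assumes MLDetectedLang exposes its data through getLangCode() and getProbability(), so verify these names against the ML Kit API reference.
Code:
// Hypothetical helper for handling the results of probabilityDetect.
private void showDetectedLanguages(List<MLDetectedLang> result) {
    StringBuilder builder = new StringBuilder();
    for (MLDetectedLang detectedLang : result) {
        // Each entry carries a language code (for example, "en") and a confidence level between 0 and 1.
        builder.append(detectedLang.getLangCode())
                .append(" (confidence: ")
                .append(detectedLang.getProbability())
                .append(")\n");
    }
    Log.d("LangDetect", builder.toString());
}
You could call showDetectedLanguages(result) from the onSuccess callback of probabilityDetectTask.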
Synchronous method
Code:
// Method 1: Return detection results that contain the language codes and confidence levels of multiple languages.
// sourceText indicates the text whose language is to be detected. The text can contain a maximum of 5000 characters.
try {
    List<MLDetectedLang> result = mlRemoteLangDetector.syncProbabilityDetect(sourceText);
} catch (MLException mlException) {
    // Called when the detection fails.
    // Result code for the failure. The result code can be mapped to different pop-ups on the UI.
    int errorCode = mlException.getErrCode();
    // Description of the failure. Used together with the result code, it facilitates troubleshooting.
    String errorMessage = mlException.getMessage();
}

// Method 2: Return only the code of the language with the highest confidence level.
// sourceText indicates the text whose language is to be detected. The text can contain a maximum of 5000 characters.
try {
    String language = mlRemoteLangDetector.syncFirstBestDetect(sourceText);
} catch (MLException mlException) {
    // Called when the detection fails.
    // Result code for the failure. The result code can be mapped to different pop-ups on the UI.
    int errorCode = mlException.getErrCode();
    // Description of the failure. Used together with the result code, it facilitates troubleshooting.
    String errorMessage = mlException.getMessage();
}
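Because the remote detector calls the cloud, the synchronous APIs block the calling thread. As a minimal sketch (assuming the call is made from an Activity), they can be run off the main thread like this:
Code:
// Run the blocking detection off the main thread, then post the result back to the UI thread.
new Thread(new Runnable() {
    @Override
    public void run() {
        try {
            final String language = mlRemoteLangDetector.syncFirstBestDetect(sourceText);
            runOnUiThread(new Runnable() {
                @Override
                public void run() {
                    // Update the UI with the detected language code, for example "en".
                    Log.d("LangDetect", "Detected language: " + language);
                }
            });
        } catch (MLException mlException) {
            Log.e("LangDetect", "Detection failed: " + mlException.getMessage());
        }
    }
}).start();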
4. Stop the language detector when the detection is complete, to release the resources occupied by the detector.
Code:
if (mlRemoteLangDetector != null) {
    mlRemoteLangDetector.stop();
}
And once you've done this, your app will have implemented the language detection function, which works as shown in the demo below.
Conclusion
Translation apps are vital to helping people communicate across cultures, and they play an important role in many aspects of our lives, from study and business to travel. Without such apps, communication across languages would be limited to people who are already proficient in another language.
In order to translate text for users, a translation app must first be able to identify the language of that text. One way of doing this is to integrate a language detection service, which detects the language, or languages, of the text and then returns either all the language codes with their confidence levels or the code of the language with the highest confidence level. This capability improves the efficiency of translation apps and builds user confidence in the translations they provide.
References
What Is Natural Language Processing
What Is Language Detection

Related

Usage of ML Kit Services in Flutter

Hello everyone. In this article, we'll develop a Flutter application using Huawei ML Kit's text recognition, translation, and landmark recognition services. Let's get started.
About the Service
Flutter ML Plugin enables communication between the HMS Core ML SDK and Flutter platform. This plugin exposes all functionality provided by the HMS Core ML SDK.
HUAWEI ML Kit allows your apps to easily leverage Huawei’s long-term proven expertise in machine learning to support diverse artificial intelligence (AI) applications throughout a wide range of industries. Thanks to Huawei’s technology accumulation, ML Kit provides diversified leading machine learning capabilities that are easy to use, helping you develop various AI apps.
Configure your project on AppGallery Connect
Registering a Huawei ID
You need to register a Huawei ID to use the plugin. If you don’t have one, follow the instructions here.
Preparations for Integrating HUAWEI HMS Core
First of all, you need to integrate Huawei Mobile Services with your application. I will not go into detail about the integration here, but you can use this tutorial as a step-by-step guide.
Integrating the Flutter ML Plugin
1. Download the ML Kit Flutter Plugin and decompress it.
2. In your Flutter project directory, find and open the pubspec.yaml file and add the library to dependencies to download the package from pub.dev. Alternatively, if you downloaded the package from the HUAWEI Developer website, specify the library path on your local device. Either way, after running the pub get command, the plugin will be ready to use.
1. Text Recognition
The text recognition service extracts text from images of receipts, business cards, and documents. This service is widely used in office, education, transit, and other apps. For example, you can use this service in a translation app to extract text in a photo and translate the text, improving user experience.
This service can run on the cloud or device, but the supported languages differ in the two scenarios. On-device APIs can recognize text in Simplified Chinese, Japanese, Korean, and Latin-based languages (refer to Latin Script Supported by On-device Text Recognition). When running on the cloud, the service can recognize text in languages such as Simplified Chinese, English, Spanish, Portuguese, Italian, German, French, Russian, Japanese, Korean, Polish, Finnish, Norwegian, Swedish, Danish, Turkish, Thai, Arabic, Hindi, and Indonesian.
Remote Text Analyzer
The remote text analyzer runs on the cloud; it invokes a cloud detection model after the cloud API is called.
Implementation Procedure
Create an MlTextSettings object and set desired values. The path is mandatory.
Code:
MlTextSettings _mlTextSettings;

@override
void initState() {
  _mlTextSettings = new MlTextSettings();
  _checkPermissions();
  super.initState();
}
Then call the analyzeRemotely method, passing the MlTextSettings object you've created. This method returns an MlText object on success; otherwise, it throws an exception.
Code:
_startRecognition() async {
  _mlTextSettings.language = MlTextLanguage.English;
  try {
    final MlText mlText = await MlTextClient.analyzeRemotely(_mlTextSettings);
    setState(() {
      _recognitionResult = mlText.stringValue;
    });
  } on Exception catch (e) {
    print(e.toString());
  }
}
Here’s the result.
2. Text Translation
The translation service can translate text into different languages. Currently, this service supports offline translation of text in Simplified Chinese, English, German, Spanish, French, and Russian (automatic model download is supported), and online translation of text in Simplified Chinese, English, French, Arabic, Thai, Spanish, Turkish, Portuguese, Japanese, German, Italian, Russian, Polish, Malay, Swedish, Finnish, Norwegian, Danish, and Korean.
Create an MlTranslatorSettings object and set the values. Source text must not be null.
Code:
MlTranslatorSettings settings;

@override
void initState() {
  settings = new MlTranslatorSettings();
  super.initState();
}
Then call the getTranslateResult method, passing the MlTranslatorSettings object you've created. This method returns the translated text on success; otherwise, it throws an exception.
Code:
_startRecognition() async {
  settings.sourceLangCode = MlTranslateLanguageOptions.English;
  settings.sourceText = controller.text;
  settings.targetLangCode = MlTranslateLanguageOptions.Turkish;
  try {
    final String result =
        await MlTranslatorClient.getTranslateResult(settings);
    setState(() {
      _translateResult = result;
    });
  } on Exception catch (e) {
    print(e.toString());
  }
}
Here’s the result.
3. Landmark Recognition
The landmark recognition service can identify the names and latitude and longitude of landmarks in an image. You can use this information to create individualized experiences for users. For example, you can create a travel app that identifies a landmark in an image and gives users the location along with everything they need to know about that landmark.
Landmark Recognition
This API carries out landmark recognition with customized parameters.
Implementation Procedure
Create an MlLandMarkSettings object and set the values. The path is mandatory.
Code:
MlLandMarkSettings settings;
String _landmark = "landmark name";
String _identity = "landmark identity";
dynamic _possibility = 0;
dynamic _bottomCorner = 0;
dynamic _topCorner = 0;
dynamic _leftCorner = 0;
dynamic _rightCorner = 0;

@override
void initState() {
  settings = new MlLandMarkSettings();
  _checkPermissions();
  super.initState();
}
Then call the getLandmarkAnalyzeInformation method, passing the MlLandMarkSettings object you've created. This method returns an MlLandmark object on success; otherwise, it throws an exception.
Code:
_startRecognition() async {
  try {
    settings.patternType = LandmarkAnalyzerPattern.STEADY_PATTERN;
    settings.largestNumberOfReturns = 5;
    final MlLandmark landmark =
        await MlLandMarkClient.getLandmarkAnalyzeInformation(settings);
    setState(() {
      _landmark = landmark.landmark;
      _identity = landmark.landmarkIdentity;
      _possibility = landmark.possibility;
      _bottomCorner = landmark.border.bottom;
      _topCorner = landmark.border.top;
      _leftCorner = landmark.border.left;
      _rightCorner = landmark.border.right;
    });
  } on Exception catch (e) {
    print(e.toString());
  }
}
Here’s the result.
Demo project GitHub link:
https://github.com/EfnanAkkus/Ml-Kit-Usage-Flutter
Resources:
https://developer.huawei.com/consumer/en/doc/development/HMS-Plugin-Guides/introduction-0000001051432503
https://developer.huawei.com/consumer/en/hms/huawei-mlkit
Related Links
Original post: https://medium.com/huawei-developers/usage-of-ml-kit-services-in-flutter-42cdc1bc67d

How much time does it take to integrate ML Kit in Flutter? Does Flutter provide a native look and feel?

Huawei ML Kit – Integration of Scene Detection

Introduction
To help you understand image content, the scene detection service classifies the scenario content of images and adds labels, such as outdoor scenery, indoor places, and buildings. You can create more customized experiences for users based on the data detected from an image. Currently, Huawei supports detection of 102 scenarios. For details about the scenarios, refer to the List of Scenario Identification Categories.
This service can be used to identify image sets by scenario and create intelligent album sets. You can also select camera parameters based on the detected scene in your app, to help users take better-looking photos.
Prerequisite
The scene detection service supports integration with Android 6.0 and later versions.
Scene detection requires the READ_EXTERNAL_STORAGE and CAMERA permissions in AndroidManifest.xml.
Implementation of dynamic permissions for the camera and storage is not covered in this article. Please make sure to integrate the dynamic permission feature, for example as sketched below.
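A minimal sketch of such a runtime check (inside an Activity, using the AndroidX compatibility helpers; the request code is an arbitrary example value) could look like this:
Code:
// Request the camera and storage permissions at runtime (required on Android 6.0 and later).
String[] permissions = {Manifest.permission.CAMERA, Manifest.permission.READ_EXTERNAL_STORAGE};
if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED
        || ContextCompat.checkSelfPermission(this, Manifest.permission.READ_EXTERNAL_STORAGE) != PackageManager.PERMISSION_GRANTED) {
    // 100 is an example request code; handle the user's choice in onRequestPermissionsResult.
    ActivityCompat.requestPermissions(this, permissions, 100);
}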
Development
1. Register a developer account in AppGallery Connect.
2. Create an application and enable ML Kit in AppGallery Connect. Refer to Service Enabling.
3. Integrate the AppGallery Connect SDK. Refer to AppGallery Connect Service Getting Started.
4. Add the Huawei scene detection dependencies in the app-level build.gradle.
Code:
// ML Scene Detection SDK
implementation 'com.huawei.hms:ml-computer-vision-scenedetection:2.0.3.300'
// Import the scene detection model package.
implementation 'com.huawei.hms:ml-computer-vision-scenedetection-model:2.0.3.300'
implementation 'com.huawei.hms:ml-computer-vision-cloud:2.0.3.300'
5. Sync the Gradle files.
We have an Activity (MainActivity.java) which has floating buttons to select static scene detection and live scene detection.
Static scene detection is used to detect scene in static images. When we select a photo, the scene detection service returns the results.
Camera stream (Live) scene detection can process camera streams, convert video frames into an MLFrame object, and detect scenarios using the static image detection method.
Implementation of Static scene detection
Code:
private void sceneDetectionEvaluation(Bitmap bitmap) {
    // Create a scene detection analyzer instance based on the customized configuration.
    MLSceneDetectionAnalyzerSetting setting = new MLSceneDetectionAnalyzerSetting.Factory()
            // Set the confidence threshold for scene detection.
            .setConfidence(confidence)
            .create();
    analyzer = MLSceneDetectionAnalyzerFactory.getInstance().getSceneDetectionAnalyzer(setting);
    MLFrame frame = new MLFrame.Creator().setBitmap(bitmap).create();
    Task<List<MLSceneDetection>> task = analyzer.asyncAnalyseFrame(frame);
    task.addOnSuccessListener(new OnSuccessListener<List<MLSceneDetection>>() {
        public void onSuccess(List<MLSceneDetection> result) {
            // Processing logic for scene detection success.
            for (MLSceneDetection sceneDetection : result) {
                sb.append("Detected Scene : " + sceneDetection.getResult() + " , "
                        + "Confidence : " + sceneDetection.getConfidence() + "\n");
            }
            tvResult.setText(sb.toString());
            if (analyzer != null) {
                analyzer.stop();
            }
        }
    }).addOnFailureListener(new OnFailureListener() {
        public void onFailure(Exception e) {
            // Processing logic for scene detection failure.
            if (e instanceof MLException) {
                MLException mlException = (MLException) e;
                // Obtain the result code. You can process the result code and customize the messages displayed to users.
                int errorCode = mlException.getErrCode();
                // Obtain the error message. You can quickly locate the fault based on the result code.
                String errorMessage = mlException.getMessage();
                Log.e(TAG, "MLException : " + errorMessage + ", error code: " + errorCode);
            } else {
                // Other errors.
                Log.e(TAG, "Exception : " + e.getMessage());
            }
            if (analyzer != null) {
                analyzer.stop();
            }
        }
    });
}

@Override
protected void onDestroy() {
    super.onDestroy();
    if (analyzer != null) {
        analyzer.stop();
    }
}
We configure MLSceneDetectionAnalyzerSetting() to set the confidence level for scene detection; the setConfidence() method takes a float value. Once the settings are fixed, we create the analyzer with them and build the frame from a bitmap. Finally, we create a task for the list of MLSceneDetection objects, with listener functions for success and failure. The service returns a list of results, each of which has two parameters: the detected scene and its confidence. We display the response in the TextView tvResult.
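Earlier, camera stream (live) scene detection was mentioned but not implemented. The snippet below is a rough sketch, not taken from the demo project, of how the live mode could be wired up with LensEngine from the HMS ML SDK; the surfaceView field, the display dimensions, and the frame rate are example assumptions.
Code:
// Hypothetical sketch of camera stream (live) scene detection.
analyzer = MLSceneDetectionAnalyzerFactory.getInstance().getSceneDetectionAnalyzer();
// The transactor receives detection results for each processed camera frame.
analyzer.setTransactor(new MLAnalyzer.MLTransactor<MLSceneDetection>() {
    @Override
    public void transactResult(MLAnalyzer.Result<MLSceneDetection> results) {
        SparseArray<MLSceneDetection> items = results.getAnalyseList();
        for (int i = 0; i < items.size(); i++) {
            Log.d(TAG, "Scene: " + items.valueAt(i).getResult()
                    + ", Confidence: " + items.valueAt(i).getConfidence());
        }
    }

    @Override
    public void destroy() {
        // Release resources when the analyzer is stopped.
    }
});
// LensEngine feeds camera frames into the analyzer. Dimensions and FPS are example values.
LensEngine lensEngine = new LensEngine.Creator(getApplicationContext(), analyzer)
        .setLensType(LensEngine.BACK_LENS)
        .applyDisplayDimension(1440, 1080)
        .applyFps(30.0f)
        .enableAutomaticFocus(true)
        .create();
try {
    // surfaceView is assumed to be a SurfaceView showing the camera preview.
    lensEngine.run(surfaceView.getHolder());
} catch (IOException e) {
    lensEngine.release();
}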
For more details, you can visit https://forums.developer.huawei.com/forumPortal/en/topic/0204400184662360088
Will we get accurate results?

【ML】Free Translation — A Real-Time, Ad-Free Translation App

Preface
Free Translation is a real-time translation app that provides a range of services, including speech recognition, text translation, and text-to-speech (TTS).
Developing an AI app like Free Translation tends to require complex machine learning know-how, but integrating ML Kit makes the development quick and easy.
Use Scenarios
Free Translation is equipped to handle a wide range of user needs, for example translating content at work, assisting during travel in a foreign country, communicating with a foreign friend, or learning a new language.
Development Preparations
1. Configure the Huawei Maven repository address.
2. Add build dependencies for the ML SDK.
Open the build.gradle file in the app directory of your project.
Code:
dependencies {
    // Import the automatic speech recognition (ASR) plug-in.
    implementation 'com.huawei.hms:ml-computer-voice-asr-plugin:2.0.3.300'
    // Import the text translation SDK.
    implementation 'com.huawei.hms:ml-computer-translate:2.0.4.300'
    // Import the text translation algorithm package.
    implementation 'com.huawei.hms:ml-computer-translate-model:2.0.4.300'
    // Import the TTS SDK.
    implementation 'com.huawei.hms:ml-computer-voice-tts:2.0.4.300'
    // Import the bee voice package of on-device TTS.
    implementation 'com.huawei.hms:ml-computer-voice-tts-model-bee:2.0.4.300'
}
For more details, please refer to Preparations.
Open the AndroidManifest.xml file in the main directory, and add the relevant permissions above the <application/> line.
Code:
<uses-permission android:name="android.permission.INTERNET" /> <!-- Accessing the Internet. -->
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" /> <!-- Obtaining the network status. -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" /><!-- Upgrading the algorithm version. -->
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" /><!-- Obtaining the Wi-Fi status. -->
<uses-permission android:name="android.permission.RECORD_AUDIO" /><!-- Recording audio data by using the recorder. -->
Development Procedure
UI Design
Customize the app UI according to your needs, based on the layout file activity_main.xml.
Tap on START RECOGNITION to load the ASR module, which recognizes what the user says.
Tap on SYNTHETIC VOICE to load the TTS module, which reads out the resulting translation.
Function Development
You can use the ASR plug-in to quickly integrate the ASR service.
Code:
public void startAsr(View view) {
    // Use an Intent for recognition settings.
    Intent intent = new Intent(this, MLAsrCaptureActivity.class)
            // Set the language that can be recognized to English. If this parameter is not set, English is recognized by default.
            // Supported languages include: "zh-CN": Chinese; "en-US": English; "fr-FR": French; "de-DE": German; "it-IT": Italian.
            .putExtra(MLAsrCaptureConstants.LANGUAGE, Constants.ASR_SOURCE[spinnerInput.getSelectedItemPosition()])
            // Set whether to display the recognition result on the speech pickup UI.
            // MLAsrCaptureConstants.FEATURE_ALLINONE: no; MLAsrCaptureConstants.FEATURE_WORDFLUX: yes.
            .putExtra(MLAsrCaptureConstants.FEATURE, MLAsrCaptureConstants.FEATURE_WORDFLUX);
    // 100: request code between the current activity and the speech pickup UI activity.
    // You can use this code to obtain the processing result of the speech pickup UI.
    startActivityForResult(intent, 100);
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    String text;
    // 100: request code between the current activity and the speech pickup UI activity, which is defined above.
    if (requestCode == 100) {
        switch (resultCode) {
            // MLAsrCaptureConstants.ASR_SUCCESS: Recognition is successful.
            case MLAsrCaptureConstants.ASR_SUCCESS:
                if (data != null) {
                    Bundle bundle = data.getExtras();
                    // Obtain the text recognized from speech.
                    if (bundle != null && bundle.containsKey(MLAsrCaptureConstants.ASR_RESULT)) {
                        text = bundle.getString(MLAsrCaptureConstants.ASR_RESULT);
                        // Process the recognized text.
                        textViewInput.setText(text);
                        Translation.run(this, textViewOutput, spinnerInput.getSelectedItemPosition(),
                                spinnerOutput.getSelectedItemPosition(), text);
                    }
                }
                break;
        }
    }
}
Create the Translation class to use the text translation service.
Step 1 Define the public method, which decides whether to use real-time or on-device translation.
Code:
public static void run(Activity activity, TextView textView, int sourcePosition, int targetPosition, String sourceText) {
    Log.d(TAG, Constants.TRANSLATE[sourcePosition] + ", " + Constants.TRANSLATE[targetPosition] + ", " + sourceText);
    if (isOffline) {
        onDeviceTranslation(activity, textView, sourcePosition, targetPosition, sourceText);
    } else {
        realTimeTranslation(textView, sourcePosition, targetPosition, sourceText);
    }
}
Step 2 Call the real-time or on-device translation method.
Code:
private static void realTimeTranslation(final TextView textView, int sourcePosition, final int targetPosition, String sourceText) {
    Log.d(TAG, "realTimeTranslation");
}

private static void onDeviceTranslation(final Activity activity, final TextView textView, final int sourcePosition, final int targetPosition, final String sourceText) {
    Set<String> result = MLTranslateLanguage.syncGetLocalAllLanguages();
    Log.d(TAG, "Languages supported by on-device translation: " + Arrays.toString(result.toArray()));
}
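The two methods above are stubs in the original post. As a rough sketch of what realTimeTranslation might look like with ML Kit's cloud translator (the mapping of spinner positions to language codes through Constants.TRANSLATE is carried over from the run method and is an assumption here), the body could be filled in like this:
Code:
private static void realTimeTranslation(final TextView textView, int sourcePosition, final int targetPosition, String sourceText) {
    // Configure the cloud (real-time) translator with the selected source and target languages.
    MLRemoteTranslateSetting setting = new MLRemoteTranslateSetting.Factory()
            .setSourceLangCode(Constants.TRANSLATE[sourcePosition])
            .setTargetLangCode(Constants.TRANSLATE[targetPosition])
            .create();
    final MLRemoteTranslator translator = MLTranslatorFactory.getInstance().getRemoteTranslator(setting);
    translator.asyncTranslate(sourceText)
            .addOnSuccessListener(new OnSuccessListener<String>() {
                @Override
                public void onSuccess(String translated) {
                    // Show the translated text and release the translator.
                    textView.setText(translated);
                    translator.stop();
                }
            })
            .addOnFailureListener(new OnFailureListener() {
                @Override
                public void onFailure(Exception e) {
                    Log.e(TAG, "Translation failed: " + e.getMessage());
                    translator.stop();
                }
            });
}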
Create the TTS class to use the text-to-speech service.
Step 1 Just as with Step 1 in Translation, define the public method, which decides whether to use real-time or on-device TTS.
Code:
public static void run(Activity activity, int targetPosition, String sourceText) {
    Log.d(TAG, sourceText);
    if (isNotAuto || sourceText.isEmpty()) {
        return;
    }
    if (isOffline) {
        if (0 == targetPosition) {
            // The original toast message is omitted in the source; a placeholder hint is used here.
            Toast.makeText(activity, "This language is not supported by on-device TTS",
                    Toast.LENGTH_SHORT).show();
            return;
        }
        offlineTts(activity, Constants.TTS_TARGET[targetPosition],
                Constants.TTS_TARGET_SPEAKER_OFFLINE[targetPosition], sourceText);
    } else {
        onlineTts(Constants.TTS_TARGET[targetPosition], Constants.TTS_TARGET_SPEAKER[targetPosition], sourceText);
    }
}
Step 2 Call the real-time or on-device TTS method.
Code:
private static void onlineTts(String language, String person, String sourceText) {
    Log.d(TAG, language + ", " + person + ", " + sourceText);
}

private static void offlineTts(final Activity activity, String language, final String person, final String sourceText) {
    // Use customized parameter settings to create a TTS engine.
    // For details about the speaker names, please refer to the Timbres section.
    final MLTtsConfig mlTtsConfig = new MLTtsConfig().setLanguage(language)
            .setPerson(person)
            // Set the TTS mode to on-device mode. The default mode is real-time mode.
            .setSynthesizeMode(MLTtsConstants.TTS_OFFLINE_MODE);
}
Final Effects
More Information
To join in on developer discussion forums, go to Reddit.
To download the demo app and sample code, go to GitHub.
For solutions to integration-related issues, go to Stack Overflow.
More details

Solution to Creating an Image Classifier

I don't know if it's the same for you, but I always get frustrated when sorting through my phone's album. It seems to take forever before I can find the image that I want to use. As a coder, I can't help but wonder if there's a solution for this. Is there a way to organize an entire album? Well, let's take a look at how to develop an image classifier using a service called image classification.
Development Preparations
1. Configure the Maven repository address for the SDK to be used.
Java:
repositories {
    maven {
        url 'https://cmc.centralrepo.rnd.huawei.com/artifactory/product_maven/'
    }
}
2. Integrate the image classification SDK.
Java:
dependencies {
    // Import the base SDK.
    implementation 'com.huawei.hms:ml-computer-vision-classification:3.3.0.300'
    // Import the image classification model package.
    implementation 'com.huawei.hms:ml-computer-vision-image-classification-model:3.3.0.300'
}
Project Configuration
1. Set the authentication information for the app.
This information can be set through an API key or access token.
Use the setAccessToken method to set an access token during app initialization. This needs to be set only once.
Java:
MLApplication.getInstance().setAccessToken("your access token");
Or, use setApiKey to set an API key during app initialization. This needs to be set only once.
Java:
MLApplication.getInstance().setApiKey("your ApiKey");
2. Create an image classification analyzer in on-device static image detection mode.
Java:
// Method 1: Use customized parameter settings for on-device recognition.
MLLocalClassificationAnalyzerSetting setting =
        new MLLocalClassificationAnalyzerSetting.Factory()
                .setMinAcceptablePossibility(0.8f)
                .create();
MLImageClassificationAnalyzer analyzer = MLAnalyzerFactory.getInstance().getLocalImageClassificationAnalyzer(setting);

// Method 2: Use default parameter settings for on-device recognition.
MLImageClassificationAnalyzer analyzer = MLAnalyzerFactory.getInstance().getLocalImageClassificationAnalyzer();
3. Create an MLFrame object.
Java:
// Create an MLFrame object using the bitmap which is the image data in bitmap format. JPG, JPEG, PNG, and BMP images are supported. It is recommended that the image dimensions be greater than or equal to 112 x 112 px.
MLFrame frame = MLFrame.fromBitmap(bitmap);
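If the image comes from the gallery, a minimal sketch of obtaining the bitmap inside an Activity could look like this (imageUri is a hypothetical Uri returned by the system image picker):
Java:
Bitmap bitmap = null;
try {
    // imageUri is assumed to come from the system image picker.
    bitmap = BitmapFactory.decodeStream(getContentResolver().openInputStream(imageUri));
} catch (FileNotFoundException e) {
    Log.e("ImageClassifier", "Image not found: " + e.getMessage());
}
MLFrame frame = MLFrame.fromBitmap(bitmap);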
4. Call asyncAnalyseFrame to classify images.
Java:
Task<List<MLImageClassification>> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<List<MLImageClassification>>() {
    @Override
    public void onSuccess(List<MLImageClassification> classifications) {
        // Recognition success.
        // Callback when the MLImageClassification list is returned, to obtain information such as image categories.
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Recognition failure.
        try {
            MLException mlException = (MLException) e;
            // Obtain the result code. You can process the result code and customize the messages displayed to users.
            int errorCode = mlException.getErrCode();
            // Obtain the error message. You can quickly locate the fault based on the result code.
            String errorMessage = mlException.getMessage();
        } catch (Exception error) {
            // Handle the conversion error.
        }
    }
});
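The onSuccess callback above is empty. Below is a small sketch, not part of the original walkthrough, of how the returned categories might be read; it assumes MLImageClassification exposes getName() and getPossibility(), so verify these accessors against the API reference.
Java:
// Hypothetical helper for formatting the classification results.
private String formatClassifications(List<MLImageClassification> classifications) {
    StringBuilder builder = new StringBuilder();
    for (MLImageClassification classification : classifications) {
        // Each result carries a category name (such as "Food") and a possibility (confidence) value.
        builder.append(classification.getName())
                .append(": ")
                .append(classification.getPossibility())
                .append("\n");
    }
    return builder.toString();
}
Calling formatClassifications(classifications) from onSuccess and showing the string in a TextView is enough for a simple demo.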
5. Stop the analyzer after recognition is complete.
Java:
try {
    if (analyzer != null) {
        analyzer.stop();
    }
} catch (IOException e) {
    // Exception handling.
}
Demo
Remarks
The image classification capability supports the on-device static image detection mode, on-cloud static image detection mode, and camera stream detection mode. The demo here illustrates only the first mode.
I came up with a bunch of application scenarios for image classification, for example: education apps, where image classification enables users to categorize images taken over a period of time into different albums; travel apps, where it classifies images according to where they were taken or by the objects in them; and file sharing apps, where users can upload and share images by category.
References
>>Image Classification Development Guide
>>Reddit to join developer discussions
>>GitHub to download the sample code
>>Stack Overflow to solve integration problems
