Hello everyone! In this article, we'll develop a Flutter application using Huawei ML Kit's text recognition, translation, and landmark services. Let's get started.
About the Service
Flutter ML Plugin enables communication between the HMS Core ML SDK and Flutter platform. This plugin exposes all functionality provided by the HMS Core ML SDK.
HUAWEI ML Kit allows your apps to easily leverage Huawei’s long-term proven expertise in machine learning to support diverse artificial intelligence (AI) applications throughout a wide range of industries. Thanks to Huawei’s technology accumulation, ML Kit provides diversified leading machine learning capabilities that are easy to use, helping you develop various AI apps.
Configure your project on AppGallery Connect
Registering a Huawei ID
You need to register a Huawei ID to use the plugin. If you don’t have one, follow the instructions here.
Preparations for Integrating HUAWEI HMS Core
First of all, you need to integrate Huawei Mobile Services with your application. I won't go into the details of the integration here, but you can use this tutorial as a step-by-step guide.
Integrating the Flutter ML Plugin
1. Download the ML Kit Flutter Plugin and decompress it.
2. In your Flutter project directory, find and open the pubspec.yaml file and add the library to dependencies to download the package from pub.dev. Alternatively, if you downloaded the package from the HUAWEI Developer website, specify the library's path on your local device. Either way, after running the pub get command, the plugin is ready to use (see the sketch below).
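As a minimal sketch, the dependencies entry in pubspec.yaml could look like the following (the package name huawei_ml and the version shown are assumptions; check pub.dev or the downloaded package for the exact values):
Code:
dependencies:
  # Option 1: download the plugin from pub.dev (version shown is illustrative).
  huawei_ml: ^2.0.3+300
  # Option 2: if you downloaded the plugin from the HUAWEI Developer website,
  # reference its local path instead, for example:
  # huawei_ml:
  #   path: ../huawei_ml/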
1. Text Recognition
The text recognition service extracts text from images of receipts, business cards, and documents. This service is widely used in office, education, transit, and other apps. For example, you can use this service in a translation app to extract text in a photo and translate the text, improving user experience.
This service can run on the cloud or device, but the supported languages differ in the two scenarios. On-device APIs can recognize text in Simplified Chinese, Japanese, Korean, and Latin-based languages (refer to Latin Script Supported by On-device Text Recognition). When running on the cloud, the service can recognize text in languages such as Simplified Chinese, English, Spanish, Portuguese, Italian, German, French, Russian, Japanese, Korean, Polish, Finnish, Norwegian, Swedish, Danish, Turkish, Thai, Arabic, Hindi, and Indonesian.
Remote Text Analyzer
The text analyzer is on the cloud, which runs a detection model on the cloud after the cloud API is called.
Implementation Procedure
Create an MlTextSettings object and set the desired values. The path (of the image to be analyzed) is mandatory.
Code:
MlTextSettings _mlTextSettings;
@override
void initState() {
_mlTextSettings = new MlTextSettings();
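// Note: the image path must also be set on _mlTextSettings before analysis; the path is mandatory (see the plugin documentation for the exact field).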
_checkPermissions();
super.initState();
}
Then call the analyzeRemotely method, passing the MlTextSettings object you've created. On success, it returns an MlText object; otherwise, it throws an exception.
Code:
_startRecognition() async {
_mlTextSettings.language = MlTextLanguage.English;
try {
final MlText mlText = await MlTextClient.analyzeRemotely(_mlTextSettings);
setState(() {
_recognitionResult = mlText.stringValue;
});
} on Exception catch (e) {
print(e.toString());
}
}
Here’s the result.
2. Text Translation
The translation service can translate text into different languages. Currently, this service supports offline translation of text in Simplified Chinese, English, German, Spanish, French, and Russian (automatic model download is supported), and online translation of text in Simplified Chinese, English, French, Arabic, Thai, Spanish, Turkish, Portuguese, Japanese, German, Italian, Russian, Polish, Malay, Swedish, Finnish, Norwegian, Danish, and Korean.
Create an MlTranslatorSettings object and set the values. Source text must not be null.
Code:
MlTranslatorSettings settings;
@override
void initState() {
settings = new MlTranslatorSettings();
super.initState();
}
Then call the getTranslateResult method, passing the MlTranslatorSettings object you've created. On success, it returns the translated text; otherwise, it throws an exception.
Code:
_startRecognition() async {
settings.sourceLangCode = MlTranslateLanguageOptions.English;
settings.sourceText = controller.text;
settings.targetLangCode = MlTranslateLanguageOptions.Turkish;
try {
final String result =
await MlTranslatorClient.getTranslateResult(settings);
setState(() {
_translateResult = result;
});
} on Exception catch (e) {
print(e.toString());
}
}
Here’s the result.
3. Landmark Recognition
The landmark recognition service can identify the names and latitude and longitude of landmarks in an image. You can use this information to create individualized experiences for users. For example, you can create a travel app that identifies a landmark in an image and gives users the location along with everything they need to know about that landmark.
Landmark Recognition
This API is used to carry out the landmark recognition with customized parameters.
Implementation Procedure
Create an MlLandMarkSettings object and set the values. The path (of the image to be analyzed) is mandatory.
Code:
MlLandMarkSettings settings;
String _landmark = "landmark name";
String _identity = "landmark identity";
dynamic _possibility = 0;
dynamic _bottomCorner = 0;
dynamic _topCorner = 0;
dynamic _leftCorner = 0;
dynamic _rightCorner = 0;
@override
void initState() {
settings = new MlLandMarkSettings();
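// Note: the image path must also be set on this settings object before analysis; the path is mandatory (see the plugin documentation for the exact field).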
_checkPermissions();
super.initState();
}
Then call the getLandmarkAnalyzeInformation method, passing the MlLandMarkSettings object you've created. On success, it returns an MlLandmark object; otherwise, it throws an exception.
Code:
_startRecognition() async { // Enclosing function added for completeness; the original snippet omits it and the name is illustrative.
try {
settings.patternType = LandmarkAnalyzerPattern.STEADY_PATTERN;
settings.largestNumberOfReturns = 5;
final MlLandmark landmark =
await MlLandMarkClient.getLandmarkAnalyzeInformation(settings);
setState(() {
_landmark = landmark.landmark;
_identity = landmark.landmarkIdentity;
_possibility = landmark.possibility;
_bottomCorner = landmark.border.bottom;
_topCorner = landmark.border.top;
_leftCorner = landmark.border.left;
_rightCorner = landmark.border.right;
});
} on Exception catch (e) {
print(e.toString());
}
}
Here’s the result.
Demo project GitHub link:
https://github.com/EfnanAkkus/Ml-Kit-Usage-Flutter
Resources:
https://developer.huawei.com/consumer/en/doc/development/HMS-Plugin-Guides/introduction-0000001051432503
https://developer.huawei.com/consumer/en/hms/huawei-mlkit
How much time does it take to integrate ML Kit in Flutter? Does Flutter provide a native look and feel?
Related Links
Original post: https://medium.com/huawei-developers/usage-of-ml-kit-services-in-flutter-42cdc1bc67d
Preface
Free Translation is a real-time translation app that provides a range of services, including speech recognition, text translation, and text-to-speech (TTS).
Developing an AI app like Free Translation tends to require complex machine learning know-how, but integrating ML Kit makes the development quick and easy.
Use Scenarios
Free Translation is equipped to handle a wide range of user needs, for example translating content at work, assisting during travel in a foreign country, helping users communicate with foreign friends, or supporting the learning of a new language.
Development Preparations
1. Configure the Huawei Maven repository address.
2. Add build dependencies for the ML SDK.
Open the build.gradle file in the app directory of your project.
Code:
dependencies {
// Import the automatic speech recognition (ASR) plug-in.
implementation 'com.huawei.hms:ml-computer-voice-asr-plugin:2.0.3.300'
// Import the text translation SDK.
implementation 'com.huawei.hms:ml-computer-translate:2.0.4.300'
// Import the text translation algorithm package.
implementation 'com.huawei.hms:ml-computer-translate-model:2.0.4.300'
// Import the TTS SDK.
implementation 'com.huawei.hms:ml-computer-voice-tts:2.0.4.300'
// Import the bee voice package of on-device TTS.
implementation 'com.huawei.hms:ml-computer-voice-tts-model-bee:2.0.4.300'
}
For more details, please refer to Preparations.
Open the AndroidManifest.xml file in the main directory, and add the relevant permissions above the <application/> line.
Code:
<uses-permission android:name="android.permission.INTERNET" /> <!-- Accessing the Internet. -->
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" /> <!-- Obtaining the network status. -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" /><!-- Upgrading the algorithm version. -->
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" /><!-- Obtaining the Wi-Fi status. -->
<uses-permission android:name="android.permission.RECORD_AUDIO" /><!-- Recording audio data by using the recorder. -->
Development Procedure
UI Design
Customize the app UI according to your needs, based on the layout file activity_main.xml.
Tap on START RECOGNITION to load the ASR module, which recognizes what the user says.
Tap on SYNTHETIC VOICE to load the TTS module, which reads out the resulting translation.
Function Development
You can use the ASR plug-in to quickly integrate the ASR service.
Code:
public void startAsr(View view) {
// Use Intent for recognition settings.
Intent intent = new Intent(this, MLAsrCaptureActivity.class)
// Set the language that can be recognized to English. If this parameter is not set, English is recognized by default. Languages supported include the following: "zh-CN": Chinese; "en-US": English; "fr-FR": French; "de-DE": German; "it-IT": Italian.
.putExtra(MLAsrCaptureConstants.LANGUAGE, Constants.ASR_SOURCE[spinnerInput.getSelectedItemPosition()])
// Set whether to display the recognition result on the speech pickup UI. MLAsrCaptureConstants.FEATURE_ALLINONE: no; MLAsrCaptureConstants.FEATURE_WORDFLUX: yes.
.putExtra(MLAsrCaptureConstants.FEATURE, MLAsrCaptureConstants.FEATURE_WORDFLUX);
// 100: request code between the current activity and speech pickup UI activity. You can use this code to obtain the processing result of the speech pickup UI.
startActivityForResult(intent, 100);
}
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
super.onActivityResult(requestCode, resultCode, data);
String text;
// 100: request code between the current activity and speech pickup UI activity, which is defined above.
if (requestCode == 100) {
switch (resultCode) {
// MLAsrCaptureConstants.ASR_SUCCESS: Recognition is successful.
case MLAsrCaptureConstants.ASR_SUCCESS:
if (data != null) {
Bundle bundle = data.getExtras();
// Obtain the text information recognized from speech.
if (bundle != null && bundle.containsKey(MLAsrCaptureConstants.ASR_RESULT)) {
text = bundle.getString(MLAsrCaptureConstants.ASR_RESULT);
// Process the recognized text information.
textViewInput.setText(text);
Translation.run(this, textViewOutput, spinnerInput.getSelectedItemPosition(),
spinnerOutput.getSelectedItemPosition(), text);
}
}
break;
}
}
}
Create the Translation class to use the text translation service.
Step 1 Define the public method, which decides whether to use real-time or on-device translation.
Code:
public static void run(Activity activity, TextView textView, int sourcePosition, int targetPosition, String sourceText) {
Log.d(TAG, Constants.TRANSLATE[sourcePosition] + ", " + Constants.TRANSLATE[targetPosition] + ", " + sourceText);
if (isOffline) {
onDeviceTranslation(activity, textView, sourcePosition, targetPosition, sourceText);
} else {
realTimeTranslation(textView, sourcePosition, targetPosition, sourceText);
}
}
Step 2 Call the real-time or on-device translation method.
Code:
private static void realTimeTranslation(final TextView textView, int sourcePosition, final int targetPosition, String sourceText) {
Log.d(TAG, "realTimeTranslation");
}
private static void onDeviceTranslation(final Activity activity, final TextView textView, final int sourcePosition, final int targetPosition, final String sourceText) {
Set<String> result = MLTranslateLanguage.syncGetLocalAllLanguages();
Log.d(TAG, "Languages supported by on-device translation: " +Arrays.toString(result.toArray()));
}
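Both methods above are trimmed in the original post; realTimeTranslation, in particular, is only a logging stub. Below is a minimal sketch of what its body could look like, assuming the MLRemoteTranslateSetting, MLTranslatorFactory, and MLRemoteTranslator classes from the ML Kit translation SDK, and reusing the Constants.TRANSLATE language-code array from the run method:
Code:
private static void realTimeTranslation(final TextView textView, int sourcePosition,
        final int targetPosition, String sourceText) {
    // Configure a cloud (real-time) translator for the selected language pair.
    MLRemoteTranslateSetting setting = new MLRemoteTranslateSetting.Factory()
            .setSourceLangCode(Constants.TRANSLATE[sourcePosition])
            .setTargetLangCode(Constants.TRANSLATE[targetPosition])
            .create();
    final MLRemoteTranslator translator = MLTranslatorFactory.getInstance().getRemoteTranslator(setting);
    // Translate asynchronously and show the result in the output TextView.
    translator.asyncTranslate(sourceText)
            .addOnSuccessListener(new OnSuccessListener<String>() {
                @Override
                public void onSuccess(String translated) {
                    textView.setText(translated);
                    // Release the translator once the result has been displayed.
                    translator.stop();
                }
            })
            .addOnFailureListener(new OnFailureListener() {
                @Override
                public void onFailure(Exception e) {
                    Log.e(TAG, "Real-time translation failed: " + e.getMessage());
                }
            });
}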
Create the TTS class to use the text-to-speech service.
Step 1 Just as with Step 1 in Translation, define the public method, which decides whether to use real-time or on-device TTS.
Code:
public static void run(Activity activity, int targetPosition, String sourceText) {
Log.d(TAG, sourceText);
if (isNotAuto || sourceText.isEmpty()) {
return;
}
if (isOffline) {
if (0 == targetPosition) {
Toast.makeText(activity, /* message text omitted in the original post */ "",
Toast.LENGTH_SHORT).show();
return;
}
offlineTts(activity, Constants.TTS_TARGET[targetPosition],
Constants.TTS_TARGET_SPEAKER_OFFLINE[targetPosition], sourceText);
} else {
onlineTts(Constants.TTS_TARGET[targetPosition], Constants.TTS_TARGET_SPEAKER[targetPosition], sourceText);
}
}
Step 2 Call the real-time or on-device TTS method.
Code:
private static void onlineTts(String language, String person, String sourceText) {
Log.d(TAG, language + ", " + person + ", " + sourceText);
}
private static void offlineTts(final Activity activity, String language, final String person, final String sourceText) {
// Use customized parameter settings to create a TTS engine.
// For details about the speaker names, please refer to the Timbres section.
final MLTtsConfig mlTtsConfig = new MLTtsConfig().setLanguage(language)
.setPerson(person)
// Set the TTS mode to on-device mode. The default mode is real-time mode.
.setSynthesizeMode(MLTtsConstants.TTS_OFFLINE_MODE);
}
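The two methods above are also trimmed: onlineTts only logs its arguments, and offlineTts stops after building the configuration. Below is a minimal sketch of the missing synthesis step, assuming the MLTtsEngine and MLTtsCallback classes from the ML Kit TTS SDK (the helper name speakWith is illustrative, and the callback implementation is deliberately minimal):
Code:
// Create an engine from a prepared MLTtsConfig and speak the given text.
private static void speakWith(MLTtsConfig config, String sourceText) {
    MLTtsEngine ttsEngine = new MLTtsEngine(config);
    ttsEngine.setTtsCallback(new MLTtsCallback() {
        @Override
        public void onError(String taskId, MLTtsError err) {
            Log.e(TAG, "TTS error: " + err);
        }
        @Override
        public void onWarn(String taskId, MLTtsWarn warn) {
        }
        @Override
        public void onRangeStart(String taskId, int start, int end) {
        }
        @Override
        public void onAudioAvailable(String taskId, MLTtsAudioFragment audioFragment, int offset,
                Pair<Integer, Integer> range, Bundle bundle) {
        }
        @Override
        public void onEvent(String taskId, int eventId, Bundle bundle) {
            // eventId reports playback events such as start, pause, resume, and stop.
        }
    });
    // Queue the text for synthesis; by default the engine plays the generated audio itself.
    ttsEngine.speak(sourceText, MLTtsEngine.QUEUE_APPEND);
}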
Final Effects
More Information
To join in on developer discussion forums, go to Reddit.
To download the demo app and sample code, go to GitHub.
For solutions to integration-related issues, go to Stack Overflow.
Introduction
In this article, we will learn about Huawei Map Kit in HarmonyOS. Map Kit is an SDK for map development. It covers map data of more than 200 countries and regions and supports over 70 languages. With this SDK, you can easily integrate map-based functions into your HarmonyOS application.
Development Overview
You need to install the DevEco Studio IDE, and I assume that you have prior knowledge of HarmonyOS and Java.
Hardware Requirements
A computer (desktop or laptop) running Windows 10.
A HarmonyOS smart watch (with its USB cable), which is used for debugging.
Software Requirements
Java JDK installation package.
DevEco Studio installed.
Steps:
Step 1: Create a HarmonyOS application.
Step 2: Create a project in AppGallery Connect.
Step 3: Configure the app in AppGallery Connect.
Step 4: Follow the SDK integration steps.
Let's start coding
MapAbilitySlice.java
Java:
public class MapAbilitySlice extends AbilitySlice {
private static final HiLogLabel LABEL_LOG = new HiLogLabel(3, 0xD001100, "TAG");
private MapView mMapView;
@Override
public void onStart(Intent intent) {
super.onStart(intent);
CommonContext.setContext(this);
// Declaring and Initializing the HuaweiMapOptions Object
HuaweiMapOptions huaweiMapOptions = new HuaweiMapOptions();
// Initialize Camera Properties
CameraPosition cameraPosition =
new CameraPosition(new LatLng(12.972442, 77.580643), 10, 0, 0);
huaweiMapOptions
// Set Camera Properties
.camera(cameraPosition)
// Enables or disables the zoom function. By default, the zoom function is enabled.
.zoomControlsEnabled(false)
// Sets whether the compass is available. The compass is available by default.
.compassEnabled(true)
// Specifies whether the zoom gesture is available. By default, the zoom gesture is available.
.zoomGesturesEnabled(true)
// Specifies whether to enable the scrolling gesture. By default, the scrolling gesture is enabled.
.scrollGesturesEnabled(true)
// Specifies whether the rotation gesture is available. By default, the rotation gesture is available.
.rotateGesturesEnabled(false)
// Specifies whether the tilt gesture is available. By default, the tilt gesture is available.
.tiltGesturesEnabled(true)
// Sets whether the map is in lite mode. The default value is No.
.liteMode(false)
// Set Preference Minimum Zoom Level
.minZoomPreference(3)
// Set Preference Maximum Zoom Level
.maxZoomPreference(13);
// Initialize the MapView object.
mMapView = new MapView(this,huaweiMapOptions);
// Create the MapView object.
mMapView.onCreate();
// Obtains the HuaweiMap object.
mMapView.getMapAsync(new OnMapReadyCallback() {
@Override
public void onMapReady(HuaweiMap huaweiMap) {
HuaweiMap mHuaweiMap = huaweiMap;
mHuaweiMap.setOnMapClickListener(new OnMapClickListener() {
@Override
public void onMapClick(LatLng latLng) {
new ToastDialog(CommonContext.getContext()).setText("onMapClick ").show();
}
});
// Initialize the Circle object.
Circle mCircle = new Circle(this);
if (null == mHuaweiMap) {
return;
}
if (null != mCircle) {
mCircle.remove();
mCircle = null;
}
mCircle = mHuaweiMap.addCircle(new CircleOptions()
.center(new LatLng(12.972442, 77.580643))
.radius(500)
.fillColor(Color.GREEN.getValue()));
new ToastDialog(CommonContext.getContext()).setText("color green: " + Color.GREEN.getValue()).show();
int strokeColor = Color.RED.getValue();
float strokeWidth = 15.0f;
// Set the edge color of a circle
mCircle.setStrokeColor(strokeColor);
// Sets the edge width of a circle
mCircle.setStrokeWidth(strokeWidth);
}
});
// Create a layout.
ComponentContainer.LayoutConfig config = new ComponentContainer.LayoutConfig(ComponentContainer.LayoutConfig.MATCH_PARENT, ComponentContainer.LayoutConfig.MATCH_PARENT);
PositionLayout myLayout = new PositionLayout(this);
myLayout.setLayoutConfig(config);
ShapeElement element = new ShapeElement();
element.setShape(ShapeElement.RECTANGLE);
element.setRgbColor(new RgbColor(255, 255, 255));
myLayout.addComponent(mMapView);
super.setUIContent(myLayout);
}
}
Result
Tips and Tricks
Add required dependencies without fail.
Add required images in resources > base > media.
Add custom strings in resources > base > element > string.json.
Define supporting devices in config.json file.
Do not log sensitive data.
Enable required service in AppGallery Connect.
Use respective Log methods to print logs.
Conclusion
In this article, we have learned how to integrate Huawei Map Kit into a HarmonyOS wearable device. The sample application shows how to implement Map Kit on a HarmonyOS wearable, and you can use this feature in your own HarmonyOS application to display a map on wearable devices.
Thank you so much for reading this article, and I hope it helps you understand Huawei Map Kit in HarmonyOS. Please share your thoughts in the comments section and like the post.
References
Map Kit
Programmers are — or should be — voracious readers. To keep up with the latest updates in the world of software, we need to be constantly scrolling through books, forum articles, news, and more.
This process is certainly mentally enriching, but it can also be a tiring and tedious one due to one major obstacle: language. I used to struggle a lot with reading articles written in another language, because I was looking up every word that I couldn't understand in the dictionary in order to make sense of what I was reading — until I developed this.
No muss, no fuss. Just select the foreign text you don't understand and instantly translate it into a language that you want.
Now let's get to the development part. Not being much of a linguist, I knew that I would struggle to develop a translation feature for my app all on my own.
Luckily, I got a great helper — HMS Core ML Kit's translation service. It supports real-time and on-device translation, making translation possible even in the absence of an Internet connection. With the help of the translation service, language barriers become a thing of the past.
Now, I'll explain how I developed this function, using the source code for the demo above.
Development Process
Preparations
Make necessary preparations as detailed here. This includes the following:
Configure the app information.
Enable the service.
Integrate the SDK of the service.
Configure the obfuscation scripts.
Declare necessary permissions.
Function Building
1. Set the app authentication information via an access token:
Code:
MLApplication.getInstance().setAccessToken("your access token");
Or an API key:
Code:
MLApplication.getInstance().setApiKey("your ApiKey");
2. Create an on-device (local) translator.
Code:
MLLocalTranslateSetting setting = new MLLocalTranslateSetting
.Factory()
.setSourceLangCode(mSourceLangCode)
.setTargetLangCode(mTargetLangCode)
.create();
this.localTranslator =
MLTranslatorFactory.getInstance().getLocalTranslator(setting);
3. Query the languages supported by the service.
Code:
MLTranslateLanguage.getCloudAllLanguages().addOnSuccessListener(new OnSuccessListener<Set<String>>() {
@Override
public void onSuccess(Set<String> result) {
// Callback when the supported languages are obtained.
}
});
4. Translate the text.
Code:
localTranslator.preparedModel(downloadStrategy, modelDownloadListener).addOnSuccessListener(new OnSuccessListener<Void>() {
@Override
public void onSuccess(Void aVoid) {
final Task<String> task = localTranslator.asyncTranslate(input);
task.addOnSuccessListener(new OnSuccessListener<String>() {
@Override
public void onSuccess(String text) {
displaySuccess(text, true);
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
displayFailure(e);
}
});
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
displayFailure(e);
}
});
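Note that downloadStrategy and modelDownloadListener are used above without being defined. Below is a minimal sketch of how they might be created, assuming the MLModelDownloadStrategy and MLModelDownloadListener classes from the ML Kit model-download API:
Code:
// Only download the on-device translation model over Wi-Fi.
MLModelDownloadStrategy downloadStrategy = new MLModelDownloadStrategy.Factory()
        .needWifi()
        .create();
MLModelDownloadListener modelDownloadListener = new MLModelDownloadListener() {
    @Override
    public void onProcess(long alreadyDownLength, long totalLength) {
        // Called repeatedly with download progress; update a progress bar here if needed.
    }
};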
5. Release resources occupied by the translator when the translation is complete.
Code:
if (localTranslator != null) {
localTranslator.stop();
}
And voila, the translation function is built.
Besides e-book readers, there are lots of other apps that can benefit greatly from having a translation function, such as travel apps, which can use the translation service to translate foreign road signs and menus for visitors. Translation is also useful for educational apps, to help users who are not familiar with the language used in the app.
That concludes my development journey for the demo e-book reader. What other ideas and suggestions do you have for using the translation function? Feel free to provide your thoughts in the comments section.
References
Why Translation is Important In A World Where English is Everywhere
ML Kit
Background
Quick question: How many languages are there in the world? Before you rush off to search for the answer, read on.
There are over 7000 languages — astonishing, right? Such diversity highlights the importance of translation, which is valuable to us on so many levels because it opens us up to a rich range of cultures. Psycholinguist Frank Smith said that, "one language sets you in a corridor for life. Two languages open every door along the way."
These days, it is very easy for someone to pick up their phone, download a translation app, and start communicating in another language without having a sound understanding of it. It has taken away the need to really master a foreign language. AI technologies such as natural language processing (NLP) not only simplify translation, but also open up opportunities for people to learn and use a foreign language.
Modern translation apps are capable of translating text into another language in just a tap. That's not to say that developing translation at a tap is as easy as it sounds. An integral and initial step of it is language detection, which tells the software what the language is.
Below is a walkthrough of how I implemented language detection for my demo app, using this service from HMS Core ML Kit. It automatically detects the language of input text, and then returns all the codes and the confidence levels of the detected languages, or returns only the code of the language with the highest confidence level. This is ideal for creating a translation app.
Implementation Procedure
Preparations
1. Configure the Maven repository address.
Code:
repositories {
maven {
url 'https://cmc.centralrepo.rnd.huawei.com/artifactory/product_maven/'
}
}
2. Integrate the SDK of the language detection capability.
Code:
dependencies{
implementation 'com.huawei.hms:ml-computer-language-detection:3.4.0.301'
}
Project Configuration
1. Set the app authentication information by setting either an access token or an API key.
Call the setAccessToken method to set an access token. Note that this needs to be set only once during app initialization.
Code:
MLApplication.getInstance().setAccessToken("your access token");
Or, call the setApiKey method to set an API key, which is also required only once during app initialization.
Code:
MLApplication.getInstance().setApiKey("your ApiKey");
2. Create a language detector using either of these two methods.
Code:
// Method 1: Use the default parameter settings.
MLRemoteLangDetector mlRemoteLangDetector = MLLangDetectorFactory.getInstance()
.getRemoteLangDetector();
// Method 2: Use the customized parameter settings.
MLRemoteLangDetectorSetting setting = new MLRemoteLangDetectorSetting.Factory()
// Set the minimum confidence level for language detection.
.setTrustedThreshold(0.01f)
.create();
MLRemoteLangDetector mlRemoteLangDetector = MLLangDetectorFactory.getInstance()
.getRemoteLangDetector(setting);
3. Detect the text language.
Asynchronous method
Code:
// Method 1: Return detection results that contain language codes and confidence levels of multiple languages. In the code, sourceText indicates the text of which the language is to be detected. The maximum character count of the text is 5000.
Task<List<MLDetectedLang>> probabilityDetectTask = mlRemoteLangDetector.probabilityDetect(sourceText);
probabilityDetectTask.addOnSuccessListener(new OnSuccessListener<List<MLDetectedLang>>() {
@Override
public void onSuccess(List<MLDetectedLang> result) {
// Callback when the detection is successful.
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
// Callback when the detection failed.
try {
MLException mlException = (MLException)e;
// Result code for the failure. The result code can be customized with different popups on the UI.
int errorCode = mlException.getErrCode();
// Description for the failure. Used together with the result code, the description facilitates troubleshooting.
String errorMessage = mlException.getMessage();
} catch (Exception error) {
// Handle the conversion error.
}
}
});
// Method 2: Return only the code of the language with the highest confidence level. In the code, sourceText indicates the text of which the language is to be detected. The maximum character count of the text is 5000.
Task<String> firstBestDetectTask = mlRemoteLangDetector.firstBestDetect(sourceText);
firstBestDetectTask.addOnSuccessListener(new OnSuccessListener<String>() {
@Override
public void onSuccess(String s) {
// Callback when the detection is successful.
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
// Callback when the detection failed.
try {
MLException mlException = (MLException)e;
// Result code for the failure. The result code can be customized with different popups on the UI.
int errorCode = mlException.getErrCode();
// Description for the failure. Used together with the result code, the description facilitates troubleshooting.
String errorMessage = mlException.getMessage();
} catch (Exception error) {
// Handle the conversion error.
}
}
});
Synchronous method
Code:
// Method 1: Return detection results that contain language codes and confidence levels of multiple languages. In the code, sourceText indicates the text of which the language is to be detected. The maximum character count of the text is 5000.
try {
List<MLDetectedLang> result= mlRemoteLangDetector.syncProbabilityDetect(sourceText);
} catch (MLException mlException) {
// Callback when the detection failed.
// Result code for the failure. The result code can be customized with different popups on the UI.
int errorCode = mlException.getErrCode();
// Description for the failure. Used together with the result code, the description facilitates troubleshooting.
String errorMessage = mlException.getMessage();
}
// Method 2: Return only the code of the language with the highest confidence level. In the code, sourceText indicates the text of which the language is to be detected. The maximum character count of the text is 5000.
try {
String language = mlRemoteLangDetector.syncFirstBestDetect(sourceText);
} catch (MLException mlException) {
// Callback when the detection failed.
// Result code for the failure. The result code can be customized with different popups on the UI.
int errorCode = mlException.getErrCode();
// Description for the failure. Used together with the result code, the description facilitates troubleshooting.
String errorMessage = mlException.getMessage();
}
4. Stop the language detector when the detection is complete, to release the resources occupied by the detector.
Code:
if (mlRemoteLangDetector != null) {
mlRemoteLangDetector.stop();
}
And once you've done this, your app will have implemented the language detection function.
Conclusion
Translation apps are vital to helping people communicate across cultures, and play an important role in all aspects of our life, from study to business, and particularly travel. Without such apps, communication across different languages would be limited to people who have become proficient in another language.
In order to translate text for users, a translation app must first be able to identify the language of the text. One way of doing this is to integrate a language detection service, which detects the language (or languages) of the text and then returns either all language codes with their confidence levels or just the code of the language with the highest confidence level. This capability improves the efficiency of such apps and builds user confidence in the translations they offer.
References
What Is Natural Language Processing
What Is Language Detection