Introduction
In this article, we will learn about Huawei Map Kit in HarmonyOS. Map Kit is an SDK for map development. It covers map data of more than 200 countries and regions and supports over 70 languages. With this SDK, you can easily integrate map-based functions into your HarmonyOS application.
Development Overview
You need to install the DevEco Studio IDE, and I assume that you have prior knowledge of HarmonyOS and Java.
Hardware Requirements
A computer (desktop or laptop) running Windows 10.
A HarmonyOS smart watch (with a USB cable), which is used for debugging.
Software Requirements
Java JDK installation package.
DevEco Studio installed.
Steps:
Step 1: Create a HarmonyOS application.
Step 2: Create a project in AppGallery Connect.
Step 3: Configure the app in AppGallery Connect.
Step 4: Follow the SDK integration steps.
Let's start coding
MapAbilitySlice.java
Java:
public class MapAbilitySlice extends AbilitySlice {
    private static final HiLogLabel LABEL_LOG = new HiLogLabel(HiLog.LOG_APP, 0xD001100, "TAG");
    private MapView mMapView;

    @Override
    public void onStart(Intent intent) {
        super.onStart(intent);
        CommonContext.setContext(this);
        // Declare and initialize the HuaweiMapOptions object.
        HuaweiMapOptions huaweiMapOptions = new HuaweiMapOptions();
        // Initialize camera properties.
        CameraPosition cameraPosition =
                new CameraPosition(new LatLng(12.972442, 77.580643), 10, 0, 0);
        huaweiMapOptions
                // Set camera properties.
                .camera(cameraPosition)
                // Enables or disables the zoom function. By default, the zoom function is enabled.
                .zoomControlsEnabled(false)
                // Sets whether the compass is available. The compass is available by default.
                .compassEnabled(true)
                // Specifies whether the zoom gesture is available. By default, it is available.
                .zoomGesturesEnabled(true)
                // Specifies whether to enable the scrolling gesture. By default, it is enabled.
                .scrollGesturesEnabled(true)
                // Specifies whether the rotation gesture is available. By default, it is available.
                .rotateGesturesEnabled(false)
                // Specifies whether the tilt gesture is available. By default, it is available.
                .tiltGesturesEnabled(true)
                // Sets whether the map is in lite mode. The default value is No.
                .liteMode(false)
                // Set the preferred minimum zoom level.
                .minZoomPreference(3)
                // Set the preferred maximum zoom level.
                .maxZoomPreference(13);
        // Initialize the MapView object.
        mMapView = new MapView(this, huaweiMapOptions);
        // Create the MapView object.
        mMapView.onCreate();
        // Obtain the HuaweiMap object.
        mMapView.getMapAsync(new OnMapReadyCallback() {
            @Override
            public void onMapReady(HuaweiMap huaweiMap) {
                HuaweiMap mHuaweiMap = huaweiMap;
                if (null == mHuaweiMap) {
                    return;
                }
                mHuaweiMap.setOnMapClickListener(new OnMapClickListener() {
                    @Override
                    public void onMapClick(LatLng latLng) {
                        new ToastDialog(CommonContext.getContext()).setText("onMapClick ").show();
                    }
                });
                // Add a circle to the map.
                Circle mCircle = mHuaweiMap.addCircle(new CircleOptions()
                        .center(new LatLng(12.972442, 77.580643))
                        .radius(500)
                        .fillColor(Color.GREEN.getValue()));
                new ToastDialog(CommonContext.getContext())
                        .setText("color green: " + Color.GREEN.getValue()).show();
                int strokeColor = Color.RED.getValue();
                float strokeWidth = 15.0f;
                // Set the edge color of the circle.
                mCircle.setStrokeColor(strokeColor);
                // Set the edge width of the circle.
                mCircle.setStrokeWidth(strokeWidth);
            }
        });
        // Create a layout.
        ComponentContainer.LayoutConfig config = new ComponentContainer.LayoutConfig(
                ComponentContainer.LayoutConfig.MATCH_PARENT,
                ComponentContainer.LayoutConfig.MATCH_PARENT);
        PositionLayout myLayout = new PositionLayout(this);
        myLayout.setLayoutConfig(config);
        ShapeElement element = new ShapeElement();
        element.setShape(ShapeElement.RECTANGLE);
        element.setRgbColor(new RgbColor(255, 255, 255));
        // Apply the white background to the layout.
        myLayout.setBackground(element);
        myLayout.addComponent(mMapView);
        super.setUIContent(myLayout);
    }
}
Result
Tips and Tricks
Add the required dependencies without fail.
Add the required images in resources > base > media.
Add custom strings in resources > base > element > string.json (see the sample below).
Define the supported devices in the config.json file.
Do not log sensitive data.
Enable the required services in AppGallery Connect.
Use the respective log methods to print logs.
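For reference, a minimal string.json sketch (the entry name and value here are placeholders):
Code:
{
  "string": [
    {
      "name": "mainability_description",
      "value": "Map Kit demo"
    }
  ]
}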
Conclusion
In this article, we have learnt how to integrate Huawei Map Kit into a HarmonyOS wearable application. The sample application shows how to implement Map Kit on a HarmonyOS wearable device. I hope this article helps you understand and integrate Map Kit, so that you can use this feature in your HarmonyOS application to display maps on wearable devices.
Thank you so much for reading this article, and I hope it helps you understand Huawei Map Kit in HarmonyOS. Please provide your comments in the comment section and like.
References
Map Kit
In this post, we will learn how to use Kotlin with data binding in Android. Data binding gives you the ability to communicate between your view and model, and it keeps the code clean and organized.
Note: For the configuration part, check the previous article.
How to Use Data Binding Library with Kotlin – A Step By Step Guide
Data binding is a library that allows you to bind the data of your models directly to the XML views in a very flexible way.
Kotlin was recently introduced as a secondary ‘official’ Java language. It is similar to Java in many ways but is a little easier to learn and get to grips with. Some of the biggest companies have adopted Kotlin and seen amazing results.
If you want to use data binding and Kotlin, here are a few things to keep in mind:
· Data binding is a support library, so it can be used with all Android platform versions all the way back to Android 2.1 (API level 7+).
· To use data binding, you need Android Plugin for Gradle 1.5.0-alpha1 or higher. You can see here how to update the Android Plugin for Gradle.
· Android Studio 3.0 fully supports Kotlin.
First of all, create an Android Studio project and add a dependency for Kotlin and a few changes to your app-level build.gradle, as sketched below.
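The screenshot originally shown here is unavailable; below is a minimal sketch of the app-level build.gradle changes (the plugin names are standard, the data binding compiler version is an assumption for this era of the plugin):
Code:
apply plugin: 'com.android.application'
apply plugin: 'kotlin-android'
apply plugin: 'kotlin-kapt'

android {
    // ...
    dataBinding {
        enabled = true
    }
}

dependencies {
    implementation "org.jetbrains.kotlin:kotlin-stdlib-jdk7:$kotlin_version"
    // The data binding annotation processor is required when using Kotlin (kapt).
    kapt "com.android.databinding:compiler:3.1.4"
}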
· Add the below line to the root-level build.gradle:
Code:
classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
Data Binding:
· Android offers support for writing declarative layouts using data binding.
· This minimizes the code needed in your application logic to connect to user interface elements.
· The usage of data binding requires changes in your layout files. Such layout files start with a layout root tag, followed by a data element and a view root element.
· The data element describes the data that is available for binding. The view element contains your root hierarchy, similar to layout files that are not used with data binding.
· References to the data elements or expressions within the layout are written in the attribute properties using the @{} or @={} syntax, as in the sketch below:
1. The user variable within data describes a property that may be used within this layout.
2. Normal view hierarchy.
3. Binding input data.
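A minimal layout sketch illustrating these three parts (the variable name and type are placeholders):
Code:
<?xml version="1.0" encoding="utf-8"?>
<layout xmlns:android="http://schemas.android.com/apk/res/android">
    <!-- 1. The data element declares variables available for binding. -->
    <data>
        <variable name="user" type="com.example.UserInfo"/>
    </data>
    <!-- 2. Normal view hierarchy. -->
    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:orientation="vertical">
        <!-- 3. Binding input data with the @{} syntax. -->
        <TextView
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="@{user.email}"/>
    </LinearLayout>
</layout>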
Code Implementation:
1. SignInActivity.kt
Code:
class SignInActivity : AppCompatActivity() {
    private var mInstance: HiAnalyticsInstance? = null
    private lateinit var mDataBinding: ActivitySigninBinding
    var viewmodel: SignInViewModel? = null
    var customeProgressDialog: CustomeProgressDialog? = null

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        mDataBinding = DataBindingUtil.setContentView(this, R.layout.activity_signin)
        viewmodel = ViewModelProviders.of(this).get(SignInViewModel::class.java)
        mDataBinding.lifecycleOwner = this
        mDataBinding.viewmodel = viewmodel
        customeProgressDialog = CustomeProgressDialog(this)
        initObservables()
        init()
    }

    private fun initObservables() {
        viewmodel?.userLogin?.observe(this, Observer { userInfo ->
            Toast.makeText(this, "welcome, ${userInfo?.email}", Toast.LENGTH_LONG).show()
            val bundle = Bundle()
            bundle.putString("email", userInfo?.email)
            bundle.putString("password", userInfo?.password)
            mInstance!!.onEvent(HAEventType.SIGNIN, bundle)
            val intent = Intent(this, ProfileScreen::class.java)
            startActivity(intent)
        })
        viewmodel?.progressDialog?.observe(this, Observer {
            if (it!!) customeProgressDialog?.show() else customeProgressDialog?.dismiss()
        })
    }

    private fun init() {
        HiAnalyticsTools.enableLog()
        mInstance = HiAnalytics.getInstance(this)
        mInstance?.setAnalyticsEnabled(true)
        mInstance?.regHmsSvcEvent()
    }
}
Viewmodel Class:
· The view model coordinates the view's interaction with the model.
· It may convert or manipulate data so that it can be easily consumed by the view and may implement additional properties that may not be present on the model.
· The view model may define logical states that the view can represent visually to the user.
Code:
class SignInViewModel(application: Application) : AndroidViewModel(application) {
    var btnSelected: ObservableBoolean? = null
    var email: ObservableField<String>? = null
    var password: ObservableField<String>? = null
    var userLogin: MutableLiveData<UserInfo>? = null
    var progressDialog: SingleLiveEvent<Boolean>? = null

    init {
        btnSelected = ObservableBoolean(false)
        email = ObservableField("")
        password = ObservableField("")
        userLogin = MutableLiveData()
        progressDialog = SingleLiveEvent<Boolean>()
    }

    fun onEmailChanged(s: CharSequence, start: Int, before: Int, count: Int) {
        btnSelected?.set(password?.get()!!.length != 0)
    }

    fun onPasswordChanged(s: CharSequence, start: Int, before: Int, count: Int) {
        btnSelected?.set(s.toString().length >= 8)
    }

    fun onLoginClick() {
        progressDialog?.value = false
        val userInfo = UserInfo(email?.get(), password?.get())
        userLogin?.value = userInfo
    }
}
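To wire these viewmodel methods to the layout, the binding expressions look roughly like this (view IDs and surrounding attributes are illustrative):
Code:
<EditText
    android:id="@+id/et_email"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:text="@={viewmodel.email}"
    android:onTextChanged="@{viewmodel::onEmailChanged}"/>
<EditText
    android:id="@+id/et_password"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:text="@={viewmodel.password}"
    android:onTextChanged="@{viewmodel::onPasswordChanged}"/>
<Button
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:enabled="@{viewmodel.btnSelected}"
    android:onClick="@{() -> viewmodel.onLoginClick()}"/>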
Result:
Overview of the application:
Event Analysis:
· Event analysis collects data about interactions with your content.
· An event hit includes a value for each component, and these values are displayed in reports.
Introduction
This article shows the steps to integrate the HMS Flutter Location plugin with a news app.
The news app will obtain the country that the user is located in. It will then fetch that country's news headlines and display them in a list.
For example, if the user is currently located in Hong Kong, the app will show Hong Kong's news headlines on launch. The user may switch to read other countries' news headlines afterwards.
The data source of the news headlines is NewsAPI.
Configuration
Assuming that there is already a running Flutter app:
*Update: for steps 1) and 2) below, the plugin is now uploaded to pub.dev, so there is no need to download and configure it manually.
Adding this to your pubspec.yaml is the preferred way:
Code:
dependencies:
huawei_location: ^4.0.4+300
1) Download the huawei_location flutter plugin and unzip it. For this project, it is placed under the project's root.
2) Configure pubspec.yaml to add the following under dependencies. Replace the path with your own.
Code:
huawei_location:
path: 'hms/location/huawei_location'
3) Configure AndroidManifest.xml for location permission.
Code:
<uses-permission android:name="android.permission.ACCESS_COARES_LOCATION"/>
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>
Project Structure
The architecture of the Flutter project is shown below, which is adapted from this repo:
https://github.com/FilledStacks/flutter-tutorials/tree/master/010-provider-architecture
The project is layered into three parts, namely UI, Services and Business Logic.
The UI is dumb and displays only what it is given. It does not contain any logic to process the data.
The Services layer provides various services and exposes APIs for others to use.
The Business Logic layer contains view models, models, etc.
Since the UI is pretty simple and does not contain any logic to process data, we will mainly focus on Services and Business Logic.
Services
1) Permission Service
Accessing the user's location generally requires permission at runtime.
The permission service exposes two methods to achieve this. For convenience, the built-in PermissionHandler from the huawei_location plugin is used.
Code:
class PermissionServicesImpl implements PermissionServices {
  PermissionHandler _permissionHandler = PermissionHandler();

  @override
  Future<bool> hasLocationPermission() async {
    return _permissionHandler.hasLocationPermission();
  }

  @override
  Future<bool> requestLocationPermission() async {
    return _permissionHandler.requestLocationPermission();
  }
}
For this app, the permission to obtain the user's location information is acquired during app launch. If the user has granted it, the country code obtained is saved with SharedPreferences.
Code:
Future<void> _initApp() async {
  final isPermitted = await _permissionServices.requestLocationPermission();
  if (isPermitted == null) {
    // No answer from the permission request: fall back to the default country.
    await _sharedPreferencesServices.saveCountryCode(CommonString.defaultCountry);
    return;
  }
  if (isPermitted) {
    final hwLocation = await _locationService.getHWLocation();
    if (hwLocation != null) {
      await _sharedPreferencesServices.saveCountryCode(hwLocation.countryCode);
    }
  } else {
    await _sharedPreferencesServices.saveCountryCode(CommonString.defaultCountry);
  }
}
2) Location Service
The location service provides only one method, which returns HWLocation using the huawei_location plugin.
It utilizes the plugin's FusedLocationProviderClient.getLastLocationWithAddress(LocationRequest) to acquire HWLocation.
Remember to set LocationRequest.needAddress to true if you also need to obtain the address information.
Code:
class LocationServicesImpl implements LocationService {
  final PermissionServices _permissionServices =
      serviceLocator<PermissionServices>();

  @override
  Future<HWLocation> getHWLocation() async {
    if (await _permissionServices.hasLocationPermission()) {
      FusedLocationProviderClient locationService =
          FusedLocationProviderClient();
      LocationRequest locationRequest = LocationRequest();
      // needAddress must be true to obtain the address (and country code).
      locationRequest.needAddress = true;
      final hwLocation = await locationService
          .getLastLocationWithAddress(locationRequest);
      return hwLocation;
    }
    // No permission: return null so callers can fall back to a default.
    return null;
  }
}
3) SharedPreferences Service
The shared preferences service provides the ability to store and retrieve the country code. For simplicity, only the abstract class is shown; a minimal implementation sketch follows the snippet.
Code:
abstract class SharedPreferencesServices {
Future<void> saveCountryCode(String countryCode);
Future<String> getCountryCode();
}
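A minimal implementation sketch, assuming the shared_preferences package (the key name is a placeholder):
Code:
import 'package:shared_preferences/shared_preferences.dart';

class SharedPreferencesServicesImpl implements SharedPreferencesServices {
  static const String _countryCodeKey = 'countryCode'; // assumed key name

  @override
  Future<void> saveCountryCode(String countryCode) async {
    final prefs = await SharedPreferences.getInstance();
    await prefs.setString(_countryCodeKey, countryCode);
  }

  @override
  Future<String> getCountryCode() async {
    final prefs = await SharedPreferences.getInstance();
    return prefs.getString(_countryCodeKey);
  }
}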
4) TopHeadlines Service
The top headlines service provides the ability to fetch a specific country's news headlines and to switch to another country's news headlines. For simplicity, only the abstract class is shown; a rough implementation sketch follows the snippet.
Code:
abstract class TopHeadlinesService {
Future<List<Article>> getTopHeadlines(String countryCode);
Future<List<Article>> changeHeadlinesLanguage(String countryCode);
}
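A rough implementation sketch against NewsAPI's top-headlines endpoint, assuming the http package and an Article.fromJson constructor (the API key is a placeholder):
Code:
import 'dart:convert';
import 'package:http/http.dart' as http;

class TopHeadlinesServiceImpl implements TopHeadlinesService {
  static const String _baseUrl = 'https://newsapi.org/v2/top-headlines';
  static const String _apiKey = 'YOUR_NEWSAPI_KEY'; // placeholder

  @override
  Future<List<Article>> getTopHeadlines(String countryCode) async {
    final response = await http
        .get(Uri.parse('$_baseUrl?country=$countryCode&apiKey=$_apiKey'));
    final Map<String, dynamic> body = json.decode(response.body);
    // Map each JSON article object into the app's Article model.
    return (body['articles'] as List)
        .map((article) => Article.fromJson(article))
        .toList();
  }

  @override
  Future<List<Article>> changeHeadlinesLanguage(String countryCode) =>
      getTopHeadlines(countryCode);
}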
Business Logic
1) HeadlinesScreenViewModel:
This viewmodel is responsible for managing the state of the headlines screen, processing data and gluing all the parts together.
In case of a change in state, the viewmodel will notify its listeners (the UI), so that they can act on the event.
To complete the picture, in the loadData method, the country code is first retrieved from SharedPreferences; the country code is then used to call getTopHeadlines(String countryCode) to fetch the headline news of that particular country.
The app calls loadData in the initState lifecycle method of the headlines screen.
Code:
void loadData() async {
  _setIsLoading(true);
  final _countryCode = await _sharedPreferencesServices
      .getCountryCode()
      .timeout(Duration(milliseconds: 2000),
          onTimeout: () => CommonString.defaultCountry);
  _headlines = await _topHeadlinesService
      .getTopHeadlines(_countryCode)
      .timeout(Duration(milliseconds: 2000), onTimeout: () => null);
  _setIsLoading(false);
}
The above explains the necessary parts of using HMS Location Kit for Flutter.
If you are interested in the details, feel free to visit the github repo.
https://github.com/lkhe/news_app
Introduction
To help apps understand image content, the scene detection service can classify the scenario content of images and add labels, such as outdoor scenery, indoor places, and buildings. You can create more customised experiences for users based on the data detected from the image. Currently, Huawei supports detection of 102 scenarios. For details about the scenarios, refer to the List of Scenario Identification Categories.
This service can be used to identify image sets by scenario and create intelligent album sets. You can also select various camera parameters based on the scene in your app, to help users take better-looking photos.
Prerequisite
The scene detection service supports integration with Android 6.0 and later versions.
Scene detection needs the READ_EXTERNAL_STORAGE and CAMERA permissions in AndroidManifest.xml.
The implementation of dynamic permissions for camera and storage is not covered in this article. Please make sure to integrate a dynamic permission feature, roughly as sketched below.
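A minimal sketch of a runtime permission check and request inside your activity, using the standard Android APIs (the request code is arbitrary):
Code:
private static final int PERMISSION_REQUEST_CODE = 100; // arbitrary request code

private void checkAndRequestPermissions() {
    String[] permissions = {
            Manifest.permission.READ_EXTERNAL_STORAGE,
            Manifest.permission.CAMERA
    };
    for (String permission : permissions) {
        if (ContextCompat.checkSelfPermission(this, permission)
                != PackageManager.PERMISSION_GRANTED) {
            // Ask the user for any permission that is not yet granted.
            ActivityCompat.requestPermissions(this, permissions, PERMISSION_REQUEST_CODE);
            return;
        }
    }
}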
Development
1. Register a developer account in AppGallery Connect.
2. Create an application and enable ML Kit in AppGallery Connect.
3. Refer to Service Enabling. Integrate the AppGallery Connect SDK by referring to AppGallery Connect Service Getting Started.
4. Add the Huawei scene detection dependencies in the app-level build.gradle.
Code:
// ML Scene Detection SDK
implementation 'com.huawei.hms:ml-computer-vision-scenedetection:2.0.3.300'
// Import the scene detection model package.
implementation 'com.huawei.hms:ml-computer-vision-scenedetection-model:2.0.3.300'
implementation 'com.huawei.hms:ml-computer-vision-cloud:2.0.3.300'
5. Sync the Gradle files.
We have an activity (MainActivity.java) which has floating buttons to select static scene detection and live scene detection.
Static scene detection is used to detect scenes in static images. When we select a photo, the scene detection service returns the results.
Camera stream (live) scene detection can process camera streams, convert video frames into an MLFrame object, and detect scenarios using the static image detection method (a sketch of this mode is shown after the static implementation below).
Implementation of Static scene detection
Code:
private void sceneDetectionEvaluation(Bitmap bitmap) {
    // Create a scene detection analyzer instance based on the customized configuration.
    MLSceneDetectionAnalyzerSetting setting = new MLSceneDetectionAnalyzerSetting.Factory()
            // Set the confidence threshold for scene detection.
            .setConfidence(confidence)
            .create();
    analyzer = MLSceneDetectionAnalyzerFactory.getInstance().getSceneDetectionAnalyzer(setting);
    MLFrame frame = new MLFrame.Creator().setBitmap(bitmap).create();
    Task<List<MLSceneDetection>> task = analyzer.asyncAnalyseFrame(frame);
    task.addOnSuccessListener(new OnSuccessListener<List<MLSceneDetection>>() {
        public void onSuccess(List<MLSceneDetection> result) {
            // Processing logic for scene detection success.
            for (MLSceneDetection sceneDetection : result) {
                sb.append("Detected Scene : " + sceneDetection.getResult() + " , "
                        + "Confidence : " + sceneDetection.getConfidence() + "\n");
            }
            tvResult.setText(sb.toString());
            // Stop the analyzer once all results have been processed.
            if (analyzer != null) {
                analyzer.stop();
            }
        }
    }).addOnFailureListener(new OnFailureListener() {
        public void onFailure(Exception e) {
            // Processing logic for scene detection failure.
            if (e instanceof MLException) {
                MLException mlException = (MLException) e;
                // Obtain the result code. You can process the result code and
                // customize the messages displayed to users.
                int errorCode = mlException.getErrCode();
                // Obtain the error information. You can quickly locate the fault
                // based on the result code.
                String errorMessage = mlException.getMessage();
                Log.e(TAG, "MLException : " + errorMessage + ", error code: " + errorCode);
            } else {
                // Other errors.
                Log.e(TAG, "Exception : " + e.getMessage());
            }
            if (analyzer != null) {
                analyzer.stop();
            }
        }
    });
}

@Override
protected void onDestroy() {
    super.onDestroy();
    if (analyzer != null) {
        analyzer.stop();
    }
}
We can create the settings with MLSceneDetectionAnalyzerSetting() and set the confidence level for scene detection. The setConfidence() method needs a float value. Once the settings are fixed, we can create the analyzer with the settings value. Then, we can set the frame with the bitmap. Lastly, we create a task for the list of MLSceneDetection objects. We have listener functions for success and failure. The service returns a list of results; each result has two parameters, result and confidence. We set the response to the TextView tvResult.
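For the live mode mentioned earlier, a rough sketch using the SDK's LensEngine is shown below (the display dimensions and fps are placeholder values, and surfaceHolder is assumed to come from your camera preview SurfaceView):
Code:
MLSceneDetectionAnalyzer analyzer =
        MLSceneDetectionAnalyzerFactory.getInstance().getSceneDetectionAnalyzer();
analyzer.setTransactor(new MLAnalyzer.MLTransactor<MLSceneDetection>() {
    @Override
    public void transactResult(MLAnalyzer.Result<MLSceneDetection> results) {
        // Scene detection results for the current video frame.
        SparseArray<MLSceneDetection> detections = results.getAnalyseList();
    }

    @Override
    public void destroy() {
        // Release resources when detection ends.
    }
});
LensEngine lensEngine = new LensEngine.Creator(getApplicationContext(), analyzer)
        .setLensType(LensEngine.BACK_LENS)
        .applyDisplayDimension(1440, 1080)
        .applyFps(30.0f)
        .enableAutomaticFocus(true)
        .create();
try {
    // Start the camera stream and feed frames to the analyzer.
    lensEngine.run(surfaceHolder);
} catch (IOException e) {
    lensEngine.release();
}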
For more details, you can visit the original forum post: https://forums.developer.huawei.com/forumPortal/en/topic/0204400184662360088
Introduction
Huawei provides the Remote Configuration service to manage parameters online. With this service, you can control or change the behavior and appearance of your app online, without requiring user interaction or an app update. By implementing the SDK, you can fetch the online parameter values delivered on the AppGallery Connect console to change the app's behavior and appearance.
Functional features
1. Parameter management: This function enables you to add, delete, and update parameters and set conditional values.
2. Condition management: This function enables you to add, delete, and modify conditions, and to copy and modify existing conditions. Currently, you can set the following conditions: version, country/region, audience, user attribute, user percentage, time, and language. You can expect more conditions in the future.
3. Version management: This function supports managing and rolling back up to 300 historical versions of parameters and conditions for up to 90 days.
4. Permission management: This function allows the account holder, app administrators, R&D personnel, and operations personnel to access Remote Configuration by default.
Service use cases
Change app language by Country/Region
Show Different Content to Different Users
Change the App Theme by Time
Development Overview
You need to install the Unity software, and I assume that you have prior knowledge of Unity and C#.
Hardware Requirements
A computer (desktop or laptop) running Windows 10.
A Huawei phone (with a USB cable), which is used for debugging.
Software Requirements
Java JDK 1.7 or later.
Unity software installed.
Visual Studio/Code installed.
HMS Core (APK) 4.X or later.
Integration Preparations
1. Create a project in AppGallery Connect.
2. Create a Unity project.
3. Add the Huawei HMS AGC services to the project.
4. Download and save the configuration file.
Add the agconnect-services.json file to the following directory: Assets > Plugins > Android.
5. Add the following plugin and dependencies in LauncherTemplate.
Code:
apply plugin:'com.huawei.agconnect'
Code:
implementation 'com.huawei.agconnect:agconnect-remoteconfig:1.4.1.300'
implementation 'com.huawei.agconnect:agconnect-core:1.4.2.301'
6. Add the following dependencies in MainTemplate.
Code:
apply plugin: 'com.huawei.agconnect'
Code:
implementation 'com.huawei.agconnect:agconnect-remoteconfig:1.4.1.300'
implementation 'com.huawei.agconnect:agconnect-core:1.4.2.301'
7. Add the dependencies in the build script repositories and all project repositories, and the classpath, in BaseProjectTemplate.
Code:
maven { url 'https://developer.huawei.com/repo/' }
8. Configure the project in AGC.
9. Create an empty game object, rename it to RemoteConfigManager, add UI canvas texts and a button, and assign onClick events to the respective text and button as shown below.
RemoteConfigManager.cs
C#:
using UnityEngine;
using HuaweiService.RemoteConfig;
using HuaweiService;
using Exception = HuaweiService.Exception;
using System;

public class RemoteConfigManager : MonoBehaviour
{
    public static bool developerMode;

    public delegate void SuccessCallBack<T>(T o);
    public delegate void SuccessCallBack(AndroidJavaObject o);
    public delegate void FailureCallBack(Exception e);

    public void SetDeveloperMode()
    {
        AGConnectConfig config = AGConnectConfig.getInstance();
        developerMode = !developerMode;
        config.setDeveloperMode(developerMode);
        Debug.Log($"set developer mode to {developerMode}");
    }

    public void showAllValues()
    {
        AGConnectConfig config = AGConnectConfig.getInstance();
        if (config != null)
        {
            Map map = config.getMergedAll();
            var keySet = map.keySet();
            var keyArray = keySet.toArray();
            foreach (var key in keyArray)
            {
                Debug.Log($"{key}: {map.getOrDefault(key, "default")}");
            }
        }
        else
        {
            Debug.Log(" No data ");
        }
        config.clearAll();
    }

    void Start()
    {
        SetDeveloperMode();
        SetXmlValue();
    }

    public void SetXmlValue()
    {
        var config = AGConnectConfig.getInstance();
        // Get the resource id of the local default config file (res/xml/remote_config.xml).
        int configId = AndroidUtil.GetId(new Context(), "xml", "remote_config");
        config.applyDefault(configId);
        // Get the merged values.
        Map map = config.getMergedAll();
        var keySet = map.keySet();
        var keyArray = keySet.toArray();
        config.applyDefault(map);
        foreach (var key in keyArray)
        {
            var value = config.getSource(key);
            // Use the key and value ...
            Debug.Log($"{key}: {config.getSource(key)}");
        }
    }

    public void GetCloudSettings()
    {
        AGConnectConfig config = AGConnectConfig.getInstance();
        config.fetch().addOnSuccessListener(new HmsSuccessListener<ConfigValues>((ConfigValues configValues) =>
        {
            config.apply(configValues);
            Debug.Log("===== ** Success ** ====");
            showAllValues();
            config.clearAll();
        }))
        .addOnFailureListener(new HmsFailureListener((Exception e) =>
        {
            Debug.Log("activity failure " + e.toString());
        }));
    }

    public class HmsFailureListener : OnFailureListener
    {
        public FailureCallBack CallBack;

        public HmsFailureListener(FailureCallBack c)
        {
            CallBack = c;
        }

        public override void onFailure(Exception arg0)
        {
            if (CallBack != null)
            {
                CallBack.Invoke(arg0);
            }
        }
    }

    public class HmsSuccessListener<T> : OnSuccessListener
    {
        public SuccessCallBack<T> CallBack;

        public HmsSuccessListener(SuccessCallBack<T> c)
        {
            CallBack = c;
        }

        public void onSuccess(T arg0)
        {
            if (CallBack != null)
            {
                CallBack.Invoke(arg0);
            }
        }

        public override void onSuccess(AndroidJavaObject arg0)
        {
            if (CallBack != null)
            {
                Type type = typeof(T);
                IHmsBase ret = (IHmsBase)Activator.CreateInstance(type);
                ret.obj = arg0;
                CallBack.Invoke((T)ret);
            }
        }
    }
}
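The SetXmlValue method above reads local defaults from a res/xml/remote_config.xml resource. A sketch of such a file is shown below; the keys and values are placeholders, and the exact schema should be checked against the AppGallery Connect documentation:
Code:
<?xml version="1.0" encoding="utf-8"?>
<remoteconfig>
    <value key="welcome_message">Hello from local defaults</value>
    <value key="theme_color">blue</value>
</remoteconfig>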
10. To build the APK, choose File > Build Settings > Build. To build and run, choose File > Build Settings > Build And Run.
Result
Tips and Tricks
Add the agconnect-services.json file without fail.
Make sure the dependencies are added in the build files.
Make sure that you release the configuration once parameters are added or updated.
Conclusion
We have learnt how to integrate the Huawei Remote Configuration service into Unity game development. The Remote Configuration service lets you fetch configuration data from a local XML file and online, i.e. the AG console; changes will reflect immediately once you release them. In conclusion, the service lets you change your app's behaviour and appearance without an app update or user interaction.
Thank you so much for reading this article; I hope it helps you.
Reference
Unity Manual
GitHub Sample Android
Huawei Remote Configuration service
I don't know if it's the same for you, but I always get frustrated when sorting through my phone's album. It seems to take forever before I can find the image that I want to use. As a coder, I can't help but wonder if there's a solution for this. Is there a way to organize an entire album? Well, let's take a look at how to develop an image classifier using a service called image classification.
Development Preparations
1. Configure the Maven repository address for the SDK to be used.
Code:
repositories {
    maven {
        url 'https://developer.huawei.com/repo/'
    }
}
2. Integrate the image classification SDK.
Code:
dependencies {
    // Import the base SDK.
    implementation 'com.huawei.hms:ml-computer-vision-classification:3.3.0.300'
    // Import the image classification model package.
    implementation 'com.huawei.hms:ml-computer-vision-image-classification-model:3.3.0.300'
}
Project Configuration
1. Set the authentication information for the app.
This information can be set through an API key or access token.
Use the setAccessToken method to set an access token during app initialization. This needs to be set only once.
Java:
MLApplication.getInstance().setAccessToken("your access token");
Or, use setApiKey to set an API key during app initialization. This needs to be set only once.
Java:
MLApplication.getInstance().setApiKey("your ApiKey");
2. Create an image classification analyzer in on-device static image detection mode.
Java:
// Method 1: Use customized parameter settings for device-based recognition.
MLLocalClassificationAnalyzerSetting setting =
        new MLLocalClassificationAnalyzerSetting.Factory()
                .setMinAcceptablePossibility(0.8f)
                .create();
MLImageClassificationAnalyzer analyzer = MLAnalyzerFactory.getInstance().getLocalImageClassificationAnalyzer(setting);

// Method 2: Use default parameter settings for on-device recognition.
MLImageClassificationAnalyzer analyzer = MLAnalyzerFactory.getInstance().getLocalImageClassificationAnalyzer();
3. Create an MLFrame object.
Java:
// Create an MLFrame object using the bitmap which is the image data in bitmap format. JPG, JPEG, PNG, and BMP images are supported. It is recommended that the image dimensions be greater than or equal to 112 x 112 px.
MLFrame frame = MLFrame.fromBitmap(bitmap);
4. Call asyncAnalyseFrame to classify images.
Java:
Task<List<MLImageClassification>> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<List<MLImageClassification>>() {
    @Override
    public void onSuccess(List<MLImageClassification> classifications) {
        // Recognition success.
        // Callback when the MLImageClassification list is returned, to obtain
        // information like image categories. For example:
        for (MLImageClassification classification : classifications) {
            String name = classification.getName();
            // Use the category name, e.g. to group the image into an album.
        }
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Recognition failure.
        try {
            MLException mlException = (MLException) e;
            // Obtain the result code. You can process the result code and
            // customize relevant messages displayed to users.
            int errorCode = mlException.getErrCode();
            // Obtain the error message. You can quickly locate the fault
            // based on the result code.
            String errorMessage = mlException.getMessage();
        } catch (Exception error) {
            // Handle the conversion error.
        }
    }
});
5. Stop the analyzer after recognition is complete.
Java:
try {
    if (analyzer != null) {
        analyzer.stop();
    }
} catch (IOException e) {
    // Exception handling.
}
Demo
Remarks
The image classification capability supports the on-device static image detection mode, the on-cloud static image detection mode, and the camera stream detection mode. The demo here illustrates only the first mode.
I came up with a bunch of application scenarios for image classification, for example:
· Education apps: with the help of image classification, such an app enables its users to categorize images taken in a period into different albums.
· Travel apps: image classification allows such apps to classify images according to where they are taken or by the objects in them.
· File sharing apps: image classification allows users of such apps to upload and share images by image category.
References
>>Image classification Development Guide
>>Reddit to join developer discussions
>>GitHub to download the sample code
>>Stack Overflow to solve integration problems