How to develop a Card Ability - Huawei Developers

Card abilities are intelligent cards displayed on the Huawei Assistant page that show relevant information about your service to users, providing a great user experience even outside your app. If a user subscribes to your Card Ability, an intelligent card is added to the Huawei Assistant page, waiting for the user to start an action. You can use the multiple areas of the card, or add buttons, to trigger different actions in your app or Quick App.
In previous posts I've covered how to perform account binding and trigger events for Card Abilities. Now, let's talk about how to develop a Card Ability.
Prerequisites
An Enterprise Developer Account
Huawei QuickApp IDE
Huawei Ability Test Tool (Find it on AppGallery)
ADB
Notes:
Currently, only Enterprise accounts have access to the Ability Gallery console. Members of a team account won't be able to configure abilities, even if they are added to the team by the Enterprise account.
You don't need an Enterprise account to develop cards, but you will need one to release them.
Starting the Card project
Card Abilities use Quick App technology. If you are familiar with Quick App or front-end development, you will find the card development process quite easy.
Open the Huawei Quick App IDE
Go to File > New Project > New JS Widget Project.
Choose the name, the package name, and a location to store your project, and finally choose a template to start developing your Card.
The source file of a Card has the .ux extension. A Card file is packed in a folder with the same name as the .ux source file; in this directory you will also find an i18n directory where you can put different translations of your static content.
A Card file is divided into three segments:
Template: Defines the visual components of your Card with HTML.
Style: Defines the component appearance using CSS. You can also define your styles in a separate file with the .scss extension.
Script: Contains the business logic of your Card, written in JavaScript.
Handling different locales
You can display your cards in different languages by adding a JS file for each locale you want to support. To add locales, go to the i18n directory of your card, right-click, and select New File.
Create a JS file named with the language code you want to support. Then create a message object with your desired strings.
Code:
// es.js
export const message = {
  title: "Tech Zone",
  textArray: ['Microprocesador', 'Stock Disponible'],
  url: "https://definicion.de/wp-content/uploads/2019/01/microprocesador.jpg",
  buttonArray: ['Comprar']
}

// en.js
export const message = {
  title: "Tech Zone",
  textArray: ['Microprocessor', 'New Item Available'],
  url: "https://definicion.de/wp-content/uploads/2019/01/microprocesador.jpg",
  buttonArray: ['Buy']
}
In the Script part of your card file, import your locales and select the one that matches the system locale. You can also define a default locale in case the user's language is not supported.
Code:
const locales = {
  "en": require('./i18n/en.js'),
  "es": require('./i18n/es.js')
  // TODO: Like the example above, you can add other languages
}
const localeObject = configuration.getLocale();
let local = localeObject.language;
const $i18n = new I18n({ locale: local, messages: locales, fallbackLocale: 'en' }); // use fallbackLocale to define a default locale
Output:
Card in Spanish
Card in English
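The behavior provided by fallbackLocale can be sketched as a plain function. This is a hypothetical illustration of the resolution logic, not part of the Quick App i18n API:

```javascript
// Hypothetical sketch of locale resolution with a fallback,
// mirroring what `fallbackLocale: 'en'` does above.
const locales = {
  en: { message: { title: "Tech Zone", buttonArray: ["Buy"] } },
  es: { message: { title: "Tech Zone", buttonArray: ["Comprar"] } }
};

function resolveLocale(requested, available, fallback) {
  // Use the requested language if a message bundle exists for it,
  // otherwise fall back to the default locale.
  return available[requested] ? requested : fallback;
}

console.log(resolveLocale("es", locales, "en")); // "es"
console.log(resolveLocale("de", locales, "en")); // "en" (unsupported, falls back)
```

With this in mind, fallbackLocale is simply the value returned whenever the device language has no matching file in i18n.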
Data fetching
You can fetch data from remote APIs to perform internal operations or display custom messages by using the fetch interface.
Code:
onInit: function () {
  var fetch = require("@system.fetch")
  var mainContext = this;
  fetch.fetch({
    url: "https://gsajql2q17.execute-api.us-east-2.amazonaws.com/Prod",
    success: function (data) {
      const response = JSON.parse(data.data)
      // Do something with the remote data
    },
    fail: function (data, code) {
      console.log("handling fail, code=" + code);
    }
  })
}
If you want to display the received data, add properties to your exported data object.
Code:
<script>
  // imports
  // locale configs
  module.exports = {
    data: {
      i18n: $i18n,
      apiString: 'loading' // property for receiving the remote parameter
    },
    onInit: function () {
      var fetch = require("@system.fetch")
      var mainContext = this;
      fetch.fetch({
        url: "https://gsajql2q17.execute-api.us-east-2.amazonaws.com/Prod",
        success: function (data) {
          const res = JSON.parse(data.data)
          mainContext.apiString = res.body; // updating apiString with the received value
        },
        fail: function (data, code) {
          console.log("handling fail, code=" + code);
        }
      })
    }
  }
</script>
From the Template part, display the received string by using the following notation:
"{{KEY}}"
Code:
<template>
  <div class="imageandtext1_box" widgetid="38bf7c88-78b5-41ea-84d7-cc332a1c04fc">
    <!-- displaying the received value on the Card title -->
    <card_title title="{{apiString}}" logoUrl="{{logoUrl}}"></card_title>
    <div>
      <div style="flex: 1;align-items: center;">
        <b2_0 text-array="{{$t('message.textArray')}}" lines-two="{{linesTwo}}"></b2_0>
      </div>
      <f1_1 url="{{$t('message.url')}}"></f1_1>
    </div>
    <card_bottom_3 button-array="{{$t('message.buttonArray')}}" menu-click={{onClick}}></card_bottom_3>
  </div>
</template>
Output:
Card with dynamic title
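Mechanically, the "{{KEY}}" binding is a template substitution: each placeholder is replaced by the matching property of the exported data object (the Quick App engine additionally keeps the binding reactive, which this sketch omits). A minimal illustration, not the engine itself:

```javascript
// Illustrative substitution of "{{KEY}}" placeholders with values
// from a data object. The real Quick App engine also re-renders
// automatically when a data property changes.
function render(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) =>
    key in data ? data[key] : ""
  );
}

console.log(render('title="{{apiString}}"', { apiString: "New Item Available" }));
// title="New Item Available"
```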
Handling event parameters
If your card will be triggered by a Push Event, it can be prepared to receive event parameters and use them to perform internal operations and display personalized messages. When Push Events are used, the event parameters are transparently delivered to the card in the onInit function.
Code:
onInit: function (params) {
  // Do something
}
To display the received params, define an exported props array in the Script part of your Card.
Code:
<script>
  // imports
  // locale configs
  module.exports = {
    props: [ // Define all the desired properties to be displayed
      'title',
      'big_text',
      'small_text'
    ],
    data: {
      'screenDensity': 3,
      'resolution': 1080,
      'posterMargin': 0,
      'width': 0,
      'height': 0,
      i18n: $i18n,
    },
    onInit: function (params) {
      // Parameter assignment
      this.title = params.title;
      this.big_text = params.big_text;
      this.small_text = params.small_text;
      this.margin = Math.ceil(this.dpConvert(16));
    },
    dpConvert: function (dpValue) {
      return (dpValue * (750 / (this.resolution / this.screenDensity)));
    }
  }
</script>
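The dpConvert() helper rescales a dp value into the Quick App 750-unit design width, where resolution / screenDensity gives the device width in dp. A standalone restatement with the sample values from data (resolution 1080, screenDensity 3), for illustration:

```javascript
// Standalone version of the dpConvert() helper above,
// using the same sample values (resolution 1080, density 3).
const resolution = 1080;
const screenDensity = 3;

function dpConvert(dpValue) {
  // 750 is the Quick App baseline design width;
  // resolution / screenDensity gives the device width in dp (360 here).
  return dpValue * (750 / (resolution / screenDensity));
}

console.log(dpConvert(16));            // 16 * (750 / 360) = 33.33...
console.log(Math.ceil(dpConvert(16))); // 34, the value assigned to this.margin
```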
In the Template part of your Card, use {{}} to display your received parameters.
Code:
<template>
  <div class="maptype_box" widgetid="12e7d1f4-68ec-4dd3-ab17-dde58e6c94c6">
    <!-- Custom title -->
    <card_title id='title' title="{{title}}"></card_title>
    <div class="maptype_content_box">
      <image src="{{$t('message.url')}}" class="maptype_img" style="width: {{width}}px ;height:{{height}}px;"></image>
      <div class="maptype_one">
        <!-- Custom text -->
        <text class="maptype_text_one">{{big_text}}</text>
      </div>
      <div class="maptype_two">
        <text class="maptype_textFour">{{small_text}}</text>
      </div>
    </div>
    <card_bottom_2 dataSource="Source"></card_bottom_2>
  </div>
</template>
The IDE allows you to define testing parameters so you can check your Card's behavior. Go to Config from the Card selector.
In the dialog that opens, choose the Card you want to prepare for receiving parameters and then modify the Startup Parameter List.
Testing parameters configuration
Output
Conclusion
Card Abilities are intelligent cards able to receive event parameters or download remote data to display dynamic content. You can use these capabilities to display personalized cards and improve the user experience even outside your app.
Reference
Card Ability Development Guide

Related

Search places by keywords with HMS Site Kit

Hello everyone. In this article series, I will tell you about what HMS Site Kit is and how to use its features. With HMS Site Kit, you can provide users with easy and reliable access to services related to locations and places.
With HMS Site Kit, we can offer the following features while helping users discover the world quickly:
We can get place suggestions based on keywords that we specify,
We can search for nearby places based on the location of the user's device,
We can get detailed information about a place,
We can obtain the human-readable address of a coordinate point,
We can learn the time zone in which a coordinate point is located.
In this first article of the Site Kit series, I will explain how to set up Site Kit in an Android project and how to use Keyword Search.
How to integrate HMS Site Kit into a project?
First of all, we need to create a signed Bundle/APK of our project. For this, you can follow the steps shown in the picture below.
We can start creating the signature by clicking Build > Generate Signed Bundle/APK in Android Studio. Then we can start the process of generating a signed APK by selecting the APK option on the screen that appears.
By pressing the Create New button on the screen that appears, we can set the path, password, key alias, and key password of the .jks file to be created for our application. The information in the Certificate section is not mandatory.
After clicking OK, we can continue the APK signing process by entering our passwords.
Then we can complete the signing process by choosing which build variants of our application we want to sign, such as release and debug, together with the signature versions.
After completing this process, we need to configure this signature file in the app-level build.gradle file of our application.
Code:
signingConfigs {
    release {
        storeFile file('SiteKitDemo.jks')
        keyAlias 'SiteKitDemo'
        keyPassword '****'
        storePassword '****'
        v1SigningEnabled true
        v2SigningEnabled true
    }
}
buildTypes {
    release {
        signingConfig signingConfigs.release
        minifyEnabled false
        proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
    }
    debug {
        signingConfig signingConfigs.release
        debuggable true
    }
}
After performing this process, we can synchronize Gradle by clicking the Sync Now button.
Next, we need to add the SHA-256 certificate fingerprint of our application to AppGallery Connect.
We can obtain the SHA-256 certificate fingerprint via terminal/CMD. To do this, go to the bin folder under the JRE (Java Runtime Environment) installed on your system and run the command below.
Code:
C:\Program Files (x86)\Java\jre1.8.0_251\bin>keytool -list -v -keystore <keystore-path>
We will see a screen output as follows. Here, we need to get the SHA256 value.
We need to add this SHA256 value to the General Information section of the Project Settings menu in AppGallery Connect. After that, we can download the agconnect-services.json file.
Next, we need to enable the Site Kit service on the Manage APIs tab under Project Settings.
We then add the downloaded agconnect-services.json file under the app folder of our Android Studio project as follows.
After performing this process, we need to add the Huawei Maven repository and dependencies to our build.gradle files.
First, open the project-level build.gradle file and add the lines shown below:
Code:
// Top-level build file where you can add configuration options common to all sub-projects/modules.
buildscript {
    ext.kotlin_version = '1.3.61'
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' } // HUAWEI Maven repository
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:3.6.1'
        classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
        // NOTE: Do not place your application dependencies here; they belong
        // in the individual module build.gradle files
        def agcpVersion = "1.3.1.300"
        classpath "com.huawei.agconnect:agcp:${agcpVersion}" // HUAWEI agcp plugin
    }
}
allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' } // HUAWEI Maven repository
    }
}
task clean(type: Delete) {
    delete rootProject.buildDir
}
Then we need to add the plugin and dependencies to the app-level build.gradle file as follows.
Code:
apply plugin: 'com.huawei.agconnect'
...
dependencies {
    ...
    def siteKitVersion = "4.0.3.300"
    implementation "com.huawei.hms:site:${siteKitVersion}"
}
To prevent the HMS SDK from being obfuscated, we need to add the following lines to the proguard-rules.pro file located under the app folder.
Code:
-ignorewarnings
-keepattributes *Annotation*
-keepattributes Exceptions
-keepattributes InnerClasses
-keepattributes Signature
-keepattributes SourceFile,LineNumberTable
-keep class com.hianalytics.android.**{*;}
-keep class com.huawei.updatesdk.**{*;}
-keep class com.huawei.hms.**{*;}
HMS Site Kit — Keyword Search
Thanks to this feature of Site Kit, we can search for many kinds of places, such as tourist attractions, restaurants, schools, and hotels, by entering information such as keywords and coordinates.
From the search results, we can easily access a lot of information about each place, such as its name, address, coordinates, phone number, pictures, and address details. Within the AddressDetail model, we can access the address piece by piece through different variables and change how the address is written as we wish.
First, I will share a method that searches by keyword; then we will examine these pieces of code and parameters one by one.
Code:
fun keywordSearch(coordinate: Coordinate, keyword: String, radius: Int,
                  pageIndex: Int, pageSize: Int, countryCode: String) {
    val searchService = SearchServiceFactory.create(context,
        URLEncoder.encode(
            "Your-API-KEY",
            "utf-8"))
    var request = TextSearchRequest()
    request.query = keyword
    request.location = coordinate
    request.radius = radius
    request.pageIndex = pageIndex
    request.pageSize = pageSize
    request.language = Locale.getDefault().language // Getting the system language
    request.countryCode = countryCode
    searchService.textSearch(request, object : SearchResultListener<TextSearchResponse> {
        override fun onSearchError(searchStatus: SearchStatus?) {
            Log.e("SITE_KIT", "${searchStatus?.errorCode} - ${searchStatus?.errorMessage}")
        }

        override fun onSearchResult(textSearchResponse: TextSearchResponse?) {
            var siteList = textSearchResponse?.sites
            siteList?.let {
                for (site in siteList) {
                    Log.i("SITE_KIT", "Name => ${site.name}," +
                            "Format address => ${site.formatAddress}, " +
                            "Coordinate => ${site.location.lat} - ${site.location.lng}, " +
                            "Phone => ${site.poi.phone}, " +
                            "Photo URLS => ${site.poi.photoUrls}, " +
                            "Rating => ${site.poi.rating}, " +
                            "Address Detail => ${site.address.thoroughfare}, ${site.address.subLocality}, " +
                            "${site.address.locality}, ${site.address.adminArea}, ${site.address.country}")
                }
            } ?: kotlin.run {
                Log.e("SITE_KIT", "There is not any site")
            }
        }
    })
}
First, we need to create a SearchService object from the SearchServiceFactory class. For this, we can use the create() method of SearchServiceFactory, which takes two parameters.
The first parameter is the context. It is recommended that the context be an Activity; otherwise, when HMS Core (APK) needs to be updated, we will not receive any notification about it.
The second parameter is the API key, which we can access via AppGallery Connect. This value is generated automatically by AppGallery Connect when a new app is created. We need to URL-encode the API key.
We need to create a TextSearchRequest object to search by keyword. We will set the related search criteria on this object.
While performing the search, we can set many different criteria, as we see in the code snippet. Let us examine these criteria one by one:
Query: Used to specify the keyword that we will use during the search.
Location: Used to specify latitude and longitude values with a Coordinate object, to ensure that the results are as close as possible to the location we want.
Radius: Used to restrict the search results to a radius specified in meters. It can take values between 1 and 50000, and its default value is 50000.
CountryCode: Used to limit search results to a particular country's borders.
Language: Used to specify the language in which search results should be returned. If this parameter is not specified, the language of the keyword specified in the query field is accepted by default. In the example code snippet above, the device language has been used in order to get a healthy result.
PageSize: Results are returned paginated. This parameter determines the number of Sites on each page.
PageIndex: Used to specify the number of the page to be returned.
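Taken together, pageSize and pageIndex describe an ordinary pagination scheme. A quick sketch of the arithmetic (illustrative only, JavaScript used for brevity; not Site Kit code):

```javascript
// Illustrative pagination arithmetic for pageSize / pageIndex.
function pageCount(totalResults, pageSize) {
  // Number of pages needed to cover all results.
  return Math.ceil(totalResults / pageSize);
}

// With 45 matching sites and a pageSize of 20,
// the results span 3 pages (pageIndex 1, 2 and 3).
console.log(pageCount(45, 20)); // 3
```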
Once we have set our criteria, we can start the search using the textSearch() method of the SearchService object. textSearch() takes two parameters.
The first parameter is the TextSearchRequest object we defined above, where our search criteria are set.
For the second parameter, we need to implement the SearchResultListener interface. Since SearchResultListener is generic, we specify the data type the results will be returned in; here we use the TextSearchResponse class, a model provided by Huawei Site Kit. This interface contains two methods, onSearchError() and onSearchResult(). We can configure these methods according to the logic of our program and take the necessary actions.
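In other words, SearchResultListener is a two-callback contract: each request ends in exactly one of onSearchResult() or onSearchError(). The shape of that contract can be sketched in a few lines (an illustrative mock in JavaScript; this textSearch is a stand-in, not the Site Kit API):

```javascript
// Illustrative mock of a SearchResultListener-style contract:
// one request ends in exactly one of the two callbacks.
function textSearch(request, listener) {
  if (!request.query) {
    // Invalid request: report through the error callback.
    listener.onSearchError({ errorCode: "INVALID_REQUEST" });
    return;
  }
  // Valid request: report through the result callback.
  listener.onSearchResult({ sites: [{ name: "Sample Site" }] });
}

let outcome;
textSearch({ query: "restaurant" }, {
  onSearchResult: (resp) => { outcome = resp.sites.length; },
  onSearchError: (status) => { outcome = status.errorCode; }
});
console.log(outcome); // 1
```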

The Ultimate Scanning App From Huawei

Have you ever wanted to be able to scan and create 13+ different types of barcodes from your app? Hopefully you have, otherwise I'm not sure why you're reading this. If you do, then Huawei has an SDK for you: Scan Kit.
Scan Kit allows you to scan and generate various different one-dimensional and two-dimensional barcodes with ease, and you don't even need a Huawei device to use it.
Interested? No? Well, get interested. Now? Good. Let's get started.
Preparation
First up, make sure you have a Huawei Developer Account. This process can take a couple days, and you'll need one to use this SDK, so be sure to start that as soon as possible. You can sign up at https://developer.huawei.com.
Next, you'll want to obtain the SHA-256 representation of your app's signing key. If you don't have a signing key yet, be sure to create one before continuing. To obtain your signing key's SHA-256, you'll need to use Keytool which is part of the JDK installation. Keytool is a command-line program. If you're on Windows, open CMD. If you're on Linux, open Terminal.
On Windows, you'll need to "cd" into the directory containing the Keytool executable. For example, if you have JDK 1.8 v231 installed, Keytool will be located at the following path:
Code:
C:\Program Files\Java\jdk1.8.0_231\bin\
Once you find the directory, "cd" into it:
Code:
C: #Make sure you're in the right drive
cd C:\Program Files\Java\jdk1.8.0_231\bin\
Next, you need to find the location of your keystore. Using Android's debug keystore as an example, where the Android SDK is hosted on the "E:" drive in Windows, the path will be as follows:
Code:
E:\AndroidSDK\.android\debug.keystore
(Keytool also supports JKS-format keystores.)
Now you're ready to run the command. On Windows, it'll look something like this:
Code:
keytool -list -v -keystore E:\AndroidSDK\.android\debug.keystore
On Linux, the command should be similar, just using UNIX-style paths instead.
Enter the keystore password, and the key name (if applicable), and you'll be presented with something similar to the following:
Make note of the SHA256 field.
SDK Setup
Now we're ready to add the Scan Kit SDK to your Android Studio project. Go to your Huawei Developer Console and click the HUAWEI AppGallery tile. Agree to the terms of use if prompted.
Click the "My projects" tile here. If you haven't already added your project to the AppGallery, add it now. You'll be asked for a project name. Make it something descriptive so you know what it's for.
Now, you should be on a screen that looks something like the following:
Click the "Add app" button. Here, you'll need to provide some details about your app, like its name and package name.
Once you click OK, some SDK setup instructions will be displayed. Follow them to get everything added to your project.
You'll also need to add one of the following to the "dependencies" section of your app-level build.gradle file:
Code:
implementation 'com.huawei.hms:scan:1.2.0.301'
implementation 'com.huawei.hms:scanplus:1.2.0.301'
The difference between these dependencies is fairly simple. The basic Scan Kit SDK is only about 0.8MB in size, while the Scan Kit Plus SDK is about 3.3MB. The basic SDK will function identically to the Plus SDK on Huawei devices. On non-Huawei devices, you'll lose the enhanced recognition capabilities with the basic SDK.
If your app has size restrictions, and you only need basic QR code recognition, then the basic SDK is the best choice. Otherwise, pick the Plus SDK.
If you ever need to come back to these instructions, you can always click the "Add SDK" button after "App information" on the "Project setting" page.
Now you should be back on the "Project setting" page. Find the "SHA-256 certificate fingerprint" field under "App information," click the "+" button, and paste your SHA-256.
Now, if you're using obfuscation in your app, you'll need to whitelist a few things for HMS to work properly.
For ProGuard:
Code:
-ignorewarnings
-keepattributes *Annotation*
-keepattributes Exceptions
-keepattributes InnerClasses
-keepattributes Signature
-keepattributes SourceFile,LineNumberTable
-keep class com.hianalytics.android.**{*;}
-keep class com.huawei.updatesdk.**{*;}
-keep class com.huawei.hms.**{*;}
For AndResGuard:
Code:
"R.string.hms*",
"R.string.agc*",
"R.string.connect_server_fail_prompt_toast",
"R.string.getting_message_fail_prompt_toast",
"R.string.no_available_network_prompt_toast",
"R.string.third_app_*",
"R.string.upsdk_*",
"R.layout.hms*",
"R.layout.upsdk_*",
"R.drawable.upsdk*",
"R.color.upsdk*",
"R.dimen.upsdk*",
"R.style.upsdk*
That's it! The Scan Kit SDK should now be available in your project.
Basic Usage
In order to properly use Scan Kit, you'll need to declare some permissions in your app:
XML:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
These are both dangerous-level permissions, so make sure you request access to them at runtime on Marshmallow and later.
You should also declare a couple required device features, to make sure distribution platforms can hide your app on incompatible devices:
XML:
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
Now that you've got the permissions and features declared, it's time to start scanning. There are four modes you can use: Default View, Customized View, Bitmap, and MultiProcessor.
Default View
The Default View is the quickest way to get set up and running. Scan Kit provides the UI and does all the scanning work for you. This mode is able to scan barcodes live through the camera viewfinder, or by scanning existing images.
The first thing you'll need to do to use this is to declare the Scan Kit's scanner Activity in your Manifest:
XML:
<activity android:name="com.huawei.hms.hmsscankit.ScanKitActivity" />
Next, simply call the appropriate method to start the Activity. This must be done from an Activity context, since it makes use of Android's onActivityResult() API.
Code:
//The options here are optional. You can simply pass null
//to the startScan() method below.
//This is useful if you're only looking to scan certain
//types of codes. Scan Kit currently supports 13. You can
//see the list here: https://developer.huawei.com/consumer/en/doc/HMSCore-Guides-V5/barcode-formats-supported-0000001050043981-V5
val options = HmsScanAnalyzerOptions.Creator()
.setHmsScanTypes(
HmsScan.QRCODE_SCAN_TYPE,
HmsScan.DATAMATRIX_SCAN_TYPE
)
.create()
ScanUtil.startScan(activity, REQUEST_CODE, options)
Now you just need to handle the result:
Code:
override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
    super.onActivityResult(requestCode, resultCode, data)

    if (resultCode != Activity.RESULT_OK || data == null) {
        //Scanning wasn't successful. Nothing to handle.
        return
    }

    if (requestCode == REQUEST_CODE) {
        //Scan succeeded. Let's get the data.
        val hmsScan = data.getParcelableExtra<HmsScan>(ScanUtil.RESULT)
        if (hmsScan != null) {
            //Handle appropriately.
        }
    }
}
We'll talk more about handling the result later.
Customized View
Customized View is similar to Default View in that Scan Kit will handle the scanning logic for you. The difference is that this mode lets you create your own UI. You're also limited to using the live viewfinder.
The following code should be placed inside your Activity's onCreate() method:
Code:
override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_scan)

    //A ViewGroup to hold the scanning View that HMS will create.
    val container = findViewById<FrameLayout>(R.id.scanning_container)

    //The position/size of the scanning area inside the scanning
    //View itself. This is optional.
    val scanBounds = Rect()

    //The scanning View itself.
    val scanView = RemoteView.Builder()
        .setContext(this)
        .setBoundingBox(scanBounds) //optional
        .setFormat(HmsScan.QR_SCAN_TYPE) //optional, accepts a vararg of formats
        .build()

    //Initialize the scanning View.
    scanView.onCreate(savedInstanceState)
    container.addView(scanView)

    scanView.setOnResultCallback { result ->
        //`result` is an Array<HmsScan>
        //Handle accordingly.
    }
}
We'll talk about how to handle the HmsScan object later.
In a real application, the "scanView" variable should be a class-level property. It's lifecycle-aware, so it has all the lifecycle methods, which you should forward:
Code:
override fun onStart() {
    super.onStart()
    scanView.onStart()
}
//etc.
//etc
Bitmap
This mode is useful if you have a Bitmap and you want to decode it. You can get that Bitmap however you want. All that matters is that you have one.
The example below shows how to obtain a Bitmap either from a camera capture frame or from an existing image in internal storage, and how to pass that Bitmap to Scan Kit for processing.
Code:
val img = YuvImage(
    //A byte array representation of an image.
    data,
    //Depends on your image type: https://developer.android.com/reference/android/graphics/ImageFormat
    ImageFormat.NV21,
    //The width of the image data.
    width,
    //The height of the image data.
    height
)

val stream = ByteArrayOutputStream()
//Copy the data to an output stream.
img.compressToJpeg(
    //Size of the JPEG, in a Rect representation.
    Rect(0, 0, width, height),
    //Quality.
    100,
    //Output stream.
    stream
)

//Create a Bitmap.
val bmp = BitmapFactory.decodeByteArray(
    stream.toByteArray(),
    0,
    stream.toByteArray().size
)

//If you want to use a pre-existing image instead, just obtain
//a Bitmap for that. This line would replace the block above, placed
//in onActivityResult() after using ACTION_GET_CONTENT for an image:
//val bmp = MediaStore.Images.Media.getBitmap(contentResolver, data.data)

//Set up the options.
val options = HmsScanAnalyzerOptions.Creator()
    //Scan types are optional.
    .setHmsScanTypes(
        HmsScan.QRCODE_SCAN_TYPE,
        HmsScan.DATAMATRIX_SCAN_TYPE
    )
    //If you're retrieving a Bitmap from an existing photo,
    //set this to `true`.
    .setPhotoMode(false)
    .create()

//Send the scanning request and get the result.
val scans: Array<HmsScan> = ScanUtil.decodeWithBitmap(context, bmp, options)
if (!scans.isNullOrEmpty()) {
    //Scanning was successful.
    //Check individual scanning results.
    scans.forEach {
        //Handle.
        //If zoomValue != 1.0, make sure you update
        //the focal length of your camera interface,
        //if you're using a camera. See Huawei's
        //documentation for details on this.
        //https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides/android-bitmap-camera-0000001050042014
        val zoomValue = it.zoomValue
    }
}
We'll go over how to handle the HmsScan later on.
MultiProcessor
The MultiProcessor mode supports both existing images and using a viewfinder. It can take image frames and analyze them using machine learning to simultaneously detect multiple barcodes, along with other objects, like faces.
There are two ways to implement this mode: synchronously and asynchronously.
Here's an example showing how to implement it synchronously.
Code:
//Create the options (optional).
val options = HmsScanAnalyzerOptions.Creator()
.setHmsScanTypes(
HmsScan.QRCODE_SCAN_TYPE,
HmsScan.DATAMATRIX_SCAN_TYPE
)
.create()
//Set up the barcode detector.
val barcodeDetector = HmsScanAnalyzer(options /* or null */)
//You'll need some sort of Bitmap reference.
val bmp = /* However you want to obtain it */
//Convert that Bitmap to an MLFrame.
val frame = MLFrame.fromBitmap(bmp)
//Analyze and handle result(s).
val results: SparseArray<HmsScan> = barcodeDetector.analyseFrame(frame)
results.forEach {
//Handle accordingly.
}
To analyze asynchronously, just change the method called on "barcodeDetector":
Code:
barcodeDetector.analyzInAsyn(frame)
.addOnSuccessListener { scans ->
//`scans` is a List<HmsScan>
scans?.forEach {
//Handle accordingly.
}
}
.addOnFailureListener { error ->
//Scanning failed. Check Exception.
}
Parsing Barcodes
It's finally time to talk about parsing the barcodes! Yay!
The HmsScan object has a bunch of methods you can use to retrieve information about barcodes scanned by each method.
Code:
val scan: HmsScan = /* some HmsScan instance */
//The value contained in the code. For instance,
//if it's a QR code for an SMS, this value will be
//something like: "smsto:1234567890:Hi!".
val originalValue = scan.originalValue
//The type of code. For example, SMS_FORM, URL_FORM,
//or WIFI_CONNTENT_INFO_FORM. You can find all the forms
//here: https://developer.huawei.com/consumer/en/doc/development/HMSCore-References-V5/scan-hms-scan4-0000001050167739-V5.
val scanTypeForm = scan.scanTypeForm
//If it's a call or SMS, you can retrieve the
//destination phone number. This will be something
//like "1234567890".
val destPhoneNumber = scan.destPhoneNumber
//If it's an SMS, you can retrieve the message
//content. This will be something like "Hi!".
val smsContent = scan.smsContent
There are plenty more methods in the HmsScan class to help you parse various barcodes. You can see them all in Huawei's API reference.
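Scan Kit does this parsing for you, but it can help to see how the raw value maps onto those fields. Below is a plain-Java sketch (not part of the Scan Kit API; `SmsValueParser` is a hypothetical helper) that splits an "smsto:" value the same way destPhoneNumber and smsContent report it.

```java
// Hypothetical helper: splits a raw "smsto:<number>:<message>" value,
// mirroring what HmsScan.destPhoneNumber / HmsScan.smsContent return.
public class SmsValueParser {
    public static String[] parse(String originalValue) {
        if (originalValue == null || !originalValue.startsWith("smsto:")) {
            return null; // Not an SMS barcode value.
        }
        // Strip the "smsto:" scheme, then split the number from the
        // message at the first remaining colon.
        String payload = originalValue.substring("smsto:".length());
        int sep = payload.indexOf(':');
        String number = sep >= 0 ? payload.substring(0, sep) : payload;
        String message = sep >= 0 ? payload.substring(sep + 1) : "";
        return new String[] { number, message };
    }

    public static void main(String[] args) {
        String[] parts = parse("smsto:1234567890:Hi!");
        System.out.println(parts[0] + " / " + parts[1]); // 1234567890 / Hi!
    }
}
```

In practice you should prefer the HmsScan getters, which already handle every supported format; this sketch is only to make the raw value concrete.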
Barcode Generation
Finally, let's talk about how to make your own barcodes using Scan Kit. This example will show you how to make a QR code, although you should be able to create any of the formats supported by Scan Kit.
Code:
//The code's content.
val content = "Some Content"
//Can be any supported type.
val type = HmsScan.QRCODE_SCAN_TYPE
//Output dimensions.
val width = 400
val height = 400
//Create the options.
val options = HmsBuildBitmapOption.Creator()
    //The background of the barcode image. White is the default.
    .setBitmapBackgroundColor(Color.RED)
    //The color of the barcode itself. Black is the default.
    .setBitmapColor(Color.BLUE)
    //Margins around the barcode. 1 is the default.
    .setBitmapMargin(3)
    .create()
try {
    //Build the output Bitmap.
    val output = ScanUtil.buildBitmap(content, type, width, height, options)
    //Save it to internal storage, upload it somewhere, etc.
} catch (e: WriterException) {
    //Creation failed. This can happen if one of the arguments for `buildBitmap()` is invalid.
}
Conclusion
And that's it!
Scan Kit is a powerful SDK that doesn't even require a Huawei device to work. Hopefully, this guide helped you get up and running. For more details on implementation, along with the API reference, be sure to check out Huawei's full documentation.

Integrate AppGallery Connect's Cloud DB Service in Three Easy Steps

Cloud DB is a device-cloud synergy database product that provides data synergy management capabilities between the device and cloud, unified data models, and various data management APIs. In addition to ensuring data availability, reliability, consistency, and security, Cloud DB enables seamless data synchronization between the device and cloud, and supports offline app operations, helping you quickly develop device-cloud and multi-device synergy apps. For more information about Cloud DB, please click here.
Cloud DB can be easily integrated into apps through its SDK and APIs, ensuring security and reliability. During integration, the SDK and APIs ease your workload by performing server setup, deployment, and O&M for you.
Now, let’s first take a look at how to quickly integrate Cloud DB in Android apps. You only need to perform the following three steps:
1. Create an object type and a Cloud DB zone.
2. Export the object type and perform account authentication.
3. Integrate the Cloud DB SDK into your Android project and call APIs to add, delete, modify, or query data.
1. Creating an Object Type and a Cloud DB Zone
Cloud DB is still in beta, so you’ll need to send an email to apply for the service. For more details on how to do so, please read the following documentation.
1.1. What Is an Object Type?
Simply put, each object type corresponds to a table used to store data in your database. Think of it as creating an Excel file to store data, whereby each sheet in the Excel file is equivalent to an object type in Cloud DB.
1.2. What Is a Cloud DB Zone?
A Cloud DB zone is an independent data storage zone. Multiple Cloud DB zones are independent of each other. Going back to the Excel file analogy, imagine that you’re a teacher responsible for multiple classes, and you record the scores of students in each class in an Excel file. Each Excel file contains a student information sheet and a score sheet that are independent of each other. A Cloud DB zone is akin to an Excel file in this analogy.
1.3. Creating an Object Type
Before you start, apply for and enable Cloud DB by performing the following:
1. Sign in to AppGallery Connect and click My projects. Select your project and app. Go to Build > Cloud DB, click the ObjectTypes tab, and create an object type.
2. Create an object type named StudentInfo. Please remember to set the primary key and select the upsert and delete permissions for the Authenticated user.
1.4. Creating a Cloud DB Zone
On the Cloud DB Zones tab page, click Add, enter the name of the Cloud DB zone, and click OK.
Before You Start
When using Cloud DB, you need to export the object type first.
You need to export the object type created in the previous step to your local Android project. This synchronizes data between your Android project and Cloud DB.
Remember to place the exported object type in your Android project. In the example below, I placed the object type in the model directory.
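The exact file is generated for you by AppGallery Connect when you export the object type, so treat the following as a rough sketch only: a minimal, hand-written approximation of what an exported StudentInfo model roughly contains (the field names here are assumptions, and the real generated class also carries Cloud DB metadata).

```java
// Hypothetical sketch of an exported object type. The real file is
// generated by AppGallery Connect; field names here are assumptions.
public class StudentInfo {
    private String id;      // Primary key configured in the console.
    private String name;    // Example field.
    private Integer score;  // Example field.

    public void setId(String id) { this.id = id; }
    public String getId() { return id; }

    public void setName(String name) { this.name = name; }
    public String getName() { return name; }

    public void setScore(Integer score) { this.score = score; }
    public Integer getScore() { return score; }
}
```

The important point is that every field you defined in the console appears as a typed property with a getter and setter, which is what the query and upsert APIs operate on.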
Integrating Cloud DB in Your Android Project
1. Integrating the Cloud DB SDK
1. In AppGallery Connect, click My projects, and click a project card. Go to Project settings > General information, and download the agconnect-services.json file in the App information area. Then, place the JSON file under the app directory in your Android project.
2. Configure the project-level build.gradle file.
Code:
buildscript {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:4.0.1'
        classpath 'com.huawei.agconnect:agcp:1.4.2.301'
    }
}
allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
task clean(type: Delete) {
    delete rootProject.buildDir
}
3. Configure the app-level build.gradle file.
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
...
android {
    ......
    compileOptions {
        sourceCompatibility = 1.8
        targetCompatibility = 1.8
    }
}
dependencies {
    ...
    implementation 'com.huawei.agconnect:agconnect-auth:1.4.2.300' // Auth Service of AppGallery Connect, for account authentication.
    implementation 'com.huawei.agconnect:agconnect-database:1.2.3.301' // Cloud DB SDK.
}
1.1 Preparations
Cloud DB places operation restrictions on users. Only authenticated users can add, delete, and modify data. Therefore, you’ll need to integrate Auth Service first.
Enable Auth Service in AppGallery Connect. Using anonymous account authentication as an example, click My projects, click a project card, and go to Build > Auth Service to enable it.
Sample code for enabling anonymous account authentication:
Java:
AGConnectAuth.getInstance().signInAnonymously().addOnSuccessListener(new OnSuccessListener<SignInResult>() {
    @Override
    public void onSuccess(SignInResult signInResult) {
        // onSuccess
        AGConnectUser user = signInResult.getUser();
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // onFail
    }
});
2. Completing Initialization
The initialization consists of three parts: initializing Cloud DB, creating an object type, and creating and opening a Cloud DB zone.
1. Define required parameters in onCreate:
Java:
private AGConnectCloudDB mCloudDB;
private CloudDBZoneConfig mConfig;
private CloudDBZone mCloudDBZone;
2. Initialize AGConnectCloudDB.
Java:
AGConnectCloudDB.initialize(this);
3. Obtain an AGConnectCloudDB instance and create an object type.
Java:
mCloudDB = AGConnectCloudDB.getInstance();
try {
    mCloudDB.createObjectType(ObjectTypeInfoHelper.getObjectTypeInfo());
} catch (AGConnectCloudDBException e) {
    Log.e("CloudDB", "createObjectType failed: " + e.getMessage());
}
4. Create and open a Cloud DB zone.
Java:
mConfig = new CloudDBZoneConfig("classs1",
        CloudDBZoneConfig.CloudDBZoneSyncProperty.CLOUDDBZONE_CLOUD_CACHE,
        CloudDBZoneConfig.CloudDBZoneAccessProperty.CLOUDDBZONE_PUBLIC);
mConfig.setPersistenceEnabled(true);
try {
    mCloudDBZone = mCloudDB.openCloudDBZone(mConfig, true);
} catch (AGConnectCloudDBException e) {
    Log.e("CloudDB", "openCloudDBZone failed: " + e.getMessage());
}
Note that you can also use the asynchronous openCloudDBZone2 method. The detailed operations are not described here. For details, please refer to:
https://developer.huawei.com/consumer/en/doc/development/AppGallery-connect-Guides/agc-clouddb-get-started#h1-1594008398022
3. Using Cloud DB Functions
Once you have completed authentication, you can perform related operations on Cloud DB. The following uses the query operation as an example:
1. In AppGallery Connect, insert two records — test1 and test2 — for testing.
2. Go back to your Android project and call executeQuery to query all data.
Java:
Task<CloudDBZoneSnapshot<StudentInfo>> queryTask = mCloudDBZone.executeQuery(
        CloudDBZoneQuery.where(StudentInfo.class),
        CloudDBZoneQuery.CloudDBZoneQueryPolicy.POLICY_QUERY_FROM_CLOUD_ONLY);
queryTask.addOnSuccessListener(new OnSuccessListener<CloudDBZoneSnapshot<StudentInfo>>() {
    @Override
    public void onSuccess(CloudDBZoneSnapshot<StudentInfo> snapshot) {
        // Walk the snapshot cursor and copy each record into a list.
        CloudDBZoneObjectList<StudentInfo> infoCursor = snapshot.getSnapshotObjects();
        ArrayList<StudentInfo> infoList = new ArrayList<>();
        try {
            while (infoCursor.hasNext()) {
                infoList.add(infoCursor.next());
            }
            // Log the full result list, not just the last record.
            Log.i("CloudDB", "query success: " + JSONArray.toJSONString(infoList));
        } catch (AGConnectCloudDBException e) {
            Log.e("CloudDB", "query failed: " + e.getMessage());
        }
        snapshot.release();
    }
});
3. Then, you can view the corresponding query data in Logcat.
4. You can refer to the configuration guide and API reference to learn more about the add, delete, and modify operations:
Configuration guide:
https://developer.huawei.com/consum...allery-connect-Guides/agc-clouddb-insert-data
API reference:
https://developer.huawei.com/consumer/en/doc/development/AppGallery-connect-References/clouddb
Summary
You can use Cloud DB after performing just three easy steps:
1. Create an object type and a Cloud DB zone.
2. Export the object type to an Android project.
3. Integrate the SDK into the Android project and call related APIs.
Once you have completed these steps, you will have integrated a database system into your app, without having to perform any setup or deployment operations. Currently, Cloud DB is still free to use.
Cloud DB links:
Development guide:
https://developer.huawei.com/consum...llery-connect-Guides/agc-clouddb-introduction
API reference:
https://developer.huawei.com/consumer/en/doc/development/AppGallery-connect-References/clouddb
Demo:
https://github.com/AppGalleryConnect/agc-demos/tree/main/Android/clouddb-java

HarmonyOS Distributed Scheduler - How to launch an ability from one device to another?

Introduction
HarmonyOS is a future-proof distributed operating system oriented to all-scenario smart lifestyles. For consumers, HarmonyOS integrates their various smart devices into "One Super Device" that delivers the best possible interaction experience through ultra-fast connection, multi-device collaboration, and resource sharing between different devices.
Users increasingly rely on two or more devices to experience an all-scenario, multi-device lifestyle. Each type of device has unique strengths and weaknesses in specific scenarios. For example, a watch provides straightforward access to information, while a TV is excellent at providing an immersive viewing experience but terrible for text input. If multiple devices can sense each other and be integrated into "One Super Device" by a distributed operating system, each device can play to its strengths and avoid its weaknesses, providing users with natural and frictionless distributed experiences. In HarmonyOS, this distributed experience is called distributed hop.
Distributed Scheduler
In HarmonyOS, the Distributed Scheduler provides unified component management for the "super virtual device" built by multiple devices running HarmonyOS. The Distributed Scheduler defines a unified capability baseline, API format, data structure, and service description language for applications to adapt to different hardware. You can perform various distributed tasks using the Distributed Scheduler, such as remote startup, remote calling, and seamless migration of abilities.
The Distributed Scheduler allows Ability instances (basic components for distributed scheduling) to be started, stopped, connected, disconnected, and migrated across devices, enabling cross-device component management.
Starting or stopping an ability
The Distributed Scheduler provides remote ability management. You can start an FA (Feature Ability, that is, an ability using the Page template) and start or stop a PA (Particle Ability, that is, an ability using either the Service or Data template) from a remote device.
Connecting to or disconnecting from an ability
The Distributed Scheduler provides cross-device PA control. By connecting to a remote PA, you can obtain the cross-device client for task scheduling. After the cross-device task is completed, you can then disconnect from the remote PA to unregister this client.
Migrating an ability
The Distributed Scheduler enables cross-device ability migration. You can call a migration method to seamlessly migrate an FA from the local device to a specified remote device.
When to Use Distributed Scheduler?
With the help of APIs provided by the Distributed Scheduler, you can integrate the distributed scheduling capabilities into your application to implement cross-device collaboration. Based on different ability templates and intentions, the Distributed Scheduler allows you to start a remote FA or PA, stop a remote PA, connect to a remote PA, disconnect from a remote PA, or migrate an FA to another device. The following uses device A (local device) and device B (remote device) as an example to describe when and how to use these distributed scheduling capabilities:
Device A starts an FA on device B.
On device A, the user touches the startup button provided by the local application to start a particular FA on device B. For example, to enable your users to open the photo gallery application installed on device B, all you need is to specify the action of opening the gallery application in the Intent.
How to Develop
1. Create a new Java HarmonyOS project.
2. Add the cross-device collaboration permission to the reqPermissions attribute in the config.json file for the particular ability to enable distributed scheduling.
Code:
"reqPermissions": [
{
"name": "ohos.permission.DISTRIBUTED_DEVICE_STATE_CHANGE"
},
{
"name": "ohos.permission.GET_DISTRIBUTED_DEVICE_INFO"
},
{
"name": "ohos.permission.DISTRIBUTED_DATA"
},
{
"name": "ohos.permission.DISTRIBUTED_DATASYNC"
}
]
3. Add the device types on which the application runs to the config.json file.
Code:
"deviceType": [
    "phone",
    "wearable"
]
4. Create the main layout. This application is very simple: it is just an AbilitySlice containing a button, from which we will launch the Feature Ability (FA) from device A on device B and vice versa.
XML:
<?xml version="1.0" encoding="utf-8"?>
<DirectionalLayout
    xmlns:ohos="http://schemas.huawei.com/res/ohos"
    ohos:height="match_parent"
    ohos:width="match_parent"
    ohos:alignment="center"
    ohos:orientation="vertical">
    <Button
        ohos:id="$+id:select_device_button"
        ohos:height="45vp"
        ohos:width="match_parent"
        ohos:background_element="$graphic:button_background"
        ohos:layout_alignment="horizontal_center"
        ohos:margin="18vp"
        ohos:text="Select a device"
        ohos:text_size="$float:button_text_size"/>
</DirectionalLayout>
Using multi-device preview:
5. Explicitly request the required permission
In the onStart method of MainAbility.java, we explicitly request the permission from the user.
Code:
@Override
public void onStart(Intent intent) {
    // Explicitly request user permissions
    requestPermissionsFromUser(new String[]{SystemPermission.DISTRIBUTED_DATASYNC}, 0);
    super.onStart(intent);
    super.setMainRoute(MainAbilitySlice.class.getName());
}
6. Obtain a list of available devices.
Add a listener to the Button select_device_button to obtain all the remote devices information:
Code:
@Override
public void onStart(Intent intent) {
    super.onStart(intent);
    super.setUIContent(ResourceTable.Layout_ability_main);
    devicesButton = (Button) findComponentById(ResourceTable.Id_select_device_button);
    devicesButton.setClickedListener(button -> {
        showDevicesList();
    });
}
To obtain information about all remote devices on the distributed network, call DeviceManager#getDeviceList(int). To obtain information about a specified remote device, call DeviceManager#getDeviceInfo(String).
The int parameter is a flag that specifies which devices to query: DeviceInfo.FLAG_GET_ALL_DEVICE queries all online and offline devices on the distributed network, DeviceInfo.FLAG_GET_ONLINE_DEVICE queries only online devices, and DeviceInfo.FLAG_GET_OFFLINE_DEVICE queries only offline devices.
Code:
List<DeviceInfo> deviceList = DeviceManager.getDeviceList(DeviceInfo.FLAG_GET_ONLINE_DEVICE);
if (deviceList == null || deviceList.isEmpty()) {
    new ToastDialog(getContext()).setContentText("Device not found").show();
}
7. After obtaining the device list, display it in a ListDialog as follows.
Code:
private void showDevicesList() {
    List<DeviceInfo> deviceList = DeviceManager.getDeviceList(DeviceInfo.FLAG_GET_ONLINE_DEVICE);
    if (deviceList == null || deviceList.isEmpty()) {
        new ToastDialog(getContext()).setContentText("Device not found").show();
        return; // Nothing to show.
    }
    String[] deviceNameList = new String[deviceList.size()];
    int pos = 0;
    for (DeviceInfo deviceInfo : deviceList) {
        deviceNameList[pos] = deviceInfo.getDeviceName();
        pos++;
    }
    ListDialog listDialog = new ListDialog(getContext());
    listDialog.setItems(deviceNameList);
    listDialog.setOnSingleSelectListener((iDialog, i) -> {
        DeviceInfo deviceInfo = deviceList.get(i);
        if (deviceInfo == null || TextTool.isNullOrEmpty(deviceInfo.getDeviceId())) {
            // Invalid device.
            return;
        }
        onRemoteDeviceSelected(deviceInfo);
    });
    listDialog.setTransparent(true);
    listDialog.setAutoClosable(true);
    listDialog.setAlignment(LayoutAlignment.CENTER);
    listDialog.show();
}
8. In the selection listener, start the remote FA on the chosen device, implementing the remote FA startup capability.
Code:
private void onRemoteDeviceSelected(DeviceInfo deviceInfo) {
    Intent intent = new Intent();
    Operation operation = new Intent.OperationBuilder()
            .withDeviceId(deviceInfo.getDeviceId())
            .withBundleName(getBundleName())
            .withAbilityName(MainAbility.class.getName())
            .withFlags(Intent.FLAG_ABILITYSLICE_MULTI_DEVICE)
            .build();
    intent.setOperation(operation);
    try {
        List<AbilityInfo> abilityInfoList = getBundleManager().queryAbilityByIntent(intent, 0, 0);
        if (abilityInfoList != null && !abilityInfoList.isEmpty()) {
            startAbility(intent);
        }
    } catch (RemoteException e) {
        // Handle the exception.
    }
}
Use the OperationBuilder class of Intent to construct an Operation object and set the deviceId (left empty if a local ability is required), bundleName, and abilityName attributes for the object.
FLAG_ABILITYSLICE_MULTI_DEVICE enables multi-device startup in the distributed scheduling system.
You can find the complete code:
https://github.com/jordanrsas/TalentLandProject
Tips and Tricks
All the application resource files, such as strings, images, and audio files, are stored in the resources directory, allowing you to easily access, use, and maintain them. The resources directory consists of two types of sub-directories: the base sub-directory and qualifiers sub-directories, and the rawfile sub-directory.
The name of a qualifiers sub-directory consists of one or more qualifiers that represent the application scenarios or device characteristics, covering the mobile country code (MCC), mobile network code (MNC), language, script, country or region, screen orientation, device type, night mode, and screen density. The qualifiers are separated using underscores (_) or hyphens (-). When creating a qualifiers sub-directory, you need to understand the directory naming conventions and the rules for matching qualifiers sub-directories and the device status.
Device type
Indicates the device type. The value can be:
phone: smartphones
tablet: tablets
car: head units
tv: smart TVs
wearable: wearables
Then, to configure font sizes suited to the different devices set up in the project, we create a wearable directory within the resources directory. Inside it, an element directory contains the float.json file, where we add the measurements appropriate to each device.
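As a rough illustration of how such qualifier lookups behave (this is a plain-Java sketch, not the HarmonyOS resource manager; `ResourceResolver` is a hypothetical helper), resolution tries the device-specific qualifiers sub-directory first and falls back to base:

```java
import java.util.Map;

// Illustrative sketch of qualifier-directory resolution: prefer the
// sub-directory matching the device type, fall back to "base".
public class ResourceResolver {
    public static String resolve(Map<String, Map<String, String>> resources,
                                 String deviceType, String key) {
        // Try the qualifiers sub-directory for this device type first.
        Map<String, String> qualified = resources.get(deviceType);
        if (qualified != null && qualified.containsKey(key)) {
            return qualified.get(key);
        }
        // Fall back to the base sub-directory.
        return resources.get("base").get(key);
    }
}
```

So a wearable device picks up the wearable float.json value, while a phone, with no matching qualifiers directory, falls back to the base value.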
References
Distributed Scheduler Overview
https://developer.harmonyos.com/en/docs/documentation/doc-guides/ability-distributed-overview-0000001050419345
DeviceManager
https://developer.harmonyos.com/en/docs/documentation/doc-references/devicemanager-0000001054358820
ListDialog
https://developer.harmonyos.com/en/docs/documentation/doc-references/listdialog-0000001054120087
Resource File Categories
https://developer.harmonyos.com/en/docs/documentation/doc-guides/basic-resource-file-categories-0000001052066099
Intent
https://developer.harmonyos.com/en/docs/documentation/doc-references/intent-0000001054120019
Intent.OperationBuilder
https://developer.harmonyos.com/en/docs/documentation/doc-references/intent_operationbuilder-0000001054119948
Original Source

Intermediate: Thread Management in Harmony OS

Introduction
Huawei provides various services that ease development and deliver the best user experience to end users. In this article, we will cover thread management with Java in HarmonyOS.
A thread is a lightweight unit of execution that allows a program to operate more efficiently by doing multiple things at the same time. Threads can be used to perform complicated tasks in the background without interrupting the main program.
The system creates a main thread for an application at runtime. The main thread is created or deleted in accordance with the application, so it is regarded as the core thread for an application. All UI-specific operations, such as UI display and update, are running in the main thread. Therefore, the main thread is also called the UI thread. By default, all operations of an application run in the main thread. If there are time-consuming tasks required by the application, such as downloading files and querying the database, you can create other threads to execute such tasks.
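The rule above can be sketched in plain Java (a generic illustration, not a HarmonyOS-specific API; `BackgroundWorkDemo` is a hypothetical class): run the slow work on a worker thread so the main thread stays free.

```java
// Generic sketch: perform a time-consuming task, such as a download,
// on a worker thread instead of the main (UI) thread.
public class BackgroundWorkDemo {
    public static String runOffMainThread() {
        final String[] result = new String[1];
        Thread worker = new Thread(() -> {
            // Simulate a time-consuming task such as downloading a file.
            result[0] = "file downloaded";
        });
        worker.start();
        try {
            // A real UI app would receive a callback instead of blocking;
            // joining here just keeps the demo deterministic.
            worker.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return result[0];
    }
}
```

HarmonyOS wraps this idea behind TaskDispatcher, as described in the sections that follow, so you rarely manage raw threads yourself.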
When to Use
If an application contains complex service logic, you may need to create multiple threads to execute various tasks, which causes complex interactions between tasks and threads. This may result in more complicated code and higher maintenance cost. To avoid such issues, you can utilize TaskDispatcher to optimize the dispatch of different tasks.
Available APIs
TaskDispatcher is the basic API for Ability instances to dispatch tasks, and it hides the implementation details of the thread where the task runs. By default, tasks running in the UI thread have higher priorities, and tasks that do not need to return any result usually have lower priorities.
There are multiple types of TaskDispatcher; the major ones are as follows.
1. GlobalTaskDispatcher
The global task dispatcher is obtained by an ability by calling getGlobalTaskDispatcher().
Code:
TaskDispatcher globalTaskDispatcher = getGlobalTaskDispatcher(TaskPriority.DEFAULT);
2. ParallelTaskDispatcher
The parallel task dispatcher is created and returned by an ability by calling createParallelTaskDispatcher().
Code:
String dispatcherName = "parallelTaskDispatcher";
TaskDispatcher parallelTaskDispatcher = createParallelTaskDispatcher(dispatcherName, TaskPriority.DEFAULT);
3. SerialTaskDispatcher
The serial task dispatcher is created and returned by an ability by calling createSerialTaskDispatcher().
Code:
String dispatcherName = "serialTaskDispatcher";
TaskDispatcher serialTaskDispatcher = createSerialTaskDispatcher(dispatcherName, TaskPriority.DEFAULT);
4. SpecTaskDispatcher
The dedicated task dispatcher is dedicated to a specific thread, which currently refers to the UI thread. Tasks in the UI thread are dispatched using the UITaskDispatcher.
Code:
TaskDispatcher uiTaskDispatcher = getUITaskDispatcher();
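If you are more familiar with standard Java concurrency, the dispatcher types above map loosely onto java.util.concurrent executors: a serial dispatcher behaves like a single-thread executor, and a parallel dispatcher like a thread pool. This analogy (not the HarmonyOS API; `DispatcherAnalogy` is a hypothetical class) can be sketched as:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Plain-Java analogy for serial vs. parallel task dispatchers.
public class DispatcherAnalogy {

    // Serial dispatch: tasks run one at a time, in submission order.
    public static List<Integer> dispatchSerially(int taskCount) {
        ExecutorService serial = Executors.newSingleThreadExecutor();
        List<Integer> order = Collections.synchronizedList(new ArrayList<>());
        for (int i = 0; i < taskCount; i++) {
            final int id = i;
            serial.execute(() -> order.add(id));
        }
        shutdownAndWait(serial);
        return order;
    }

    // Parallel dispatch: tasks may run concurrently; completion of the
    // whole group is guaranteed, but not the order.
    public static int dispatchInParallel(int taskCount) {
        ExecutorService parallel = Executors.newFixedThreadPool(4);
        AtomicInteger completed = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(taskCount);
        for (int i = 0; i < taskCount; i++) {
            parallel.execute(() -> {
                completed.incrementAndGet();
                done.countDown();
            });
        }
        try {
            done.await(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        shutdownAndWait(parallel);
        return completed.get();
    }

    private static void shutdownAndWait(ExecutorService pool) {
        pool.shutdown();
        try {
            pool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

The same trade-off applies to the HarmonyOS dispatchers: choose serial when ordering matters, parallel when throughput matters.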
Development Overview
You need to install the DevEco Studio IDE, and I assume that you have prior knowledge of HarmonyOS and Java.
Hardware Requirements
A computer (desktop or laptop) running Windows 10.
A Huawei phone (with the USB cable), which is used for debugging.
Software Requirements
Java JDK installation package.
DevEco Studio installed.
HMS Core (APK) 4.X or later.
Follow the steps below.
1. Create a HarmonyOS project.
Open DevEco Studio.
Click New Project and select a project template.
Select an ability template and click Next as per the image below.
Enter the project and package name and click Finish.
2. Once you have created the project, DevEco Studio will automatically sync it with the Gradle files. The image below shows a successful synchronization.
3. Update Permission and app version in config.json file as per your requirement, otherwise retain the default values.
4. Create New Ability as follows.
5. Development Procedure.
Create a new ability slice, MainAbilitySlice.java:
Java:
package com.hms.multithread.slice;

import com.hms.multithread.ResourceTable;
import ohos.aafwk.ability.AbilitySlice;
import ohos.aafwk.content.Intent;
import ohos.agp.components.Button;
import ohos.agp.components.Text;
import ohos.app.dispatcher.Group;
import ohos.app.dispatcher.TaskDispatcher;
import ohos.app.dispatcher.task.TaskPriority;
import ohos.hiviewdfx.HiLog;
import ohos.hiviewdfx.HiLogLabel;

public class MainAbilitySlice extends AbilitySlice {
    HiLogLabel LABEL_LOG;
    final long delayTime = 5L;
    Text result;
    String resultString;

    @Override
    public void onStart(Intent intent) {
        super.onStart(intent);
        super.setUIContent(ResourceTable.Layout_ability_main);
        result = (Text) findComponentById(ResourceTable.Id_result);
        Button globalDispatcher = (Button) findComponentById(ResourceTable.Id_global_disp);
        Button parallelDispatcher = (Button) findComponentById(ResourceTable.Id_parallel_disp);
        Button uiThreadDispatcher = (Button) findComponentById(ResourceTable.Id_ui_thread_disp);
        // Set a click event listener for each button.
        globalDispatcher.setClickedListener(listener -> present(new GlobleTaskDispatcher(), new Intent()));
        parallelDispatcher.setClickedListener(component -> startParallelThread());
        uiThreadDispatcher.setClickedListener(component -> startUIThread());
        LABEL_LOG = new HiLogLabel(HiLog.LOG_APP, 0x00201, "MY_TAG");
    }

    private void startParallelThread() {
        resultString = "// The execution result may be as follows:";
        String dispatcherName = "parallelTaskDispatcher";
        TaskDispatcher dispatcher = createParallelTaskDispatcher(dispatcherName, TaskPriority.DEFAULT);
        // Create a task group for group dispatch.
        Group group = dispatcher.createDispatchGroup();
        // Add a task (task1) to the group and return an instance used to revoke this task.
        dispatcher.asyncGroupDispatch(group, () -> {
            resultString = resultString + "\n" + "//download task1 is running";
            HiLog.info(LABEL_LOG, "download task1 is running");
        });
        // Add task2 (associated with task1) to the group.
        dispatcher.asyncGroupDispatch(group, () -> {
            resultString = resultString + "\n" + "//download task2 is running";
            HiLog.info(LABEL_LOG, "download task2 is running");
        });
        // Run the close task after all tasks in the group have completed.
        dispatcher.groupDispatchNotify(group, () -> {
            resultString = resultString + "\n" + "//the close task is running after all tasks in the group are completed";
            HiLog.info(LABEL_LOG, "the close task is running after all tasks in the group are completed");
        });
    }

    private void startUIThread() {
        TaskDispatcher uiTaskDispatcher = getUITaskDispatcher();
        // Create a task group for group dispatch.
        Group group = uiTaskDispatcher.createDispatchGroup();
        uiTaskDispatcher.delayDispatch(() -> result.setText("Hi Lokesh Kumar this is UI Thread"), delayTime);
        // Add task2 and task3 to the group.
        uiTaskDispatcher.asyncGroupDispatch(group, () -> HiLog.info(LABEL_LOG, "download task2 is running"));
        uiTaskDispatcher.asyncGroupDispatch(group, () -> HiLog.info(LABEL_LOG, "download task3 is running"));
        // Run the close task after all tasks in the group have completed.
        uiTaskDispatcher.groupDispatchNotify(group, () -> HiLog.info(LABEL_LOG, "the close task is running after all tasks in the group are completed"));
    }

    @Override
    public void onActive() {
        super.onActive();
    }

    @Override
    public void onForeground(Intent intent) {
        super.onForeground(intent);
    }
}
The following snippet shows how to use a GlobalTaskDispatcher to perform a synchronous dispatch:
Java:
private void syncDispatcher() {
    TaskDispatcher globalTaskDispatcher = getGlobalTaskDispatcher(TaskPriority.DEFAULT);
    stringBuffer = new StringBuffer();
    stringBuffer.append("\n");
    globalTaskDispatcher.syncDispatch(() -> {
        HiLog.info(LABEL_LOG, "sync task1 run");
        stringBuffer.append("// sync task1 run");
    });
    stringBuffer.append("\n");
    HiLog.info(LABEL_LOG, "after sync task1");
    stringBuffer.append("// after sync task1");
    stringBuffer.append("\n");
    globalTaskDispatcher.syncDispatch(() -> {
        HiLog.info(LABEL_LOG, "sync task2 run");
        stringBuffer.append("// sync task2 run");
    });
    stringBuffer.append("\n");
    HiLog.info(LABEL_LOG, "after sync task2");
    stringBuffer.append("// after sync task2");
    globalTaskDispatcher.syncDispatch(() -> {
        HiLog.info(LABEL_LOG, "sync task3 run");
        stringBuffer.append("// sync task3 run");
    });
    stringBuffer.append("\n");
    HiLog.info(LABEL_LOG, "after sync task3");
    stringBuffer.append("// after sync task3");
    result.setText("Result" + stringBuffer);
}
The following code snippet shows how to execute a task for multiple times:
Java:
private void countDownLatch() {
    final int total = 10;
    final CountDownLatch latch = new CountDownLatch(total);
    final List<Long> indexList = new ArrayList<>(total);
    TaskDispatcher dispatcher = getGlobalTaskDispatcher(TaskPriority.DEFAULT);
    // Execute the task multiple times, as specified by the parameter total.
    dispatcher.applyDispatch((index) -> {
        indexList.add(index);
        latch.countDown();
    }, total);
    // Wait until every execution has finished.
    try {
        latch.await();
    } catch (InterruptedException exception) {
        HiLog.error(LABEL_LOG, "latch exception");
    }
    HiLog.info(LABEL_LOG, "list size matches, %{public}b", (total == indexList.size()));
    StringBuffer sb = new StringBuffer();
    sb.append("Execute the task multiple times as per below").append("\n");
    for (Long i : indexList) {
        sb.append(i + " ");
        if (i % 4 == 0) {
            sb.append("\n");
        }
    }
    result.setText(String.valueOf(sb));
    // The execution result is as follows:
    // list size matches, true
}
Create a layout file under entry > src > main > resources > base > layout > ability_main.xml
XML:
<?xml version="1.0" encoding="utf-8"?>
<DirectionalLayout
xmlns:ohos="http://schemas.huawei.com/res/ohos"
ohos:height="match_parent"
ohos:width="match_parent"
ohos:alignment="top"
ohos:orientation="vertical">
<Text
ohos:id="$+id:text_time"
ohos:height="match_content"
ohos:width="match_content"
ohos:top_margin="200vp"
ohos:background_element="$graphic:background_ability_main"
ohos:layout_alignment="horizontal_center"
ohos:text="$string:mainability_thread_management"
ohos:text_size="36fp"
/>
<Button
ohos:id="$+id:global_disp"
ohos:height="match_content"
ohos:width="match_parent"
ohos:text_size="25fp"
ohos:padding="10vp"
ohos:start_margin="20vp"
ohos:end_margin="20vp"
ohos:top_margin="90vp"
ohos:bottom_margin="20vp"
ohos:background_element="$graphic:background_button"
ohos:text="$string:mainability_globle_thread"/>
<Button
ohos:id="$+id:parallel_disp"
ohos:height="match_content"
ohos:width="match_parent"
ohos:text_size="25fp"
ohos:padding="10vp"
ohos:margin="20vp"
ohos:background_element="$graphic:background_button"
ohos:text="$string:mainability_parallel_thread"/>
<Button
ohos:id="$+id:ui_thread_disp"
ohos:height="match_content"
ohos:width="match_parent"
ohos:text_size="25fp"
ohos:padding="10vp"
ohos:margin="20vp"
ohos:background_element="$graphic:background_button"
ohos:text="$string:mainability_ui_thread"/>
<Text
ohos:id="$+id:result"
ohos:height="match_content"
ohos:width="match_content"
ohos:top_margin="20vp"
ohos:background_element="$graphic:background_ability_main"
ohos:layout_alignment="horizontal_center"
ohos:text="$string:mainability_result"
ohos:text_size="35fp"
/>
</DirectionalLayout>
6. To build the HAP and run it on a device, choose Build > Generate Key and CSR, then Build > Build Hap(s)/APP(s), or use Run to install it directly on a connected device, following the on-screen steps.
Result
1. Click the UI Thread | SpecTaskDispatcher button. The UITaskDispatcher is bound to the main thread of the application; the result is sent back to the main thread and the UI is updated, as shown in the screen below.
Pros: The result is updated on the UI thread. Tasks dispatched through the UITaskDispatcher always run on the UI thread. For example, if you want to fetch data from a server and then update the UI, you can use this method.
Cons: If you try to update the UI from any thread other than the UI thread, an "attempt to update UI in non-UI thread" exception is thrown.
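The fetch-then-update pattern can be sketched in plain Java: a single dedicated thread plays the role of the UI thread, and the worker hands its result over instead of touching the "UI" state directly. The class and variable names are illustrative, not part of the ohos SDK.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class UiDispatchSketch {
    public static String fetchAndRender() throws Exception {
        // A single-threaded executor stands in for the UI/main thread.
        ExecutorService uiThread = Executors.newSingleThreadExecutor();
        AtomicReference<String> label = new AtomicReference<>("loading...");

        // A worker thread "fetches" data, then dispatches the UI update.
        Thread worker = new Thread(() -> {
            String data = "fetched from server"; // simulated network result
            uiThread.submit(() -> label.set(data)); // update only on the UI thread
        });
        worker.start();
        worker.join();

        uiThread.shutdown();
        uiThread.awaitTermination(5, TimeUnit.SECONDS);
        return label.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fetchAndRender()); // prints: fetched from server
    }
}
```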
2. Click the Global Task Dispatcher button. It navigates to another screen; click each button there to see the corresponding result.
3. Click the Sync Dispatch button. The syncDispatch method dispatches a task synchronously and waits for it to finish in the current thread. The current thread remains blocked until the execution result is returned, as shown in the result below.
Pros: The syncDispatch method executes all tasks synchronously, and the current thread remains blocked until the execution result is returned. All synchronized blocks locked on the same object can only have one thread executing inside them at a time.
Cons: If syncDispatch is used incorrectly, a deadlock will occur.
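The classic deadlock happens when you synchronously dispatch onto a serial dispatcher from a task that is already running on that same dispatcher: the single worker thread ends up waiting for itself. The sketch below reproduces this with a single-thread executor from plain `java.util.concurrent` (the names are illustrative, not ohos APIs), using a timeout instead of an unbounded wait so it can be demonstrated safely.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class SyncDeadlockSketch {
    public static boolean deadlocks() throws Exception {
        ExecutorService serial = Executors.newSingleThreadExecutor();
        Future<?> outer = serial.submit(() -> {
            // The inner task is queued behind the outer one on the same thread...
            Future<?> inner = serial.submit(() -> { });
            try {
                // ...so waiting for it here can never succeed: the only worker
                // thread is busy running this very block. A real syncDispatch
                // would block forever; we time out to show the deadlock safely.
                inner.get(1, TimeUnit.SECONDS);
                return false; // would mean no deadlock occurred
            } catch (TimeoutException e) {
                return true;  // timed out: the classic self-wait deadlock
            } catch (Exception e) {
                return false;
            }
        });
        boolean result = (Boolean) outer.get();
        serial.shutdownNow();
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("deadlock detected: " + deadlocks());
    }
}
```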
4. Click the Delay Dispatch button. The delayDispatch method asynchronously executes a task after a specified delay.
Pros: The delayDispatch method dispatches a task asynchronously with a delay and proceeds to the next operation immediately, so you can delay a task as your requirement demands. For example, if you want to execute method A 10 seconds after method B, you can use delayDispatch and complete the task easily.
Cons: If delayDispatch is used incorrectly, it can block your logic or cause an ANR.
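The "run A some time after B" pattern can be sketched in plain Java with a ScheduledExecutorService, which, like delayDispatch, returns immediately and runs the task no earlier than the requested delay. The class and method names below are illustrative, not part of the ohos SDK.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DelayDispatchSketch {
    // Schedule a task after `delayMillis` and report how long it actually waited.
    public static long delayedRun(long delayMillis) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch done = new CountDownLatch(1);
        long start = System.currentTimeMillis();
        long[] ranAt = new long[1];
        scheduler.schedule(() -> {
            ranAt[0] = System.currentTimeMillis(); // "method A" runs here
            done.countDown();
        }, delayMillis, TimeUnit.MILLISECONDS);
        // The caller ("method B") proceeds immediately; we only wait here
        // so the demo can report the measured delay before shutting down.
        done.await();
        scheduler.shutdown();
        return ranAt[0] - start;
    }

    public static void main(String[] args) throws InterruptedException {
        long elapsed = delayedRun(200);
        System.out.println("task ran after ~" + elapsed + " ms");
    }
}
```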
Pros: The applyDispatch method executes a specified task multiple times. You can use it to run a task repeatedly as your requirement demands, such as fetching a list of records from a database or server.
Cons: If applyDispatch is used incorrectly, it can execute your task endlessly and block the UI and other resources.
Tips and Tricks
Always use the latest version of DevEco Studio.
Use the HarmonyOS Device Simulator from the HVD section.
Conclusion
In this article, we have learned about thread management in HarmonyOS. If an application contains complex service logic, you may need to create multiple threads to execute various tasks, which causes intricate interactions between tasks and threads. This can result in more complicated code and higher maintenance costs. To avoid such issues, you can use TaskDispatcher to optimize the dispatch of different tasks.
Thanks for reading the article, please do like and comment your queries or suggestions.
References
Harmony OS: https://www.harmonyos.com/en/develop/
Harmony OS Thread Management: https://developer.harmonyos.com/en/docs/documentation/doc-guides/thread-mgmt-overview-0000000000032127
Original Source