HarmonyOS QR Code Generation - Huawei Developers

Introduction
A QR code (abbreviated from Quick Response code) is a type of matrix barcode (or two-dimensional barcode).
HarmonyOS provides code generation to return the byte stream of a quick response (QR) code image based on a given string and QR code image size. The caller can use the QR code byte stream to generate a QR code image.
When to Use
You can use this capability to generate QR code images based on a given string. Common application scenarios include:
Social or communication applications: generating a QR code for contacts based on the input information.
Shopping or payment applications: generating a QR code for online payment based on the input link.
How to Develop
Create UI
The user interface is very simple: just a button and an Image component where we will display the QR code.
XML:
<?xml version="1.0" encoding="utf-8"?>
<DirectionalLayout
xmlns:ohos="http://schemas.huawei.com/res/ohos"
ohos:height="match_parent"
ohos:width="match_parent"
ohos:orientation="vertical"
ohos:padding="$float:margin">
<Button
ohos:id="$+id:getCodeButton"
ohos:height="match_content"
ohos:width="match_parent"
ohos:background_element="$graphic:background_button"
ohos:padding="$float:marginS"
ohos:text="$string:getGenCodeButtonLabel"
ohos:text_size="$float:buttonTextSize"
ohos:top_margin="$float:margin"/>
<Image
ohos:id="$+id:qrCodeImage"
ohos:height="match_content"
ohos:width="match_parent"
ohos:top_margin="$float:margin"/>
</DirectionalLayout>
In MainAbilitySlice, implement the ConnectionCallback interface to define what happens after a connection with the capability engine is successfully established or fails to be established.
Java:
import ohos.aafwk.ability.AbilitySlice;
import ohos.aafwk.content.Intent;
import ohos.ai.cv.common.ConnectionCallback;
public class MainAbilitySlice extends AbilitySlice implements ConnectionCallback {
private boolean isConnect = false;
@Override
public void onStart(Intent intent) {
super.onStart(intent);
super.setUIContent(ResourceTable.Layout_ability_main);
}
@Override
public void onServiceConnect() {
isConnect = true;
}
@Override
public void onServiceDisconnect() {
isConnect = false;
}
}
Call the VisionManager.init() method based on context and connectionCallback to establish a connection with the capability engine. Ensure that context is an ohos.aafwk.ability.Ability or ohos.aafwk.ability.AbilitySlice instance, or their child class instance.
Code:
VisionManager.init(MainAbilitySlice.this, this);
Obtain an IBarcodeDetector object with the context of your project passed to the context parameter.
Code:
IBarcodeDetector barcodeDetector = VisionManager.getBarcodeDetector(MainAbilitySlice.this);
Define the size of the expected QR code image and allocate the space for the byte stream array based on the image size.
Code:
int QR_LENGTH = 400;
byte[] bytes = new byte[QR_LENGTH * QR_LENGTH * 4];
Call the detect() method on barcodeDetector to generate the byte stream of the QR code image based on the given string.
Java:
int detect = barcodeDetector.detect("https://developer.harmonyos.com/en", bytes, QR_LENGTH, QR_LENGTH);
If the return value is HwHiAIResultCode.AIRESULT_SUCCESS (0), the method call succeeded and you can display the QR code in the Image component as a PixelMap.
Java:
if (detect == HwHiAIResultCode.AIRESULT_SUCCESS) {
ImageSource imageSource = ImageSource.create(bytes, null);
PixelMap pixelmap = imageSource.createPixelmap(null);
qrImage.setPixelMap(pixelmap);
barcodeDetector.release();
}
Call the VisionManager.destroy() method to disconnect from the code generation capability engine.
Code:
VisionManager.destroy();
The complete MainAbilitySlice is as follows:
Java:
import com.dtse.cjra.ohosqrdemo.ResourceTable;
import ohos.aafwk.ability.AbilitySlice;
import ohos.aafwk.content.Intent;
import ohos.agp.components.Image;
import ohos.ai.cv.common.ConnectionCallback;
import ohos.ai.cv.common.VisionManager;
import ohos.ai.cv.qrcode.IBarcodeDetector;
import ohos.ai.engine.resultcode.HwHiAIResultCode;
import ohos.media.image.ImageSource;
import ohos.media.image.PixelMap;
public class MainAbilitySlice extends AbilitySlice implements ConnectionCallback {
private Image qrImage;
private boolean isConnect = false;
@Override
public void onStart(Intent intent) {
super.onStart(intent);
super.setUIContent(ResourceTable.Layout_ability_main);
initViews();
VisionManager.init(MainAbilitySlice.this, this);
}
private void initViews() {
qrImage = (Image) findComponentById(ResourceTable.Id_qrCodeImage);
findComponentById(ResourceTable.Id_getCodeButton).setClickedListener(click -> createQrCode());
}
private void createQrCode() {
if (isConnect) {
int QR_LENGTH = 400;
byte[] bytes = new byte[QR_LENGTH * QR_LENGTH * 4];
IBarcodeDetector barcodeDetector = VisionManager.getBarcodeDetector(MainAbilitySlice.this);
int detect = barcodeDetector.detect("https://developer.harmonyos.com/en", bytes, QR_LENGTH, QR_LENGTH);
if (detect == HwHiAIResultCode.AIRESULT_SUCCESS) {
ImageSource imageSource = ImageSource.create(bytes, null);
PixelMap pixelmap = imageSource.createPixelmap(null);
qrImage.setPixelMap(pixelmap);
barcodeDetector.release();
}
}
}
@Override
public void onActive() {
super.onActive();
}
@Override
public void onForeground(Intent intent) {
super.onForeground(intent);
}
@Override
protected void onStop() {
super.onStop();
VisionManager.destroy();
}
@Override
public void onServiceConnect() {
isConnect = true;
}
@Override
public void onServiceDisconnect() {
isConnect = false;
}
}
The result is shown below.
You can find the complete code here.
Tips and Tricks
Only QR codes can be generated. Due to the restrictions of the QR code algorithm, the input string cannot exceed 2953 characters.
The width of the expected QR code image cannot exceed 1920 pixels, and the height cannot exceed 1680 pixels. As a QR code is a type of square matrix barcode, it is recommended that QR code images be square. If QR code images are rectangular, blank areas will be left around the QR code information (see the validation sketch after these tips).
To install the app on a real HarmonyOS device, you need to add its UDID in AppGallery Connect (AGC). A UDID is a string of 64 characters, including letters and digits.
Go to Settings > About on your device and keep tapping Build number until a message indicating that you have entered the developer mode is displayed.
Connect the device to the PC and start the command-line tool on the PC. After hdc shell is displayed, run bm get --udid to obtain the UDID of the device. (The adb shell command is also available.)
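These limits can be checked before calling detect(). Below is a minimal validation sketch based only on the tips above; the helper name is made up for this example and is not part of the HarmonyOS API.
Java:
// Illustrative pre-check of the documented limits; not an official API.
private boolean isQrRequestValid(String content, int width, int height) {
    if (content == null || content.isEmpty() || content.length() > 2953) {
        return false; // the QR code algorithm limits the input string to 2953 characters
    }
    if (width <= 0 || height <= 0 || width > 1920 || height > 1680) {
        return false; // width must not exceed 1920 px and height must not exceed 1680 px
    }
    return true; // square sizes (width == height) avoid blank areas around the code
}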
References:
QR Code Wiki: click here
Code Generation: click here
Original Source

Well explained.

Very good sharing, thank you

Related

Capture Image and convert to text by using Text Recognition ML Kit

For more articles like this, you can visit the HUAWEI Developer Forum and Medium.
How to integrate the Text Recognition ML Kit?
How to extract text from images of receipts, business cards, and documents?
1. Add the dependencies in the build.gradle file.
Code:
implementation 'com.huawei.hms:ml-computer-vision-ocr:1.0.3.300' // Import the base SDK.
implementation 'com.huawei.hms:ml-computer-vision-ocr-cn-model:1.0.3.300' // Import the Chinese and English model package.
implementation 'com.huawei.hms:ml-computer-vision-ocr-jk-model:1.0.3.300' // Import the Japanese and Korean model package.
implementation 'com.huawei.hms:ml-computer-vision-ocr-latin-model:1.0.3.300' // Import the Latin-based language model package.
2. In MainActivity, clicking btn_detect starts AndroidCameraApi to capture a photo.
Code:
btn_detect.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
Intent cameraActivity = new Intent(MainActivity.this, AndroidCameraApi.class);
startActivity(cameraActivity);
}
});
3. Create an AndroidCameraApi activity to capture a photo and get the result as a byte array of the image.
Convert the byte array to a bitmap and save it in SharedPreferences.
Code:
Intent intent = new Intent(AndroidCameraApi.this, MainActivity.class);
Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length, null);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.PNG, 100, baos);
byte[] b = baos.toByteArray();
String temp = Base64.encodeToString(b, Base64.DEFAULT);
SharedPreferences pref = getApplicationContext().getSharedPreferences("MyPref", 0); // 0 - for private mode
SharedPreferences.Editor editor = pref.edit();
editor.putString("data", temp);
editor.commit();
startActivity(intent);
4. Back in MainActivity, read the stored string, Base64-decode it back to a bitmap, display it, and pass it to the localAnalyzer function.
Code:
SharedPreferences pref = getApplicationContext().getSharedPreferences("MyPref", 0); // 0 - for private mode
String data = pref.getString("data", null);
if (data != null) {
byte[] encodeByte = Base64.decode(data, Base64.DEFAULT);
Bitmap bitmap = BitmapFactory.decodeByteArray(encodeByte, 0,
encodeByte.length);
Log.d("Bitmap", bitmap.toString());
imageView.setImageBitmap(bitmap);
localAnalyzer(bitmap);
}
5. Pass the bitmap to localAnalyzer() (on-device text recognition). Create the MLTextAnalyzer text analyzer to recognize text in images.
You can set MLLocalTextSetting to specify the languages that can be recognized.
Create an MLFrame object using android.graphics.Bitmap. JPG, JPEG, PNG, and BMP formats are supported.
The recognized text is returned in the onSuccess callback.
Code:
private MLTextAnalyzer analyzer;
Code:
private void localAnalyzer(Bitmap bitmaps) {
MLLocalTextSetting setting = new MLLocalTextSetting.Factory()
.setOCRMode(MLLocalTextSetting.OCR_DETECT_MODE)
.setLanguage("en")
.create();
this.analyzer = MLAnalyzerFactory.getInstance()
.getLocalTextAnalyzer(setting);
MLFrame frame = MLFrame.fromBitmap(bitmaps);
Task<MLText> task = this.analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<MLText>() {
@Override
public void onSuccess(MLText text) {
MainActivity.this.displaySuccess(text);
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
MainActivity.this.displayFailure();
}
});
}
6. Use the returned MLText (blocks and lines) and display it in your text view.
Code:
private void displaySuccess(MLText mlText) {
String result = "";
List<MLText.Block> blocks = mlText.getBlocks();
for (MLText.Block block : blocks) {
for (MLText.TextLine line : block.getContents()) {
result += line.getStringValue() + "\n";
}
}
this.mTextView.setText(result);
}
7. Add the required permissions to the manifest.
Code:
<uses-permission android:name="android.permission.CAMERA"/>
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
8. Cloud-based text recognition on the captured image (more accurate). A sketch of the remoteDisplaySuccess() callback follows the snippet.
Code:
private void remoteAnalyzer(Bitmap bitmaps) {
MLRemoteTextSetting setting =
new MLRemoteTextSetting.Factory()
.setTextDensityScene(MLRemoteTextSetting.OCR_COMPACT_SCENE)
.setLanguageList(new ArrayList<String>() {{
this.add("zh");
this.add("en");
}})
.setBorderType(MLRemoteTextSetting.ARC)
.create();
this.analyzer = MLAnalyzerFactory.getInstance().getRemoteTextAnalyzer(setting);
MLFrame frame = MLFrame.fromBitmap(bitmaps);
Task<MLText> task = this.analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<MLText>() {
@Override
public void onSuccess(MLText text) {
MainActivity.this.remoteDisplaySuccess(text);
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
MainActivity.this.displayFailure();
}
});
}
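The remoteDisplaySuccess() method called above is not shown in the original snippet; here is a minimal sketch, assuming it renders the recognized blocks the same way displaySuccess() does:
Code:
private void remoteDisplaySuccess(MLText mlText) {
    StringBuilder result = new StringBuilder();
    // Walk through the recognized blocks and lines, same as in displaySuccess().
    for (MLText.Block block : mlText.getBlocks()) {
        for (MLText.TextLine line : block.getContents()) {
            result.append(line.getStringValue()).append("\n");
        }
    }
    this.mTextView.setText(result.toString());
}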
Reference:
https://developer.huawei.com/consumer/en/doc/HMSCore-Guides-V5/text-recognition-0000001050040053-V5
What languages are supported ?
Can you please share me the official document of Huawei ML Kit.
Thank you

Validate your news: Feat. Huawei ML Kit (Text Image Super-Resolution)

Introduction
Quality improvement has become crucial in this era of digitalization, where all our documents are kept in folders, shared over the network, and read on digital devices.
Imagine the struggle of an elderly person who has no way to read and understand an old prescribed medical document that has become blurred and deteriorated.
Can we avoid such issues?
Let's unpack what Huawei ML Kit offers to overcome such challenges in our day-to-day life.
Huawei ML Kit provides the Text Image Super-Resolution API to improve the quality and visibility of old and blurred text in an image.
Text Image Super-Resolution can zoom in on an image that contains text and significantly improve the definition of the text.
Limitations
The text image super-resolution service requires images with a maximum resolution of 800 x 800 px and a side length greater than or equal to 64 px.
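If you want to guard against these limits before running the analyzer, a minimal pre-check could look like the sketch below; the helper name is illustrative and not part of the ML Kit API.
Code:
// Illustrative check of the stated limits before calling the analyzer.
private boolean isBitmapSupported(Bitmap bitmap) {
    if (bitmap == null) {
        return false;
    }
    int width = bitmap.getWidth();
    int height = bitmap.getHeight();
    // Maximum resolution 800 x 800 px, minimum side length 64 px.
    return width <= 800 && height <= 800 && width >= 64 && height >= 64;
}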
Development Overview
Prerequisite
Must have a Huawei Developer Account
Must have Android Studio 3.0 or later
Must have a Huawei phone with HMS Core 4.0.2.300 or later
EMUI 3.0 or later
Software Requirements
Java SDK 1.7 or later
Android 5.0 or later
Preparation
Create an app or project in Huawei AppGallery Connect.
Provide the SHA key and app package name of the project in the App Information section and enable the ML Kit API.
Download the agconnect-services.json file.
Create an Android project.
Integration
Add the following to the build.gradle (project) file, under buildscript/repositories and allprojects/repositories.
Code:
maven { url 'http://developer.huawei.com/repo/' }
Add the following to the build.gradle (app) file, under dependencies.
To use the Base SDK of ML Kit-Text Image Super Resolution, add the following dependencies:
Code:
dependencies{
// Import the base SDK.
implementation 'com.huawei.hms:ml-computer-vision-textimagesuperresolution:2.0.3.300'
}
Adding permissions
Code:
<uses-permission android:name="android.permission.CAMERA " />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
Automatically Updating the Machine Learning Model
Add the following statements to the AndroidManifest.xml file to automatically install the machine learning model on the user’s device.
Code:
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value= "tisr"/>
Development Process
This article focuses on demonstrating the capabilities of Huawei ML Kit's Text Image Super-Resolution API.
Here is an example that explains how we can integrate this powerful API to improve text-image quality and give readers full access to old and blurred newspapers from an online news directory.
TextImageView Activity: Launcher Activity
This is the main activity of "The News Express" application.
Code:
package com.mlkitimagetext.example;
import androidx.appcompat.app.AppCompatActivity;
import android.content.Intent;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;
import com.mlkitimagetext.example.textimagesuperresolution.TextImageSuperResolutionActivity;
public class TextImageView extends AppCompatActivity {
Button NewsExpress;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_text_image_view);
NewsExpress = findViewById(R.id.bt1);
NewsExpress.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
startActivity(new Intent(TextImageView.this, TextImageSuperResolutionActivity.class));
}
});
}
}
activity_text_image_view.xml
This is the layout file for the above activity class.
Code:
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:background="@drawable/im3">
<LinearLayout
android:id="@+id/ll_buttons"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_marginTop="200dp"
android:orientation="vertical">
<Button
android:id="@+id/bt1"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:background="@android:color/transparent"
android:layout_gravity="center"
android:text="The News Express"
android:textAllCaps="false"
android:textStyle="bold"
android:textSize="34dp"
android:textColor="@color/mlkit_bcr_text_color_white"></Button>
<TextView
android:layout_width="wrap_content"
android:layout_height="match_parent"
android:textStyle="bold"
android:text="Validate Your News"
android:textSize="20sp"
android:layout_gravity="center"
android:textColor="#9fbfdf"/>
</LinearLayout>
</RelativeLayout>
TextImageSuperResolutionActivity
This activity class performs following actions:
Image picker implementation to pick the image from the gallery
Convert selected image to Bitmap
Create a text image super-resolution analyser.
Create an MLFrame object by using android.graphics.Bitmap.
Perform super-resolution processing on the image with text.
Stop the analyser to release detection resources.
Code:
package com.mlkitimagetext.example;
import android.content.Intent;
import android.graphics.Bitmap;
import android.net.Uri;
import android.os.Bundle;
import android.provider.MediaStore;
import android.view.View;
import android.widget.ImageView;
import android.widget.Toast;
import com.huawei.hmf.tasks.OnFailureListener;
import com.huawei.hmf.tasks.OnSuccessListener;
import com.huawei.hmf.tasks.Task;
import com.huawei.hms.mlsdk.common.MLException;
import com.huawei.hms.mlsdk.common.MLFrame;
import com.huawei.hms.mlsdk.textimagesuperresolution.MLTextImageSuperResolution;
import com.huawei.hms.mlsdk.textimagesuperresolution.MLTextImageSuperResolutionAnalyzer;
import com.huawei.hms.mlsdk.textimagesuperresolution.MLTextImageSuperResolutionAnalyzerFactory;
import com.mlkitimagetext.example.R;
import androidx.appcompat.app.AppCompatActivity;
import java.io.IOException;
public class TextImageSuperResolutionActivity extends AppCompatActivity implements View.OnClickListener {
private static final String TAG = "TextSuperResolutionActivity";
private MLTextImageSuperResolutionAnalyzer analyzer;
private static final int INDEX_3X = 1;
private static final int INDEX_ORIGINAL = 2;
private ImageView imageView;
private Bitmap srcBitmap;
Uri imageUri;
Boolean ImageSetupFlag = false;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_text_super_resolution);
imageView = findViewById(R.id.image);
imageView.setOnClickListener(this);
findViewById(R.id.btn_load).setOnClickListener(this);
createAnalyzer();
}
@Override
public void onClick(View view) {
if (view.getId() == R.id.btn_load) {
openGallery();
}else if (view.getId() == R.id.image)
{
if(ImageSetupFlag != true)
{
detectImage(INDEX_3X);
}else {
detectImage(INDEX_ORIGINAL);
ImageSetupFlag = false;
}
}
}
private void openGallery() {
Intent gallery = new Intent(Intent.ACTION_PICK, MediaStore.Images.Media.EXTERNAL_CONTENT_URI);
startActivityForResult(gallery, 1);
}
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data){
super.onActivityResult(requestCode, resultCode, data);
if (resultCode == RESULT_OK && requestCode == 1){
imageUri = data.getData();
try {
srcBitmap = MediaStore.Images.Media.getBitmap(this.getContentResolver(), imageUri);
} catch (IOException e) {
e.printStackTrace();
}
//BitmapFactory.decodeResource(getResources(), R.drawable.new1);
imageView.setImageURI(imageUri);
}
}
private void release() {
if (analyzer == null) {
return;
}
analyzer.stop();
}
private void detectImage(int type) {
if (type == INDEX_ORIGINAL) {
setImage(srcBitmap);
return;
}
if (analyzer == null) {
return;
}
// Create an MLFrame by using the bitmap.
MLFrame frame = new MLFrame.Creator().setBitmap(srcBitmap).create();
Task<MLTextImageSuperResolution> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<MLTextImageSuperResolution>() {
public void onSuccess(MLTextImageSuperResolution result) {
// success.
Toast.makeText(getApplicationContext(), "Success", Toast.LENGTH_SHORT).show();
setImage(result.getBitmap());
ImageSetupFlag = true;
}
})
.addOnFailureListener(new OnFailureListener() {
public void onFailure(Exception e) {
// failure.
if (e instanceof MLException) {
MLException mlException = (MLException) e;
// Get the error code, developers can give different page prompts according to the error code.
int errorCode = mlException.getErrCode();
// Get the error message, developers can combine the error code to quickly locate the problem.
String errorMessage = mlException.getMessage();
Toast.makeText(getApplicationContext(), "Error:" + errorCode + " Message:" + errorMessage, Toast.LENGTH_SHORT).show();
} else {
// Other exception.
Toast.makeText(getApplicationContext(), "Failed:" + e.getMessage(), Toast.LENGTH_SHORT).show();
}
}
});
}
private void setImage(final Bitmap bitmap) {
imageView.setImageBitmap(bitmap);
}
private void createAnalyzer() {
analyzer = MLTextImageSuperResolutionAnalyzerFactory.getInstance().getTextImageSuperResolutionAnalyzer();
}
@Override
protected void onDestroy() {
super.onDestroy();
if (srcBitmap != null) {
srcBitmap.recycle();
}
release();
}
}
For more details, you can check https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0202388336667910498&fid=0101187876626530001
Which all image format is supported?

Beginner: Service Ability features in Huawei Harmony OS

Introduction
In this article, we will create an application showing the below features:
1. Service Ability
2. Create service ability
3. Connect page ability with service ability
4. Update result on UI received from service ability
Requirements
1. DevEco IDE
2. Wearable watch (a simulator can also be used)
HarmonyOS supports two types of abilities:
1. Feature Ability
2. Particle Ability
In this article, we will test the Particle Ability template called the Service template.
UI Design
Service Template (Service Abilities):
The Service template is used for Particle Abilities that provide background tasks.
A Service ability has only one instance on a device, and multiple abilities share this instance.
A Service ability runs on the main thread, so you must create another thread for time-consuming operations in the Service ability (see the sketch below).
The lifecycle methods can be found in the official document at developer.harmonyos.com.
Its main uses are playing music, downloading files, or any other background task that does not need a UI.
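For example, heavy work started in onCommand() of the ServiceAbility shown later in this article could be dispatched off the main thread. The use of the global task dispatcher below is an assumption; a plain Java thread would work just as well.
Java:
@Override
public void onCommand(Intent intent, boolean restart, int startId) {
    super.onCommand(intent, restart, startId);
    // Heavy work must not run on the main thread; dispatch it asynchronously.
    getGlobalTaskDispatcher(ohos.app.dispatcher.task.TaskPriority.DEFAULT).asyncDispatch(() -> {
        // Long-running background work goes here (for example, downloading a file).
        HiLog.info(LABEL_LOG, "background work running off the main thread");
    });
}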
Create Service Ability:
It has two steps:
1. Register ability in config.json
2. Create service class extending Ability
Add the below code in Config.json
JSON:
{
"app": {
"bundleName": "com.example.testserviceability",
"vendor": "example",
"version": {
"code": 1,
"name": "1.0"
},
"apiVersion": {
"compatible": 3,
"target": 3
}
},
"deviceConfig": {},
"module": {
"package": "com.example.testserviceability",
"name": ".MyApplication",
"deviceType": [
"wearable"
],
"distro": {
"deliveryWithInstall": true,
"moduleName": "entry",
"moduleType": "entry"
},
"abilities": [
{
"skills": [
{
"entities": [
"entity.system.home"
],
"actions": [
"action.system.home"
]
}
],
"orientation": "landscape",
"name": "com.example.testserviceability.MainAbility",
"icon": "$media:icon",
"description": "$string:mainability_description",
"label": "TestServiceAbility",
"type": "page",
"launchType": "standard"
},
{
"name": ".ServiceAbility",
"type": "service",
"visible": true
}
]
}
}
Add the below code in ServiceAbility.java
Java:
package com.example.testserviceability;
import ohos.aafwk.ability.Ability;
import ohos.aafwk.ability.LocalRemoteObject;
import ohos.aafwk.content.Intent;
import ohos.hiviewdfx.HiLog;
import ohos.hiviewdfx.HiLogLabel;
import ohos.rpc.IRemoteObject;
public class ServiceAbility extends Ability {
static final HiLogLabel LABEL_LOG = new HiLogLabel(HiLog.LOG_APP, 0x00201, "MY_TAG");
@Override
public void onStart(Intent intent) {
super.onStart(intent);
HiLog.info(LABEL_LOG, "inside onStart!!");
}
@Override
public void onCommand(Intent intent, boolean restart, int startId) {
super.onCommand(intent, restart, startId);
HiLog.info(LABEL_LOG, "inside onCommand!!");
}
@Override
public IRemoteObject onConnect(Intent intent) {
super.onConnect(intent);
HiLog.info(LABEL_LOG, "inside onConnect!!");
//return super.onConnect(intent);
return new MyRemoteObject();
}
String sayHello(String name){
return "Hello "+name;
}
@Override
public void onDisconnect(Intent intent) {
super.onDisconnect(intent);
HiLog.info(LABEL_LOG, "inside onDisconnect!!");
}
@Override
public void onStop() {
super.onStop();
HiLog.info(LABEL_LOG, "inside onStop!!");
}
public class MyRemoteObject extends LocalRemoteObject {
public MyRemoteObject() {
super();
}
public String callHello(String name){
return sayHello(name);
}
}
}
Connect Page Ability with Service Ability:
This can be done using LocalRemoteObject, as shown below.
Add the below code in MainAbilitySlice.java
Java:
package com.example.testserviceability.slice;
import com.example.testserviceability.ResourceTable;
import com.example.testserviceability.ServiceAbility;
import ohos.aafwk.ability.AbilitySlice;
import ohos.aafwk.ability.IAbilityConnection;
import ohos.aafwk.content.Intent;
import ohos.aafwk.content.Operation;
import ohos.agp.components.Button;
import ohos.agp.components.Component;
import ohos.agp.components.Text;
import ohos.bundle.ElementName;
import ohos.hiviewdfx.HiLog;
import ohos.hiviewdfx.HiLogLabel;
import ohos.rpc.IRemoteObject;
public class MainAbilitySlice extends AbilitySlice {
static final HiLogLabel LABEL_LOG = new HiLogLabel(HiLog.LOG_APP, 0x00201, "MY_TAG");
ServiceAbility.MyRemoteObject myRemoteObject;
Text text;
@Override
public void onStart(Intent intent) {
super.onStart(intent);
super.setUIContent(ResourceTable.Layout_ability_main);
text = (Text) findComponentById(ResourceTable.Id_text);
Button btnStartService = (Button) findComponentById(ResourceTable.Id_button_start_service);
btnStartService.setClickedListener(new Component.ClickedListener() {
@Override
public void onClick(Component component) {
startBackGroundService();
}
});
Button btnCallServiceFunction = (Button) findComponentById(ResourceTable.Id_button_call_service_function);
btnCallServiceFunction.setClickedListener(new Component.ClickedListener() {
@Override
public void onClick(Component component) {
if(myRemoteObject != null){
String valueFromService = myRemoteObject.callHello("pavan");
HiLog.info(LABEL_LOG, "valueFromService-->"+valueFromService);
text.setText(valueFromService);
}else{
HiLog.info(LABEL_LOG, "myRemoteObject is null!!");
}
}
});
}
@Override
public void onActive() {
super.onActive();
}
@Override
public void onForeground(Intent intent) {
super.onForeground(intent);
}
private void startBackGroundService(){
HiLog.info(LABEL_LOG, "inside startBackGroundService!!");
Intent intent = new Intent();
Operation operation = new Intent.OperationBuilder()
.withDeviceId("")
.withBundleName("com.example.testserviceability")
.withAbilityName("com.example.testserviceability.ServiceAbility")
.build();
intent.setOperation(operation);
connectAbility(intent, connection);
}
// Create an IAbilityConnection instance.
private IAbilityConnection connection = new IAbilityConnection() {
// Override the callback invoked when the Service ability is connected.
@Override
public void onAbilityConnectDone(ElementName elementName, IRemoteObject iRemoteObject, int resultCode) {
// The client must implement the IRemoteObject interface in the same way as the Service ability does. You will receive an IRemoteObject object from the server and can then parse information from it.
myRemoteObject= (ServiceAbility.MyRemoteObject) iRemoteObject;
HiLog.info(LABEL_LOG, "service connection made successful!!");
}
// Override the callback invoked when the Service ability is disconnected.
@Override
public void onAbilityDisconnectDone(ElementName elementName, int resultCode) {
}
};
}
Add the below code in ability_main.xml
XML:
<?xml version="1.0" encoding="utf-8"?>
<DirectionalLayout
xmlns:ohos="http://schemas.huawei.com/res/ohos"
ohos:height="match_parent"
ohos:width="match_parent"
ohos:orientation="vertical"
ohos:background_element="#8c7373"
ohos:padding="32">
<Text
ohos:multiple_lines="true"
ohos:id="$+id:text"
ohos:height="match_content"
ohos:width="200"
ohos:layout_alignment="horizontal_center"
ohos:text="Text"
ohos:text_size="10fp"/>
<Button
ohos:id="$+id:button_start_service"
ohos:height="match_content"
ohos:width="match_content"
ohos:background_element="$graphic:background_button"
ohos:layout_alignment="horizontal_center"
ohos:padding="5"
ohos:text="Start service"
ohos:text_size="30"
ohos:top_margin="5"/>
<Button
ohos:id="$+id:button_call_service_function"
ohos:height="match_content"
ohos:width="match_content"
ohos:background_element="$graphic:background_button"
ohos:layout_alignment="horizontal_center"
ohos:padding="5"
ohos:text="Call service function"
ohos:text_size="30"
ohos:top_margin="5"/>
</DirectionalLayout>
Connect Page ability with Service ability
Java:
private void startBackGroundService(){
HiLog.info(LABEL_LOG, "inside startBackGroundService!!");
Intent intent = new Intent();
Operation operation = new Intent.OperationBuilder()
.withDeviceId("")
.withBundleName("com.example.testserviceability")
.withAbilityName("com.example.testserviceability.ServiceAbility")
.build();
intent.setOperation(operation);
connectAbility(intent, connection);
}
Call function of service from page ability
Code:
Button btnCallServiceFunction = (Button) findComponentById(ResourceTable.Id_button_call_service_function);
btnCallServiceFunction.setClickedListener(new Component.ClickedListener() {
@Override
public void onClick(Component component) {
if(myRemoteObject != null){
String valueFromService = myRemoteObject.callHello("pavan");
HiLog.info(LABEL_LOG, "valueFromService-->"+valueFromService);
text.setText(valueFromService);
}else{
HiLog.info(LABEL_LOG, "myRemoteObject is null!!");
}
}
});
Tips and Tricks
1. All abilities must be registered in config.json.
2. A Service ability runs on the main thread, so you must create another thread to handle heavy work.
Conclusion
In this article, we had UI components communicating with a background service: calling a function of the background service and getting the result back on the UI.
Reference
1. Harmony Official document
2. DevEco Studio User guide
3. JS API Reference
Checkout in forum
Well done, can you please upload the sources files, because i am new developing in DevEco, and need to know the distribution of the files in the project, thanks in advance
Can we show progress inside Service Ability?
Does it support dialog?
In which thread its will work

Intermediate: How to Verify Phone Number and Anonymous Account Login using Huawei Auth Service-AGC in Unity

Introduction
In this article, we will look at how Huawei Auth Service-AGC provides a secure and reliable user authentication system for your application. Building such a system yourself is a difficult process; with the Huawei Auth Service SDK you only need to access the Auth Service capabilities, without implementing anything on the cloud.
Here I am covering anonymous sign-in, which lets users access your app as guests; when you sign in anonymously, Auth Service provides a unique ID to identify the user. I also cover authenticating a mobile number by verifying it through an OTP.
Overview
You need to install the Unity software, and I assume that you have prior knowledge of Unity and C#.
Hardware Requirements
A computer (desktop or laptop) running Windows 10.
A Huawei phone (with the USB cable), which is used for debugging.
Software Requirements
Java JDK 1.7 or later.
Unity software installed.
Visual Studio/Code installed.
HMS Core (APK) 4.X or later.
Integration Preparations
1. Create a project in AppGallery Connect.
2. Create Unity project.
3. Add Huawei HMS AGC services to the project.
4. Download and save the configuration file.
Add the agconnect-services.json file to the following directory: Assets > Plugins > Android.
5. Add the following plugin and dependencies in LauncherTemplate.
Code:
apply plugin: 'com.huawei.agconnect'
6. Add the following dependencies in MainTemplate.
Code:
apply plugin: 'com.huawei.agconnect'
implementation 'com.huawei.agconnect:agconnect-auth:1.4.2.301'
implementation 'com.huawei.hms:base:5.2.0.300'
implementation 'com.huawei.hms:hwid:5.2.0.300'
7. Add the dependencies in the buildscript repositories and allprojects repositories, and the class path, in BaseProjectTemplate.
Code:
maven { url 'https://developer.huawei.com/repo/' }
8. Create an empty game object and rename it to GameManager, add a UI canvas with input text fields and buttons, and assign onClick events to the respective components as shown below.
MainActivity.java
Code:
package com.huawei.AuthServiceDemo22;
import android.content.Intent;
import android.os.Bundle;
import com.hw.unity.Agc.Auth.ThirdPartyLogin.LoginManager;
import com.unity3d.player.UnityPlayerActivity;
import android.util.Log;
import com.huawei.agconnect.auth.AGConnectAuth;
import com.huawei.agconnect.auth.AGConnectAuthCredential;
import com.huawei.agconnect.auth.AGConnectUser;
import com.huawei.agconnect.auth.PhoneAuthProvider;
import com.huawei.agconnect.auth.SignInResult;
import com.huawei.agconnect.auth.VerifyCodeResult;
import com.huawei.agconnect.auth.VerifyCodeSettings;
import com.huawei.hmf.tasks.OnFailureListener;
import com.huawei.hmf.tasks.OnSuccessListener;
import com.huawei.hmf.tasks.Task;
import com.huawei.hmf.tasks.TaskExecutors;
import java.util.Locale;
public class MainActivity extends UnityPlayerActivity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
LoginManager.getInstance().initialize(this);
Log.d("DDDD"," Inside onCreate ");
}
public static void AnoniomousLogin(){
AGConnectAuth.getInstance().signInAnonymously().addOnSuccessListener(new OnSuccessListener<SignInResult>() {
@Override
public void onSuccess(SignInResult signInResult) {
AGConnectUser user = signInResult.getUser();
String uid = user.getUid();
Log.d("DDDD"," Login Anonymous UID : "+uid);
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
Log.d("DDDD"," Inside ERROR "+e.getMessage());
}
});
}
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data)
{
LoginManager.getInstance().onActivityResult(requestCode, resultCode, data);
}
public static void sendVerifCode(String phone) {
VerifyCodeSettings settings = VerifyCodeSettings.newBuilder()
.action(VerifyCodeSettings.ACTION_REGISTER_LOGIN)
.sendInterval(30) // Shortest sending interval, 30–120s
.build();
String countCode = "+91";
String phoneNumber = phone;
if (notEmptyString(countCode) && notEmptyString(phoneNumber)) {
Task<VerifyCodeResult> task = PhoneAuthProvider.requestVerifyCode(countCode, phoneNumber, settings);
task.addOnSuccessListener(TaskExecutors.uiThread(), new OnSuccessListener<VerifyCodeResult>() {
@Override
public void onSuccess(VerifyCodeResult verifyCodeResult) {
Log.d("DDDD"," ==>"+verifyCodeResult);
}
}).addOnFailureListener(TaskExecutors.uiThread(), new OnFailureListener() {
@Override
public void onFailure(Exception e) {
Log.d("DDDD"," Inside onFailure");
}
});
}
}
static boolean notEmptyString(String string) {
return string != null && !string.isEmpty() && !string.equals("");
}
public static void linkPhone(String verifyCode1,String phone) {
Log.d("DDDD", " verifyCode1 "+verifyCode1);
String phoneNumber = phone;
String countCode = "+91";
String verifyCode = verifyCode1;
Log.e("DDDD", " verifyCode "+verifyCode);
AGConnectAuthCredential credential = PhoneAuthProvider.credentialWithVerifyCode(
countCode,
phoneNumber,
null, // password, can be null
verifyCode);
AGConnectAuth.getInstance().getCurrentUser().link(credential).addOnSuccessListener(new OnSuccessListener<SignInResult>() {
@Override
public void onSuccess(SignInResult signInResult) {
String phoneNumber = signInResult.getUser().getPhone();
String uid = signInResult.getUser().getUid();
Log.d("DDDD", "phone number: " + phoneNumber + ", uid: " + uid);
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
Log.e("DDDD", "Login error, please try again, error:" + e.getMessage());
}
});
}
}
GameManager.cs
Code:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;
public class GameManager : MonoBehaviour
{
public InputField OtpField,inputFieldPhone;
string otp=null,phone="";
// Start is called before the first frame update
void Start()
{
inputFieldPhone.text = "9740424108";
}
public void onClickButton(){
phone = inputFieldPhone.text;
using (AndroidJavaClass javaClass = new AndroidJavaClass("com.huawei.AuthServiceDemo22.MainActivity"))
{
javaClass.CallStatic("sendVerifCode",phone);
}
}
public void LinkPhone(){
otp = OtpField.text;
Debug.Log(" OTP "+otp);
using (AndroidJavaClass javaClass = new AndroidJavaClass("com.huawei.AuthServiceDemo22.MainActivity"))
{
javaClass.CallStatic("linkPhone",otp,phone);
}
}
public void AnoniomousLogin(){
using (AndroidJavaClass javaClass = new AndroidJavaClass("com.huawei.AuthServiceDemo22.MainActivity"))
{
javaClass.CallStatic("AnoniomousLogin");
}
}
}
10. To build the APK, choose File > Build Settings > Build; to build and run, choose File > Build Settings > Build And Run.
Result
Tips and Tricks
Add agconnect-services.json file without fail.
Make sure dependencies added in build files.
Make sure that you enabled the Auth Service in AG-Console.
Make sure that you enabled the Authentication mode in Auth Service.
Conclusion
We have learnt how to integrate Huawei Auth Service-AGC anonymous account login and mobile number verification through OTP in Unity game development. In conclusion, Auth Service provides a secure and reliable user authentication system for your application.
Thank you for reading this article; I hope it helps you.
Reference
Official documentation service introduction
Unity Auth Service Manual
Auth Service CodeLabs
Checkout in forum

MVVM Architecture On HarmonyOS Using Retrofit And RxJava

In this tutorial, we will be discussing and implementing the HarmonyOS MVVM Architectural pattern in our Harmony app.
This project is available on GitHub; the link can be found at the end of the article.
Table of contents
What is MVVM
Harmony MVVM example project structure
Adding dependencies
Model
Layout
Retrofit interface
ViewModel
Tip and Tricks
Conclusion
Recommended resources
What is MVVM
MVVM stands for Model, View, ViewModel:
Model: This holds the data of the application. It cannot directly talk to the View. Generally, it's recommended to expose the data to the ViewModel through ActiveDatas (Observables).
View: It represents the UI of the application devoid of any Application Logic. It observes the ViewModel.
ViewModel: It acts as a link between the Model and the View. It’s responsible for transforming the data from the Model. It provides data streams to the View. It also uses hooks or callbacks to update the View. It’ll ask for the data from the Model.
MVVM can be achieved in two ways:
Using Data binding
RxJava
In this tutorial, we will implement MVVM in Harmony using RxJava, as data binding is still under development and not ready to use in Harmony.
Harmony MVVM example project structure
We will create packages by features. It will make your code more modular and manageable.
Adding the Dependencies
Add the following dependencies in your module level build.gradle file:
Code:
dependencies {
//[...]
//RxJava
implementation "io.reactivex.rxjava2:rxjava:2.2.17"
//retrofit
implementation 'com.squareup.retrofit2:retrofit:2.6.0'
implementation "com.squareup.retrofit2:converter-moshi:2.6.0"
implementation 'com.squareup.retrofit2:converter-gson:2.9.0'
//RxJava adapter for retrofit
implementation 'com.squareup.retrofit2:adapter-rxjava2:2.7.1'
}
Model
The Model would hold the user’s email and password. The following User.java class does it:
Code:
package com.megaache.mvvmdemo.model;
public class User {
private String email;
private String password;
public User(String email, String password) {
this.email = email;
this.password = password;
}
public void setEmail(String email) {
this.email = email;
}
public String getEmail() {
return email;
}
public void setPassword(String password) {
this.password = password;
}
public String getPassword() {
return password;
}
@Override
public String toString() {
return "User{" +
"email='" + email + '\'' +
", password='" + password + '\'' +
'}';
}
}
Layout
NOTE: For this tutorial, I have decided to create the layout for smartwatch devices; however, it will work fine on all devices, you just need to re-arrange the components and modify the alignment.
The layout will consist of a login button, two text fields, and two error texts; each error text will be shown or hidden depending on the value of the text box above it after the login button is clicked. The final UI will look like the screenshot below:
Before we create the layout, let's add some colors.
First, create the file color.json under resources/base/element and add the following JSON content:
Code:
{
"color": [
{
"name": "primary",
"value": "#283148"
},
{
"name": "primaryDark",
"value": "#283148"
},
{
"name": "accent",
"value": "#06EBBF"
},
{
"name": "red",
"value": "#FF406E"
}
]
}
Then, let's design the background elements for the text fields and the button:
Create the files background_text_field.xml and background_button.xml under resources/base/graphic as shown in the screenshot below.
Then add the following code:
background_text_field.xml:
Code:
<?xml version="1.0" encoding="UTF-8" ?>
<shape
xmlns:ohos="http://schemas.huawei.com/res/ohos"
ohos:shape="rectangle">
<corners
ohos:radius="20"/>
<solid
ohos:color="#ffffff"/>
<stroke
ohos:width="2"
ohos:color="$color:accent"/>
</shape>
background_button.xml:
Code:
<?xml version="1.0" encoding="UTF-8" ?>
<shape
xmlns:ohos="http://schemas.huawei.com/res/ohos"
ohos:shape="rectangle">
<corners
ohos:radius="20"/>
<solid
ohos:color="$color:accent"/>
</shape>
Now let's create the background element for the main layout, called background_ability_login.xml:
Code:
<?xml version="1.0" encoding="UTF-8" ?>
<shape xmlns:ohos="http://schemas.huawei.com/res/ohos"
ohos:shape="rectangle">
<solid
ohos:color="$color:primaryDark"/>
</shape>
Finally, let’s create the layout file ability_login.xml:
Code:
<?xml version="1.0" encoding="utf-8"?>
<ScrollView
xmlns:ohos="http://schemas.huawei.com/res/ohos"
ohos:id="$+id:scrollview"
ohos:height="match_parent"
ohos:width="match_parent"
ohos:background_element="$graphic:background_ability_login"
ohos:layout_alignment="horizontal_center"
ohos:rebound_effect="true"
>
<DirectionalLayout
ohos:height="match_content"
ohos:width="match_parent"
ohos:orientation="vertical"
ohos:padding="20vp"
>
<DirectionalLayout
ohos:height="match_content"
ohos:width="match_parent"
ohos:layout_alignment="center"
ohos:orientation="vertical"
>
<TextField
ohos:id="$+id:tf_email"
ohos:height="match_content"
ohos:width="match_parent"
ohos:background_element="$graphic:background_text_field"
ohos:hint="email"
ohos:left_padding="10vp"
ohos:min_height="40vp"
ohos:multiple_lines="false"
ohos:text_alignment="vertical_center"
ohos:text_color="black"
ohos:text_input_type="pattern_number"
ohos:text_size="15fp"/>
<Text
ohos:id="$+id:t_email_invalid"
ohos:height="match_content"
ohos:width="match_content"
ohos:layout_alignment="center"
ohos:text="invalid email"
ohos:text_color="$color:red"
ohos:text_size="15fp"
/>
</DirectionalLayout>
<DirectionalLayout
ohos:height="match_content"
ohos:width="match_parent"
ohos:layout_alignment="center"
ohos:orientation="vertical"
ohos:top_margin="10vp">
<TextField
ohos:id="$+id:tf_password"
ohos:height="match_content"
ohos:width="match_parent"
ohos:background_element="$graphic:background_text_field"
ohos:hint="password"
ohos:left_padding="10vp"
ohos:min_height="40vp"
ohos:multiple_lines="false"
ohos:text_alignment="vertical_center"
ohos:text_color="black"
ohos:text_input_type="pattern_password"
ohos:text_size="15fp"
/>
<Text
ohos:id="$+id:t_password_invalid"
ohos:height="match_content"
ohos:width="match_content"
ohos:layout_alignment="center"
ohos:padding="0vp"
ohos:text="invalid password"
ohos:text_color="$color:red"
ohos:text_size="15fp"
/>
</DirectionalLayout>
<Button
ohos:id="$+id:btn_login"
ohos:height="match_content"
ohos:width="match_parent"
ohos:background_element="$graphic:background_button"
ohos:bottom_margin="30vp"
ohos:min_height="40vp"
ohos:text="login"
ohos:text_color="#fff"
ohos:text_size="18fp"
ohos:top_margin="10vp"/>
</DirectionalLayout>
</ScrollView>
Retrofit interface
Before we move to the ViewModel, we have to set up our Retrofit service and repository class.
To keep the project clean, I will create the class Config.java, which will hold our API URLs:
Code:
package com.megaache.mvvmdemo;
public class Config {
//todo: update base url variable with valid url
public static final String BASE_URL = "https://example.com";
public static final String API_VERSION = "/api/v1";
public static final String LOGIN_URL="auth/login";
}
Note: The URLs are just for demonstration. For the demo to work, you must replace them with valid URLs.
First, create the interface APIServices.java:
For this tutorial, we assume that the login endpoint uses the POST method; you may change this depending on your API. The login method will return an Observable that will be observed in the ViewModel using RxJava.
Code:
package com.megaache.mvvmdemo.network;
import com.megaache.mvvmdemo.Config;
import com.megaache.mvvmdemo.network.request.LoginRequest;
import com.megaache.mvvmdemo.network.response.LoginResponse;
import io.reactivex.Observable;
import retrofit2.http.Body;
import retrofit2.http.Headers;
import retrofit2.http.POST;
public interface APIServices {
@POST(Config.LOGIN_URL)
@Headers("Content-Type: application/json;charset=UTF-8")
public Observable<LoginResponse> login(@Body LoginRequest loginRequest);
}
Note: The class LoginRequest, which you will see later in this tutorial, must match the request that the server expects in both the names of the variables and their types; otherwise the server will fail to process the request.
Then, add the method createRetrofitClient() to MyApplication.java, which will create and return the Retrofit instance. The instance will use the Moshi converter to handle the conversion of JSON to our Java classes, and the RxJava2 adapter to return Observables that work with RxJava instead of the default Call class, which requires callbacks:
Code:
package com.megaache.mvvmdemo;
import com.megaache.mvvmdemo.network.APIServices;
import ohos.aafwk.ability.AbilityPackage;
import okhttp3.OkHttpClient;
import retrofit2.Retrofit;
import retrofit2.adapter.rxjava2.RxJava2CallAdapterFactory;
import retrofit2.converter.moshi.MoshiConverterFactory;
import java.util.concurrent.TimeUnit;
public class MyApplication extends AbilityPackage {
@Override
public void onInitialize() {
super.onInitialize();
}
public static APIServices createRetrofitClient() {
OkHttpClient client = new OkHttpClient.Builder()
.connectTimeout(60L, TimeUnit.SECONDS)
.build();
Retrofit retrofit = new Retrofit.Builder()
.baseUrl(Config.BASE_URL + Config.API_VERSION)
.addCallAdapterFactory(RxJava2CallAdapterFactory.create())
.addConverterFactory(MoshiConverterFactory.create()).client(client)
.build();
return retrofit.create(APIServices.class);
}
}
NOTE: For cleaner code, you can create a file RetrofitClient.java and move the method createRetrofitClient() to it, as sketched below.
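A minimal sketch of that optional RetrofitClient.java, hosting the same factory method shown above (the package placement is an assumption):
Code:
package com.megaache.mvvmdemo.network;
import com.megaache.mvvmdemo.Config;
import okhttp3.OkHttpClient;
import retrofit2.Retrofit;
import retrofit2.adapter.rxjava2.RxJava2CallAdapterFactory;
import retrofit2.converter.moshi.MoshiConverterFactory;
import java.util.concurrent.TimeUnit;
public class RetrofitClient {
    private RetrofitClient() {
        // Utility class, no instances needed.
    }
    // Same factory logic as MyApplication.createRetrofitClient(), moved into its own class.
    public static APIServices createRetrofitClient() {
        OkHttpClient client = new OkHttpClient.Builder()
                .connectTimeout(60L, TimeUnit.SECONDS)
                .build();
        Retrofit retrofit = new Retrofit.Builder()
                .baseUrl(Config.BASE_URL + Config.API_VERSION)
                .addCallAdapterFactory(RxJava2CallAdapterFactory.create())
                .addConverterFactory(MoshiConverterFactory.create())
                .client(client)
                .build();
        return retrofit.create(APIServices.class);
    }
}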
Now, let's work on the login feature. We are going to first create the request and response classes, then move to the ViewModel and the view.
We need LoginRequest and LoginResponse, which will extend BaseRequest and BaseResponse; the code is shown below.
Create BaseRequest.java:
In a real-life project, your API may expect some parameters to be sent with every request, for example accessToken, language, deviceId, pushToken, etc., depending on your API. For this tutorial, I added one field called deviceType with a static value.
Code:
package com.megaache.mvvmdemo.network.request;
public class BaseRequest {
private String deviceType;
public BaseRequest() {
deviceType = "harmony-watch";
}
public String getDeviceType() {
return deviceType;
}
public void setDeviceType(String deviceType) {
this.deviceType = deviceType;
}
}
Create the class LoginRequest.java, which will extend BaseRequest and have two fields, email and password, which will be provided by the end user:
Code:
package com.megaache.mvvmdemo.network.request;
public class LoginRequest extends BaseRequest {
private String email;
private String password;
public String getEmail() {
return email;
}
public void setEmail(String email) {
this.email = email;
}
public String getPassword() {
return password;
}
public void setPassword(String password) {
this.password = password;
}
}
Then, for the response, create BaseResponse.java first:
Code:
package com.megaache.mvvmdemo.network.response;
import com.megaache.mvvmdemo.MyApplication;
import java.io.Serializable;
public class BaseResponse implements Serializable {
}
Then LoginResponse.java extending BaseResponse:
Code:
package com.megaache.mvvmdemo.network.response;
import com.megaache.mvvmdemo.model.User;
import com.squareup.moshi.Json;
public class LoginResponse extends BaseResponse {
@Json(name = "user")
private User user;
@Json(name = "accessToken")
private String accessToken;
public User getUser() {
return user;
}
public void setUser(User user) {
this.user = user;
}
public String getAccessToken() {
return accessToken;
}
public void setAccessToken(String accessToken) {
this.accessToken = accessToken;
}
}
Note: This class must match the response you get from the server; otherwise the Moshi converter will fail to convert the response to the LoginResponse class. Both the types of the variables and their names must match those in the JSON response.
ViewModel
In the ViewModel, we will wrap the data loaded with Retrofit inside the LoggedIn class of LoginViewState, and observe the states ActiveData defined in BaseViewModel from our Ability (or AbilitySlice). Whenever the value in states changes, the ability will be notified, without us having to check whether the ability is still alive.
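BaseViewModel itself is not listed in this article; the following is a minimal sketch of what it could look like, inferred from how subscribe(), getStates(), and unbind() are used below (the exact implementation in the GitHub project may differ):
Code:
package com.megaache.mvvmdemo.base;
import io.reactivex.Observable;
import io.reactivex.disposables.CompositeDisposable;
import ohos.aafwk.abilityjet.activedata.ActiveData;
// BaseViewState is assumed to be an empty marker class: public class BaseViewState { }
public abstract class BaseViewModel<T extends BaseViewState> {
    // Holds the latest view state; the Ability observes this ActiveData.
    private final ActiveData<T> states = new ActiveData<>();
    // Keeps the RxJava subscriptions so they can be disposed in unbind().
    private final CompositeDisposable disposables = new CompositeDisposable();
    public ActiveData<T> getStates() {
        return states;
    }
    // Subscribes to the state stream and publishes every emitted state to the observers.
    protected void subscribe(Observable<T> observable) {
        disposables.add(observable.subscribe(states::setData, Throwable::printStackTrace));
    }
    // Called from the Ability's onStop() to release the subscriptions.
    public void unbind() {
        disposables.clear();
    }
}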
The code for LoginViewState.java (which extends the empty class BaseViewState.java) and ErrorData.java (used in LoginViewState.java) is given below:
ErrorData.java:
Code:
package com.megaache.mvvmdemo.model;
import java.io.Serializable;
public class ErrorData implements Serializable {
private String message;
private int statusCode;
public String getMessage() {
return message;
}
public void setMessage(String message) {
this.message = message;
}
public int getStatusCode() {
return statusCode;
}
public void setStatusCode(int statusCode) {
this.statusCode = statusCode;
}
}
LoginViewState.java:
Code:
package com.megaache.mvvmdemo.ui.login;
import com.megaache.mvvmdemo.base.BaseViewState;
import com.megaache.mvvmdemo.model.ErrorData;
import com.megaache.mvvmdemo.network.response.LoginResponse;
public class LoginViewState extends BaseViewState {
public static class Loading extends LoginViewState {
}
public static class Error extends LoginViewState {
private ErrorData message;
public Error(ErrorData message) {
this.message = message;
}
public void setMessage(ErrorData message) {
this.message = message;
}
public ErrorData getMessage() {
return message;
}
}
public static class LoggedIn extends LoginViewState {
private LoginResponse userDataResponse;
public LoggedIn(LoginResponse userDataResponse) {
this.userDataResponse = userDataResponse;
}
public LoginResponse getUserDataResponse() {
return userDataResponse;
}
public void setUserDataResponse(LoginResponse userDataResponse) {
this.userDataResponse = userDataResponse;
}
}
}
The code for the LoginViewModel.java is given below:
When the user clicks the login button, the method sendLoginRequest() will set up our Retrofit Observable; the request will not be sent until subscribe is called on it (in this project that happens through super.subscribe() in the ViewModel's login() method). Notice that we are subscribing on the Schedulers.io() scheduler, which executes the request on a background thread to avoid freezing the UI. Because of that, we have to create our own observer that invokes the callback code on the UI thread after we receive data; more on this later:
Code:
package com.megaache.mvvmdemo.ui.login;
import com.megaache.mvvmdemo.base.BaseViewModel;
import com.megaache.mvvmdemo.MyApplication;
import com.megaache.mvvmdemo.model.ErrorData;
import com.megaache.mvvmdemo.network.request.LoginRequest;
import io.reactivex.Observable;
import io.reactivex.schedulers.Schedulers;
import ohos.aafwk.abilityjet.activedata.ActiveData;
public class LoginViewModel extends BaseViewModel<LoginViewState> {
private static final int MIN_PASSWORD_LENGTH = 6;
public ActiveData<Boolean> emailValid = new ActiveData<>();
public ActiveData<Boolean> passwordValid = new ActiveData<>();
public ActiveData<Boolean> loginState = new ActiveData<>();
public LoginViewModel() {
super();
}
public void login(String email, String password) {
boolean isEmailValid = isEmailValid(email);
emailValid.setData(isEmailValid);
if (!isEmailValid)
return;
boolean isPasswordValid = isPasswordValid(password);
passwordValid.setData(isPasswordValid);
if (!isPasswordValid)
return;
LoginRequest loginRequest = new LoginRequest();
loginRequest.setEmail(email);
loginRequest.setPassword(password);
super.subscribe(sendLoginRequest(loginRequest));
}
private Observable<LoginViewState> sendLoginRequest(LoginRequest loginRequest) {
return MyApplication.createRetrofitClient()
.login(loginRequest)
.doOnError(Throwable::printStackTrace)
.map(LoginViewState.LoggedIn::new)
.cast(LoginViewState.class)
.onErrorReturn(throwable -> {
ErrorData errorData = new ErrorData();
if (throwable.getMessage() != null)
errorData.setMessage(throwable.getMessage());
else
errorData.setMessage(" No internet! ");
return new LoginViewState.Error(errorData);
})
.subscribeOn(Schedulers.io())
.startWith(new LoginViewState.Loading());
}
private boolean isEmailValid(String email) {
return email != null && !email.isEmpty() && email.contains("@");
}
private boolean isPasswordValid(String password) {
return password != null && password.length() > MIN_PASSWORD_LENGTH;
}
}
Setting up the ability (View)
As you know, the ability is our view. We instantiate the ViewModel and observe the states and the ActiveDatas in the method observeData(). As mentioned before, Retrofit will send the request on a background thread, so the code in the observer will run on that same thread (Schedulers.io()), which will cause exceptions if that code attempts to update the UI. To prevent that, we create a custom UiObserver class extending DataObserver that runs our code on the ability's UI task dispatcher (UI thread). The code for UiObserver.java is shown below:
Code:
package com.megaache.mvvmdemo.utils;
import ohos.aafwk.ability.Ability;
import ohos.aafwk.ability.AbilitySlice;
import ohos.aafwk.abilityjet.activedata.DataObserver;
import ohos.app.dispatcher.TaskDispatcher;
public abstract class UiObserver<T> extends DataObserver<T> {
private TaskDispatcher uiTaskDispatcher;
public UiObserver(Ability baseAbilitySlice) {
setLifecycle(baseAbilitySlice.getLifecycle());
uiTaskDispatcher = baseAbilitySlice.getUITaskDispatcher();
}
@Override
public void onChanged(T t) {
uiTaskDispatcher.asyncDispatch(() -> onValueChanged(t));
}
public abstract void onValueChanged(T t);
}
Code for LoginAbility.java is shown below:
Code:
package com.megaache.mvvmdemo.ui.login;
import com.megaache.mvvmdemo.ResourceTable;
import com.megaache.mvvmdemo.utils.UiObserver;
import com.megaache.mvvmdemo.model.ErrorData;
import ohos.aafwk.ability.Ability;
import ohos.aafwk.content.Intent;
import ohos.agp.components.Button;
import ohos.agp.components.Component;
import ohos.agp.components.Text;
import ohos.agp.components.TextField;
import ohos.agp.window.dialog.ToastDialog;
public class LoginAbility extends Ability {
private LoginViewModel loginViewModel;
private TextField emailTF;
private Text emailInvalidT;
private TextField passwordTF;
private Text passwordInvalidT;
@Override
public void onStart(Intent intent) {
super.onStart(intent);
loginViewModel = new LoginViewModel();
initUI();
observeData();
}
private void initUI() {
super.setUIContent(ResourceTable.Layout_ability_login);
Button loginButton = (Button) findComponentById(ResourceTable.Id_btn_login);
loginButton.setClickedListener(c -> attemptLogin());
emailTF = (TextField) findComponentById(ResourceTable.Id_tf_email);
emailInvalidT = (Text) findComponentById(ResourceTable.Id_t_email_invalid);
passwordTF = (TextField) findComponentById(ResourceTable.Id_tf_password);
passwordInvalidT = (Text) findComponentById(ResourceTable.Id_t_password_invalid);
}
private void observeData() {
loginViewModel.emailValid.addObserver(new UiObserver<Boolean>(this) {
@Override
public void onValueChanged(Boolean aBoolean) {
emailInvalidT.setVisibility(aBoolean ? Component.HIDE : Component.VISIBLE); // hide the error text when the email is valid
}
}, false);
loginViewModel.passwordValid.addObserver(new UiObserver<Boolean>(this) {
@Override
public void onValueChanged(Boolean aBoolean) {
passwordInvalidT.setVisibility(aBoolean ? Component.HIDE : Component.VISIBLE); // hide the error text when the password is valid
}
}, false);
loginViewModel.getStates().addObserver(new UiObserver<LoginViewState>(this) {
@Override
public void onValueChanged(LoginViewState loginState) {
if (loginState instanceof LoginViewState.Loading) {
toggleLoadingDialog(true);
} else if (loginState instanceof LoginViewState.Error) {
toggleLoadingDialog(false);
manageError(((LoginViewState.Error) loginState).getMessage());
} else if (loginState instanceof LoginViewState.LoggedIn) {
toggleLoadingDialog(false);
showToast("logging successful!");
}
}
}, false);
}
private void attemptLogin() {
loginViewModel.login(emailTF.getText(), passwordTF.getText());
}
private void toggleLoadingDialog(boolean show) {
//todo: show/hide loading dialog
}
private void manageError(ErrorData errorData) {
showToast(errorData.getMessage());
}
private void showToast(String message) {
new ToastDialog(this)
.setText(message)
.show();
}
@Override
protected void onStop() {
super.onStop();
loginViewModel.unbind();
}
}
Tips And Tricks
If you want your app to work offline, it's best to introduce a repository class that will handle querying information from the server when the internet is available, or from the cache when it is not (a rough sketch follows these tips).
For cleaner code, try re-using the ViewModel as much as possible, by creating a base class and moving the shared code there.
You should not keep a reference to a View (component) or context in the ViewModel, unless you have no other option.
The ViewModel should not talk directly to the View; instead, the View observes the ViewModel and updates itself depending on the ViewModel data.
A correct implementation of the ViewModel should allow you to change the UI with minimal or zero changes to the ViewModel.
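A rough sketch of the repository idea from the first tip; the class, its in-memory cache, and the isNetworkAvailable flag are illustrative assumptions, not part of the sample project:
Code:
package com.megaache.mvvmdemo.repository;
import com.megaache.mvvmdemo.MyApplication;
import com.megaache.mvvmdemo.network.request.LoginRequest;
import com.megaache.mvvmdemo.network.response.LoginResponse;
import io.reactivex.Observable;
public class LoginRepository {
    // Hypothetical cache; replace with your own persistence (file, database, preferences).
    private LoginResponse cachedResponse;
    // Returns cached data when offline, otherwise fetches from the network and caches the result.
    public Observable<LoginResponse> login(LoginRequest request, boolean isNetworkAvailable) {
        if (!isNetworkAvailable && cachedResponse != null) {
            return Observable.just(cachedResponse);
        }
        return MyApplication.createRetrofitClient()
                .login(request)
                .doOnNext(response -> cachedResponse = response);
    }
}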
Conclusion
MVVM combines the advantages of the separation of concerns provided by the MVP architecture while leveraging the advantages of RxJava or data binding. The result is a pattern where the model drives as many of the operations as possible, minimizing the logic in the view.
Finally, talk is cheap, and I strongly advise you to try and learn these things in the code so that you do not need to rely on people like me to tell you what to do.
Clone the project from GitHub, replace the API URLs in the class Config.java, and run it on a HarmonyOS device. You should see a toast that says "logging successful!" if the credentials are correct; otherwise it should show a toast with the error "No internet!" or an error returned from the server.
This project is available on GitHub: Click here
Recommended sources:
HarmonyOS (essential topics): Essential Topics
Retrofit: Click here
Rxjava: Click here
Original Source
Comment below if you have any questions or suggestions.
Thank you!
In Android we have Manifest file right there we will declare all the activity names like that do we have any file here?
