Introduction
Harmony OS is the new operating system introduced by Huawei. Unlike a conventional OS, which runs on a single standalone device, Harmony OS is built on a distributed architecture.
Harmony OS supports a wide array of devices and works well on smartphones, tablets, wearables, smart TVs and head units.
Lightweight Preference Database
The lightweight preference database is a storage mechanism offered by Harmony OS for storing small amounts of data. Data is saved as simple key-value pairs that are cached in the device's memory, which enables fast operations.
In this article, we will create a simple login page for a smartwatch that stores the user's credentials in the lightweight preference database.
Requirements
1) DevEco IDE
2) Smartwatch wearable simulator
Development
In order to save and load the user's credentials, we first need to obtain a Preferences instance.
Java:
private void initializeLightPreferenceDB() {
DatabaseHelper databaseHelper = new DatabaseHelper(getApplicationContext() );
String fileName = "MyLightPreferenceDB";
preferences = databaseHelper.getPreferences(fileName);
}
The first time the user launches the app, they need to register. The user enters a username and password and clicks Register to store the credentials in the preference database using the putString() method.
Java:
preferences.putString("username", tfUsername.getText());
preferences.putString("password", tfPassword.getText());
preferences.flushSync();
flushSync() is used to write the preferences to the file synchronously. To write asynchronously, we can use flush() instead.
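For illustration, a minimal sketch contrasting the two calls (reusing the preferences object obtained above):
Java:
// Persist synchronously: blocks until the data is written to the preference file.
preferences.flushSync();
// Or persist asynchronously: returns immediately and writes in the background.
preferences.flush();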
When the user enters the credentials and clicks the login button, the app fetches the stored credentials from the lightweight preference database using the getString() method and compares them with the entered values to validate the login.
Code:
String username = preferences.getString("username", "");
String password = preferences.getString("password", "");
If validation fails, an error message is shown.
Java:
errorText.setVisibility(Component.VISIBLE);
// Set the TextField style when there is an error.
errorText.setText("Invalid Credential");
ShapeElement errorElement = new ShapeElement(this, ResourceTable.Graphic_background_text_field_error);
TextField textField = (TextField) findComponentById(ResourceTable.Id_name_textField);
textField.setBackground(errorElement);
// Cause the TextField to lose focus.
textField.clearFocus();
Code snippet of MainAbilitySlice.java
Code:
package com.ritesh.chanchal.preferenecedb.slice;
import com.ritesh.chanchal.preferenecedb.ResourceTable;
import ohos.aafwk.ability.AbilitySlice;
import ohos.aafwk.content.Intent;
import ohos.agp.components.Button;
import ohos.agp.components.Component;
import ohos.agp.components.Text;
import ohos.agp.components.TextField;
import ohos.agp.components.element.ShapeElement;
import ohos.agp.window.dialog.ToastDialog;
import ohos.data.DatabaseHelper;
import ohos.data.preferences.Preferences;
import ohos.hiviewdfx.HiLog;
import ohos.hiviewdfx.HiLogLabel;
public class MainAbilitySlice extends AbilitySlice {
private TextField tfUsername;
private TextField tfPassword;
private Button bRegister;
private Button bLogin;
private Text errorText;
private Preferences preferences;
static final HiLogLabel LABEL = new HiLogLabel(HiLog.LOG_APP, 0x00201, "MY_TAG");
@Override
public void onStart(Intent intent) {
super.onStart(intent);
super.setUIContent(ResourceTable.Layout_ability_main);
initializeLightPreferenceDB();
tfUsername = (TextField) findComponentById(ResourceTable.Id_name_textField);
tfPassword = (TextField) findComponentById(ResourceTable.Id_password_text_field);
errorText = (Text) findComponentById(ResourceTable.Id_error_tip_text);
tfUsername.setFocusChangedListener((component, isFocused) -> {
if (isFocused) {
// The focus is gained.
errorText.setVisibility(Component.HIDE);
ShapeElement shapeElement = new ShapeElement(this, ResourceTable.Graphic_background_text_field);
TextField textField = (TextField) findComponentById(ResourceTable.Id_name_textField);
textField.setBackground(shapeElement);
}
});
tfPassword.setFocusChangedListener((component, isFocused) -> {
if (isFocused) {
// The focus is gained.
errorText.setVisibility(Component.HIDE);
ShapeElement shapeElement = new ShapeElement(this, ResourceTable.Graphic_background_text_field);
TextField textField = (TextField) findComponentById(ResourceTable.Id_name_textField);
textField.setBackground(shapeElement);
}
});
bRegister = (Button) findComponentById(ResourceTable.Id_register_button);
bLogin = (Button) findComponentById(ResourceTable.Id_signin_button);
bRegister.setClickedListener(component -> {
if(!tfPassword.getText().isEmpty() && !tfUsername.getText().isEmpty()) {
preferences.putString("username", tfUsername.getText());
preferences.putString("password", tfPassword.getText());
preferences.flushSync();
HiLog.debug(LABEL, "Registration Successful");
tfPassword.setText("");
tfUsername.setText("");
} else {
// Text of the error message.
errorText.setVisibility(Component.VISIBLE);
// Set the TextField style when there is an error.
errorText.setText("Field cannot be empty");
ShapeElement errorElement = new ShapeElement(this, ResourceTable.Graphic_background_text_field_error);
TextField textField = (TextField) findComponentById(ResourceTable.Id_name_textField);
textField.setBackground(errorElement);
// Cause the TextField to lose focus.
textField.clearFocus();
}
tfUsername.clearFocus();
tfPassword.clearFocus();
});
bLogin.setClickedListener(component -> {
String username = preferences.getString("username", "");
String password = preferences.getString("password", "");
HiLog.debug(LABEL, username + " , " + password);
if(username.isEmpty() || password.isEmpty()) {
// Text of the error message.
errorText.setVisibility(Component.VISIBLE);
// Set the TextField style when there is an error.
errorText.setText("Please Register First");
ShapeElement errorElement = new ShapeElement(this, ResourceTable.Graphic_background_text_field_error);
TextField textField = (TextField) findComponentById(ResourceTable.Id_name_textField);
textField.setBackground(errorElement);
// Cause the TextField to lose focus.
textField.clearFocus();
}
else {
if(username.equals(tfUsername.getText()) && password.equals(tfPassword.getText())){
HiLog.debug(LABEL, "Login Successful");
new ToastDialog(getApplicationContext()).setText("Login Successful").show();
} else {
errorText.setVisibility(Component.VISIBLE);
// Set the TextField style when there is an error.
errorText.setText("Invalid Credential");
ShapeElement errorElement = new ShapeElement(this, ResourceTable.Graphic_background_text_field_error);
TextField textField = (TextField) findComponentById(ResourceTable.Id_name_textField);
textField.setBackground(errorElement);
// Cause the TextField to lose focus.
textField.clearFocus();
}
}
tfUsername.clearFocus();
tfPassword.clearFocus();
});
}
private void initializeLightPreferenceDB() {
DatabaseHelper databaseHelper = new DatabaseHelper(getApplicationContext() );
String fileName = "MyLightPreferenceDB";
preferences = databaseHelper.getPreferences(fileName);
}
@Override
public void onActive() {
super.onActive();
}
@Override
public void onForeground(Intent intent) {
super.onForeground(intent);
}
}
ability_main.xml contains the UI design.
XML:
<?xml version="1.0" encoding="utf-8"?>
<DirectionalLayout
xmlns:ohos="http://schemas.huawei.com/res/ohos"
ohos:width="match_parent"
ohos:height="match_parent"
ohos:background_element="$graphic:background_ability_main"
ohos:orientation="vertical">
<StackLayout
ohos:top_margin="50vp"
ohos:width="match_parent"
ohos:height="match_content"
ohos:layout_alignment="center">
<TextField
ohos:id="$+id:name_textField"
ohos:width="150vp"
ohos:height="35vp"
ohos:multiple_lines="false"
ohos:left_padding="24vp"
ohos:right_padding="24vp"
ohos:top_padding="8vp"
ohos:bottom_padding="8vp"
ohos:text_size="10fp"
ohos:layout_alignment="center"
ohos:text_alignment="center_vertical"
ohos:background_element="$graphic:background_text_field"
ohos:hint="Enter Username" />
<Text
ohos:visibility="hide"
ohos:id="$+id:error_tip_text"
ohos:width="150vp"
ohos:height="35vp"
ohos:top_padding="8vp"
ohos:bottom_padding="8vp"
ohos:right_margin="24vp"
ohos:text="Incorrect account or password"
ohos:text_size="10fp"
ohos:text_color="red"
ohos:layout_alignment="right"/>
</StackLayout>
<TextField
ohos:top_margin="10vp"
ohos:id="$+id:password_text_field"
ohos:width="150vp"
ohos:height="35vp"
ohos:multiple_lines="false"
ohos:left_padding="24vp"
ohos:right_padding="24vp"
ohos:top_padding="8vp"
ohos:bottom_padding="8vp"
ohos:text_size="10fp"
ohos:layout_alignment="center"
ohos:text_alignment="center_vertical"
ohos:background_element="$graphic:background_text_field"
ohos:hint="Enter password" />
<Button
ohos:top_margin="15vp"
ohos:id="$+id:register_button"
ohos:width="120vp"
ohos:height="25vp"
ohos:background_element="$graphic:background_btn"
ohos:text="Register"
ohos:text_color="#ffffff"
ohos:text_size="10fp"
ohos:layout_alignment="horizontal_center"/>
<Button
ohos:top_margin="15vp"
ohos:id="$+id:signin_button"
ohos:width="120vp"
ohos:height="25vp"
ohos:background_element="$graphic:background_btn"
ohos:text="Log in"
ohos:text_color="#ffffff"
ohos:text_size="10fp"
ohos:layout_alignment="horizontal_center"/>
</DirectionalLayout>
We will use background_ability_main.xml to define the background color and shape of the root DirectionalLayout in ability_main.xml.
XML:
<?xml version="1.0" encoding="UTF-8" ?>
<shape xmlns:ohos="http://schemas.huawei.com/res/ohos"
ohos:shape="rectangle">
<solid
ohos:color="#192841"/>
</shape>
background_btn.xml is used to define the shape and background color of the buttons.
XML:
<?xml version="1.0" encoding="UTF-8" ?>
<shape xmlns:ohos="http://schemas.huawei.com/res/ohos"
ohos:shape="rectangle">
<corners
ohos:radius="35"/>
<solid
ohos:color="#7700cf"/>
</shape>
background_text_field.xml is used to define the shape and background color of the text fields.
XML:
<?xml version="1.0" encoding="UTF-8" ?>
<shape xmlns:ohos="http://schemas.huawei.com/res/ohos"
ohos:shape="rectangle">
<solid
ohos:color="#192841"/>
</shape>
We have defined the error style for the text fields in background_text_field_error.xml.
XML:
<?xml version="1.0" encoding="UTF-8" ?>
<shape xmlns:ohos="http://schemas.huawei.com/res/ohos"
ohos:shape="rectangle">
<corners
ohos:radius="40"/>
<solid
ohos:color="gray"/>
<stroke
ohos:color="#E74C3C"
ohos:width="6"/>
</shape>
Tips and Tricks
1. flush() and flushSync() are used to write data asynchronously and synchronously, respectively.
2. To obtain the targetContext and srcContext in the preceding code, call the getApplicationContext() method in the AbilitySlice or Ability class.
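3. The same Preferences object can also store other primitive types; the sketch below assumes the putInt/getInt and putBoolean/getBoolean variants of the Preferences API, and the key names are only examples.
Java:
// Store an int and a boolean alongside the string credentials.
preferences.putInt("loginCount", 1);
preferences.putBoolean("rememberMe", true);
preferences.flushSync();
// Read them back, providing defaults in case the keys do not exist yet.
int loginCount = preferences.getInt("loginCount", 0);
boolean rememberMe = preferences.getBoolean("rememberMe", false);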
Conclusion
This article focused on the Harmony OS lightweight preference database, which is very helpful for storing small amounts of data. It explained how to integrate preferences into a simple smartwatch login app that stores the user's credentials.
If you found this tutorial helpful, then help us by SHARING this post. Thank You!
Reference
Harmony Official document
Original Source
This article is originally from HUAWEI Developer Forum
Forum link: https://forums.developer.huawei.com/forumPortal/en/home
HiAi Image Super Resolution
Upscales an image or reduces image noise and improves image details without changing the resolution.
Based on AI deep learning for CV (Computer Vision)
Utilizes the Huawei NPU (Neural Processing Unit), up to 50X faster than the CPU
1X and 3X super-resolution produce clearer images and reduce JPEG compression noise
You can check the official documentation about HiAI Image Super Resolution.
Huawei's continuous investment in NPU technology
Huawei phones that support HiAI:
Software: Huawei EMUI 9.0 and above
Hardware: Kirin 970, 810, 820, 985 and 990 chipsets
Codelab
https://developer.huawei.com/consumer/en/codelab/HiAIImageSuperresolution/index.html#0
You can also follow the codelab to implement HiAI image super resolution with the help of the DevEco IDE plugin in Android Studio.
Project: (HiAi Image Super Resolution)
In this article, we are going to build a project that implements HiAI Image Super Resolution to improve the quality of low-resolution images, such as the thumbnails used in most applications.
1. Implementation:
Download the vision-oversea-release.aar package in the Huawei AI Engine SDKs from the Huawei developer community.
Copy the downloaded vision-oversea-release.aar package to the app/libs directory of the project.
Add the following code to build.gradle in the app directory of the project to add vision-oversea-release.aar to the project. A dependency on the Gson library must also be added, because the conversion of parameters and results between JSON and Java classes inside vision-oversea-release.aar depends on the Gson library.
Code:
repositories {
flatDir {
dirs 'libs'
}
}
dependencies {
implementation fileTree(dir: 'libs', include: ['*.jar'])
implementation(name: 'vision-oversea-release', ext: 'aar')
implementation 'com.google.code.gson:gson:2.8.6'
}
2. Assets:
In this section we add some low-resolution images to the "assets/material/image_super_resolution" directory, so that they can later be fetched from the local assets directory for optimization.
3. Design ListView:
In this section we design a ListView in our layout to show the original and optimized images.
activity_main.xml
Code:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:orientation="vertical"
tools:context=".MainActivity">
<LinearLayout
android:id="@+id/linearLayout"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_marginTop="1pt"
android:layout_marginBottom="5pt"
android:orientation="horizontal"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toTopOf="parent">
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_weight="1"
android:gravity="center"
android:text="Original Image"
android:textSize="24sp" />
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_weight="1"
android:gravity="center"
android:text="Improved Image"
android:textSize="24sp" />
</LinearLayout>
<ListView
android:id="@+id/item_listView"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:dividerHeight="3pt"
/>
</LinearLayout>
items.xml
Code:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
android:orientation="horizontal"
android:layout_width="match_parent"
android:layout_height="match_parent"
>
<ImageView
android:id="@+id/imgOriginal"
android:layout_width="100dp"
android:layout_height="100dp"
app:srcCompat="@drawable/noimage"
android:layout_gravity="start"
android:layout_weight="1"
/>
<TextView
android:id="@+id/imgTitle"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text=" "
android:layout_gravity="center_horizontal"
android:layout_weight="0"
android:textAlignment="center"
/>
<ImageView
android:id="@+id/imgConverted"
android:layout_width="100dp"
android:layout_height="100dp"
app:srcCompat="@drawable/noimage"
android:layout_gravity="end"
android:layout_weight="1"
app:layout_constraintDimensionRatio="h,4:3"
/>
</LinearLayout>
4. Coding: (Adapter, HiAi Image Super Resolution )
Util Class:
We create a util class, AssetsFileUtil, to get all the images from the local assets directory and to load a single image as a Bitmap.
Code:
public class AssetsFileUtil {
public static Bitmap getBitmapByFilePath(Context context, String filePath){
try{
AssetManager assetManager = context.getAssets();
InputStream is = assetManager.open(filePath);
Bitmap bitmap = BitmapFactory.decodeStream(is);
return bitmap;
}catch (Exception e){
e.printStackTrace();
return null;
}
}
public static List<Bitmap> getBitmapListByDirPath(Context context, String dirPath){
List<Bitmap> list = new ArrayList<Bitmap>();
try{
AssetManager assetManager = context.getResources().getAssets();
String[] photos = assetManager.list(dirPath);
for(String photo : photos){
if(isFile(photo)){
Bitmap bitmap = getBitmapByFilePath(context,dirPath + "/" + photo);
list.add(bitmap);
}else {
List<Bitmap> childList = getBitmapListByDirPath(context,dirPath + "/" + photo);
list.addAll(childList);
}
}
}catch (Exception e){
e.printStackTrace();
}
return list;
}
public static List<String> getFileNameListByDirPath(Context context, String dirPath){
List<String> list = new ArrayList<String>();
try{
AssetManager assetManager = context.getResources().getAssets();
String[] photos = assetManager.list(dirPath);
for(String photo : photos){
if(isFile(photo)){
list.add(dirPath + "/" + photo);
}else {
List<String> childList = getFileNameListByDirPath(context,dirPath + "/" + photo);
list.addAll(childList);
}
}
}catch (Exception e){
e.printStackTrace();
}
return list;
}
public static boolean isFile(String fileName){
if(fileName.contains(".")){
return true;
}else {
return false;
}
}
}
MainActivity Class:
In this class we fetch the local images and attach the image list to our adapter.
Code:
public class MainActivity extends AppCompatActivity {
private String mDirPath;
private ArrayList<Item> itemList;
private List<String> imageList;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
itemList = new ArrayList<Item>();
getLocalImages();
// Setting Adapter and listview
ItemAdapter itemAdapter = new ItemAdapter(getApplicationContext(), R.layout.items, itemList);
ListView listView = findViewById(R.id.item_listView);
listView.setAdapter(itemAdapter);
}
public void getLocalImages(){
mDirPath ="material/image_super_resolution";
imageList = AssetsFileUtil.getFileNameListByDirPath(this,mDirPath);
for(int i=0; i<imageList.size();i++){
itemList.add(new Item(imageList.get(i), " ", imageList.get(i)));
}
}
}
Item Class:
Prepare item data class.
Code:
public class Item {
private String imgOriginal;
private String imgTitle;
private String imgConverted;
public Item(String imgOriginal, String imgTitle, String imgConverted) {
this.imgOriginal = imgOriginal;
this.imgTitle = imgTitle;
this.imgConverted = imgConverted;
}
public String getImgOriginal() {
return imgOriginal;
}
public void setImgOriginal(String imgOriginal) {
this.imgOriginal = imgOriginal;
}
public String getImgTitle() {
return imgTitle;
}
public void setImgTitle(String imgTitle) {
this.imgTitle = imgTitle;
}
public String getImgConverted() {
return imgConverted;
}
public void setImgConverted(String imgConverted) {
this.imgConverted = imgConverted;
}
}
ItemAdapter Class:
In this class we bind our image list to the layout's ImageViews and implement HiAI Image Super Resolution to optimize the image resolution.
The ItemAdapter class extends ArrayAdapter with Item as its type parameter.
Code:
public class ItemAdapter extends ArrayAdapter<Item>
Define some constants
Code:
private ArrayList<Item> itemList;
private final static int SUPERRESOLUTION_RESULT = 110;
private Bitmap bitmapOriginal;
private Bitmap bitmapConverted;
ImageView imgOriginal;
ImageView imgConverted;
private String TAG = "ItemAdapter";
Prepare the constructor for the adapter class
Code:
public ItemAdapter(@NonNull Context context, int resource, @NonNull ArrayList<Item> itemList) {
super(context, resource, itemList);
this.itemList = itemList;
}
Define the initHiAI function, which connects to the HiAI service and is notified when the service is connected or disconnected
Code:
/**
* init HiAI interface
*/
private void initHiAI() {
/** Initialize with the VisionBase static class and asynchronously get the connection of the service */
VisionBase.init(getContext(), new ConnectionCallback() {
@Override
public void onServiceConnect() {
/** This callback method is invoked when the service connection is successful; you can do the initialization of the detector class, mark the service connection status, and so on */
}
@Override
public void onServiceDisconnect() {
/** When the service is disconnected, this callback method is called; you can choose to reconnect the service here, or to handle the exception*/
}
});
}
Define the setHiAi function to run the HiAI super-resolution operation on the original image and generate the optimized Bitmap.
Code:
/**
* Capability Interfaces
*
* @return
*/
private void setHiAi() {
/** Define class detector, the context of this project is the input parameter */
ImageSuperResolution superResolution = new ImageSuperResolution(getContext());
/** Define the frame, put the bitmap that needs to detect the image into the frame*/
Frame frame = new Frame();
/** BitmapFactory.decodeFile input resource file path*/
// Bitmap bitmap = BitmapFactory.decodeFile(null);
frame.setBitmap(bitmapOriginal);
/** Define and set super-resolution parameters*/
SuperResolutionConfiguration paras = new SuperResolutionConfiguration(
SuperResolutionConfiguration.SISR_SCALE_3X,
SuperResolutionConfiguration.SISR_QUALITY_HIGH);
superResolution.setSuperResolutionConfiguration(paras);
/** Run super-resolution and get result of processing */
ImageResult result = superResolution.doSuperResolution(frame, null);
/** After the results are processed to get bitmap*/
Bitmap bmp = result.getBitmap();
/** Note: Check that the result and the Bitmap in the result are not null, and also determine whether the returned error code is 0 (0 means no error). */
this.bitmapConverted = bmp;
handler.sendEmptyMessage(SUPERRESOLUTION_RESULT);
}
Define a Handler that attaches the optimized Bitmap to the ImageView once the optimization has completed.
Code:
private Handler handler = new Handler() {
@Override
public void handleMessage(Message msg) {
super.handleMessage(msg);
switch (msg.what) {
case SUPERRESOLUTION_RESULT:
if (bitmapConverted != null) {
imgConverted.setImageBitmap(bitmapConverted);
} else { // Set the original image
imgConverted.setImageBitmap(bitmapOriginal);
// toast("High Resolution image");
}
break;
}
}
};
Override the getView method to attach the original image to its ImageView and process the original image so the optimized image can be attached.
Code:
@NonNull
@Override
public View getView(int position, @Nullable View convertView, @NonNull ViewGroup parent) {
initHiAI();
int itemIndex = position;
if(convertView == null){
convertView = LayoutInflater.from(getContext()).inflate(R.layout.items,parent, false);
}
imgOriginal = convertView.findViewById(R.id.imgOriginal);
TextView imgTitle = convertView.findViewById(R.id.imgTitle);
imgConverted = convertView.findViewById(R.id.imgConverted);
bitmapOriginal = AssetsFileUtil.getBitmapByFilePath(imgOriginal.getContext(), itemList.get(itemIndex).getImgConverted());
imgOriginal.setImageBitmap(bitmapOriginal);
bitmapConverted =AssetsFileUtil.getBitmapByFilePath(imgConverted.getContext(), itemList.get(itemIndex).getImgOriginal());
imgConverted.setImageBitmap(bitmapConverted);
int height = bitmapOriginal.getHeight();
int width = bitmapOriginal.getWidth();
Log.e(TAG, "width:" + width + ";height:" + height);
if (width <= 800 && height <= 600) {
new Thread() {
@Override
public void run() {
setHiAi();
}
}.start();
} else {
toast("Width and height of the image cannot exceed 800*600");
}
imgTitle.setText(itemList.get(itemIndex).getImgTitle());
return convertView;
}
public void toast(String text) {
Toast.makeText(getContext(), text, Toast.LENGTH_SHORT).show();
}
The coding section is complete; now run your project and check the output of the image optimization performed by HiAI Image Super Resolution.
5. Result
For more information like this, you can visit the HUAWEI Developer Forum.
A geofence is a virtual perimeter set around a real geographic area. By combining the user's position with a geofence perimeter, it is possible to know whether the user is inside the geofence, or is entering or exiting the area.
In this article, we will discuss how to use the geofence to notify the user when the device enters/exits an area using the HMS Location Kit in a Xamarin.Android application. We will also add and customize HuaweiMap, which includes drawing circles, adding pointers, and using nearby searches in search places. We are going to learn how to use the below features together:
Geofence
Reverse Geocode
HuaweiMap
Nearby Search
Project Setup
First of all, you need to be a registered Huawei Mobile Developer and create an application in Huawei App Console in order to use HMS Map Location and Site Kits. You can follow these steps to complete the configuration that required for development:
Configuring App Information in AppGallery Connect
Creating Xamarin Android Binding Libraries
Integrating the HMS Map Kit Libraries for Xamarin
Integrating the HMS Location Kit Libraries for Xamarin
Integrating the HMS Site Kit Libraries for Xamarin
Integrating the HMS Core SDK
Setting Package in Xamarin
When we create our Xamarin.Android application in the steps above, we need to make sure that the package name is the same as the one we entered in the console. Also, don't forget to enable the required kits in the console.
Manifest & Permissions
We have to update the application’s manifest file by declaring permissions that we need as shown below.
Code:
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.ACCESS_BACKGROUND_LOCATION" />
Also, add a meta-data element inside the application tag to embed your app ID; it is required for this app to authenticate with Huawei's cloud server. You can find this ID in the agconnect-services.json file.
Code:
<meta-data android:name="com.huawei.hms.client.appid" android:value="appid=YOUR_APP_ID" />
Request location permission
Request runtime permissions in our app in order to use Location and Map Services. The following code checks whether the user has granted the required location permissions in Main Activity.
Code:
private void RequestPermissions()
{
if (ContextCompat.CheckSelfPermission(this, Manifest.Permission.AccessCoarseLocation) != (int)Permission.Granted ||
ContextCompat.CheckSelfPermission(this, Manifest.Permission.AccessFineLocation) != (int)Permission.Granted ||
ContextCompat.CheckSelfPermission(this, Manifest.Permission.WriteExternalStorage) != (int)Permission.Granted ||
ContextCompat.CheckSelfPermission(this, Manifest.Permission.ReadExternalStorage) != (int)Permission.Granted ||
ContextCompat.CheckSelfPermission(this, Manifest.Permission.Internet) != (int)Permission.Granted)
{
ActivityCompat.RequestPermissions(this,
new System.String[]
{
Manifest.Permission.AccessCoarseLocation,
Manifest.Permission.AccessFineLocation,
Manifest.Permission.WriteExternalStorage,
Manifest.Permission.ReadExternalStorage,
Manifest.Permission.Internet
},
100);
}
else
GetCurrentPosition();
}
public override void OnRequestPermissionsResult(int requestCode, string[] permissions, [GeneratedEnum] Android.Content.PM.Permission[] grantResults)
{
if (requestCode == 100)
{
foreach (var item in permissions)
{
if (ContextCompat.CheckSelfPermission(this, item) == Permission.Denied)
{
if (ActivityCompat.ShouldShowRequestPermissionRationale(this, permissions[0]) || ActivityCompat.ShouldShowRequestPermissionRationale(this, permissions[1]))
Snackbar.Make(FindViewById<RelativeLayout>(Resource.Id.mainLayout), "You need to grant permission to use location services.", Snackbar.LengthLong).SetAction("Ask again", v => RequestPermissions()).Show();
else
Toast.MakeText(this, "You need to grant location permissions in settings.", ToastLength.Long).Show();
}
else
GetCurrentPosition();
}
}
else
{
base.OnRequestPermissionsResult(requestCode, permissions, grantResults);
}
}
Add a Map
Within our UI, a map will be represented by either a MapFragment or MapView object. We will use the MapFragment object in this sample.
Add a <fragment> element to your activity’s layout file, activity_main.xml. This element defines a MapFragment to act as a container for the map and to provide access to the HuaweiMap object.
Also, let's add the other controls used throughout this sample: two Buttons and a SeekBar. One button is for clearing the map and the other for searching nearby locations, while the SeekBar helps us set the radius of the geofence.
Code:
<RelativeLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:map="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:id="@+id/mainLayout"
android:layout_width="match_parent"
android:layout_height="match_parent">
<fragment
android:id="@+id/mapfragment"
class="com.huawei.hms.maps.MapFragment"
android:layout_width="match_parent"
android:layout_height="match_parent"/>
<LinearLayout
android:orientation="vertical"
android:layout_width="wrap_content"
android:layout_height="match_parent">
<Button
android:text="Get Geofence List"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_margin="5dp"
android:padding="5dp"
android:background="@drawable/abc_btn_colored_material"
android:textColor="@android:color/white"
android:id="@+id/btnGetGeofenceList" />
<Button
android:text="Clear Map"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_margin="5dp"
android:background="@drawable/abc_btn_colored_material"
android:textColor="@android:color/white"
android:id="@+id/btnClearMap" />
</LinearLayout>
<SeekBar
android:visibility="invisible"
android:min="30"
android:layout_alignParentBottom="true"
android:layout_marginBottom="20dp"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:id="@+id/radiusBar" />
</RelativeLayout>
In our activity’s OnCreate method, set the layout file as the content view, load AGConnectService, set button’s click events, and initialize FusedLocationProviderClient. Get a handle to the map fragment by calling FragmentManager.FindFragmentById. Then use GetMapAsync to register for the map callback.
Also, implement the IOnMapReadyCallback interface to MainActivity and override OnMapReady method which is triggered when the map is ready to use.
Code:
public class MainActivity : AppCompatActivity, IOnMapReadyCallback
{
MapFragment mapFragment;
HuaweiMap hMap;
Marker marker;
Circle circle;
SeekBar radiusBar;
FusedLocationProviderClient fusedLocationProviderClient;
GeofenceModel selectedCoordinates;
List<Marker> searchMarkers;
private View search_view;
private AlertDialog alert;
public static LatLng CurrentPosition { get; set; }
protected override void OnCreate(Bundle savedInstanceState)
{
base.OnCreate(savedInstanceState);
Xamarin.Essentials.Platform.Init(this, savedInstanceState);
SetContentView(Resource.Layout.activity_main);
AGConnectServicesConfig config = AGConnectServicesConfig.FromContext(ApplicationContext);
fusedLocationProviderClient = LocationServices.GetFusedLocationProviderClient(this);
mapFragment = (MapFragment)FragmentManager.FindFragmentById(Resource.Id.mapfragment);
mapFragment.GetMapAsync(this);
FindViewById<Button>(Resource.Id.btnGeoWithAddress).Click += btnGeoWithAddress_Click;
FindViewById<Button>(Resource.Id.btnClearMap).Click += btnClearMap_Click;
radiusBar = FindViewById<SeekBar>(Resource.Id.radiusBar);
radiusBar.ProgressChanged += radiusBar_ProgressChanged;
RequestPermissions();
}
public void OnMapReady(HuaweiMap map)
{
hMap = map;
hMap.UiSettings.MyLocationButtonEnabled = true;
hMap.UiSettings.CompassEnabled = true;
hMap.UiSettings.ZoomControlsEnabled = true;
hMap.UiSettings.ZoomGesturesEnabled = true;
hMap.MyLocationEnabled = true;
hMap.MapClick += HMap_MapClick;
if (selectedCoordinates == null)
selectedCoordinates = new GeofenceModel { LatLng = CurrentPosition, Radius = 30 };
}
}
As you can see above, with the UiSettings property of the HuaweiMap object we enable the my-location button, the compass, and so on. The other available properties are listed below:
Code:
public bool CompassEnabled { get; set; }
public bool IndoorLevelPickerEnabled { get; set; }
public bool MapToolbarEnabled { get; set; }
public bool MyLocationButtonEnabled { get; set; }
public bool RotateGesturesEnabled { get; set; }
public bool ScrollGesturesEnabled { get; set; }
public bool ScrollGesturesEnabledDuringRotateOrZoom { get; set; }
public bool TiltGesturesEnabled { get; set; }
public bool ZoomControlsEnabled { get; set; }
public bool ZoomGesturesEnabled { get; set; }
Now, when the app launches, we directly get the current location and move the camera to it. In order to do that, we use the FusedLocationProviderClient that we instantiated and call the LastLocation API.
The LastLocation API returns a Task object whose result we can check by implementing the relevant success and failure listeners. In the success listener, we move the map's camera to the last known position.
Code:
private void GetCurrentPosition()
{
var locationTask = fusedLocationProviderClient.LastLocation;
locationTask.AddOnSuccessListener(new LastLocationSuccess(this));
locationTask.AddOnFailureListener(new LastLocationFail(this));
}
...
public class LastLocationSuccess : Java.Lang.Object, IOnSuccessListener
{
private MainActivity mainActivity;
public LastLocationSuccess(MainActivity mainActivity)
{
this.mainActivity = mainActivity;
}
public void OnSuccess(Java.Lang.Object location)
{
Toast.MakeText(mainActivity, "LastLocation request successful", ToastLength.Long).Show();
if (location != null)
{
MainActivity.CurrentPosition = new LatLng((location as Location).Latitude, (location as Location).Longitude);
mainActivity.RepositionMapCamera((location as Location).Latitude, (location as Location).Longitude);
}
}
}
To change the position of the camera, we must specify where we want to move the camera, using a CameraUpdate. The Map Kit allows us to create many different types of CameraUpdate using CameraUpdateFactory.
Code:
public static CameraUpdate NewCameraPosition(CameraPosition p0);
public static CameraUpdate NewLatLng(LatLng p0);
public static CameraUpdate NewLatLngBounds(LatLngBounds p0, int p1);
public static CameraUpdate NewLatLngBounds(LatLngBounds p0, int p1, int p2, int p3);
public static CameraUpdate NewLatLngZoom(LatLng p0, float p1);
public static CameraUpdate ScrollBy(float p0, float p1);
public static CameraUpdate ZoomBy(float p0);
public static CameraUpdate ZoomBy(float p0, Point p1);
public static CameraUpdate ZoomIn();
public static CameraUpdate ZoomOut();
public static CameraUpdate ZoomTo(float p0);
As we saw above, there are several methods for changing the camera position. In short, these are:
1. NewLatLng: Change camera’s latitude and longitude, while keeping other properties
2. NewLatLngZoom: Changes the camera’s latitude, longitude, and zoom, while keeping other properties
3. NewCameraPosition: Full flexibility in changing the camera position
We are going to use NewCameraPosition. A CameraPosition can be obtained with a CameraPosition.Builder. And then we can set target, bearing, tilt and zoom properties.
Code:
public void RepositionMapCamera(double lat, double lng)
{
var cameraPosition = new CameraPosition.Builder();
cameraPosition.Target(new LatLng(lat, lng));
cameraPosition.Zoom(1000);
cameraPosition.Bearing(45);
cameraPosition.Tilt(20);
CameraUpdate cameraUpdate = CameraUpdateFactory.NewCameraPosition(cameraPosition.Build());
hMap.MoveCamera(cameraUpdate);
}
Creating Geofence
Now that we’ve created the map, we can now start to create geofences using it. In this article, we will choose the location where we want to set geofence in two different ways. The first is to select the location by clicking on the map, and the second is to search for nearby places by keyword and select one after placing them on the map with the marker.
Set the geofence location by clicking on the map
It is always easier to select a location by seeing it. After this section, we will be able to set a geofence around the point that is clicked on the map. We attached the Click event to our map in the OnMapReady method. In this Click event, we will add a marker at the clicked point and draw a circle around it.
After clicking the map, we will add a circle, a marker, and a custom info window to that point like this:
Also, we will use the Seekbar at the bottom of the page to adjust the circle radius.
We set the selectedCoordinates variable when adding the marker. Let's create the following methods to handle the map click and add the marker:
Code:
private void HMap_MapClick(object sender, HuaweiMap.MapClickEventArgs e)
{
selectedCoordinates.LatLng = e.P0;
if (circle != null)
{
circle.Remove();
circle = null;
}
AddMarkerOnMap();
}
void AddMarkerOnMap()
{
if (marker != null) marker.Remove();
var markerOption = new MarkerOptions()
.InvokeTitle("You are here now")
.InvokePosition(selectedCoordinates.LatLng);
hMap.SetInfoWindowAdapter(new MapInfoWindowAdapter(this));
marker = hMap.AddMarker(markerOption);
bool isInfoWindowShown = marker.IsInfoWindowShown;
if (isInfoWindowShown)
marker.HideInfoWindow();
else
marker.ShowInfoWindow();
}
With MarkerOptions we can set the title and position properties. For creating a custom info window, there is the SetInfoWindowAdapter method. We add a MapInfoWindowAdapter class to our project to render the custom info window and implement the HuaweiMap.IInfoWindowAdapter interface in it.
This interface provides a custom information window view of a marker and contains two methods:
Code:
View GetInfoContents(Marker marker);
View GetInfoWindow(Marker marker);
Whenever an information window needs to be displayed for a marker, the methods provided by this adapter are called.
Now let's create a custom info window layout and name it map_info_view.xml.
Code:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:orientation="vertical"
android:layout_width="match_parent"
android:layout_height="match_parent">
<Button
android:text="Add geofence"
android:width="100dp"
style="@style/Widget.AppCompat.Button.Colored"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:id="@+id/btnInfoWindow" />
</LinearLayout>
We return it after customizing it in the GetInfoWindow() method. The full code of the adapter is below:
Code:
internal class MapInfoWindowAdapter : Java.Lang.Object, HuaweiMap.IInfoWindowAdapter
{
private MainActivity activity;
private GeofenceModel selectedCoordinates;
private View addressLayout;
public MapInfoWindowAdapter(MainActivity currentActivity)
{
activity = currentActivity;
}
public View GetInfoContents(Marker marker)
{
return null;
}
public View GetInfoWindow(Marker marker)
{
if (marker == null)
return null;
//update everytime, drawcircle need it
selectedCoordinates = new GeofenceModel { LatLng = new LatLng(marker.Position.Latitude, marker.Position.Longitude) };
View mapInfoView = activity.LayoutInflater.Inflate(Resource.Layout.map_info_view, null);
var radiusBar = activity.FindViewById<SeekBar>(Resource.Id.radiusBar);
if (radiusBar.Visibility == Android.Views.ViewStates.Invisible)
{
radiusBar.Visibility = Android.Views.ViewStates.Visible;
radiusBar.SetProgress(30, true);
}
activity.FindViewById<SeekBar>(Resource.Id.radiusBar)?.SetProgress(30, true);
activity.DrawCircleOnMap(selectedCoordinates);
Button button = mapInfoView.FindViewById<Button>(Resource.Id.btnInfoWindow);
button.Click += btnInfoWindow_ClickAsync;
return mapInfoView;
}
}
This is not the end. For full content, you can visit https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0201357111605920240&fid=0101187876626530001
Panoramas made with a specialized rotating-lens camera and a curved daguerreotype plate date from the 1840s onwards.
In 1899, Kodak introduced the #4 Kodak Panorama panoramic camera for amateur photographers. These photographs were about 12 inches (roughly 30 cm) long and had a 180° field of view.
The story and the evolution of photographic artistry have been beautiful.
Huawei Panorama Kit delivers an immersive and interactive viewing experience.
By integrating Panorama Kit, apps can present users with interactive, 360-degree spherical or cylindrical panoramic images, so they can immerse in exciting 3D environments.
Features
360-degree spherical panorama, partially 360-degree spherical panorama, and cylindrical panorama
Parsing and display of panorama images in JPG, JPEG, and PNG formats
Interactive viewing of panoramic videos
Panorama view adjustment by swiping the screen or rotating the mobile phone to any degree
Interactive viewing of 360-degree spherical panoramas shot by mainstream panoramic cameras
Flexible use of panorama services, such as displaying panoramas in a certain area of your app, made possible with the in-app panorama display APIs.
Note: The size of a 2D image to be viewed must not be greater than 20 MB, and its resolution must be less than 16384 x 8192.
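A rough way to guard against these limits before handing an image to the kit is sketched below; the class and method names are hypothetical, and it assumes the image is available as a local file:
Code:
import android.graphics.BitmapFactory;
import java.io.File;

public class PanoramaImageCheck {
    // Illustrative helper: true if the image respects the limits from the note above.
    public static boolean isDisplayable(String filePath) {
        // Read only the image bounds, without decoding the full bitmap into memory.
        BitmapFactory.Options options = new BitmapFactory.Options();
        options.inJustDecodeBounds = true;
        BitmapFactory.decodeFile(filePath, options);
        boolean resolutionOk = options.outWidth < 16384 && options.outHeight < 8192;
        // The file itself must not be larger than 20 MB.
        boolean sizeOk = new File(filePath).length() <= 20L * 1024 * 1024;
        return resolutionOk && sizeOk;
    }
}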
Development Overview
Prerequisite
1. Must have a Huawei Developer Account
2. Must have Android Studio 3.0 or later
3. Must have a Huawei phone with HMS 4.0.0.300 or later.
Software Requirements
1. Java SDK 1.7 or later
2. Android 5.0 or later
Preparation
1. Create an app or project in Huawei AppGallery Connect.
2. Provide the SHA key and app package name of the project in the App Information section, and enable the required API.
3. Create an Android project.
Note: The Panorama SDK can be called directly by devices without connecting to AppGallery Connect, so it is not mandatory to download and integrate the agconnect-services.json file.
Integration
1. Add the following to the project-level build.gradle file, under both buildscript/repositories and allprojects/repositories.
Code:
maven { url 'http://developer.huawei.com/repo/' }
2. Add the following to the app-level build.gradle file, under dependencies.
Code:
implementation 'com.huawei.hms:panorama:5.0.2.301'
Adding permissions
Code:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
Development Process
The sample application is a tourism application.
It lists glorious places to visit around the world and provides a 360-degree view of their beauty using the ring-type panorama API of the Panorama Kit.
The application has only one module as of now, which displays the 7 Wonders of the World.
To achieve this, the classes below are created.
activity_main.xml
This layout holds the RecyclerView that lists all the 7 Wonders of the World.
Code:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:orientation="horizontal"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:gravity="center_vertical"
android:paddingLeft="8dp"
android:paddingRight="8dp">
<androidx.recyclerview.widget.RecyclerView
android:id="@+id/recycler_view"
android:layout_width="match_parent"
android:layout_height="match_parent" />
</LinearLayout>
card_layout.xml
This layout holds the card view design that is used for each item of the RecyclerView.
Code:
<?xml version="1.0" encoding="utf-8"?>
<androidx.cardview.widget.CardView
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:id="@+id/card_view"
android:layout_marginLeft="-3dp"
android:layout_marginRight="0dp"
android:layout_marginBottom="5dp"
app:cardBackgroundColor="#E5E5E5"
app:cardCornerRadius="8dp"
app:cardElevation="3dp"
app:contentPadding="4dp" >
<LinearLayout
android:layout_width="match_parent"
android:layout_height="match_parent"
android:orientation="horizontal"
android:id="@+id/rootLayout">
<ImageView
android:id="@+id/item_image"
android:layout_width="100dp"
android:layout_height="100dp"
android:layout_margin="5dp"
android:scaleType="fitXY"/>
<LinearLayout
android:layout_width="match_parent"
android:layout_height="match_parent"
android:orientation="vertical"
android:background="@drawable/gradient"
android:layout_margin="5dp">
<TextView
android:id="@+id/name"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:gravity="center"
android:paddingLeft="5dp"
android:paddingRight="5dp"
android:text="7 Wonders"
android:textColor="#ff25"
android:textSize="18sp"
android:textStyle="bold" />
<TextView
android:id="@+id/location"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginTop="50dp"
android:drawableLeft="@drawable/ic_location_on_black_24dp"
android:gravity="center|bottom"
android:maxLines="1"
android:text="India"
android:textColor="@android:color/black"
android:textSize="14sp" />
</LinearLayout>
</LinearLayout>
</androidx.cardview.widget.CardView>
MainActivity.java
This class is the launching activity for this module of the application and holds the implementation of the RecyclerView and the item click. It is also responsible for launching the panorama view of the selected item.
Code:
package com.example.panoramaview;
import android.content.Intent;
import android.net.Uri;
import android.os.Bundle;
import android.util.Log;
import android.widget.Toast;
import androidx.appcompat.app.AppCompatActivity;
import androidx.recyclerview.widget.LinearLayoutManager;
import androidx.recyclerview.widget.RecyclerView;
import com.huawei.hms.panorama.Panorama;
import com.huawei.hms.panorama.PanoramaInterface;
import com.huawei.hms.support.api.client.ResultCallback;
public class MainActivity extends AppCompatActivity implements TourRecyclerAdapter.OnTourClickListner {
private static final String LOG_TAG = "MainActivity";
RecyclerView recyclerView;
RecyclerView.LayoutManager layoutManager;
private TourRecyclerAdapter mTourRecyclerAdapter;
int id;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
recyclerView = findViewById(R.id.recycler_view);
layoutManager = new LinearLayoutManager(this);
recyclerView.setLayoutManager(layoutManager);
//Creating object for the adapter class.
mTourRecyclerAdapter = new TourRecyclerAdapter(this);
recyclerView.setAdapter(mTourRecyclerAdapter);
}
@Override
public void onTourClick(int Position) {
id = Position;
Uri uril = selectImageToConvert(Position);
Panorama.getInstance()
.loadImageInfo(this, uril, PanoramaInterface.IMAGE_TYPE_RING)
.setResultCallback(new MainActivity.ResultCallbackImpl());
}
//Callback method to handle the panorama view.
private class ResultCallbackImpl implements ResultCallback<PanoramaInterface.ImageInfoResult> {
@Override
public void onResult(PanoramaInterface.ImageInfoResult panoramaResult) {
if (panoramaResult == null) {
logAndToast("panoramaResult is null");
return;
}
if (panoramaResult.getStatus().isSuccess()) {
Intent intent = panoramaResult.getImageDisplayIntent();
if (intent != null) {
startActivity(intent);
} else {
logAndToast("unknown error, view intent is null");
}
} else {
logAndToast("error status : " + panoramaResult.getStatus());
}
}
}
//Method for logging.
private void logAndToast(String message) {
Log.e(LOG_TAG, message);
Toast.makeText(MainActivity.this, message, Toast.LENGTH_LONG).show();
}
//Decides the image click drawable
public Uri selectImageToConvert(int Position) {
Uri uri = null;
try{
switch (Position) {
case 0:
uri = Uri.parse("android.resource://" + getPackageName() + "/" + R.drawable.tajmahal);
break;
case 1:
uri = Uri.parse("android.resource://" + getPackageName() + "/" + R.drawable.christ_the_redeemer);
break;
case 2:
uri = Uri.parse("android.resource://" + getPackageName() + "/" + R.drawable.chichenitza);
break;
case 3:
uri = Uri.parse("android.resource://" + getPackageName() + "/" + R.drawable.greatwallchina);
break;
case 4:
uri = Uri.parse("android.resource://" + getPackageName() + "/" + R.drawable.petrajordan);
break;
case 5:
uri = Uri.parse("android.resource://" + getPackageName() + "/" + R.drawable.colosseum_rome);
break;
case 6:
uri = Uri.parse("android.resource://" + getPackageName() + "/" + R.drawable.machu_picchu);
break;
default:
throw new IllegalStateException("Unexpected value: " + Position);
}
}catch(Exception e)
{
logAndToast("Exception occurs, Please check" +e);
}
finally {
return uri;
}
}
}
TourRecyclerAdapter.java
This is the adapter class for the RecyclerView created above. It holds the implementation for binding the data.
Code:
package com.example.panoramaview;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.ImageView;
import android.widget.TextView;
import androidx.annotation.NonNull;
import androidx.recyclerview.widget.RecyclerView;
public class TourRecyclerAdapter extends RecyclerView.Adapter<TourRecyclerAdapter.ViewHolder>{
private OnTourClickListner mOnTourClickListner;
//Constructor to handle onclickListner
public TourRecyclerAdapter(OnTourClickListner mOnTourClickListner)
{
this.mOnTourClickListner = mOnTourClickListner;
}
private String[] titles = {"Taj Mahal",
"Christ The Redeemer",
"Chichen Itza",
"Great Wall China",
"Petra",
"Colosseum",
"Machu Picchu"};
private int[] images = { R.drawable.tajmahal,
R.drawable.christ_the_redeemer,
R.drawable.chichenitza,
R.drawable.greatwallchina,
R.drawable.petrajordan,
R.drawable.colosseum_rome,
R.drawable.machu_picchu};
private String[] location = new String[]{"India-Agra",
"Rio de Janeiro-Bazil",
"Mexico",
"China-Beijing",
"Jordan",
"Rome-Italy",
"Peru"
};
@NonNull
@Override
public ViewHolder onCreateViewHolder(@NonNull ViewGroup viewGroup, int i) {
View v = LayoutInflater.from(viewGroup.getContext())
.inflate(R.layout.card_layout, viewGroup, false);
ViewHolder viewHolder = new ViewHolder(v,mOnTourClickListner);
return viewHolder;
}
@Override
public void onBindViewHolder(@NonNull ViewHolder viewHolder, int i) {
viewHolder.itemTitle.setText(titles[i]);
viewHolder.itemImage.setImageResource(images[i]);
viewHolder.location.setText(location[i]);
}
@Override
public int getItemCount() {
return titles.length;
}
public class ViewHolder extends RecyclerView.ViewHolder implements View.OnClickListener {
public ImageView itemImage;
public TextView itemTitle;
public TextView location;
OnTourClickListner mOnTourClickListner;
public ViewHolder(View itemView, OnTourClickListner mOnTourClickListner) {
super(itemView);
itemImage = (ImageView) itemView.findViewById(R.id.item_image);
itemTitle = (TextView) itemView.findViewById(R.id.name);
location = (TextView) itemView.findViewById(R.id.location);
this.mOnTourClickListner = mOnTourClickListner;
itemView.setOnClickListener(this);
}
@Override
public void onClick(View view) {
mOnTourClickListner.onTourClick(getAdapterPosition());
System.out.println("inside on click 1");
}
}
//OnClickListner interface
public interface OnTourClickListner{
void onTourClick(int Position);
}
}
Results
Conclusion
This is a great application for people who want the flexibility to plan their next adventure from home by getting realistic views of famous places around the world.
References
https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides-V5/introduction-0000001050042264-V5
In this tutorial, we will be discussing and implementing the HarmonyOS MVVM Architectural pattern in our Harmony app.
This project is available on GitHub; the link can be found at the end of the article.
Table of contents
What is MVVM
Harmony MVVM example project structure
Adding dependencies
Model
Layout
Retrofit interface
ViewModel
Tips and Tricks
Conclusion
Recommended resources
What is MVVM
MVVM stands for Model, View, ViewModel:
Model: This holds the data of the application. It cannot directly talk to the View. Generally, it’s recommended to expose the data to the ViewModel through ActiveDatas (Observables ).
View: It represents the UI of the application devoid of any Application Logic. It observes the ViewModel.
ViewModel: It acts as a link between the Model and the View. It’s responsible for transforming the data from the Model. It provides data streams to the View. It also uses hooks or callbacks to update the View. It’ll ask for the data from the Model.
MVVM can be achieved in two ways:
Using Data binding
RxJava
In this tutorial we will implement MVVM in Harmony using RxJava, as data binding is still under development and not ready to use in Harmony.
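To make the idea concrete before diving into the project, here is a minimal, illustrative sketch of a ViewModel that exposes its state through an RxJava stream (this is not the project's final ViewModel; the class name, fields and validation rules are placeholders):
Code:
import io.reactivex.Observable;
import io.reactivex.subjects.BehaviorSubject;

public class LoginViewModel {
    // The View never updates itself directly; it subscribes to this stream instead.
    private final BehaviorSubject<Boolean> loginResult = BehaviorSubject.create();

    public Observable<Boolean> getLoginResult() {
        return loginResult;
    }

    // Called by the View; the ViewModel validates the input and pushes the result into the stream.
    public void login(String email, String password) {
        boolean valid = email != null && email.contains("@")
                && password != null && password.length() >= 6;
        loginResult.onNext(valid);
    }
}
The View (our ability slice) would simply subscribe to getLoginResult() and show or hide the error texts accordingly.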
Harmony MVVM example project structure
We will create packages by feature. This makes the code more modular and manageable.
Adding the Dependencies
Add the following dependencies in your module level build.gradle file:
Code:
dependencies {
//[...]
//RxJava
implementation "io.reactivex.rxjava2:rxjava:2.2.17"
//retrofit
implementation 'com.squareup.retrofit2:retrofit:2.6.0'
implementation "com.squareup.retrofit2:converter-moshi:2.6.0"
implementation 'com.squareup.retrofit2:converter-gson:2.9.0'
//RxJava adapter for retrofit
implementation 'com.squareup.retrofit2:adapter-rxjava2:2.7.1'
}
Model
The Model would hold the user’s email and password. The following User.java class does it:
Code:
package com.megaache.mvvmdemo.model;
public class User {
private String email;
private String password;
public User(String email, String password) {
this.email = email;
this.password = password;
}
public void setEmail(String email) {
this.email = email;
}
public String getEmail() {
return email;
}
public void setPassword(String password) {
this.password = password;
}
public String getPassword() {
return password;
}
@Override
public String toString() {
return "User{" +
"email='" + email + '\'' +
", password='" + password + '\'' +
'}';
}
}
Layout
NOTE: For this tutorial, I have decided to create the layout for smartwatch devices; however, it will work fine on all devices, you just need to re-arrange the components and modify the alignment.
The layout will consist of a login button, two text fields and two error texts; each error text will be shown or hidden depending on the value of the text field above it after the login button is clicked. The final UI will look like the screenshot below:
Before we create the layout, let's add some colors.
First create the file color.json under resources/base/element and add the following JSON content:
Code:
{
"color": [
{
"name": "primary",
"value": "#283148"
},
{
"name": "primaryDark",
"value": "#283148"
},
{
"name": "accent",
"value": "#06EBBF"
},
{
"name": "red",
"value": "#FF406E"
}
]
}
Then, let's design background elements for the text fields and the button.
Create the files background_text_field.xml and background_button.xml under resources/base/graphic, as shown in the screenshot below.
Then add the following code:
background_text_field.xml:
Code:
<?xml version="1.0" encoding="UTF-8" ?>
<shape
xmlns:ohos="http://schemas.huawei.com/res/ohos"
ohos:shape="rectangle">
<corners
ohos:radius="20"/>
<solid
ohos:color="#ffffff"/>
<stroke
ohos:width="2"
ohos:color="$color:accent"/>
</shape>
background_button.xml:
Code:
<?xml version="1.0" encoding="UTF-8" ?>
<shape
xmlns:ohos="http://schemas.huawei.com/res/ohos"
ohos:shape="rectangle">
<corners
ohos:radius="20"/>
<solid
ohos:color="$color:accent"/>
</shape>
Now let's create the background element for the main layout; let's call it background_ability_login.xml:
Code:
<?xml version="1.0" encoding="UTF-8" ?>
<shape xmlns:ohos="http://schemas.huawei.com/res/ohos"
ohos:shape="rectangle">
<solid
ohos:color="$color:primaryDark"/>
</shape>
Finally, let’s create the layout file ability_login.xml:
Code:
<?xml version="1.0" encoding="utf-8"?>
<ScrollView
xmlns:ohos="http://schemas.huawei.com/res/ohos"
ohos:id="$+id:scrollview"
ohos:height="match_parent"
ohos:width="match_parent"
ohos:background_element="$graphic:background_ability_login"
ohos:layout_alignment="horizontal_center"
ohos:rebound_effect="true"
>
<DirectionalLayout
ohos:height="match_content"
ohos:width="match_parent"
ohos:orientation="vertical"
ohos:padding="20vp"
>
<DirectionalLayout
ohos:height="match_content"
ohos:width="match_parent"
ohos:layout_alignment="center"
ohos:orientation="vertical"
>
<TextField
ohos:id="$+id:tf_email"
ohos:height="match_content"
ohos:width="match_parent"
ohos:background_element="$graphic:background_text_field"
ohos:hint="email"
ohos:left_padding="10vp"
ohos:min_height="40vp"
ohos:multiple_lines="false"
ohos:text_alignment="vertical_center"
ohos:text_color="black"
ohos:text_input_type="pattern_number"
ohos:text_size="15fp"/>
<Text
ohos:id="$+id:t_email_invalid"
ohos:height="match_content"
ohos:width="match_content"
ohos:layout_alignment="center"
ohos:text="invalid email"
ohos:text_color="$color:red"
ohos:text_size="15fp"
/>
</DirectionalLayout>
<DirectionalLayout
ohos:height="match_content"
ohos:width="match_parent"
ohos:layout_alignment="center"
ohos:orientation="vertical"
ohos:top_margin="10vp">
<TextField
ohos:id="$+id:tf_password"
ohos:height="match_content"
ohos:width="match_parent"
ohos:background_element="$graphic:background_text_field"
ohos:hint="password"
ohos:left_padding="10vp"
ohos:min_height="40vp"
ohos:multiple_lines="false"
ohos:text_alignment="vertical_center"
ohos:text_color="black"
ohos:text_input_type="pattern_password"
ohos:text_size="15fp"
/>
<Text
ohos:id="$+id:t_password_invalid"
ohos:height="match_content"
ohos:width="match_content"
ohos:layout_alignment="center"
ohos:padding="0vp"
ohos:text="invalid password"
ohos:text_color="$color:red"
ohos:text_size="15fp"
/>
</DirectionalLayout>
<Button
ohos:id="$+id:btn_login"
ohos:height="match_content"
ohos:width="match_parent"
ohos:background_element="$graphic:background_button"
ohos:bottom_margin="30vp"
ohos:min_height="40vp"
ohos:text="login"
ohos:text_color="#fff"
ohos:text_size="18fp"
ohos:top_margin="10vp"/>
</DirectionalLayout>
</ScrollView>
Retrofit interface
Before we move to the ViewModel, we have to set up our Retrofit service and the supporting request/response classes.
To keep the project clean, I will create a class Config.java, which will hold our API URLs:
Code:
package com.megaache.mvvmdemo;
public class Config {
//todo: update base url variable with valid url
public static final String BASE_URL = "https://example.com";
public static final String API_VERSION = "/api/v1";
public static final String LOGIN_URL="auth/login";
}
Note: The URLs are just for demonstration. For the demo to work, you must replace them with valid URLs.
First, create the interface APIServices.java.
For this tutorial, we assume the login endpoint uses the POST method; you may change this depending on your API. The login method returns an Observable, which will be observed in the ViewModel using RxJava.
Code:
package com.megaache.mvvmdemo.network;
import com.megaache.mvvmdemo.Config;
import com.megaache.mvvmdemo.network.request.LoginRequest;
import com.megaache.mvvmdemo.network.response.LoginResponse;
import io.reactivex.Observable;
import retrofit2.http.Body;
import retrofit2.http.Headers;
import retrofit2.http.POST;
public interface APIServices {
@POST(Config.LOGIN_URL)
@Headers("Content-Type: application/json;charset=UTF-8")
public Observable<LoginResponse> login(@Body LoginRequest loginRequest);
}
Note: The class LoginRequest, which you will see later in this tutorial, must match the request that the server expects, both in the names of the variables and in their types; otherwise the server will fail to process the request.
Then, add the method createRetrofitClient() to MyApplication.java. It creates and returns a Retrofit instance that uses the Moshi converter to map JSON to our Java classes, and the RxJava2 adapter to return Observables that work with RxJava instead of the default Call class, which requires callbacks:
Code:
package com.megaache.mvvmdemo;
import com.megaache.mvvmdemo.network.APIServices;
import ohos.aafwk.ability.AbilityPackage;
import okhttp3.OkHttpClient;
import retrofit2.Retrofit;
import retrofit2.adapter.rxjava2.RxJava2CallAdapterFactory;
import retrofit2.converter.moshi.MoshiConverterFactory;
import java.util.concurrent.TimeUnit;
public class MyApplication extends AbilityPackage {
@Override
public void onInitialize() {
super.onInitialize();
}
public static APIServices createRetrofitClient() {
OkHttpClient client = new OkHttpClient.Builder()
.connectTimeout(60L, TimeUnit.SECONDS)
.build();
Retrofit retrofit = new Retrofit.Builder()
.baseUrl(Config.BASE_URL + Config.API_VERSION)
.addCallAdapterFactory(RxJava2CallAdapterFactory.create())
.addConverterFactory(MoshiConverterFactory.create()).client(client)
.build();
return retrofit.create(APIServices.class);
}
}
NOTE: For cleaner code, you can create a file RetrofitClient.java and move the method createRetrofitClient() into it.
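A minimal sketch of such a RetrofitClient.java is shown below; the class name comes from the note above, while the package choice and the lazy caching of the instance are my own additions, not part of the original project:
Code:
package com.megaache.mvvmdemo.network;

import com.megaache.mvvmdemo.Config;
import okhttp3.OkHttpClient;
import retrofit2.Retrofit;
import retrofit2.adapter.rxjava2.RxJava2CallAdapterFactory;
import retrofit2.converter.moshi.MoshiConverterFactory;
import java.util.concurrent.TimeUnit;

public class RetrofitClient {
    private static APIServices apiServices;

    // Build the Retrofit client once and reuse it for all requests.
    public static synchronized APIServices getApiServices() {
        if (apiServices == null) {
            OkHttpClient client = new OkHttpClient.Builder()
                    .connectTimeout(60L, TimeUnit.SECONDS)
                    .build();
            Retrofit retrofit = new Retrofit.Builder()
                    .baseUrl(Config.BASE_URL + Config.API_VERSION)
                    .addCallAdapterFactory(RxJava2CallAdapterFactory.create())
                    .addConverterFactory(MoshiConverterFactory.create())
                    .client(client)
                    .build();
            apiServices = retrofit.create(APIServices.class);
        }
        return apiServices;
    }
}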
Now, let's work on the login feature. We will first create the request and response classes, then move on to the ViewModel and the View.
We need LoginRequest and LoginResponse, which extend BaseRequest and BaseResponse respectively; the code is shown below.
Create BaseRequest.java:
In a real-life project, your API may expect some parameters to be sent with every request, for example accessToken, language, deviceId or pushToken, depending on your API. For this tutorial I added one field called deviceType with a static value.
Code:
package com.megaache.mvvmdemo.network.request;
public class BaseRequest {
private String deviceType;
public BaseRequest() {
deviceType = "harmony-watch";
}
public String getDeviceType() {
return deviceType;
}
public void setDeviceType(String deviceType) {
this.deviceType = deviceType;
}
}
Create the class LoginRequest.java, which extends BaseRequest and has two fields, email and password, provided by the end user:
Code:
package com.megaache.mvvmdemo.network.request;
public class LoginRequest extends BaseRequest {
private String email;
private String password;
public String getEmail() {
return email;
}
public void setEmail(String email) {
this.email = email;
}
public String getPassword() {
return password;
}
public void setPassword(String password) {
this.password = password;
}
}
Then, for the response, create BaseResponse.java first:
Code:
package com.megaache.mvvmdemo.network.response;
import java.io.Serializable;
public class BaseResponse implements Serializable {
}
Then LoginResponse.java extending BaseResponse:
Code:
package com.megaache.mvvmdemo.network.response;
import com.megaache.mvvmdemo.model.User;
import com.squareup.moshi.Json;
public class LoginResponse extends BaseResponse {
@Json(name = "user")
private User user;
@Json(name = "accessToken")
private String accessToken;
public User getUser() {
return user;
}
public void setUser(User user) {
this.user = user;
}
public String getAccessToken() {
return accessToken;
}
public void setAccessToken(String accessToken) {
this.accessToken = accessToken;
}
}
Note: This class must match the response you get from the server: both the names of the variables and their types must equal those in the JSON response, otherwise the Moshi converter will fail to map the response to the LoginResponse class.
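For reference, the mapping above expects a response shaped roughly like the following; the values are made up for illustration, and the user object simply mirrors the fields of the User model defined earlier:
Code:
{
  "user": {
    "email": "user@example.com",
    "password": ""
  },
  "accessToken": "sample-access-token"
}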
ViewModel
In the ViewModel, we wrap the data loaded with Retrofit inside the LoggedIn class of LoginViewState, and observe the states Observable defined in BaseViewModel from our Ability (or AbilitySlice). Whenever the value in states changes, the Ability is notified, regardless of whether the Ability is still alive.
The code for LoginViewState.java, which extends the empty class BaseViewState.java, and for ErrorData.java (used in LoginViewState.java) is given below:
ErrorData.java:
Code:
package com.megaache.mvvmdemo.model;
import java.io.Serializable;
public class ErrorData implements Serializable {
private String message;
private int statusCode;
public String getMessage() {
return message;
}
public void setMessage(String message) {
this.message = message;
}
public int getStatusCode() {
return statusCode;
}
public void setStatusCode(int statusCode) {
this.statusCode = statusCode;
}
}
LoginViewState.java:
Code:
package com.megaache.mvvmdemo.ui.login;
import com.megaache.mvvmdemo.base.BaseViewState;
import com.megaache.mvvmdemo.model.ErrorData;
import com.megaache.mvvmdemo.network.response.LoginResponse;
public class LoginViewState extends BaseViewState {
public static class Loading extends LoginViewState {
}
public static class Error extends LoginViewState {
private ErrorData message;
public Error(ErrorData message) {
this.message = message;
}
public void setMessage(ErrorData message) {
this.message = message;
}
public ErrorData getMessage() {
return message;
}
}
public static class LoggedIn extends LoginViewState {
private LoginResponse userDataResponse;
public LoggedIn(LoginResponse userDataResponse) {
this.userDataResponse = userDataResponse;
}
public LoginResponse getUserDataResponse() {
return userDataResponse;
}
public void setUserDataResponse(LoginResponse userDataResponse) {
this.userDataResponse = userDataResponse;
}
}
}
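The article does not list BaseViewModel.java; based on how it is used in LoginViewModel and LoginAbility below (subscribe(), getStates() and unbind()), a minimal sketch could look like the following. Treat it as an assumption rather than the original code:
Code:
package com.megaache.mvvmdemo.base;

import io.reactivex.Observable;
import io.reactivex.disposables.CompositeDisposable;
import ohos.aafwk.abilityjet.activedata.ActiveData;

public abstract class BaseViewModel<S extends BaseViewState> {
    // Stream of view states that the Ability observes through getStates().
    private final ActiveData<S> states = new ActiveData<>();
    // Keeps track of RxJava subscriptions so they can be released in unbind().
    private final CompositeDisposable disposables = new CompositeDisposable();

    public ActiveData<S> getStates() {
        return states;
    }

    // Subscribes to the given Observable and pushes every emitted state into states.
    protected void subscribe(Observable<S> observable) {
        disposables.add(observable.subscribe(states::setData, Throwable::printStackTrace));
    }

    // Called from the Ability's onStop() to drop pending subscriptions.
    public void unbind() {
        disposables.clear();
    }
}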
The code for LoginViewModel.java is given below.
When the user clicks the login button, the method sendLoginRequest() sets up the Retrofit Observable. The request is not sent until subscribe() is called, which happens when the View invokes login(). Notice that we subscribe on the Schedulers.io() scheduler, which executes the request on a background thread to avoid freezing the UI; because of that, we have to create a custom observer that runs the callback code on the UI thread once data is received (more on this later):
Code:
package com.megaache.mvvmdemo.ui.login;
import com.megaache.mvvmdemo.base.BaseViewModel;
import com.megaache.mvvmdemo.MyApplication;
import com.megaache.mvvmdemo.model.ErrorData;
import com.megaache.mvvmdemo.network.request.LoginRequest;
import io.reactivex.Observable;
import io.reactivex.schedulers.Schedulers;
import ohos.aafwk.abilityjet.activedata.ActiveData;
public class LoginViewModel extends BaseViewModel<LoginViewState> {
private static final int MIN_PASSWORD_LENGTH = 6;
public ActiveData<Boolean> emailValid = new ActiveData<>();
public ActiveData<Boolean> passwordValid = new ActiveData<>();
public ActiveData<Boolean> loginState = new ActiveData<>();
public LoginViewModel() {
super();
}
public void login(String email, String password) {
boolean isEmailValid = isEmailValid(email);
emailValid.setData(isEmailValid);
if (!isEmailValid)
return;
boolean isPasswordValid = isPasswordValid(password);
passwordValid.setData(isPasswordValid);
if (!isPasswordValid)
return;
LoginRequest loginRequest = new LoginRequest();
loginRequest.setEmail(email);
loginRequest.setPassword(password);
super.subscribe(sendLoginRequest(loginRequest));
}
private Observable<LoginViewState> sendLoginRequest(LoginRequest loginRequest) {
return MyApplication.createRetrofitClient()
.login(loginRequest)
.doOnError(Throwable::printStackTrace)
.map(LoginViewState.LoggedIn::new)
.cast(LoginViewState.class)
.onErrorReturn(throwable -> {
ErrorData errorData = new ErrorData();
if (throwable.getMessage() != null)
errorData.setMessage(throwable.getMessage());
else
errorData.setMessage(" No internet! ");
return new LoginViewState.Error(errorData);
})
.subscribeOn(Schedulers.io())
.startWith(new LoginViewState.Loading());
}
private boolean isEmailValid(String email) {
return email != null && !email.isEmpty() && email.contains("@");
}
private boolean isPasswordValid(String password) {
return password != null && password.length() > MIN_PASSWORD_LENGTH;
}
}
Setting up the Ability (View)
As you know, the Ability is our View. We instantiate the ViewModel and observe its states and ActiveData fields in the method observeData(). As mentioned before, Retrofit sends the request on a background thread, so the code in the observer runs on that same thread (Schedulers.io()), which would cause exceptions if it tried to update the UI. To prevent that, we create a custom UiObserver class extending DataObserver that runs our code on the Ability's UI task dispatcher (UI thread). The code for UiObserver.java is shown below:
Code:
package com.megaache.mvvmdemo.utils;
import ohos.aafwk.ability.Ability;
import ohos.aafwk.ability.AbilitySlice;
import ohos.aafwk.abilityjet.activedata.DataObserver;
import ohos.app.dispatcher.TaskDispatcher;
public abstract class UiObserver<T> extends DataObserver<T> {
private TaskDispatcher uiTaskDispatcher;
public UiObserver(Ability baseAbilitySlice) {
setLifecycle(baseAbilitySlice.getLifecycle());
uiTaskDispatcher = baseAbilitySlice.getUITaskDispatcher();
}
@Override
public void onChanged(T t) {
uiTaskDispatcher.asyncDispatch(() -> onValueChanged(t));
}
public abstract void onValueChanged(T t);
}
Code for LoginAbility.java is shown below:
Code:
package com.megaache.mvvmdemo.ui.login;
import com.megaache.mvvmdemo.ResourceTable;
import com.megaache.mvvmdemo.utils.UiObserver;
import com.megaache.mvvmdemo.model.ErrorData;
import ohos.aafwk.ability.Ability;
import ohos.aafwk.content.Intent;
import ohos.agp.components.Button;
import ohos.agp.components.Component;
import ohos.agp.components.Text;
import ohos.agp.components.TextField;
import ohos.agp.window.dialog.ToastDialog;
public class LoginAbility extends Ability {
private LoginViewModel loginViewModel;
private TextField emailTF;
private Text emailInvalidT;
private TextField passwordTF;
private Text passwordInvalidT;
@Override
public void onStart(Intent intent) {
super.onStart(intent);
loginViewModel = new LoginViewModel();
initUI();
observeData();
}
private void initUI() {
super.setUIContent(ResourceTable.Layout_ability_login);
Button loginButton = (Button) findComponentById(ResourceTable.Id_btn_login);
loginButton.setClickedListener(c -> attemptLogin());
emailTF = (TextField) findComponentById(ResourceTable.Id_tf_email);
emailInvalidT = (Text) findComponentById(ResourceTable.Id_t_email_invalid);
passwordTF = (TextField) findComponentById(ResourceTable.Id_tf_password);
passwordInvalidT = (Text) findComponentById(ResourceTable.Id_t_password_invalid);
}
private void observeData() {
loginViewModel.emailValid.addObserver(new UiObserver<Boolean>(this) {
@Override
public void onValueChanged(Boolean aBoolean) {
emailInvalidT.setVisibility(aBoolean ? Component.HIDE : Component.VISIBLE);
}
}, false);
loginViewModel.passwordValid.addObserver(new UiObserver<Boolean>(this) {
@Override
public void onValueChanged(Boolean aBoolean) {
passwordInvalidT.setVisibility(aBoolean ? Component.HIDE : Component.VISIBLE);
}
}, false);
loginViewModel.getStates().addObserver(new UiObserver<LoginViewState>(this) {
@Override
public void onValueChanged(LoginViewState loginState) {
if (loginState instanceof LoginViewState.Loading) {
toggleLoadingDialog(true);
} else if (loginState instanceof LoginViewState.Error) {
toggleLoadingDialog(false);
manageError(((LoginViewState.Error) loginState).getMessage());
} else if (loginState instanceof LoginViewState.LoggedIn) {
toggleLoadingDialog(false);
showToast("logging successful!");
}
}
}, false);
}
private void attemptLogin() {
loginViewModel.login(emailTF.getText(), passwordTF.getText());
}
private void toggleLoadingDialog(boolean show) {
//todo: show/hide loading dialog
}
private void manageError(ErrorData errorData) {
showToast(errorData.getMessage());
}
private void showToast(String message) {
new ToastDialog(this)
.setText(message)
.show();
}
@Override
protected void onStop() {
super.onStop();
loginViewModel.unbind();
}
}
Tips and Tricks
If you want your app to work offline, it is best to introduce a repository class that queries data from the server when the internet is available and from the cache when it is not; see the sketch after this list.
For cleaner code, try to reuse the ViewModel as much as possible by creating a base class and moving the shared code there.
You should not keep a reference to a View (Component) or a Context in the ViewModel, unless you have no other option.
The ViewModel should not talk directly to the View; instead, the View observes the ViewModel and updates itself based on the ViewModel's data.
A correct implementation of the ViewModel should allow you to change the UI with minimal or zero changes to the ViewModel.
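As a rough illustration of the first tip, the sketch below shows one possible repository. This class is not part of the sample project: LoginRequest, LoginResponse and createRetrofitClient() are the ones defined above, while the cache field and the isOnline flag are stand-ins for whatever local storage and connectivity check you use:
Code:
package com.megaache.mvvmdemo.network;

import com.megaache.mvvmdemo.MyApplication;
import com.megaache.mvvmdemo.network.request.LoginRequest;
import com.megaache.mvvmdemo.network.response.LoginResponse;
import io.reactivex.Observable;

public class LoginRepository {
    // Stand-in for a real local cache (preferences, database, file, ...).
    private LoginResponse cachedResponse;

    // Emits the cached response when offline, otherwise fetches from the server
    // and refreshes the cache.
    public Observable<LoginResponse> login(LoginRequest request, boolean isOnline) {
        if (!isOnline && cachedResponse != null) {
            return Observable.just(cachedResponse);
        }
        return MyApplication.createRetrofitClient()
                .login(request)
                .doOnNext(response -> cachedResponse = response);
    }
}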
Conclusion
MVVM combines the advantages of the separation of concerns provided by the MVP architecture while leveraging the advantages of RxJava or data binding. The result is a pattern where the model drives as many of the operations as possible, minimizing the logic in the view.
Finally, talk is cheap; I strongly advise you to try these things out in code so that you do not need to rely on people like me to tell you what to do.
Clone the project from GitHub, replace the API URLs in Config.java and run it on a HarmonyOS device. You should see a toast that says "login successful" if the credentials are correct; otherwise it should show a toast with the "No internet!" message or an error returned from the server.
This project is available on GitHub: Click here
Recommended resources:
HarmonyOS (essential topics): Essential Topics
Retrofit: Click here
Rxjava: Click here
Original Source
Comment below if you have any questions or suggestions.
Thank you!
In Android we have the Manifest file, where we declare all the activity names; do we have a similar file here?
Introduction
If you are new to this application, please follow my previous articles:
Pygmy collection application Part 1 (Account kit)
Intermediate: Pygmy Collection Application Part 2 (Ads Kit)
Intermediate: Pygmy Collection Application Part 3 (Crash service)
Intermediate: Pygmy Collection Application Part 4 (Analytics Kit Custom Events)
Intermediate: Pygmy Collection Application Part 5 (Safety Detect)
Intermediate: Pygmy Collection Application Part 6 (Room database)
In this article, we will learn how to integrate Huawei document skew correction, powered by Huawei HiAI, into the Pygmy collection finance application.
In the Pygmy collection application, a KYC update needs to happen for customers, and agents perform that update. The captured document should be proper, so we integrate document skew correction to adjust the image angle.
Users commonly struggle while uploading documents or filling in forms because of poor document photos. This application lets them take a picture with the camera or pick one from the gallery, and it automatically detects the document in the image.
Document skew correction improves the document photography process by automatically identifying the document in an image. It returns the position of the document within the original image.
Document skew correction also adjusts the shooting angle of the document based on its position in the original image. This function performs especially well when old photos, paper letters, and drawings are photographed for electronic storage.
Features
Document detection: Recognizes documents in images and returns the location information of the documents in the original images.
Document correction: Corrects the document shooting angle based on the document location information in the original images, where areas to be corrected can be customized.
How to integrate Document Skew Correction
1. Configure the application on the AGC.
2. Apply for HiAI Engine Library.
3. Client application development process.
Configure application on the AGC
Follow the steps.
Step 1: We need to register as a developer account in AppGallery Connect. If you are already a developer ignore this step.
Step 2: Create an app by referring to Creating a Project and Creating an App in the Project
Step 3: Set the data storage location based on the current location.
Step 4: Generating a Signing Certificate Fingerprint.
Step 5: Configuring the Signing Certificate Fingerprint.
Step 6: Download your agconnect-services.json file, paste it into the app root directory.
Apply for HiAI Engine Library
What is Huawei HiAI?
HiAI is Huawei’s AI computing platform. HUAWEI HiAI is a mobile terminal–oriented artificial intelligence (AI) computing platform that constructs three layers of ecology: service capability openness, application capability openness, and chip capability openness. The three-layer open platform that integrates terminals, chips, and the cloud brings more extraordinary experience for users and developers.
How to apply for HiAI Engine?
Follow the steps.
Step 1: Navigate to this URL, choose App Service > Development and click HUAWEI HiAI.
Step 2: Click Apply for HUAWEI HiAI kit.
Step 3: Enter required information like Product name and Package name, click Next button.
Step 4: Verify the application details and click Submit button.
Step 5: Click the Download SDK button to open the SDK list.
Step 6: Unzip the downloaded SDK and add it to your Android project under the libs folder.
Step 7: Add the library dependencies to the app-level build.gradle file.
Code:
implementation fileTree(include: ['*.aar', '*.jar'], dir: 'libs')
implementation 'com.google.code.gson:gson:2.8.6'
repositories {
flatDir {
dirs 'libs'
}
}
Client application development process
Follow the steps.
Step 1: Create an Android application in Android Studio (or any IDE you prefer).
Step 2: Add the app-level Gradle dependencies. Open Android > app > build.gradle inside the project.
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Root level gradle dependencies.
Code:
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
Step 3: Add permission in AndroidManifest.xml
XML:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_INTERNAL_STORAGE" />
<uses-permission android:name="android.permission.CAMERA" />
Step 4: Build application.
Select image dialog
Java:
private void selectImage() {
final CharSequence[] items = {"Take Photo", "Choose from Library",
"Cancel"};
AlertDialog.Builder builder = new AlertDialog.Builder(KycUpdateActivity.this);
builder.setTitle("Add Photo!");
builder.setItems(items, new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int item) {
boolean result = PygmyUtils.checkPermission(KycUpdateActivity.this);
if (items[item].equals("Take Photo")) {
/*userChoosenTask = "Take Photo";
if (result)
cameraIntent();*/
operate_type = TAKE_PHOTO;
requestPermission(Manifest.permission.CAMERA);
} else if (items[item].equals("Choose from Library")) {
/* userChoosenTask = "Choose from Library";
if (result)
galleryIntent();*/
operate_type = SELECT_ALBUM;
requestPermission(Manifest.permission.READ_EXTERNAL_STORAGE);
} else if (items[item].equals("Cancel")) {
dialog.dismiss();
}
}
});
builder.show();
}
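The snippet above calls a requestPermission() helper that is not listed in the article. A minimal sketch using ActivityCompat might look like the following; the request code and the follow-up launch of the skew correction screen are assumptions based on how operate_type is used (imports of android.content.pm.PackageManager, androidx.core.app.ActivityCompat and androidx.core.content.ContextCompat are required):
Java:
private static final int PERMISSION_REQUEST_CODE = 200;

private void requestPermission(String permission) {
    if (ContextCompat.checkSelfPermission(this, permission)
            == PackageManager.PERMISSION_GRANTED) {
        // Permission already granted, open the document skew correction screen directly.
        startSuperResolutionActivity();
    } else {
        ActivityCompat.requestPermissions(this, new String[]{permission}, PERMISSION_REQUEST_CODE);
    }
}

@Override
public void onRequestPermissionsResult(int requestCode, String[] permissions, int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    if (requestCode == PERMISSION_REQUEST_CODE
            && grantResults.length > 0
            && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
        startSuperResolutionActivity();
    }
}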
Open Document Skew correction activity
Java:
private void startSuperResolutionActivity() {
Intent intent = new Intent(KycUpdateActivity.this, DocumentSkewCorrectionActivity.class);
intent.putExtra("operate_type", operate_type);
startActivityForResult(intent, DOCUMENT_SKEW_CORRECTION_REQUEST);
}
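Back in KycUpdateActivity, the result of DocumentSkewCorrectionActivity can be handled in onActivityResult(). This handler is not shown in the article, so the following is only a sketch of one possible approach; the "status" extra matches what DocumentSkewCorrectionActivity puts into its result intent below:
Java:
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == DOCUMENT_SKEW_CORRECTION_REQUEST
            && resultCode == RESULT_OK && data != null) {
        String status = data.getStringExtra("status");
        if ("success".equals(status)) {
            // The corrected document was saved by DocumentSkewCorrectionActivity;
            // refresh the KYC screen (for example, reload the stored bitmap) here.
            Toast.makeText(this, "Document captured successfully", Toast.LENGTH_SHORT).show();
        }
    }
}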
DocumentSkewCorrectionActivity.java
Java:
import androidx.annotation.Nullable;
import androidx.appcompat.app.AppCompatActivity;
import android.app.Activity;
import android.app.AlertDialog;
import android.app.ProgressDialog;
import android.content.ContentValues;
import android.content.DialogInterface;
import android.content.Intent;
import android.graphics.Bitmap;
import android.graphics.Matrix;
import android.graphics.Point;
import android.media.ExifInterface;
import android.net.Uri;
import android.os.Bundle;
import android.os.Handler;
import android.provider.MediaStore;
import android.util.Log;
import android.view.View;
import android.widget.ImageButton;
import android.widget.ImageView;
import android.widget.RelativeLayout;
import android.widget.TextView;
import android.widget.Toast;
import com.huawei.hmf.tasks.OnFailureListener;
import com.huawei.hmf.tasks.OnSuccessListener;
import com.huawei.hmf.tasks.Task;
import com.huawei.hms.mlsdk.common.MLFrame;
import com.huawei.hms.mlsdk.dsc.MLDocumentSkewCorrectionAnalyzer;
import com.huawei.hms.mlsdk.dsc.MLDocumentSkewCorrectionAnalyzerFactory;
import com.huawei.hms.mlsdk.dsc.MLDocumentSkewCorrectionAnalyzerSetting;
import com.huawei.hms.mlsdk.dsc.MLDocumentSkewCorrectionCoordinateInput;
import com.huawei.hms.mlsdk.dsc.MLDocumentSkewCorrectionResult;
import com.huawei.hms.mlsdk.dsc.MLDocumentSkewDetectResult;
import com.shea.pygmycollection.R;
import com.shea.pygmycollection.customview.DocumentCorrectImageView;
import com.shea.pygmycollection.utils.FileUtils;
import com.shea.pygmycollection.utils.UserDataUtils;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
public class DocumentSkewCorrectionActivity extends AppCompatActivity implements View.OnClickListener {
private static final String TAG = "SuperResolutionActivity";
private static final int REQUEST_SELECT_IMAGE = 1000;
private static final int REQUEST_TAKE_PHOTO = 1;
private ImageView desImageView;
private ImageButton adjustImgButton;
private Bitmap srcBitmap;
private Bitmap getCompressesBitmap;
private Uri imageUri;
private MLDocumentSkewCorrectionAnalyzer analyzer;
private Bitmap corrected;
private ImageView back;
private Task<MLDocumentSkewCorrectionResult> correctionTask;
private DocumentCorrectImageView documetScanView;
private Point[] _points;
private RelativeLayout layout_image;
private MLFrame frame;
TextView selectTv;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_document_skew_correction);
//setStatusBarColor(this, R.color.black);
analyzer = createAnalyzer();
adjustImgButton = findViewById(R.id.adjust);
layout_image = findViewById(R.id.layout_image);
desImageView = findViewById(R.id.des_image);
documetScanView = findViewById(R.id.iv_documetscan);
back = findViewById(R.id.back);
selectTv = findViewById(R.id.selectTv);
adjustImgButton.setOnClickListener(this);
findViewById(R.id.back).setOnClickListener(this);
selectTv.setOnClickListener(this);
findViewById(R.id.rl_chooseImg).setOnClickListener(this);
back.setOnClickListener(this);
int operate_type = getIntent().getIntExtra("operate_type", 101);
if (operate_type == 101) {
takePhoto();
} else if (operate_type == 102) {
selectLocalImage();
}
}
private String[] chooseTitles;
@Override
public void onClick(View v) {
if (v.getId() == R.id.adjust) {
List<Point> points = new ArrayList<>();
Point[] cropPoints = documetScanView.getCropPoints();
if (cropPoints != null) {
points.add(cropPoints[0]);
points.add(cropPoints[1]);
points.add(cropPoints[2]);
points.add(cropPoints[3]);
}
MLDocumentSkewCorrectionCoordinateInput coordinateData = new MLDocumentSkewCorrectionCoordinateInput(points);
getDetectdetectResult(coordinateData, frame);
} else if (v.getId() == R.id.rl_chooseImg) {
chooseTitles = new String[]{getResources().getString(R.string.take_photo), getResources().getString(R.string.select_from_album)};
AlertDialog.Builder builder = new AlertDialog.Builder(this);
builder.setItems(chooseTitles, new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialogInterface, int position) {
if (position == 0) {
takePhoto();
} else {
selectLocalImage();
}
}
});
builder.create().show();
} else if (v.getId() == R.id.selectTv) {
if (corrected == null) {
Toast.makeText(this, "Document Skew correction is not yet success", Toast.LENGTH_SHORT).show();
return;
} else {
ProgressDialog pd = new ProgressDialog(this);
pd.setMessage("Please wait...");
pd.show();
//UserDataUtils.saveBitMap(this, corrected);
Intent intent = new Intent();
intent.putExtra("status", "success");
new Handler().postDelayed(new Runnable() {
@Override
public void run() {
if (pd != null && pd.isShowing()) {
pd.dismiss();
}
setResult(Activity.RESULT_OK, intent);
finish();
}
}, 3000);
}
} else if (v.getId() == R.id.back) {
finish();
}
}
private MLDocumentSkewCorrectionAnalyzer createAnalyzer() {
MLDocumentSkewCorrectionAnalyzerSetting setting = new MLDocumentSkewCorrectionAnalyzerSetting
.Factory()
.create();
return MLDocumentSkewCorrectionAnalyzerFactory.getInstance().getDocumentSkewCorrectionAnalyzer(setting);
}
private void takePhoto() {
layout_image.setVisibility(View.GONE);
Intent takePictureIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
if (takePictureIntent.resolveActivity(this.getPackageManager()) != null) {
ContentValues values = new ContentValues();
values.put(MediaStore.Images.Media.TITLE, "New Picture");
values.put(MediaStore.Images.Media.DESCRIPTION, "From Camera");
this.imageUri = this.getContentResolver().insert(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, values);
takePictureIntent.putExtra(MediaStore.EXTRA_OUTPUT, this.imageUri);
this.startActivityForResult(takePictureIntent, DocumentSkewCorrectionActivity.this.REQUEST_TAKE_PHOTO);
}
}
private void selectLocalImage() {
layout_image.setVisibility(View.GONE);
Intent intent = new Intent(Intent.ACTION_PICK, null);
intent.setDataAndType(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, "image/*");
startActivityForResult(intent, REQUEST_SELECT_IMAGE);
}
@Override
protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
super.onActivityResult(requestCode, resultCode, data);
if (requestCode == REQUEST_SELECT_IMAGE && resultCode == Activity.RESULT_OK) {
imageUri = data.getData();
try {
if (imageUri != null) {
srcBitmap = MediaStore.Images.Media.getBitmap(getContentResolver(), imageUri);
String realPathFromURI = getRealPathFromURI(imageUri);
int i = readPictureDegree(realPathFromURI);
Bitmap spBitmap = rotaingImageView(i, srcBitmap);
Matrix matrix = new Matrix();
matrix.setScale(0.5f, 0.5f);
getCompressesBitmap = Bitmap.createBitmap(spBitmap, 0, 0, spBitmap.getWidth(),
spBitmap.getHeight(), matrix, true);
reloadAndDetectImage();
}
} catch (IOException e) {
Log.e(TAG, e.getMessage());
}
} else if (requestCode == REQUEST_TAKE_PHOTO && resultCode == Activity.RESULT_OK) {
try {
if (imageUri != null) {
srcBitmap = MediaStore.Images.Media.getBitmap(getContentResolver(), imageUri);
String realPathFromURI = getRealPathFromURI(imageUri);
int i = readPictureDegree(realPathFromURI);
Bitmap spBitmap = rotaingImageView(i, srcBitmap);
Matrix matrix = new Matrix();
matrix.setScale(0.5f, 0.5f);
getCompressesBitmap = Bitmap.createBitmap(spBitmap, 0, 0, spBitmap.getWidth(),
spBitmap.getHeight(), matrix, true);
reloadAndDetectImage();
}
} catch (IOException e) {
Log.e(TAG, e.getMessage());
}
} else if (requestCode == REQUEST_SELECT_IMAGE && resultCode == Activity.RESULT_CANCELED) {
finish();
}
}
private void reloadAndDetectImage() {
if (imageUri == null) {
return;
}
frame = MLFrame.fromBitmap(getCompressesBitmap);
Task<MLDocumentSkewDetectResult> task = analyzer.asyncDocumentSkewDetect(frame);
task.addOnSuccessListener(new OnSuccessListener<MLDocumentSkewDetectResult>() {
public void onSuccess(MLDocumentSkewDetectResult result) {
if (result.getResultCode() != 0) {
corrected = null;
Toast.makeText(DocumentSkewCorrectionActivity.this, "The picture does not meet the requirements.", Toast.LENGTH_SHORT).show();
} else {
// Recognition success.
Point leftTop = result.getLeftTopPosition();
Point rightTop = result.getRightTopPosition();
Point leftBottom = result.getLeftBottomPosition();
Point rightBottom = result.getRightBottomPosition();
_points = new Point[4];
_points[0] = leftTop;
_points[1] = rightTop;
_points[2] = rightBottom;
_points[3] = leftBottom;
layout_image.setVisibility(View.GONE);
documetScanView.setImageBitmap(getCompressesBitmap);
documetScanView.setPoints(_points);
}
}
}).addOnFailureListener(new OnFailureListener() {
public void onFailure(Exception e) {
Toast.makeText(getApplicationContext(), e.getMessage(), Toast.LENGTH_SHORT).show();
}
});
}
private void getDetectdetectResult(MLDocumentSkewCorrectionCoordinateInput coordinateData, MLFrame frame) {
try {
correctionTask = analyzer.asyncDocumentSkewCorrect(frame, coordinateData);
} catch (Exception e) {
Log.e(TAG, "The image does not meet the detection requirements.");
}
try {
correctionTask.addOnSuccessListener(new OnSuccessListener<MLDocumentSkewCorrectionResult>() {
@Override
public void onSuccess(MLDocumentSkewCorrectionResult refineResult) {
// The check is successful.
if (refineResult != null && refineResult.getResultCode() == 0) {
corrected = refineResult.getCorrected();
layout_image.setVisibility(View.VISIBLE);
desImageView.setImageBitmap(corrected);
UserDataUtils.saveBitMap(DocumentSkewCorrectionActivity.this, corrected);
} else {
Toast.makeText(DocumentSkewCorrectionActivity.this, "The check fails.", Toast.LENGTH_SHORT).show();
}
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
Toast.makeText(DocumentSkewCorrectionActivity.this, "The check fails.", Toast.LENGTH_SHORT).show();
}
});
} catch (Exception e) {
Log.e(TAG, "Please set an image.");
}
}
@Override
protected void onDestroy() {
super.onDestroy();
if (srcBitmap != null) {
srcBitmap.recycle();
}
if (getCompressesBitmap != null) {
getCompressesBitmap.recycle();
}
if (corrected != null) {
corrected.recycle();
}
if (analyzer != null) {
try {
analyzer.stop();
} catch (IOException e) {
Log.e(TAG, e.getMessage());
}
}
}
public static int readPictureDegree(String path) {
int degree = 0;
try {
ExifInterface exifInterface = new ExifInterface(path);
int orientation = exifInterface.getAttributeInt(
ExifInterface.TAG_ORIENTATION,
ExifInterface.ORIENTATION_NORMAL);
switch (orientation) {
case ExifInterface.ORIENTATION_ROTATE_90:
degree = 90;
break;
case ExifInterface.ORIENTATION_ROTATE_180:
degree = 180;
break;
case ExifInterface.ORIENTATION_ROTATE_270:
degree = 270;
break;
}
} catch (IOException e) {
Log.e(TAG, e.getMessage());
}
return degree;
}
private String getRealPathFromURI(Uri contentURI) {
String result;
result = FileUtils.getFilePathByUri(this, contentURI);
return result;
}
public static Bitmap rotaingImageView(int angle, Bitmap bitmap) {
Matrix matrix = new Matrix();
matrix.postRotate(angle);
Bitmap resizedBitmap = Bitmap.createBitmap(bitmap, 0, 0,
bitmap.getWidth(), bitmap.getHeight(), matrix, true);
return resizedBitmap;
}
}
activity_document_skew_correction.xml
XML:
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:background="#000000"
tools:ignore="MissingDefaultResource">
<LinearLayout
android:id="@+id/linear_views"
android:layout_width="match_parent"
android:layout_height="80dp"
android:layout_alignParentBottom="true"
android:layout_marginBottom="10dp"
android:orientation="horizontal">
<RelativeLayout
android:id="@+id/rl_chooseImg"
android:layout_width="0dp"
android:layout_height="match_parent"
android:layout_weight="1">
<ImageView
android:layout_width="25dp"
android:layout_height="25dp"
android:layout_centerInParent="true"
android:src="@drawable/add_picture"
app:tint="@color/colorPrimary" />
</RelativeLayout>
<RelativeLayout
android:id="@+id/rl_adjust"
android:layout_width="0dp"
android:layout_height="match_parent"
android:layout_weight="1">
<ImageButton
android:id="@+id/adjust"
android:layout_width="70dp"
android:layout_height="70dp"
android:layout_centerInParent="true"
android:background="@drawable/ic_baseline_adjust_24" />
</RelativeLayout>
<RelativeLayout
android:id="@+id/rl_help"
android:layout_width="0dp"
android:layout_height="match_parent"
android:layout_weight="1">
<ImageView
android:layout_width="30dp"
android:layout_height="30dp"
android:layout_centerInParent="true"
android:src="@drawable/back"
android:visibility="invisible" />
</RelativeLayout>
</LinearLayout>
<RelativeLayout
android:id="@+id/rl_navigation"
android:layout_width="match_parent"
android:layout_height="50dp"
android:background="@color/colorPrimary">
<ImageButton
android:id="@+id/back"
android:layout_width="30dp"
android:layout_height="30dp"
android:layout_centerVertical="true"
android:layout_marginLeft="20dp"
android:layout_marginTop="@dimen/icon_back_margin"
android:background="@drawable/back" />
<TextView
android:id="@+id/selectTv"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentEnd="true"
android:layout_centerVertical="true"
android:layout_marginEnd="10dp"
android:fontFamily="@font/montserrat_bold"
android:padding="@dimen/hiad_10_dp"
android:text="Select Doc"
android:textColor="@color/white" />
</RelativeLayout>
<RelativeLayout
android:layout_width="match_parent"
android:layout_height="match_parent"
android:layout_centerHorizontal="true"
android:layout_marginTop="99dp"
android:layout_marginBottom="100dp">
<com.shea.pygmycollection.customview.DocumentCorrectImageView
android:id="@+id/iv_documetscan"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:layout_centerInParent="true"
android:padding="20dp"
app:LineColor="@color/colorPrimary" />
</RelativeLayout>
<RelativeLayout
android:id="@+id/layout_image"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:layout_centerHorizontal="true"
android:layout_marginTop="99dp"
android:layout_marginBottom="100dp"
android:background="#000000"
android:visibility="gone">
<ImageView
android:id="@+id/des_image"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:layout_centerInParent="true"
android:padding="20dp" />
</RelativeLayout>
</RelativeLayout>
Result
Tips and Tricks
Recommended image width and height: 1080 px and 2560 px.
Multi-thread invoking is currently not supported.
The document detection and correction API can only be called by 64-bit apps.
If you are capturing an image from the camera or picking one from the gallery, make sure your app has the camera and storage permissions.
Add the downloaded huawei-hiai-vision-ove-10.0.4.307.aar and huawei-hiai-pdk-1.0.0.aar files to the libs folder.
Check that the dependencies are added properly.
Latest HMS Core APK is required.
Minimum SDK is 21; otherwise you will get a manifest merge issue.
Conclusion
In this article, we have built an application that detects the document in an image, corrects it, and returns the result. We have learnt the following concepts:
1. What is Document skew correction?
2. Feature of Document skew correction.
3. How to integrate Document Skew correction using Huawei HiAI?
4. How to apply for Huawei HiAI?
5. How to build the application?
Reference
Document skew correction
Apply for Huawei HiAI