Android App Bundle - Features on demand--Part 2 - Huawei Developers

For more articles like this, you can visit the HUAWEI Developer Forum.
In this article, we will build an Android App Bundle (AAB).
In the previous article, which is Part 1 of this series, we learned about Huawei Dynamic Ability. If you have not read it yet, I recommend going through it first:
https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0201297090480680014&fid=0101187876626530001
1. Dynamic Ability:
Dynamic Ability is a service in which HUAWEI AppGallery implements dynamic loading based on the Android App Bundle technology.
Apps integrated with the Dynamic Ability SDK can dynamically download features or language packages from HUAWEI AppGallery as required, reducing the unnecessary consumption of network traffic and device storage space.
2. Android App Bundle:
It is a new upload format that includes all your app's compiled code and resources, but defers APK generation and signing to AppGallery. Traditionally, Android apps are distributed using a special file called an Android Package (.apk).
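Since AppGallery generates the APKs from the bundle, the base module can optionally declare which configuration splits may be produced. A minimal sketch using the Android Gradle Plugin's bundle block (this block is optional and is not part of the project shown below):
Code:
android {
    // Optional: control which configuration APKs may be split out of the bundle.
    bundle {
        language {
            // Deliver language resources as separate splits.
            enableSplit = true
        }
        density {
            enableSplit = true
        }
        abi {
            enableSplit = true
        }
    }
}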
Let’s start development:
We have created a project with the base app module and three dynamic feature modules (repository, retrofit, and viewmodel).
Let’s see the gradle file:
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
android {
compileSdkVersion 28
buildToolsVersion "28.0.3"
compileOptions {
sourceCompatibility = 1.8
targetCompatibility = 1.8
}
dataBinding {
enabled = true
}
defaultConfig {
applicationId "com.hms.manoj.aab"
minSdkVersion 26
targetSdkVersion 28
versionCode 1
versionName "1.0"
testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
}
}
dynamicFeatures = [":repository", ":retrofit", ":viewmodel"]
}
dependencies {
implementation fileTree(dir: 'libs', include: ['*.jar'])
implementation 'androidx.appcompat:appcompat:1.1.0'
implementation 'androidx.constraintlayout:constraintlayout:1.1.3'
testImplementation 'junit:junit:4.12'
androidTestImplementation 'androidx.test.ext:junit:1.1.1'
androidTestImplementation 'androidx.test.espresso:espresso-core:3.2.0'
// Android MVVM Components
implementation 'androidx.lifecycle:lifecycle-extensions:2.1.0'
// Android Support Library
implementation 'androidx.recyclerview:recyclerview:1.0.0'
implementation 'com.google.android.material:material:1.1.0-alpha04'
implementation 'androidx.constraintlayout:constraintlayout:1.1.3'
//Huawei Dynamic ability
api 'com.huawei.hms:dynamicability:1.0.11.302'
}
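Note the dynamicFeatures array: it declares the on-demand feature modules (:repository, :retrofit, :viewmodel) that the base app can request at runtime; we will create these modules later in this article.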
Let’s see the manifest file:
Code:
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.hms.manoj.aab">
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<application
android:name="com.hms.manoj.aab.MyApp"
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity
android:name="com.hms.manoj.aab.ui.ShowsActivity"
android:launchMode="singleTask"
android:theme="@style/AppTheme"
android:windowSoftInputMode="adjustNothing">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
<activity
android:name="com.hms.manoj.aab.ui.ShowDetailActivity"
android:screenOrientation="fullSensor"
android:label="@string/app_name" />
<activity
android:name="com.hms.manoj.aab.ui.CastActivity"
android:screenOrientation="fullSensor"
android:label="@string/app_name" />
</application>
</manifest>
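The manifest registers a custom Application class, MyApp, which the article does not list. Based on the Dynamic Ability documentation, the SDK is initialized when the base context attaches; here is a minimal sketch of what MyApp could look like (treat the exact initializer call as an assumption, since the original class is not shown):
Code:
package com.hms.manoj.aab;

import android.app.Application;
import android.content.Context;

import com.huawei.hms.feature.dynamicinstall.FeatureCompat;

public class MyApp extends Application {
    @Override
    protected void attachBaseContext(Context base) {
        super.attachBaseContext(base);
        // Initialize the Dynamic Ability SDK so that the code and resources of
        // dynamically delivered feature modules become accessible.
        FeatureCompat.install(base);
    }
}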
Let’s see the implementation of feature installation inside the code:
Code:
package com.hms.manoj.aab.ui;
import android.content.Intent;
import android.content.IntentSender;
import android.os.Bundle;
import android.util.Log;
import android.view.View;
import androidx.annotation.Nullable;
import androidx.appcompat.app.AppCompatActivity;
import androidx.databinding.DataBindingUtil;
import androidx.lifecycle.ViewModelProviders;
import androidx.recyclerview.widget.GridLayoutManager;
import com.hms.manoj.aab.R;
import com.hms.manoj.aab.apiconnector.response.Show;
import com.hms.manoj.aab.databinding.ActivityShowsBinding;
import com.hms.manoj.aab.ui.adapter.ShowAdapter;
import com.hms.manoj.aab.viewmodel.ShowsViewModel;
import com.huawei.hms.feature.install.FeatureInstallManager;
import com.huawei.hms.feature.install.FeatureInstallManagerFactory;
import com.huawei.hms.feature.listener.InstallStateListener;
import com.huawei.hms.feature.model.FeatureInstallException;
import com.huawei.hms.feature.model.FeatureInstallRequest;
import com.huawei.hms.feature.model.FeatureInstallSessionStatus;
import com.huawei.hms.feature.model.InstallState;
import com.huawei.hms.feature.tasks.FeatureTask;
import com.huawei.hms.feature.tasks.listener.OnFeatureCompleteListener;
import com.huawei.hms.feature.tasks.listener.OnFeatureFailureListener;
import com.huawei.hms.feature.tasks.listener.OnFeatureSuccessListener;
import java.util.List;
import static com.hms.manoj.aab.utils.Utils.KEY_SHOW_ID;
public class ShowsActivity extends AppCompatActivity {
public static final String TAG=ShowsActivity.class.getName();
private FeatureInstallManager mFeatureInstallManager;
private int sessionId = 10086;
private ActivityShowsBinding mBinding;
private ShowsViewModel mShowsViewModel;
private InstallStateListener mStateUpdateListener = new InstallStateListener() {
@Override
public void onStateUpdate(InstallState state) {
Log.d(TAG, "install session state " + state);
if (state.status() == FeatureInstallSessionStatus.REQUIRES_USER_CONFIRMATION) {
try {
mFeatureInstallManager.triggerUserConfirm(state, ShowsActivity.this, 1);
} catch (IntentSender.SendIntentException e) {
e.printStackTrace();
}
return;
}
if (state.status() == FeatureInstallSessionStatus.REQUIRES_PERSON_AGREEMENT) {
try {
mFeatureInstallManager.triggerUserConfirm(state, ShowsActivity.this, 1);
} catch (IntentSender.SendIntentException e) {
e.printStackTrace();
}
return;
}
if (state.status() == FeatureInstallSessionStatus.INSTALLED) {
Log.i(TAG, "installed success ,can use new feature");
return;
}
if (state.status() == FeatureInstallSessionStatus.UNKNOWN) {
Log.e(TAG, "installed in unknown status");
return;
}
if (state.status() == FeatureInstallSessionStatus.DOWNLOADING) {
long progress = state.bytesDownloaded() * 100 / state.totalBytesToDownload();
Log.d(TAG, "downloading percentage: " + progress);
return;
}
if (state.status() == FeatureInstallSessionStatus.FAILED) {
Log.e(TAG, "installed failed, errorcode : " + state.errorCode());
return;
}
}
};
@Override
protected void onCreate(@Nullable Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
getSupportActionBar().setTitle("All Shows");
mFeatureInstallManager = FeatureInstallManagerFactory.create(this);
mBinding = DataBindingUtil.setContentView(this, R.layout.activity_shows);
mShowsViewModel = ViewModelProviders.of(this).get(ShowsViewModel.class);
wait(true);
getShowList();
}
public void installFeature(View view) {
if (mFeatureInstallManager == null) {
return;
}
FeatureInstallRequest request = FeatureInstallRequest.newBuilder()
.addModule("repository")
.addModule("retrofit")
.addModule("viewmodel")
.build();
final FeatureTask<Integer> task = mFeatureInstallManager.installFeature(request);
task.addOnListener(new OnFeatureSuccessListener<Integer>() {
@Override
public void onSuccess(Integer integer) {
Log.d(TAG, "load feature onSuccess.session id:" + integer);
}
});
task.addOnListener(new OnFeatureFailureListener<Integer>() {
@Override
public void onFailure(Exception exception) {
if (exception instanceof FeatureInstallException) {
int errorCode = ((FeatureInstallException) exception).getErrorCode();
Log.d(TAG, "load feature onFailure.errorCode:" + errorCode);
} else {
exception.printStackTrace();
}
}
});
task.addOnListener(new OnFeatureCompleteListener<Integer>() {
@Override
public void onComplete(FeatureTask<Integer> featureTask) {
if (featureTask.isComplete()) {
Log.d(TAG, "complete to start install.");
if (featureTask.isSuccessful()) {
Integer result = featureTask.getResult();
sessionId = result;
Log.d(TAG, "succeed to start install. session id :" + result);
} else {
Log.d(TAG, "fail to start install.");
Exception exception = featureTask.getException();
exception.printStackTrace();
}
}
}
});
Log.d(TAG, "start install func end");
}
private void wait(boolean isLoading) {
if (isLoading) {
mBinding.loaderLayout.rootLoader.setVisibility(View.VISIBLE);
mBinding.layout.shows.setVisibility(View.GONE);
} else {
mBinding.loaderLayout.rootLoader.setVisibility(View.GONE);
mBinding.layout.shows.setVisibility(View.VISIBLE);
}
}
private void getShowList() {
mShowsViewModel.getShowsLiveData().observeForever(showList -> {
if (showList != null) {
wait(false);
String eventName = "Show Success";
Bundle bundle = new Bundle();
bundle.putInt("Show Size", showList.size());
setDataIntoAdapter(showList);
} else {
wait(false);
}
});
}
private void setDataIntoAdapter(List<Show> list) {
mBinding.layout.shows.setLayoutManager(new GridLayoutManager(this, 2));
mBinding.layout.shows.setAdapter(new ShowAdapter(list, (item) -> {
Intent intent = new Intent(getBaseContext(), ShowDetailActivity.class);
intent.putExtra(KEY_SHOW_ID, String.valueOf(item.getId()));
String eventName = "Choosed Show";
Bundle bundle = new Bundle();
bundle.putInt("Show ID", item.getId());
startActivity(intent);
}));
}
@Override
protected void onResume() {
super.onResume();
if (mFeatureInstallManager != null) {
mFeatureInstallManager.registerInstallListener(mStateUpdateListener);
}
}
@Override
protected void onPause() {
super.onPause();
if (mFeatureInstallManager != null) {
mFeatureInstallManager.unregisterInstallListener(mStateUpdateListener);
}
}
}
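Since triggerUserConfirm() was passed request code 1, the confirmation result can be handled in onActivityResult. The original sample does not include this handler; a minimal sketch:
Code:
// A possible handler for the user-confirmation dialog started by
// triggerUserConfirm(state, ShowsActivity.this, 1). This method is an
// assumption and is not part of the original sample.
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == 1) {
        if (resultCode == RESULT_OK) {
            Log.d(TAG, "user confirmed the feature installation");
        } else {
            Log.d(TAG, "user canceled the feature installation");
        }
    }
}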
We will create three dynamic feature modules under the Android project:
1. Repository Module
2. Network Module
3. ViewModel Module
Let's create the Repository Module:
In this module, we have created the ShowRepository class, which communicates between the network and UI components. A possible sketch of this class follows.
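The article does not list ShowRepository itself; here is a minimal sketch of what it could look like, assuming a Retrofit-style ShowApi interface (a hypothetical name, presumed to live in the retrofit module) and the RxJava dependency declared in the gradle file below:
Code:
package com.hms.android.repository;

import java.util.List;

import io.reactivex.Single;

import com.hms.manoj.aab.apiconnector.response.Show;

// Hypothetical Retrofit-style interface assumed to be exposed by the retrofit module.
interface ShowApi {
    Single<List<Show>> getShows();
}

public class ShowRepository {

    private final ShowApi api;

    public ShowRepository(ShowApi api) {
        this.api = api;
    }

    // Expose the network call as a Single so the viewmodel module can subscribe.
    public Single<List<Show>> getShows() {
        return api.getShows();
    }
}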
Let's see the gradle file:
Code:
apply plugin: 'com.android.dynamic-feature'
android {
compileSdkVersion 29
defaultConfig {
minSdkVersion 27
targetSdkVersion 29
versionCode 1
versionName "1.0"
}
}
dependencies {
implementation fileTree(dir: 'libs', include: ['*.jar'])
implementation project(':app')
// RxAndroid
implementation 'io.reactivex.rxjava2:rxjava:2.2.8'
implementation 'io.reactivex.rxjava2:rxandroid:2.1.1'
}
Note: A dynamic feature module must apply this plugin on the first line of its gradle file:
Code:
apply plugin: 'com.android.dynamic-feature'
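The feature modules must also be registered in the project's settings.gradle so that Gradle builds them with the base app; a minimal sketch, assuming the module names used in dynamicFeatures:
Code:
include ':app'
include ':repository'
include ':retrofit'
include ':viewmodel'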
Let's see the manifest file of this module:
Code:
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:dist="http://schemas.android.com/apk/distribution"
package="com.hms.android.repository">
<dist:module
dist:instant="false"
dist:title="@string/title_repository">
<dist:delivery>
<dist:on-demand />
</dist:delivery>
<dist:fusing dist:include="true" />
</dist:module>
</manifest>
Let's create the Network Module:
In this module, we have implemented all the business models.
Let's see the gradle file:
Code:
apply plugin: 'com.android.dynamic-feature'
android {
compileSdkVersion 29
compileOptions {
sourceCompatibility = 1.8
targetCompatibility = 1.8
}
defaultConfig {
minSdkVersion 27
targetSdkVersion 29
versionCode 1
versionName "1.0"
}
}
dependencies {
implementation fileTree(dir: 'libs', include: ['*.jar'])
implementation project(':app')
// RxAndroid
implementation 'io.reactivex.rxjava2:rxjava:2.2.8'
implementation 'io.reactivex.rxjava2:rxandroid:2.1.1'
// Android MVVM Components
implementation 'androidx.lifecycle:lifecycle-extensions:2.1.0'
}
Let's see the manifest file of this module:
Code:
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:dist="http://schemas.android.com/apk/distribution"
package="com.hms.android.viewmodel">
<dist:module
dist:instant="false"
dist:title="@string/title_viewmodel">
<dist:delivery>
<dist:on-demand />
</dist:delivery>
<dist:fusing dist:include="true" />
</dist:module>
</manifest>
Let's build the Android App Bundle:
Follow the steps below to generate the .aab file.
1. Choose Build > Build Bundle(s)/APK(s) > Build Bundle(s).
2. After the build succeeds, the following dialog box is displayed.
3. Click the locate link and the output folder will open.
Note: Sign the .aab file carefully and upload it to the AppGallery portal.
4. Provide your project/app information in the AppGallery portal.
5. Upload your .aab/APK file in the draft section.
Now we can download the application from AppGallery (after approval).
In the meantime, we can run the APK on our mobile device.
Let's see the result.
If you have any doubts or queries, leave your valuable comment below.


Surface detection with AR Engine

For more information like this, you can visit the HUAWEI Developer Forum.
Introduction
AR Engine's support for detecting the real world is called "environment tracking": it records illumination, plane, image, object, surface, and other environmental information to help your apps merge virtual objects into scenarios in the physical world.
What is HUAWEI AR Engine?
HUAWEI AR Engine is a platform for building augmented reality (AR) apps on Android smartphones. It is based on the HiSilicon chipset and integrates AR core algorithms to provide basic AR capabilities such as motion tracking, environment tracking, body tracking, and face tracking, allowing your app to bridge the virtual world with the real world for a brand-new, visually interactive user experience.
Currently, HUAWEI AR Engine provides three types of capabilities, including motion tracking, environment tracking, and human body and face tracking.
Example Android Application
For this example, we will work on environment tracking so that we can detect surfaces, like a table or a floor.
Development Process
Creating an App
Create an app by following the instructions in Creating an AppGallery Connect Project and Adding an App to the Project.
Platform: Android
Device: Mobile phone
App category: App or Game
Integrating HUAWEI AR Engine SDK
Before development, integrate the HUAWEI AR Engine SDK via the Maven repository into your development environment.
Open the build.gradle file in the root directory of your Android Studio project.
Code:
// Top-level build file where you can add configuration options common to all sub-projects/modules.
buildscript {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
dependencies {
classpath "com.android.tools.build:gradle:4.0.1"
classpath 'com.huawei.agconnect:agcp:1.3.2.301'
// NOTE: Do not place your application dependencies here; they belong
// in the individual module build.gradle files
}
}
allprojects {
repositories {
maven {url 'https://developer.huawei.com/repo/'}
google()
jcenter()
}
}
task clean(type: Delete) {
delete rootProject.buildDir
}
Open the build.gradle file in the app directory of your project:
Code:
apply plugin: 'com.android.application'
android {
compileSdkVersion 30
buildToolsVersion "30.0.1"
defaultConfig {
applicationId "com.vsm.myarapplication"
minSdkVersion 27
targetSdkVersion 30
versionCode 1
versionName "1.0"
testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
}
}
}
dependencies {
implementation fileTree(dir: "libs", include: ["*.jar"])
implementation 'androidx.appcompat:appcompat:1.2.0'
implementation 'androidx.constraintlayout:constraintlayout:2.0.1'
testImplementation 'junit:junit:4.12'
//
implementation 'com.huawei.agconnect:agconnect-core:1.4.1.300'
//
implementation 'com.huawei.hms:arenginesdk:2.13.0.4'
//
implementation 'de.javagl:obj:0.3.0'
androidTestImplementation 'androidx.test.ext:junit:1.1.2'
androidTestImplementation 'androidx.test.espresso:espresso-core:3.3.0'
}
apply plugin: 'com.huawei.agconnect'
We create our main activity's layout (activity_main.xml):
Code:
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<android.opengl.GLSurfaceView
android:id="@+id/surfaceview"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:layout_gravity="top" />
<TextView
android:id="@+id/wordTextView"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="TextView"
android:textColor="@color/red"
tools:layout_editor_absoluteX="315dp"
tools:layout_editor_absoluteY="4dp" />
<TextView
android:id="@+id/searchingTextView"
android:layout_width="match_parent"
android:layout_height="47dp"
android:layout_alignParentStart="true"
android:layout_alignParentTop="true"
android:layout_marginStart="2dp"
android:layout_marginTop="59dp"
android:layout_marginBottom="403dp"
android:gravity="center"
android:text="Please move the mobile phone slowly to find the plane"
android:textColor="#ffffff"
tools:layout_editor_absoluteX="0dp"
tools:layout_editor_absoluteY="512dp" />
<TextView
android:id="@+id/plane_other"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/plane_other"
android:visibility="gone"
android:rotation="180"
android:textColor="#ff2211"
tools:layout_editor_absoluteX="315dp"
tools:layout_editor_absoluteY="4dp" />
<TextView
android:id="@+id/plane_floor"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/plane_floor"
android:visibility="gone"
android:rotation="180"
android:textColor="#ff2211"
tools:layout_editor_absoluteX="315dp"
tools:layout_editor_absoluteY="4dp" />
<TextView
android:id="@+id/plane_wall"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/plane_wall"
android:visibility="gone"
android:rotation="180"
android:textColor="#ff2211"
tools:layout_editor_absoluteX="315dp"
tools:layout_editor_absoluteY="4dp" />
<TextView
android:id="@+id/plane_table"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/plane_table"
android:visibility="gone"
android:rotation="180"
android:textColor="#ff2211"
tools:layout_editor_absoluteX="315dp"
tools:layout_editor_absoluteY="4dp" />
<TextView
android:id="@+id/plane_seat"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/plane_seat"
android:visibility="gone"
android:rotation="180"
android:textColor="#ff2211"
tools:layout_editor_absoluteX="315dp"
tools:layout_editor_absoluteY="4dp" />
<TextView
android:id="@+id/plane_ceiling"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/plane_ceiling"
android:visibility="gone"
android:rotation="180"
android:textColor="#ff2211"
tools:layout_editor_absoluteX="315dp"
tools:layout_editor_absoluteY="4dp" />
</RelativeLayout>
AR Engine is not supported by all devices, so first we need to validate that the device supports AR Engine and that the AR Engine Service is available (a list of supported devices is published in the HUAWEI AR Engine documentation). The following check handles this:
Code:
private boolean arEngineAbilityCheck() {
boolean isInstallArEngineApk = AREnginesApk.isAREngineApkReady(this);
if (!isInstallArEngineApk && isRemindInstall) {
Toast.makeText(this, "Please agree to install.", Toast.LENGTH_LONG).show();
finish();
}
Log.d(TAG, "Is Install AR Engine Apk: " + isInstallArEngineApk);
if (!isInstallArEngineApk) {
startActivity(new Intent(this, ConnectAppMarketActivity.class));
isRemindInstall = true;
}
return AREnginesApk.isAREngineApkReady(this);
}
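If creating or configuring the AR session fails later in onResume, we map the exception to a user-facing message: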
Code:
private void setMessageWhenError(Exception catchException) {
if (catchException instanceof ARUnavailableServiceNotInstalledException) {
startActivity(new Intent(getApplicationContext(), ConnectAppMarketActivity.class));
} else if (catchException instanceof ARUnavailableServiceApkTooOldException) {
message = "Please update HuaweiARService.apk";
} else if (catchException instanceof ARUnavailableClientSdkTooOldException) {
message = "Please update this app";
} else if (catchException instanceof ARUnSupportedConfigurationException) {
message = "The configuration is not supported by the device!";
} else {
message = "exception throw";
}
}
In MainActivity.java, we bring everything together and start surface detection:
Code:
package com.vsm.myarapplication;
import androidx.appcompat.app.AppCompatActivity;
import android.content.Intent;
import android.opengl.GLSurfaceView;
import android.os.Bundle;
import android.util.Log;
import android.view.GestureDetector;
import android.view.MotionEvent;
import android.view.View;
import android.widget.Toast;
import com.huawei.hiar.ARConfigBase;
import com.huawei.hiar.AREnginesApk;
import com.huawei.hiar.ARSession;
import com.huawei.hiar.ARWorldTrackingConfig;
import com.huawei.hiar.exceptions.ARCameraNotAvailableException;
import com.huawei.hiar.exceptions.ARUnSupportedConfigurationException;
import com.huawei.hiar.exceptions.ARUnavailableClientSdkTooOldException;
import com.huawei.hiar.exceptions.ARUnavailableServiceApkTooOldException;
import com.huawei.hiar.exceptions.ARUnavailableServiceNotInstalledException;
import com.vsm.myarapplication.common.ConnectAppMarketActivity;
import com.vsm.myarapplication.common.DisplayRotationManager;
import com.vsm.myarapplication.common.PermissionManager;
import com.vsm.myarapplication.rendering.WorldRenderManager;
import java.util.concurrent.ArrayBlockingQueue;
public class MainActivity extends AppCompatActivity {
private static final String TAG = MainActivity.class.getSimpleName();
private static final int MOTIONEVENT_QUEUE_CAPACITY = 2;
private static final int OPENGLES_VERSION = 2;
private ARSession mArSession;
private GLSurfaceView mSurfaceView;
private WorldRenderManager mWorldRenderManager;
private GestureDetector mGestureDetector;
private DisplayRotationManager mDisplayRotationManager;
private ArrayBlockingQueue<GestureEvent> mQueuedSingleTaps = new ArrayBlockingQueue<>(MOTIONEVENT_QUEUE_CAPACITY);
private String message = null;
private boolean isRemindInstall = false;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
// AR Engine requires the camera permission.
PermissionManager.checkPermission(this);
mSurfaceView = findViewById(R.id.surfaceview);
mDisplayRotationManager = new DisplayRotationManager(this);
initGestureDetector();
mSurfaceView.setPreserveEGLContextOnPause(true);
mSurfaceView.setEGLContextClientVersion(OPENGLES_VERSION);
// Set the EGL configuration chooser, including for the number of
// bits of the color buffer and the number of depth bits.
mSurfaceView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
mWorldRenderManager = new WorldRenderManager(this, this);
mWorldRenderManager.setDisplayRotationManage(mDisplayRotationManager);
mWorldRenderManager.setQueuedSingleTaps(mQueuedSingleTaps);
mSurfaceView.setRenderer(mWorldRenderManager);
mSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_CONTINUOUSLY);
}
private void initGestureDetector() {
mGestureDetector = new GestureDetector(this, new GestureDetector.SimpleOnGestureListener() {
@Override
public boolean onDoubleTap(MotionEvent motionEvent) {
onGestureEvent(GestureEvent.createDoubleTapEvent(motionEvent));
return true;
}
@Override
public boolean onSingleTapConfirmed(MotionEvent motionEvent) {
onGestureEvent(GestureEvent.createSingleTapConfirmEvent(motionEvent));
return true;
}
@Override
public boolean onDown(MotionEvent motionEvent) {
return true;
}
@Override
public boolean onScroll(MotionEvent e1, MotionEvent e2, float distanceX, float distanceY) {
onGestureEvent(GestureEvent.createScrollEvent(e1, e2, distanceX, distanceY));
return true;
}
});
mSurfaceView.setOnTouchListener(new View.OnTouchListener() {
@Override
public boolean onTouch(View v, MotionEvent event) {
return mGestureDetector.onTouchEvent(event);
}
});
}
private void onGestureEvent(GestureEvent e) {
boolean offerResult = mQueuedSingleTaps.offer(e);
if (offerResult) {
Log.i(TAG, "Successfully joined the queue.");
} else {
Log.i(TAG, "Failed to join queue.");
}
}
@Override
protected void onResume() {
Log.i(TAG, "onResume");
super.onResume();
Exception exception = null;
message = null;
if (mArSession == null) {
try {
if (!arEngineAbilityCheck()) {
finish();
return;
}
mArSession = new ARSession(getApplicationContext());
ARWorldTrackingConfig config = new ARWorldTrackingConfig(mArSession);
config.setFocusMode(ARConfigBase.FocusMode.AUTO_FOCUS);
config.setSemanticMode(ARWorldTrackingConfig.SEMANTIC_PLANE);
mArSession.configure(config);
mWorldRenderManager.setArSession(mArSession);
} catch (Exception capturedException) {
Log.e(TAG,capturedException.toString());
exception = capturedException;
setMessageWhenError(capturedException);
}
if (message != null) {
stopArSession(exception);
return;
}
}
try {
mArSession.resume();
} catch (ARCameraNotAvailableException e) {
Toast.makeText(this, "Camera open failed, please restart the app", Toast.LENGTH_LONG).show();
mArSession = null;
return;
}
mDisplayRotationManager.registerDisplayListener();
mSurfaceView.onResume();
}
@Override
protected void onPause() {
Log.i(TAG, "onPause start.");
super.onPause();
if (mArSession != null) {
mDisplayRotationManager.unregisterDisplayListener();
mSurfaceView.onPause();
mArSession.pause();
}
Log.i(TAG, "onPause end.");
}
@Override
protected void onDestroy() {
Log.i(TAG, "onDestroy start.");
if (mArSession != null) {
mArSession.stop();
mArSession = null;
}
super.onDestroy();
Log.i(TAG, "onDestroy end.");
}
private boolean arEngineAbilityCheck() {
boolean isInstallArEngineApk = AREnginesApk.isAREngineApkReady(this);
if (!isInstallArEngineApk && isRemindInstall) {
Toast.makeText(this, "Please agree to install.", Toast.LENGTH_LONG).show();
finish();
}
Log.d(TAG, "Is Install AR Engine Apk: " + isInstallArEngineApk);
if (!isInstallArEngineApk) {
startActivity(new Intent(this, ConnectAppMarketActivity.class));
isRemindInstall = true;
}
return AREnginesApk.isAREngineApkReady(this);
}
private void setMessageWhenError(Exception catchException) {
if (catchException instanceof ARUnavailableServiceNotInstalledException) {
startActivity(new Intent(getApplicationContext(), ConnectAppMarketActivity.class));
} else if (catchException instanceof ARUnavailableServiceApkTooOldException) {
message = "Please update HuaweiARService.apk";
} else if (catchException instanceof ARUnavailableClientSdkTooOldException) {
message = "Please update this app";
} else if (catchException instanceof ARUnSupportedConfigurationException) {
message = "The configuration is not supported by the device!";
} else {
message = "exception throw";
}
}
private void stopArSession(Exception exception) {
Log.i(TAG, "stopArSession start.");
Toast.makeText(this, message, Toast.LENGTH_LONG).show();
Log.e(TAG, "Creating session error", exception);
if (mArSession != null) {
mArSession.stop();
mArSession = null;
}
Log.i(TAG, "stopArSession end.");
}
}
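MainActivity references a GestureEvent helper class that wraps motion events for the render thread (in Huawei's sample project this lives alongside the rendering classes). The article does not list it, so here is a minimal sketch; the field and constant names are assumptions chosen to match the factory methods used above.
Code:
package com.vsm.myarapplication;

import android.view.MotionEvent;

/**
 * Wraps a motion event together with its gesture type so the render thread
 * can consume taps and scrolls from the ArrayBlockingQueue.
 */
public class GestureEvent {
    public static final int GESTURE_EVENT_TYPE_DOUBLETAP = 1;
    public static final int GESTURE_EVENT_TYPE_SINGLETAPCONFIRMED = 2;
    public static final int GESTURE_EVENT_TYPE_SCROLL = 3;

    private int type;
    private MotionEvent eventFirst;
    private MotionEvent eventSecond;
    private float distanceX;
    private float distanceY;

    private GestureEvent() {
    }

    public int getType() {
        return type;
    }

    public MotionEvent getEventFirst() {
        return eventFirst;
    }

    public MotionEvent getEventSecond() {
        return eventSecond;
    }

    public float getDistanceX() {
        return distanceX;
    }

    public float getDistanceY() {
        return distanceY;
    }

    public static GestureEvent createDoubleTapEvent(MotionEvent motionEvent) {
        GestureEvent ret = new GestureEvent();
        ret.type = GESTURE_EVENT_TYPE_DOUBLETAP;
        ret.eventFirst = motionEvent;
        return ret;
    }

    public static GestureEvent createSingleTapConfirmEvent(MotionEvent motionEvent) {
        GestureEvent ret = new GestureEvent();
        ret.type = GESTURE_EVENT_TYPE_SINGLETAPCONFIRMED;
        ret.eventFirst = motionEvent;
        return ret;
    }

    public static GestureEvent createScrollEvent(MotionEvent e1, MotionEvent e2,
            float distanceX, float distanceY) {
        GestureEvent ret = new GestureEvent();
        ret.type = GESTURE_EVENT_TYPE_SCROLL;
        ret.eventFirst = e1;
        ret.eventSecond = e2;
        ret.distanceX = distanceX;
        ret.distanceY = distanceY;
        return ret;
    }
}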
Conclusion
We can detect surfaces for multiple purposes in a simple way.
Documentation:
https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides/introduction-0000001050130900
Codelab:
https://developer.huawei.com/consumer/en/codelab/HWAREngine/index.html#0
Code Sample:
https://github.com/spartdark/hms-arengine-myarapplication

Huawei AGC App Linking integration with Unity

For more information like this, you can visit the HUAWEI Developer Forum.
Introduction:
In this article, we will cover Integration of AGC App Linking in Unity Project using Official Plugin (Huawei HMS Core App Services).
Requirements:
1. Unity Editor
2. Huawei device
3. Visual Studio
Follow the steps below.
Step 1. Create a Unity project.
Click the Unity icon.
Click NEW, select 3D, and enter the project name and location.
Click CREATE, as follows.
Step 2. Click Asset Store, search Huawei HMS Core App Services and click Import, as follows.
Step 3. Once the import is successful, verify the directory under the Assets > Huawei HMS Core App Services path, as follows.
Step 4. Click Console and create a New Project.
Step 5. Choose Project Settings > Player and edit the required options in Publishing Settings, as follows.
Step 6. Verify the files created in Step 5.
Step 7. Download agconnect-services.json and copy it to Assets > Plugins > Android, as follows.
Step 8. Update the Package Name.
Step 9. Open LauncherTemplate.gradle and add the line below.
Code:
apply plugin: 'com.huawei.agconnect'
Step 10. Open "baseProjectTemplate.gradle" and add lines, as follows.
Code:
maven {url 'https://developer.huawei.com/repo/'}
Code:
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
Step 11. Open "mainTemplate.gradle" and add lines like shown below
Code:
apply plugin: 'com.huawei.agconnect'
Code:
implementation 'com.huawei.agconnect:agconnect-core:1.4.1.300'
implementation 'com.huawei.agconnect:agconnect-applinking:1.4.0.300'
implementation 'com.huawei.hms:hianalytics:5.0.3.300'
Step 12. Open the AndroidManifest file and add the activity shown below.
Code:
<activity android:name="com.hms.hms_analytic_activity.TestAppLinksActivity"
android:theme="@style/UnityThemeSelector">
<intent-filter>
<action android:name="android.intent.action.VIEW" />
<category android:name="android.intent.category.DEFAULT" />
<category android:name="android.intent.category.BROWSABLE" />
<data android:host="developer.huawei.com" android:scheme="https" />
<data android:host="developer.huawei.com" android:scheme="http" />
</intent-filter>
</activity>
Step 13. Create TestAppLinksActivity.java and JavaCallback.java and place them in Assets > Plugins > Android.
Code:
//package com.test.applinks;
package com.hms.hms_analytic_activity;
import android.app.Activity;
import android.content.Context;
import android.content.Intent;
import android.net.Uri;
import android.os.Bundle;
import android.util.Log;
import com.unity3d.player.UnityPlayerActivity;
import android.widget.Toast;
import com.huawei.agconnect.applinking.AGConnectAppLinking;
import com.huawei.agconnect.applinking.AppLinking;
import com.huawei.agconnect.applinking.ShortAppLinking;
//import androidx.appcompat.app.AppCompatActivity;
public class TestAppLinksActivity extends UnityPlayerActivity {
public static JavaCallback callback;
private static String agcLink = null;
private static final String DOMAIN_URI_PREFIX = "https://testulink.dra.agconnect.link";
//private static final String DEEP_LINK = "rmOb2";
private static final String DEEP_LINK = "https://developer.huawei.com/consumer/cn/doc/development/AppGallery-connect-Guides?id=123";
private static String deepLinkData = "";
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
Log.e("TestAppLinksActivity", "TestAppLinksActivity onCreate>> @@");
mContext = this.getApplicationContext();
handleAppLinking(getIntent());
}
public static void setCallback(JavaCallback callback1) {
Log.e("TestAppLinksActivityl", "setCallback>> @@");
callback = callback1;
if(callback != null){
Log.e("TestAppLinksActivity", "update c # script >>>>");
callback.OnJavaCallback(deepLinkData);
}else{
Log.e("TestAppLinksActivity", "TestAppLinksActivity callback is null plz initialize >>>>");
}
}
private void handleAppLinking(Intent intent) {
AGConnectAppLinking.getInstance().getAppLinking(this, intent).addOnSuccessListener(resolvedLinkData -> {
Uri deepLink = null;
if (resolvedLinkData != null) {
deepLink = resolvedLinkData.getDeepLink();
Log.e("TestAppLinksActivity", " [email protected]!!!$$->"+deepLink.toString());
deepLinkData = deepLink.toString();
if(callback != null){
Log.e("TestAppLinksActivity", "update c # script");
callback.OnJavaCallback(deepLink.toString());
}else{
Log.e("TestAppLinksActivity", "callback is null plz initialize");
}
}
}).addOnFailureListener(e -> {
Log.e("TestAppLinksActivity", "getAppLinking:onFailure", e);
});
}
@Override
protected void onNewIntent(Intent intent) {
super.onNewIntent(intent);
setIntent(intent);
handleAppLinking(intent);
}
public static void createAppLinking() {
AppLinking.Builder builder = new AppLinking.Builder().setUriPrefix(DOMAIN_URI_PREFIX)
.setDeepLink(Uri.parse(DEEP_LINK));
//longTextView.setText(builder.buildAppLinking().getUri().toString());
Log.e("TestAppLinksActivity", "createAppLinking long -->"+builder.buildAppLinking().getUri().toString());
agcLink = builder.buildAppLinking().getUri().toString();
builder.buildShortAppLinking(ShortAppLinking.LENGTH.SHORT).addOnSuccessListener(shortAppLinking -> {
//shortTextView.setText(shortAppLinking.getShortUrl().toString());
Log.e("TestAppLinksActivity", "createAppLinking short -->"+shortAppLinking.getShortUrl().toString());
agcLink = shortAppLinking.getShortUrl().toString();
}).addOnFailureListener(e -> {
//Toast.makeText(this, e.getMessage(), Toast.LENGTH_LONG).show();
Log.e("TestAppLinksActivity", "createAppLinking failure -->"+e.getMessage());
});
}
//public void shareLink() {
public static void shareLink(Activity activity){
if (agcLink != null) {
Log.e("TestAppLinksActivity", "shareLink agcLink is not null start activity");
Intent intent = new Intent(Intent.ACTION_SEND);
intent.setType("text/plain");
intent.putExtra(Intent.EXTRA_TEXT, agcLink);
intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
activity.startActivity(intent);
//startActivity(intent);
}else{
Log.e("TestAppLinksActivity", " shareLink agcLink is null");
}
}
private static Context mContext;
public static Context getAppContext(){
return mContext;
}
}
Code explanation: https://developer.huawei.com/consumer/en/codelab/AppLinking/index.html#7
Code:
package com.hms.hms_analytic_activity;
public interface JavaCallback {
void OnJavaCallback(String index);
}
Note: The Java classes use the package name com.hms.hms_analytic_activity, and we don't create any matching folder structure; this is allowed.
For full content, you can visit https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0202361288668900258&fid=0101188387844930001

Validate your news: Feat. Huawei ML Kit (Text Image Super-Resolution)

Introduction
Quality improvement has become crucial in this era of digitalization, where all our documents are kept in folders, shared over the network, and read on digital devices.
Imagine the struggle of an elderly person who has no way to read and understand an old prescribed medical document that has become blurred and deteriorated.
Can we avoid such issues?
Let's unpack what Huawei ML Kit offers to overcome such challenges of our day-to-day life.
Huawei ML Kit provides the Text Image Super-Resolution API to improve the quality and visibility of old and blurred text in an image.
Text Image Super-Resolution can zoom in on an image that contains text and significantly improve the definition of the text.
Limitations
The text image super-resolution service requires images with a maximum resolution of 800 x 800 px and a side length of at least 64 px (a pre-check sketch follows).
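A minimal sketch of such a pre-check (isWithinTisrLimits is a hypothetical helper, not part of the ML Kit API), validating a bitmap against these limits before analysis:
Code:
// Hypothetical helper: check the source bitmap against the documented
// limits (maximum 800 x 800 px, each side at least 64 px) before calling
// the text image super-resolution analyzer.
private boolean isWithinTisrLimits(android.graphics.Bitmap bitmap) {
    if (bitmap == null) {
        return false;
    }
    int width = bitmap.getWidth();
    int height = bitmap.getHeight();
    return width >= 64 && height >= 64 && width <= 800 && height <= 800;
}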
Development Overview
Prerequisite
Must have a Huawei Developer Account
Must have Android Studio 3.0 or later
Must have a Huawei phone with HMS Core 4.0.2.300 or later
EMUI 3.0 or later
Software Requirements
Java SDK 1.7 or later
Android 5.0 or later
Preparation
Create an app or project in Huawei AppGallery Connect.
Provide the SHA key and app package name of the project in the App Information section, and enable the ML Kit API.
Download the agconnect-services.json file.
Create an Android project.
Integration
Add the following to the build.gradle (project) file, under buildscript/repositories and allprojects/repositories.
Code:
maven {url 'https://developer.huawei.com/repo/'}
Add the following to the build.gradle (app) file, under dependencies.
To use the Base SDK of ML Kit Text Image Super-Resolution, add the following dependencies:
Code:
dependencies{
// Import the base SDK.
implementation 'com.huawei.hms:ml-computer-vision-textimagesuperresolution:2.0.3.300'
}
Adding permissions
Code:
<uses-permission android:name="android.permission.CAMERA " />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
Automatically Updating the Machine Learning Model
Add the following statements to the AndroidManifest.xml file to automatically install the machine learning model on the user’s device.
Code:
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value= "tisr"/>
Development Process
This article focuses on demonstrating the capabilities of Huawei ML Kit's Text Image Super-Resolution API.
Here is an example that shows how we can integrate this powerful API to improve text-image quality and give the reader full accessibility to old, blurred newspapers from an online news directory.
TextImageView Activity: Launcher Activity
This is the main activity of the "The News Express" application.
Code:
package com.mlkitimagetext.example;
import androidx.appcompat.app.AppCompatActivity;
import android.content.Intent;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;
import com.mlkitimagetext.example.textimagesuperresolution.TextImageSuperResolutionActivity;
public class TextImageView extends AppCompatActivity {
Button NewsExpress;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_text_image_view);
NewsExpress = findViewById(R.id.bt1);
NewsExpress.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
startActivity(new Intent(TextImageView.this, TextImageSuperResolutionActivity.class));
}
});
}
}
activity_text_image_view.xml
This is the layout file for the above activity class.
Code:
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:background="@drawable/im3">
<LinearLayout
android:id="@+id/ll_buttons"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_marginTop="200dp"
android:orientation="vertical">
<Button
android:id="@+id/bt1"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:background="@android:color/transparent"
android:layout_gravity="center"
android:text="The News Express"
android:textAllCaps="false"
android:textStyle="bold"
android:textSize="34dp"
android:textColor="@color/mlkit_bcr_text_color_white"></Button>
<TextView
android:layout_width="wrap_content"
android:layout_height="match_parent"
android:textStyle="bold"
android:text="Validate Your News"
android:textSize="20sp"
android:layout_gravity="center"
android:textColor="#9fbfdf"/>
</LinearLayout>
</RelativeLayout>
TextImageSuperResolutionActivity
This activity class performs the following actions:
Image picker implementation to pick an image from the gallery.
Convert the selected image to a Bitmap.
Create a text image super-resolution analyzer.
Create an MLFrame object by using android.graphics.Bitmap.
Perform super-resolution processing on the image with text.
Stop the analyzer to release detection resources.
Code:
package com.mlkitimagetext.example;
import android.content.Intent;
import android.graphics.Bitmap;
import android.net.Uri;
import android.os.Bundle;
import android.provider.MediaStore;
import android.view.View;
import android.widget.ImageView;
import android.widget.Toast;
import com.huawei.hmf.tasks.OnFailureListener;
import com.huawei.hmf.tasks.OnSuccessListener;
import com.huawei.hmf.tasks.Task;
import com.huawei.hms.mlsdk.common.MLException;
import com.huawei.hms.mlsdk.common.MLFrame;
import com.huawei.hms.mlsdk.textimagesuperresolution.MLTextImageSuperResolution;
import com.huawei.hms.mlsdk.textimagesuperresolution.MLTextImageSuperResolutionAnalyzer;
import com.huawei.hms.mlsdk.textimagesuperresolution.MLTextImageSuperResolutionAnalyzerFactory;
import com.mlkitimagetext.example.R;
import androidx.appcompat.app.AppCompatActivity;
import java.io.IOException;
public class TextImageSuperResolutionActivity<button> extends AppCompatActivity implements View.OnClickListener {
private static final String TAG = "TextSuperResolutionActivity";
private MLTextImageSuperResolutionAnalyzer analyzer;
private static final int INDEX_3X = 1;
private static final int INDEX_ORIGINAL = 2;
private ImageView imageView;
private Bitmap srcBitmap;
Uri imageUri;
Boolean ImageSetupFlag = false;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_text_super_resolution);
imageView = findViewById(R.id.image);
imageView.setOnClickListener(this);
findViewById(R.id.btn_load).setOnClickListener(this);
createAnalyzer();
}
@Override
public void onClick(View view) {
if (view.getId() == R.id.btn_load) {
openGallery();
}else if (view.getId() == R.id.image)
{
if(ImageSetupFlag != true)
{
detectImage(INDEX_3X);
}else {
detectImage(INDEX_ORIGINAL);
ImageSetupFlag = false;
}
}
}
private void openGallery() {
Intent gallery = new Intent(Intent.ACTION_PICK, MediaStore.Images.Media.EXTERNAL_CONTENT_URI);
startActivityForResult(gallery, 1);
}
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data){
super.onActivityResult(requestCode, resultCode, data);
if (resultCode == RESULT_OK && requestCode == 1){
imageUri = data.getData();
try {
srcBitmap = MediaStore.Images.Media.getBitmap(this.getContentResolver(), imageUri);
} catch (IOException e) {
e.printStackTrace();
}
//BitmapFactory.decodeResource(getResources(), R.drawable.new1);
imageView.setImageURI(imageUri);
}
}
private void release() {
if (analyzer == null) {
return;
}
analyzer.stop();
}
private void detectImage(int type) {
if (type == INDEX_ORIGINAL) {
setImage(srcBitmap);
return;
}
if (analyzer == null) {
return;
}
// Create an MLFrame by using the bitmap.
MLFrame frame = new MLFrame.Creator().setBitmap(srcBitmap).create();
Task<MLTextImageSuperResolution> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<MLTextImageSuperResolution>() {
public void onSuccess(MLTextImageSuperResolution result) {
// success.
Toast.makeText(getApplicationContext(), "Success", Toast.LENGTH_SHORT).show();
setImage(result.getBitmap());
ImageSetupFlag = true;
}
})
.addOnFailureListener(new OnFailureListener() {
public void onFailure(Exception e) {
// failure.
if (e instanceof MLException) {
MLException mlException = (MLException) e;
// Get the error code, developers can give different page prompts according to the error code.
int errorCode = mlException.getErrCode();
// Get the error message, developers can combine the error code to quickly locate the problem.
String errorMessage = mlException.getMessage();
Toast.makeText(getApplicationContext(), "Error:" + errorCode + " Message:" + errorMessage, Toast.LENGTH_SHORT).show();
} else {
// Other exceptions.
Toast.makeText(getApplicationContext(), "Failed:" + e.getMessage(), Toast.LENGTH_SHORT).show();
}
}
});
}
private void setImage(final Bitmap bitmap) {
imageView.setImageBitmap(bitmap);
}
private void createAnalyzer() {
analyzer = MLTextImageSuperResolutionAnalyzerFactory.getInstance().getTextImageSuperResolutionAnalyzer();
}
@Override
protected void onDestroy() {
super.onDestroy();
if (srcBitmap != null) {
srcBitmap.recycle();
}
release();
}
}
For more details, you can check https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0202388336667910498&fid=0101187876626530001
Which image formats are supported?

Demystifying Document Skew Correction feat. HUAWEI ML KIT

Prolusion
This era is revolutionary for science and research, as most innovation is driven by consumer needs.
We all know that document scanning is a routine errand for most of us and a dire need in today's digital world.
For such needs, we often require a powerful mechanism that can correct the imperfections and skew in our documents.
Document Skew Correction is a technique that corrects tilted images to the right-facing angle, which further improves the visibility of the image.
Huawei ML Kit offers a robust API for skew correction which enables automatic position identification of a document in an image and corrects the shooting angle. It also allows users to customize the edge points.
Suggestions
It is recommended that the shooting angle of the image be within 30 degrees.
It is recommended that the image size be within the range of 320 x 320 px to 1920 x 1920 px (see the pre-check sketch after this list).
The skew detection API supports the JPG, JPEG, and PNG image formats.
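Based on these suggestions, here is a minimal pre-check sketch (isWithinSkewLimits is a hypothetical helper, not part of the ML Kit API) that validates the bitmap size before detection:
Code:
// Hypothetical helper: validate the input bitmap against the documented
// size range (320 x 320 px to 1920 x 1920 px) before skew detection.
private boolean isWithinSkewLimits(android.graphics.Bitmap bitmap) {
    if (bitmap == null) {
        return false;
    }
    int width = bitmap.getWidth();
    int height = bitmap.getHeight();
    return width >= 320 && height >= 320 && width <= 1920 && height <= 1920;
}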
Development Overview
Prerequisite
Must have a Huawei Developer Account
Must have Android Studio 3.0 or later
Must have a Huawei phone with HMS Core 4.0.2.300 or later
EMUI 3.0 or later
Software Requirements
Java SDK 1.7 or later
Android 5.0 or later
Preparation
Create an app or project in Huawei AppGallery Connect.
Provide the SHA key and app package name of the project in the App Information section, and enable the ML Kit API.
Download the agconnect-services.json file.
Create an Android project.
Integration
Add the following to the build.gradle (project) file, under buildscript/repositories and allprojects/repositories.
Code:
maven {url 'https://developer.huawei.com/repo/'}
Add the following to the build.gradle (app) file, under dependencies.
To use the Base SDK of ML Kit Document Skew Correction, add the following dependencies:
Code:
dependencies{
// Import the base SDK.
implementation 'com.huawei.hms:ml-computer-vision-documentskew:2.0.4.300'
}
To use the Full SDK of ML Kit Document Skew Correction, add the following dependencies:
Code:
dependencies{
// Import the model package.
implementation 'com.huawei.hms:ml-computer-vision-documentskew-model:2.0.4.300'
}
Adding permissions
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
Automatically Updating the Machine Learning Model
Add the following statements to the AndroidManifest.xml file to automatically install the machine learning model on the user's device.
Code:
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value= "dsc"/>
Development Process
This article focuses on demonstrating the capabilities of Huawei ML Kit's Document Skew Correction API.
Here is the example of the "SUPER DOC" application, which allows users to capture images or fetch them from the device's local memory and correct them. It shows how we can integrate this powerful API to correct a skewed document image to the right angle, which eventually improves the readability of the document.
SkewDetect Activity
This activity is responsible for capturing and fetching images, detecting them for skew correction, aligning them, and providing the aligned document image as output.
Code:
package com.mlkit.documentSkewCorrection;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import android.content.Intent;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Point;
import android.net.Uri;
import android.os.Bundle;
import android.provider.MediaStore;
import android.util.Log;
import android.view.View;
import android.widget.ImageView;
import android.widget.Toast;
import androidx.appcompat.app.AppCompatActivity;
import com.google.android.material.floatingactionbutton.FloatingActionButton;
import com.huawei.hmf.tasks.OnFailureListener;
import com.huawei.hmf.tasks.OnSuccessListener;
import com.huawei.hmf.tasks.Task;
import com.huawei.hms.mlsdk.common.MLFrame;
import com.huawei.hms.mlsdk.dsc.MLDocumentSkewCorrectionConstant;
import com.huawei.hms.mlsdk.dsc.MLDocumentSkewCorrectionCoordinateInput;
import com.huawei.hms.mlsdk.dsc.MLDocumentSkewCorrectionResult;
import com.huawei.hms.mlsdk.dsc.MLDocumentSkewCorrectionAnalyzer;
import com.huawei.hms.mlsdk.dsc.MLDocumentSkewCorrectionAnalyzerFactory;
import com.huawei.hms.mlsdk.dsc.MLDocumentSkewCorrectionAnalyzerSetting;
import com.huawei.hms.mlsdk.dsc.MLDocumentSkewDetectResult;
import com.mlkit.documentSkewCorrection.R;
public class SkewDetect extends AppCompatActivity implements View.OnClickListener {
private static final String TAG = "SkewDetectActivity";
private MLDocumentSkewCorrectionAnalyzer analyzer;
private ImageView mImageView;
private Bitmap bitmap;
Uri imageUri;
private MLDocumentSkewCorrectionCoordinateInput input;
private MLFrame mlFrame;
Boolean FlagCameraClickDone = false;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
this.setContentView(R.layout.activity_document_skew_correction);
this.findViewById(R.id.image_refine).setOnClickListener(this);
this.mImageView = this.findViewById(R.id.image_refine_result);
if(FlagCameraClickDone)
{
this.findViewById(R.id.image_refine).setVisibility(View.VISIBLE);
}
else
{
this.findViewById(R.id.image_refine).setVisibility(View.GONE);
}
// Create the setting.
MLDocumentSkewCorrectionAnalyzerSetting setting = new MLDocumentSkewCorrectionAnalyzerSetting
.Factory()
.create();
// Get the analyzer.
this.analyzer = MLDocumentSkewCorrectionAnalyzerFactory.getInstance().getDocumentSkewCorrectionAnalyzer(setting);
FloatingActionButton fab = (FloatingActionButton) findViewById(R.id.fab);
fab.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
FlagCameraClickDone = false;
Intent gallery = new Intent(Intent.ACTION_PICK, MediaStore.Images.Media.EXTERNAL_CONTENT_URI);
startActivityForResult(gallery, 1);
}
});
}
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data){
super.onActivityResult(requestCode, resultCode, data);
if (resultCode == RESULT_OK && requestCode == 1){
imageUri = data.getData();
try {
bitmap = MediaStore.Images.Media.getBitmap(this.getContentResolver(), imageUri);
// Create a MLFrame by using the bitmap.
this.mlFrame = new MLFrame.Creator().setBitmap(this.bitmap).create();
} catch (IOException e) {
e.printStackTrace();
}
//BitmapFactory.decodeResource(getResources(), R.drawable.new1);
FlagCameraClickDone = true;
this.findViewById(R.id.image_refine).setVisibility(View.VISIBLE);
mImageView.setImageURI(imageUri);
}
}
@Override
public void onClick(View v) {
this.analyzer();
}
private void analyzer() {
// Call document skew detect interface to get coordinate data
Task<MLDocumentSkewDetectResult> detectTask = this.analyzer.asyncDocumentSkewDetect(this.mlFrame);
detectTask.addOnSuccessListener(new OnSuccessListener<MLDocumentSkewDetectResult>() {
@Override
public void onSuccess(MLDocumentSkewDetectResult detectResult) {
Log.e(TAG, detectResult.getResultCode() + ":");
if (detectResult != null) {
int resultCode = detectResult.getResultCode();
// Detect success.
if (resultCode == MLDocumentSkewCorrectionConstant.SUCCESS) {
Point leftTop = detectResult.getLeftTopPosition();
Point rightTop = detectResult.getRightTopPosition();
Point leftBottom = detectResult.getLeftBottomPosition();
Point rightBottom = detectResult.getRightBottomPosition();
List<Point> coordinates = new ArrayList<>();
coordinates.add(leftTop);
coordinates.add(rightTop);
coordinates.add(rightBottom);
coordinates.add(leftBottom);
SkewDetect.this.setDetectData(new MLDocumentSkewCorrectionCoordinateInput(coordinates));
SkewDetect.this.refineImg();
} else if (resultCode == MLDocumentSkewCorrectionConstant.IMAGE_DATA_ERROR) {
// Parameters error.
Log.e(TAG, "Parameters error!");
SkewDetect.this.displayFailure();
} else if (resultCode == MLDocumentSkewCorrectionConstant.DETECT_FAILD) {
// Detect failure.
Log.e(TAG, "Detect failed!");
SkewDetect.this.displayFailure();
}
} else {
// Detect exception.
Log.e(TAG, "Detect exception!");
SkewDetect.this.displayFailure();
}
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
// Processing logic for detect failure.
Log.e(TAG, e.getMessage() + "");
SkewDetect.this.displayFailure();
}
});
}
// Show result
private void displaySuccess(MLDocumentSkewCorrectionResult refineResult) {
if (this.bitmap == null) {
this.displayFailure();
return;
}
// Draw the portrait with a transparent background.
Bitmap corrected = refineResult.getCorrected();
if (corrected != null) {
this.mImageView.setImageBitmap(corrected);
} else {
this.displayFailure();
}
}
private void displayFailure() {
Toast.makeText(this.getApplicationContext(), "Fail", Toast.LENGTH_SHORT).show();
}
private void setDetectData(MLDocumentSkewCorrectionCoordinateInput input) {
this.input = input;
}
// Refine image
private void refineImg() {
// Call refine image interface
Task<MLDocumentSkewCorrectionResult> correctionTask = this.analyzer.asyncDocumentSkewCorrect(this.mlFrame, this.input);
correctionTask.addOnSuccessListener(new OnSuccessListener<MLDocumentSkewCorrectionResult>() {
@Override
public void onSuccess(MLDocumentSkewCorrectionResult refineResult) {
if (refineResult != null) {
int resultCode = refineResult.getResultCode();
if (resultCode == MLDocumentSkewCorrectionConstant.SUCCESS) {
SkewDetect.this.displaySuccess(refineResult);
} else if (resultCode == MLDocumentSkewCorrectionConstant.IMAGE_DATA_ERROR) {
// Parameters error.
Log.e(TAG, "Parameters error!");
SkewDetect.this.displayFailure();
} else if (resultCode == MLDocumentSkewCorrectionConstant.CORRECTION_FAILD) {
// Correct failure.
Log.e(TAG, "Correct failed!");
SkewDetect.this.displayFailure();
}
} else {
// Correct exception.
Log.e(TAG, "Correct exception!");
SkewDetect.this.displayFailure();
}
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
// Processing logic for refine failure.
SkewDetect.this.displayFailure();
}
});
}
@Override
protected void onDestroy() {
super.onDestroy();
if (this.analyzer != null) {
try {
this.analyzer.stop();
} catch (IOException e) {
Log.e(SkewDetect.TAG, "Stop failed: " + e.getMessage());
}
}
}
}
SkewDetect Activity Layout
This layout defines the UI of the application.
Code:
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:background="@drawable/shape"
tools:context="com.huawei.mlkit.example.face.StillFaceAnalyseActivity">
<!-- ImageView that displays the corrected bitmap -->
<ImageView
android:id="@+id/image_refine_result"
android:layout_width="500dp"
android:layout_height="300dp"
android:layout_below="@+id/image_foreground"
android:layout_marginTop="20dp" />
<RelativeLayout
android:id="@+id/relativeLayout1"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_margin="20dp">
<!-- Button that invokes the ML Kit skew correction API -->
<Button
android:id="@+id/imagecorrection"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_centerHorizontal="true"
android:layout_alignParentBottom="true"
android:layout_gravity="start|bottom"
android:background="@color/emui_color_gray_1"
android:text=" Skew Correction "
android:textAllCaps="false"
android:textColor="@color/emui_color_gray_7" />
<!-- FloatingActionButton to pick an image from the gallery -->
<com.google.android.material.floatingactionbutton.FloatingActionButton
android:id="@+id/fab"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentRight="true"
android:layout_alignParentBottom="true"
android:layout_gravity="end|bottom"
android:contentDescription="@string/camera"
android:outlineProvider="none"
android:src="@drawable/gall"
app:backgroundTint="@color/emui_color_gray_1"
app:borderWidth="0dp"
app:elevation="2dp" />
<!-- FloatingActionButton to capture an image with the camera -->
<com.google.android.material.floatingactionbutton.FloatingActionButton
android:id="@+id/cam"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentBottom="true"
android:layout_gravity="bottom"
android:layout_alignParentLeft="true"
android:contentDescription="@string/camera"
android:outlineProvider="none"
android:src="@drawable/icon_cam"
app:backgroundTint="@color/emui_color_gray_1"
app:borderWidth="0dp"
app:elevation="2dp" />
</RelativeLayout>
</RelativeLayout>
Results
Conclusion
In this article, we took a small step to create and demonstrate the integration of the Document Skew Correction API from Huawei ML Kit for better document image readability.
The upcoming article will cover the integration of multiple ML Kit APIs in one application.
Stay tuned!!
References
https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides/documentskewcorrection-0000001051703156

Pygmy Collection Application Part 7 (Document Skew correction Huawei HiAI)

Introduction​
If you are new to this application, please follow my previous articles:
Pygmy collection application Part 1 (Account kit)
Intermediate: Pygmy Collection Application Part 2 (Ads Kit)
Intermediate: Pygmy Collection Application Part 3 (Crash service)
Intermediate: Pygmy Collection Application Part 4 (Analytics Kit Custom Events)
Intermediate: Pygmy Collection Application Part 5 (Safety Detect)
Intermediate: Pygmy Collection Application Part 6 (Room database)
In this article, we will learn how to integrate Huawei document skew correction using Huawei HiAI into the Pygmy Collection finance application.
In the Pygmy Collection application, agents update customers' KYC records. The captured documents must be legible, so we integrate document skew correction to adjust the image angle.
Users commonly struggle while uploading documents or filling in forms because of poorly captured images. This application lets them take a picture with the camera or pick one from the gallery, and it automatically detects the document in the image.
Document skew correction improves the document photography process by automatically identifying the document in an image and returning its position in the original image.
It also adjusts the shooting angle of the document based on that position information. This function performs well in scenarios where old photos, paper letters, and drawings are photographed for electronic storage.
Features
Document detection: Recognizes documents in images and returns the location information of the documents in the original images.
Document correction: Corrects the document shooting angle based on the document location information in the original images, where areas to be corrected can be customized.
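To make these two features concrete before the integration steps, here is a minimal sketch of the detect-then-correct flow against the ML Kit API used later in this article. The method name and the bitmap parameter are illustrative only; the full activity listing below follows the same two-step pattern, additionally letting the user adjust the detected corners on a custom view.
Java:
// Minimal sketch: detect the document corners, then correct the image with them.
private void detectAndCorrect(Bitmap bitmap) {
    // Create the analyzer (same factory calls as in the activity listing below).
    MLDocumentSkewCorrectionAnalyzerSetting setting =
            new MLDocumentSkewCorrectionAnalyzerSetting.Factory().create();
    final MLDocumentSkewCorrectionAnalyzer analyzer =
            MLDocumentSkewCorrectionAnalyzerFactory.getInstance().getDocumentSkewCorrectionAnalyzer(setting);
    final MLFrame frame = MLFrame.fromBitmap(bitmap);
    // Step 1: detection returns the four corner points of the document.
    analyzer.asyncDocumentSkewDetect(frame).addOnSuccessListener(detectResult -> {
        if (detectResult.getResultCode() != MLDocumentSkewCorrectionConstant.SUCCESS) {
            return;
        }
        List<Point> corners = new ArrayList<>();
        corners.add(detectResult.getLeftTopPosition());
        corners.add(detectResult.getRightTopPosition());
        corners.add(detectResult.getRightBottomPosition());
        corners.add(detectResult.getLeftBottomPosition());
        // Step 2: correction re-projects the document using those corners.
        analyzer.asyncDocumentSkewCorrect(frame, new MLDocumentSkewCorrectionCoordinateInput(corners))
                .addOnSuccessListener(correction -> {
                    Bitmap corrected = correction.getCorrected();
                    // Use the corrected bitmap, e.g. display or upload it.
                });
    });
}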
How to integrate Document Skew Correction
1. Configure the application on the AGC.
2. Apply for HiAI Engine Library.
3. Client application development process.
Configure application on the AGC
Follow the steps.
Step 1: We need to register as a developer account in AppGallery Connect. If you are already a developer, ignore this step.
Step 2: Create an app by referring to Creating a Project and Creating an App in the Project
Step 3: Set the data storage location based on the current location.
Step 4: Generating a Signing Certificate Fingerprint.
Step 5: Configuring the Signing Certificate Fingerprint.
Step 6: Download the agconnect-services.json file and paste it into the app's root directory.
Apply for HiAI Engine Library
What is Huawei HiAI?
HUAWEI HiAI is Huawei's mobile-terminal-oriented artificial intelligence (AI) computing platform. It opens up three layers of capabilities: service capabilities, application capabilities, and chip capabilities. This three-layer open platform, which integrates terminals, chips, and the cloud, brings a richer experience to users and developers.
How to apply for HiAI Engine?
Follow the steps.
Step 1: Navigate to this URL, choose App Service > Development and click HUAWEI HiAI.
Step 2: Click Apply for HUAWEI HiAI kit.
Step 3: Enter required information like Product name and Package name, click Next button.
Step 4: Verify the application details and click Submit button.
Step 5: Click the Download SDK button to open the SDK list.
Step 6: Unzip the downloaded SDK and add it to your Android project under the libs folder.
Step 7: Add the JAR/AAR file dependencies to the app-level build.gradle file.
Code:
implementation fileTree(include: ['*.aar', '*.jar'], dir: 'libs')
implementation 'com.google.code.gson:gson:2.8.6'
repositories {
flatDir {
dirs 'libs'
}
}
Client application development process
Follow the steps.
Step 1: Create an Android application in Android Studio (or any IDE you prefer).
Step 2: Add the app-level Gradle dependencies. In the project view, choose Android > app > build.gradle.
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Root level gradle dependencies.
Code:
// In buildscript > repositories (and also in allprojects > repositories):
maven { url 'https://developer.huawei.com/repo/' }
// In buildscript > dependencies:
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
Step 3: Add permissions in AndroidManifest.xml.
XML:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_INTERNAL_STORAGE" />
<uses-permission android:name="android.permission.CAMERA" />
Step 4: Build application.
Select image dialog
Java:
private void selectImage() {
final CharSequence[] items = {"Take Photo", "Choose from Library",
"Cancel"};
AlertDialog.Builder builder = new AlertDialog.Builder(KycUpdateActivity.this);
builder.setTitle("Add Photo!");
builder.setItems(items, new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int item) {
if (items[item].equals("Take Photo")) {
operate_type = TAKE_PHOTO;
requestPermission(Manifest.permission.CAMERA);
} else if (items[item].equals("Choose from Library")) {
operate_type = SELECT_ALBUM;
requestPermission(Manifest.permission.READ_EXTERNAL_STORAGE);
} else if (items[item].equals("Cancel")) {
dialog.dismiss();
}
}
});
builder.show();
}
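The dialog above delegates to a requestPermission() helper that is not shown in this post. Below is a minimal sketch of what such a helper might look like, assuming AndroidX ActivityCompat/ContextCompat and a hypothetical request code; once the permission is granted, it hands off to startSuperResolutionActivity(), shown next.
Java:
// Hypothetical helper; the request code and the hand-off are assumptions,
// not part of the original post.
private static final int PERMISSION_REQUEST_CODE = 200;

private void requestPermission(String permission) {
    if (ContextCompat.checkSelfPermission(this, permission)
            == PackageManager.PERMISSION_GRANTED) {
        startSuperResolutionActivity();
        return;
    }
    ActivityCompat.requestPermissions(this, new String[]{permission}, PERMISSION_REQUEST_CODE);
}

@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
        @NonNull int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    if (requestCode == PERMISSION_REQUEST_CODE && grantResults.length > 0
            && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
        startSuperResolutionActivity();
    }
}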
Open the Document Skew Correction activity
Java:
private void startSuperResolutionActivity() {
Intent intent = new Intent(KycUpdateActivity.this, DocumentSkewCorrectionActivity.class);
intent.putExtra("operate_type", operate_type);
startActivityForResult(intent, DOCUMENT_SKEW_CORRECTION_REQUEST);
}
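Back in KycUpdateActivity, the corrected result can be consumed in onActivityResult(). This is a sketch under the assumption that the "status" extra set by DocumentSkewCorrectionActivity (see the listing below) signals success; reloadKycImage() is a hypothetical refresh helper.
Java:
@Override
protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == DOCUMENT_SKEW_CORRECTION_REQUEST
            && resultCode == Activity.RESULT_OK
            && data != null
            && "success".equals(data.getStringExtra("status"))) {
        // The corrected bitmap was saved by DocumentSkewCorrectionActivity
        // (see UserDataUtils.saveBitMap in the listing below); refresh the KYC preview.
        reloadKycImage(); // hypothetical helper
    }
}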
DocumentSkewCorrectionActivity.java
Java:
import androidx.annotation.Nullable;
import androidx.appcompat.app.AppCompatActivity;
import android.app.Activity;
import android.app.AlertDialog;
import android.app.ProgressDialog;
import android.content.ContentValues;
import android.content.DialogInterface;
import android.content.Intent;
import android.graphics.Bitmap;
import android.graphics.Matrix;
import android.graphics.Point;
import android.media.ExifInterface;
import android.net.Uri;
import android.os.Bundle;
import android.os.Handler;
import android.provider.MediaStore;
import android.util.Log;
import android.view.View;
import android.widget.ImageButton;
import android.widget.ImageView;
import android.widget.RelativeLayout;
import android.widget.TextView;
import android.widget.Toast;
import com.huawei.hmf.tasks.OnFailureListener;
import com.huawei.hmf.tasks.OnSuccessListener;
import com.huawei.hmf.tasks.Task;
import com.huawei.hms.mlsdk.common.MLFrame;
import com.huawei.hms.mlsdk.dsc.MLDocumentSkewCorrectionAnalyzer;
import com.huawei.hms.mlsdk.dsc.MLDocumentSkewCorrectionAnalyzerFactory;
import com.huawei.hms.mlsdk.dsc.MLDocumentSkewCorrectionAnalyzerSetting;
import com.huawei.hms.mlsdk.dsc.MLDocumentSkewCorrectionCoordinateInput;
import com.huawei.hms.mlsdk.dsc.MLDocumentSkewCorrectionResult;
import com.huawei.hms.mlsdk.dsc.MLDocumentSkewDetectResult;
import com.shea.pygmycollection.R;
import com.shea.pygmycollection.customview.DocumentCorrectImageView;
import com.shea.pygmycollection.utils.FileUtils;
import com.shea.pygmycollection.utils.UserDataUtils;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
public class DocumentSkewCorrectionActivity extends AppCompatActivity implements View.OnClickListener {
private static final String TAG = "SuperResolutionActivity";
private static final int REQUEST_SELECT_IMAGE = 1000;
private static final int REQUEST_TAKE_PHOTO = 1;
private ImageView desImageView;
private ImageButton adjustImgButton;
private Bitmap srcBitmap;
private Bitmap getCompressesBitmap;
private Uri imageUri;
private MLDocumentSkewCorrectionAnalyzer analyzer;
private Bitmap corrected;
private ImageView back;
private Task<MLDocumentSkewCorrectionResult> correctionTask;
private DocumentCorrectImageView documetScanView;
private Point[] _points;
private RelativeLayout layout_image;
private MLFrame frame;
TextView selectTv;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_document_skew_correction);
//setStatusBarColor(this, R.color.black);
analyzer = createAnalyzer();
adjustImgButton = findViewById(R.id.adjust);
layout_image = findViewById(R.id.layout_image);
desImageView = findViewById(R.id.des_image);
documetScanView = findViewById(R.id.iv_documetscan);
back = findViewById(R.id.back);
selectTv = findViewById(R.id.selectTv);
adjustImgButton.setOnClickListener(this);
findViewById(R.id.back).setOnClickListener(this);
selectTv.setOnClickListener(this);
findViewById(R.id.rl_chooseImg).setOnClickListener(this);
back.setOnClickListener(this);
int operate_type = getIntent().getIntExtra("operate_type", 101);
if (operate_type == 101) {
takePhoto();
} else if (operate_type == 102) {
selectLocalImage();
}
}
private String[] chooseTitles;
@Override
public void onClick(View v) {
if (v.getId() == R.id.adjust) {
List<Point> points = new ArrayList<>();
Point[] cropPoints = documetScanView.getCropPoints();
if (cropPoints != null) {
points.add(cropPoints[0]);
points.add(cropPoints[1]);
points.add(cropPoints[2]);
points.add(cropPoints[3]);
}
MLDocumentSkewCorrectionCoordinateInput coordinateData = new MLDocumentSkewCorrectionCoordinateInput(points);
getDetectdetectResult(coordinateData, frame);
} else if (v.getId() == R.id.rl_chooseImg) {
chooseTitles = new String[]{getResources().getString(R.string.take_photo), getResources().getString(R.string.select_from_album)};
AlertDialog.Builder builder = new AlertDialog.Builder(this);
builder.setItems(chooseTitles, new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialogInterface, int position) {
if (position == 0) {
takePhoto();
} else {
selectLocalImage();
}
}
});
builder.create().show();
} else if (v.getId() == R.id.selectTv) {
if (corrected == null) {
Toast.makeText(this, "Document Skew correction is not yet success", Toast.LENGTH_SHORT).show();
return;
} else {
ProgressDialog pd = new ProgressDialog(this);
pd.setMessage("Please wait...");
pd.show();
//UserDataUtils.saveBitMap(this, corrected);
Intent intent = new Intent();
intent.putExtra("status", "success");
new Handler().postDelayed(new Runnable() {
@Override
public void run() {
if (pd != null && pd.isShowing()) {
pd.dismiss();
}
setResult(Activity.RESULT_OK, intent);
finish();
}
}, 3000);
}
} else if (v.getId() == R.id.back) {
finish();
}
}
private MLDocumentSkewCorrectionAnalyzer createAnalyzer() {
MLDocumentSkewCorrectionAnalyzerSetting setting = new MLDocumentSkewCorrectionAnalyzerSetting
.Factory()
.create();
return MLDocumentSkewCorrectionAnalyzerFactory.getInstance().getDocumentSkewCorrectionAnalyzer(setting);
}
private void takePhoto() {
layout_image.setVisibility(View.GONE);
Intent takePictureIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
if (takePictureIntent.resolveActivity(this.getPackageManager()) != null) {
ContentValues values = new ContentValues();
values.put(MediaStore.Images.Media.TITLE, "New Picture");
values.put(MediaStore.Images.Media.DESCRIPTION, "From Camera");
this.imageUri = this.getContentResolver().insert(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, values);
takePictureIntent.putExtra(MediaStore.EXTRA_OUTPUT, this.imageUri);
this.startActivityForResult(takePictureIntent, REQUEST_TAKE_PHOTO);
}
}
private void selectLocalImage() {
layout_image.setVisibility(View.GONE);
Intent intent = new Intent(Intent.ACTION_PICK, null);
intent.setDataAndType(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, "image/*");
startActivityForResult(intent, REQUEST_SELECT_IMAGE);
}
@Override
protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
super.onActivityResult(requestCode, resultCode, data);
if (requestCode == REQUEST_SELECT_IMAGE && resultCode == Activity.RESULT_OK) {
imageUri = data.getData();
try {
if (imageUri != null) {
srcBitmap = MediaStore.Images.Media.getBitmap(getContentResolver(), imageUri);
String realPathFromURI = getRealPathFromURI(imageUri);
int i = readPictureDegree(realPathFromURI);
Bitmap spBitmap = rotateImageView(i, srcBitmap);
Matrix matrix = new Matrix();
matrix.setScale(0.5f, 0.5f);
getCompressesBitmap = Bitmap.createBitmap(spBitmap, 0, 0, spBitmap.getWidth(),
spBitmap.getHeight(), matrix, true);
reloadAndDetectImage();
}
} catch (IOException e) {
Log.e(TAG, e.getMessage());
}
} else if (requestCode == REQUEST_TAKE_PHOTO && resultCode == Activity.RESULT_OK) {
try {
if (imageUri != null) {
srcBitmap = MediaStore.Images.Media.getBitmap(getContentResolver(), imageUri);
String realPathFromURI = getRealPathFromURI(imageUri);
int i = readPictureDegree(realPathFromURI);
Bitmap spBitmap = rotateImageView(i, srcBitmap);
Matrix matrix = new Matrix();
matrix.setScale(0.5f, 0.5f);
getCompressesBitmap = Bitmap.createBitmap(spBitmap, 0, 0, spBitmap.getWidth(),
spBitmap.getHeight(), matrix, true);
reloadAndDetectImage();
}
} catch (IOException e) {
Log.e(TAG, e.getMessage());
}
} else if (requestCode == REQUEST_SELECT_IMAGE && resultCode == Activity.RESULT_CANCELED) {
finish();
}
}
private void reloadAndDetectImage() {
if (imageUri == null) {
return;
}
frame = MLFrame.fromBitmap(getCompressesBitmap);
Task<MLDocumentSkewDetectResult> task = analyzer.asyncDocumentSkewDetect(frame);
task.addOnSuccessListener(new OnSuccessListener<MLDocumentSkewDetectResult>() {
public void onSuccess(MLDocumentSkewDetectResult result) {
if (result.getResultCode() != 0) {
corrected = null;
Toast.makeText(DocumentSkewCorrectionActivity.this, "The picture does not meet the requirements.", Toast.LENGTH_SHORT).show();
} else {
// Recognition success.
Point leftTop = result.getLeftTopPosition();
Point rightTop = result.getRightTopPosition();
Point leftBottom = result.getLeftBottomPosition();
Point rightBottom = result.getRightBottomPosition();
_points = new Point[4];
_points[0] = leftTop;
_points[1] = rightTop;
_points[2] = rightBottom;
_points[3] = leftBottom;
layout_image.setVisibility(View.GONE);
documetScanView.setImageBitmap(getCompressesBitmap);
documetScanView.setPoints(_points);
}
}
}).addOnFailureListener(new OnFailureListener() {
public void onFailure(Exception e) {
Toast.makeText(getApplicationContext(), e.getMessage(), Toast.LENGTH_SHORT).show();
}
});
}
private void getDetectdetectResult(MLDocumentSkewCorrectionCoordinateInput coordinateData, MLFrame frame) {
try {
correctionTask = analyzer.asyncDocumentSkewCorrect(frame, coordinateData);
} catch (Exception e) {
Log.e(TAG, "The image does not meet the detection requirements.");
}
try {
correctionTask.addOnSuccessListener(new OnSuccessListener<MLDocumentSkewCorrectionResult>() {
@Override
public void onSuccess(MLDocumentSkewCorrectionResult refineResult) {
// The check is successful.
if (refineResult != null && refineResult.getResultCode() == 0) {
corrected = refineResult.getCorrected();
layout_image.setVisibility(View.VISIBLE);
desImageView.setImageBitmap(corrected);
UserDataUtils.saveBitMap(DocumentSkewCorrectionActivity.this, corrected);
} else {
Toast.makeText(DocumentSkewCorrectionActivity.this, "The check fails.", Toast.LENGTH_SHORT).show();
}
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
Toast.makeText(DocumentSkewCorrectionActivity.this, "The check fails.", Toast.LENGTH_SHORT).show();
}
});
} catch (Exception e) {
Log.e(TAG, "Please set an image.");
}
}
@Override
protected void onDestroy() {
super.onDestroy();
if (srcBitmap != null) {
srcBitmap.recycle();
}
if (getCompressesBitmap != null) {
getCompressesBitmap.recycle();
}
if (corrected != null) {
corrected.recycle();
}
if (analyzer != null) {
try {
analyzer.stop();
} catch (IOException e) {
Log.e(TAG, e.getMessage());
}
}
}
public static int readPictureDegree(String path) {
int degree = 0;
try {
ExifInterface exifInterface = new ExifInterface(path);
int orientation = exifInterface.getAttributeInt(
ExifInterface.TAG_ORIENTATION,
ExifInterface.ORIENTATION_NORMAL);
switch (orientation) {
case ExifInterface.ORIENTATION_ROTATE_90:
degree = 90;
break;
case ExifInterface.ORIENTATION_ROTATE_180:
degree = 180;
break;
case ExifInterface.ORIENTATION_ROTATE_270:
degree = 270;
break;
}
} catch (IOException e) {
Log.e(TAG, e.getMessage());
}
return degree;
}
private String getRealPathFromURI(Uri contentURI) {
String result;
result = FileUtils.getFilePathByUri(this, contentURI);
return result;
}
public static Bitmap rotateImageView(int angle, Bitmap bitmap) {
Matrix matrix = new Matrix();
matrix.postRotate(angle);
Bitmap resizedBitmap = Bitmap.createBitmap(bitmap, 0, 0,
bitmap.getWidth(), bitmap.getHeight(), matrix, true);
return resizedBitmap;
}
}
activity_document_skew_correction.xml
XML:
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:background="#000000"
tools:ignore="MissingDefaultResource">
<LinearLayout
android:id="@+id/linear_views"
android:layout_width="match_parent"
android:layout_height="80dp"
android:layout_alignParentBottom="true"
android:layout_marginBottom="10dp"
android:orientation="horizontal">
<RelativeLayout
android:id="@+id/rl_chooseImg"
android:layout_width="0dp"
android:layout_height="match_parent"
android:layout_weight="1">
<ImageView
android:layout_width="25dp"
android:layout_height="25dp"
android:layout_centerInParent="true"
android:src="@drawable/add_picture"
app:tint="@color/colorPrimary" />
</RelativeLayout>
<RelativeLayout
android:id="@+id/rl_adjust"
android:layout_width="0dp"
android:layout_height="match_parent"
android:layout_weight="1">
<ImageButton
android:id="@+id/adjust"
android:layout_width="70dp"
android:layout_height="70dp"
android:layout_centerInParent="true"
android:background="@drawable/ic_baseline_adjust_24" />
</RelativeLayout>
<RelativeLayout
android:id="@+id/rl_help"
android:layout_width="0dp"
android:layout_height="match_parent"
android:layout_weight="1">
<ImageView
android:layout_width="30dp"
android:layout_height="30dp"
android:layout_centerInParent="true"
android:src="@drawable/back"
android:visibility="invisible" />
</RelativeLayout>
</LinearLayout>
<RelativeLayout
android:id="@+id/rl_navigation"
android:layout_width="match_parent"
android:layout_height="50dp"
android:background="@color/colorPrimary">
<ImageButton
android:id="@+id/back"
android:layout_width="30dp"
android:layout_height="30dp"
android:layout_centerVertical="true"
android:layout_marginLeft="20dp"
android:layout_marginTop="@dimen/icon_back_margin"
android:background="@drawable/back" />
<TextView
android:id="@+id/selectTv"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentEnd="true"
android:layout_centerVertical="true"
android:layout_marginEnd="10dp"
android:fontFamily="@font/montserrat_bold"
android:padding="@dimen/hiad_10_dp"
android:text="Select Doc"
android:textColor="@color/white" />
</RelativeLayout>
<RelativeLayout
android:layout_width="match_parent"
android:layout_height="match_parent"
android:layout_centerHorizontal="true"
android:layout_marginTop="99dp"
android:layout_marginBottom="100dp">
<com.shea.pygmycollection.customview.DocumentCorrectImageView
android:id="@+id/iv_documetscan"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:layout_centerInParent="true"
android:padding="20dp"
app:LineColor="@color/colorPrimary" />
</RelativeLayout>
<RelativeLayout
android:id="@+id/layout_image"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:layout_centerHorizontal="true"
android:layout_marginTop="99dp"
android:layout_marginBottom="100dp"
android:background="#000000"
android:visibility="gone">
<ImageView
android:id="@+id/des_image"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:layout_centerInParent="true"
android:padding="20dp" />
</RelativeLayout>
</RelativeLayout>
Result​
Tips and Tricks​
Recommended image width and height: 1080 px and 2560 px (a scaling sketch follows these tips).
Multi-thread invoking is currently not supported.
The document detection and correction API can only be called by 64-bit apps.
If you are capturing an image with the camera or picking one from the gallery, make sure your app has the camera and storage permissions.
Add the downloaded huawei-hiai-vision-ove-10.0.4.307.aar and huawei-hiai-pdk-1.0.0.aar files to the libs folder.
Check that the dependencies are added properly.
The latest HMS Core APK is required.
The minimum SDK version is 21; otherwise, you will get a manifest merge issue.
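The first tip can be enforced in code before building the MLFrame. A minimal sketch, assuming uniform aspect-preserving downscaling is acceptable for your documents:
Java:
// Downscale an overly large capture toward the recommended 1080 x 2560 px bounds.
private Bitmap scaleToRecommended(Bitmap src) {
    final int maxWidth = 1080;
    final int maxHeight = 2560;
    float scale = Math.min(1f, Math.min(maxWidth / (float) src.getWidth(),
            maxHeight / (float) src.getHeight()));
    if (scale >= 1f) {
        return src; // already within the recommended bounds
    }
    return Bitmap.createScaledBitmap(src,
            Math.round(src.getWidth() * scale),
            Math.round(src.getHeight() * scale), true);
}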
Conclusion​
In this article, we have built an application that detects a document in an image, corrects its skew, and displays the result. We have learned the following concepts:
1. What is document skew correction?
2. Features of document skew correction.
3. How to integrate document skew correction using Huawei HiAI?
4. How to apply for Huawei HiAI?
5. How to build the application?
Reference​
Document skew correction
Apply for Huawei HiAI​
