AppGallery Connect Dynamic Ability Feature - Huawei Developers

For more information like this, you can visit the HUAWEI Developer Forum.
Original link: https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0202332940073930024&fid=0101188387844930001
Hello everyone. In this post I will explain what the Dynamic Ability feature is and how it is used.
What Is Dynamic Ability?
Dynamic Ability is a service in which HUAWEI AppGallery implements dynamic loading based on the Android App Bundle technology. Apps integrated with the Dynamic Ability SDK can dynamically download features or language packages from HUAWEI AppGallery on user request, which reduces unnecessary consumption of network traffic and device storage space. Currently, the Dynamic Ability SDK supports mobile phones, smart screens, and speakers with screens.
How Does It Work?
After the newly created AAB file of an app is uploaded to HUAWEI AppGallery, the platform splits the app into multiple APKs based on three dimensions: language, screen density, and ABI architecture. When a user downloads an app, HUAWEI AppGallery delivers an APK that is suitable for the user device based on the device language, screen density, and ABI architecture. This reduces the consumption of storage space and network traffic on the user device without affecting the app’s features. When a user downloads an app for the first time, only the basic feature module of the app is downloaded, and dynamic features are downloaded only when necessary.
Development Process
First, we create an Android Studio project and configure the build.gradle files.
Code:
allprojects {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
        ...
    }
}
Code:
dependencies {
    implementation 'com.huawei.hms:dynamicability:1.0.11.302'
    ...
}
After that, we need to add a Dynamic Feature Module to our project.
We set the module name and package name, configure the module download options, and sync the project.
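After the sync, the base app's build.gradle should reference the feature module. A hedged sketch of the app-level file (the wizard normally adds this automatically; the module name is assumed to be dynamicfeature, matching the install request later in this post):

```groovy
// app/build.gradle (base module) - sketch, assuming module name 'dynamicfeature'
android {
    ...
    dynamicFeatures = [':dynamicfeature']
}
```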
This is the structure of our main project and our dynamic feature.
We override the attachBaseContext method in the Application class of the base app and in an activity of the dynamic feature module, and call FeatureCompat.install to initialize the Dynamic Ability SDK.
Code:
override fun attachBaseContext(newBase: Context?) {
    super.attachBaseContext(newBase)
    // Start the Dynamic Ability SDK.
    FeatureCompat.install(newBase)
}
We call the FeatureInstallManagerFactory.create method to instantiate FeatureInstallManager in the onCreate method of our main activity.
Code:
override fun onCreate(savedInstanceState: Bundle?) {
    mFeatureInstallManager = FeatureInstallManagerFactory.create(this)
    ...
}
We build a request for dynamic loading; one or more dynamic features can be added to it. Then we call the installFeature method to install the features.
Code:
fun installFeature(view: View?) {
    ...
    // start install
    val request = FeatureInstallRequest.newBuilder()
        .addModule("dynamicfeature")
        .build()
    val task = mFeatureInstallManager!!.installFeature(request)
    ...
}
We register a listener to monitor the status of the feature installation request. There are three types of listeners: OnFeatureCompleteListener, OnFeatureSuccessListener, and OnFeatureFailureListener.
– OnFeatureCompleteListener: The callback is triggered regardless of whether the request succeeds or fails, so we need to check the result ourselves. If the request fails, an exception is thrown when FeatureTask.getResult is called.
Code:
fun installFeature(view: View?) {
    ...
    task.addOnListener(object : OnFeatureCompleteListener<Int>() {
        override fun onComplete(featureTask: FeatureTask<Int>) {
            if (featureTask.isComplete) {
                Log.d(TAG, "complete to start install.")
                if (featureTask.isSuccessful) {
                    val result = featureTask.result
                    sessionId = result
                    Log.d(TAG, "succeed to start install. session id :$result")
                } else {
                    Log.d(TAG, "fail to start install.")
                    val exception = featureTask.exception
                    exception.printStackTrace()
                }
            }
        }
    })
    ...
}
− OnFeatureSuccessListener: The callback is triggered only after HUAWEI AppGallery successfully responds to the request. The callback result contains sessionId, the unique ID of a dynamic loading task. With sessionId, we can obtain the dynamic loading progress and cancel the task at any time.
Code:
fun installFeature(view: View?) {
    ...
    task.addOnListener(object : OnFeatureSuccessListener<Int>() {
        override fun onSuccess(integer: Int) {
            Log.d(TAG, "load feature onSuccess.session id:$integer")
        }
    })
    ...
}
− OnFeatureFailureListener: Callback is triggered only when HUAWEI AppGallery fails to respond.
Code:
fun installFeature(view: View?) {
    ...
    task.addOnListener(object : OnFeatureFailureListener<Int?>() {
        override fun onFailure(exception: Exception) {
            if (exception is FeatureInstallException) {
                val errorCode = exception.errorCode
                Log.d(TAG, "load feature onFailure.errorCode:$errorCode")
            } else {
                exception.printStackTrace()
            }
        }
    })
    ...
}
To use Dynamic Ability, a user must agree to the HUAWEI AppGallery agreement. Before downloading a dynamic feature, our app needs to verify that the user has agreed to it.
If the user agrees to the agreement, the installation process continues.
If the user rejects the agreement, the installation process is terminated.
Code:
private val mStateUpdateListener = InstallStateListener {
    ...
    if (it.status() == FeatureInstallSessionStatus.REQUIRES_USER_CONFIRMATION) {
        try {
            mFeatureInstallManager!!.triggerUserConfirm(it, this, 1)
        } catch (e: SendIntentException) {
            e.printStackTrace()
        }
        return@InstallStateListener
    }
    ...
}
Before downloading and installing a dynamic feature, our app checks whether the user's device is on a mobile network. If so, a data consumption reminder is displayed to ask for the user's consent to download.
If the user consents to the download, the app starts to download the dynamic feature.
If the user does not consent to the download, the app terminates the download task.
Code:
private val mStateUpdateListener = InstallStateListener {
    ...
    if (it.status() == FeatureInstallSessionStatus.REQUIRES_PERSON_AGREEMENT) {
        try {
            mFeatureInstallManager!!.triggerUserConfirm(it, this, 1)
        } catch (e: SendIntentException) {
            e.printStackTrace()
        }
        return@InstallStateListener
    }
    ...
}
We can monitor the download progress of the dynamic feature.
Code:
private val mStateUpdateListener = InstallStateListener {
    ...
    if (it.status() == FeatureInstallSessionStatus.DOWNLOADING) {
        val process: Long = it.bytesDownloaded() * 100 / it.totalBytesToDownload()
        Log.d(TAG, "downloading percentage: $process")
        Toast.makeText(applicationContext, "downloading percentage: $process", Toast.LENGTH_SHORT).show()
        return@InstallStateListener
    }
    ...
}
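The percentage arithmetic in the listener above can be factored into a small helper. A minimal sketch (class and method names are ours, written in plain Java so it can run outside Android); it guards against a zero total, since totalBytesToDownload may not be known at the start of a session:

```java
public class DownloadProgress {
    // Mirrors bytesDownloaded() * 100 / totalBytesToDownload() from the listener,
    // but returns 0 while the total is still unknown to avoid dividing by zero.
    public static long percentage(long bytesDownloaded, long totalBytesToDownload) {
        if (totalBytesToDownload <= 0) {
            return 0;
        }
        return bytesDownloaded * 100 / totalBytesToDownload;
    }
}
```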
A created listener needs to be registered and deregistered at the proper times.
Code:
override fun onResume() {
    super.onResume()
    mFeatureInstallManager?.registerInstallListener(mStateUpdateListener)
}

override fun onPause() {
    super.onPause()
    mFeatureInstallManager?.unregisterInstallListener(mStateUpdateListener)
}
We can check the installation status.
Code:
private val mStateUpdateListener = InstallStateListener {
    ...
    if (it.status() == FeatureInstallSessionStatus.INSTALLED) {
        Log.d(TAG, "installed success ,can use new feature")
        Toast.makeText(applicationContext, "installed success , can test new feature ", Toast.LENGTH_SHORT).show()
        startfeature.isEnabled = true
        installfeature.isEnabled = false
        return@InstallStateListener
    }
    if (it.status() == FeatureInstallSessionStatus.UNKNOWN) {
        Log.d(TAG, "installed in unknown status")
        Toast.makeText(applicationContext, "installed in unknown status ", Toast.LENGTH_SHORT).show()
        return@InstallStateListener
    }
    if (it.status() == FeatureInstallSessionStatus.FAILED) {
        Log.d(TAG, "installed failed, errorcode : ${it.errorCode()}")
        Toast.makeText(applicationContext, "installed failed, errorcode : ${it.errorCode()}", Toast.LENGTH_SHORT).show()
        return@InstallStateListener
    }
    ...
}
If a dynamic feature does not need to be installed instantly, we can choose delayed installation. With this, the feature can be installed when the app is running in the background. After receiving a delay request, HUAWEI AppGallery will delay the installation based on the device running status.
Code:
fun delayedInstallFeature(view: View?) {
    val features = arrayListOf<String>()
    features.add("dynamicfeature")
    val task = mFeatureInstallManager!!.delayedInstallFeature(features)
    task.addOnListener(object : OnFeatureCompleteListener<Void?>() {
        override fun onComplete(featureTask: FeatureTask<Void?>) {
            if (featureTask.isComplete) {
                Log.d(TAG, "complete to delayed_Install")
                if (featureTask.isSuccessful) {
                    Log.d(TAG, "succeed to delayed_install")
                } else {
                    Log.d(TAG, "fail to delayed_install.")
                    val exception = featureTask.exception
                    exception.printStackTrace()
                }
            }
        }
    })
}
We can delay the uninstallation of a dynamic feature that is no longer used. The uninstallation is not executed instantly; it runs when the app is in the background.
Code:
fun delayedUninstallFeature(view: View?) {
    val features = arrayListOf<String>()
    features.add("dynamicfeature")
    val task = mFeatureInstallManager!!.delayedUninstallFeature(features)
    task.addOnListener(object : OnFeatureCompleteListener<Void?>() {
        override fun onComplete(featureTask: FeatureTask<Void?>) {
            if (featureTask.isComplete) {
                Log.d(TAG, "complete to delayed_uninstall")
                if (featureTask.isSuccessful) {
                    Log.d(TAG, "succeed to delayed_uninstall")
                } else {
                    Log.d(TAG, "fail to delayed_uninstall.")
                    val exception = featureTask.exception
                    exception.printStackTrace()
                }
            }
        }
    })
}
Each dynamic loading task has a unique ID, which is specified by sessionId. We can cancel an ongoing task based on sessionId at any time.
Code:
fun abortInstallFeature(view: View?) {
    Log.d(TAG, "begin abort_install : $sessionId")
    val task = mFeatureInstallManager!!.abortInstallFeature(sessionId)
    task.addOnListener(object : OnFeatureCompleteListener<Void?>() {
        override fun onComplete(featureTask: FeatureTask<Void?>) {
            if (featureTask.isComplete) {
                Log.d(TAG, "complete to abort_install.")
                if (featureTask.isSuccessful) {
                    Log.d(TAG, "succeed to abort_install.")
                } else {
                    Log.d(TAG, "fail to abort_install.")
                    val exception = featureTask.exception
                    exception.printStackTrace()
                }
            }
        }
    })
}
We can obtain the execution status of the task.
Code:
fun getInstallState(view: View?) {
    Log.d(TAG, "begin to get session state for: $sessionId")
    val task: FeatureTask<InstallState> = mFeatureInstallManager!!.getInstallState(sessionId)
    task.addOnListener(object : OnFeatureCompleteListener<InstallState>() {
        override fun onComplete(featureTask: FeatureTask<InstallState>) {
            if (featureTask.isComplete) {
                Log.d(TAG, "complete to get session state.")
                if (featureTask.isSuccessful) {
                    val state: InstallState = featureTask.result
                    Log.d(TAG, "succeed to get session state.")
                    Log.d(TAG, state.toString())
                } else {
                    Log.d(TAG, "failed to get session state.")
                    val exception = featureTask.exception
                    exception.printStackTrace()
                }
            }
        }
    })
}
We can also obtain the execution status of all tasks in the system.
Code:
fun getAllInstallStates(view: View?) {
    Log.d(TAG, "begin to get all session states.")
    val task = mFeatureInstallManager!!.allInstallStates
    task.addOnListener(object : OnFeatureCompleteListener<List<InstallState>>() {
        override fun onComplete(featureTask: FeatureTask<List<InstallState>>) {
            Log.d(TAG, "complete to get session states.")
            if (featureTask.isSuccessful) {
                Log.d(TAG, "succeed to get session states.")
                val stateList = featureTask.result
                for (state in stateList) {
                    Log.d(TAG, state.toString())
                }
            } else {
                Log.d(TAG, "fail to get session states.")
                val exception = featureTask.exception
                exception.printStackTrace()
            }
        }
    })
}
During actual usage of an app, the user language may vary. We can dynamically load one or more language packages in our app at a time.
A language package does not need to contain the country code. For example, to load a French package, only fr needs to be entered; the Dynamic Ability SDK automatically loads French resources for multiple countries and regions. To reduce ambiguity, it is advisable to use the Locale.forLanguageTag(lang) method to convert the raw language string.
Code:
fun loadLanguage(view: View?) {
    if (mFeatureInstallManager == null) {
        return
    }
    // start install
    val languages = arrayListOf<String>()
    languages.add("fr")
    val builder = FeatureInstallRequest.newBuilder()
    for (lang in languages) {
        builder.addLanguage(Locale.forLanguageTag(lang))
    }
    val request = builder.build()
    val task = mFeatureInstallManager!!.installFeature(request)
    task.addOnListener(object : OnFeatureSuccessListener<Int>() {
        override fun onSuccess(result: Int) {
            Log.d(TAG, "onSuccess callback result $result")
        }
    })
    task.addOnListener(object : OnFeatureFailureListener<Int?>() {
        override fun onFailure(exception: java.lang.Exception) {
            if (exception is FeatureInstallException) {
                Log.d(TAG, "onFailure callback " + exception.errorCode)
            } else {
                Log.d(TAG, "onFailure callback ", exception)
            }
        }
    })
    task.addOnListener(object : OnFeatureCompleteListener<Int?>() {
        override fun onComplete(task: FeatureTask<Int?>) {
            Log.d(TAG, "onComplete callback")
        }
    })
}
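The Locale conversion used above is plain java.util behavior and can be checked in isolation. A minimal sketch in Java (class and method names are ours):

```java
import java.util.Locale;

public class LanguageTagDemo {
    // Convert a raw language string such as "fr" into a normalized BCP 47 tag,
    // the same conversion loadLanguage performs before calling addLanguage(...).
    public static String normalize(String lang) {
        return Locale.forLanguageTag(lang).toLanguageTag();
    }
}
```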
With this implementation, users download only the basic feature of the app at first; when needed, they can install additional features and uninstall ones they no longer use.
References
https://developer.huawei.com/consumer/en/doc/development/AppGallery-connect-Guides/agc-featuredelivery-introduction


Android App Bundle - Features on demand--part 1

Introduction:
Dynamic Ability is a service in which HUAWEI AppGallery implements dynamic loading based on the Android App Bundle technology.
Apps integrated with the Dynamic Ability SDK can dynamically download features or language packages from HUAWEI AppGallery as required, reducing the unnecessary consumption of network traffic and device storage space.
What is Android App Bundle?
Traditionally, Android apps are distributed using a single file called an Android Package (.apk). The Android App Bundle is a new upload format that includes all of your app's compiled code and resources but defers APK generation and signing to AppGallery.
Advantage of using App Bundle format:
· Dynamic Ability feature: AppGallery's new app serving model, called Dynamic Ability, uses your app bundle to generate and serve optimized APKs for each user's device configuration, so users download only the code and resources they need to run your app. For example, string resources for other languages are not needed if English is set as the default language.
· No need to manually manage multiple APKs: You no longer have to build, sign, and manage multiple APKs to support different devices, and users get smaller, more optimized downloads. For example, now you don’t have to create multiple APKs for devices with different screen resolutions.
· Dynamic Feature Modules: These modules contain features and assets that you can choose not to include when users first download and install your app. Using the Dynamic Ability SDK, your app can later request to download those modules as dynamic feature APKs. For example, a video calling feature or camera filters can be downloaded later on demand.
· Reduced APK size: Using Split APK mechanism, AppGallery can break up a large app into smaller, discrete packages that are installed on a user’s device as required. On average, apps published with app bundles are 20% smaller in size.
Let’s start Development process:
We need to follow the steps below to achieve Dynamic Ability.
1. Create an App on AppGallery.
2. Create an Android Studio Project.
3. Integrate Dynamic Ability SDK.
4. Launch the app.
Create an App on AppGallery:
We need to create an app:
Set the app package name manually.
Note: Download the JSON file (agconnect-services.json) and add it to your project.
Create an Android Studio Project:
Select a project template:
Set the app name, package name, and project location:
Now we need to add the Dynamic Ability module to our project:
Set the module name and package name:
Configure the module download options:
Sync the project and add the following plugin to the module's Gradle file:
Code:
apply plugin: 'com.android.dynamic-feature'
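The base app module must also list the feature module; a hedged sketch of the app-level build.gradle (assuming the module is named SplitSampleFeature01, the name used in this sample's code):

```groovy
// app/build.gradle (base module) - sketch
android {
    ...
    dynamicFeatures = [':SplitSampleFeature01']
}
```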
Let’s see module’s manifest.xml file:
Code:
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:dist="http://schemas.android.com/apk/distribution"
    package="com.huawei.android.dynamicfeaturesplit.splitsamplefeature01">
    <dist:module
        dist:onDemand="true"
        dist:title="@string/title_splitsamplefeature01">
        <dist:fusing dist:include="true" />
    </dist:module>
    <application>
        <activity android:name=".FeatureActivity"></activity>
    </application>
</manifest>
<dist:module>: This new XML element defines attributes that determine how the module is packaged and distributed as APKs.
dist:onDemand="true|false": Specifies whether the module should be available as an on-demand download.
dist:title="@string/feature_name": Specifies a user-facing title for the module.
<dist:fusing dist:include="true|false" />: Specifies whether to include the module in multi-APKs that target devices running Android 4.4 (API level 20) and lower.
Integrate Dynamic Ability SDK:
1. Open the build.gradle file (usually in the root directory) of your project. Go to allprojects > repositories and configure the Maven repository address for the SDK.
Code:
allprojects {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
        ...
    }
}
2. Add the following code to the build.gradle file (usually app/build.gradle) in the app directory to integrate the Dynamic Ability SDK:
Code:
dependencies {
    implementation 'com.huawei.hms:dynamicability:1.0.11.302'
    ...
}
· Let’s sync the project and start app implementation:
We have created the following project structure.
· Let's see the implementation of the Application class:
Code:
import android.app.Application;
import android.content.Context;
import android.util.Log;
import com.huawei.hms.feature.dynamicinstall.FeatureCompat;

public class DynamicFeatureSampleApplication extends Application {
    public static final String TAG = DynamicFeatureSampleApplication.class.getSimpleName();

    @Override
    protected void attachBaseContext(Context base) {
        super.attachBaseContext(base);
        try {
            FeatureCompat.install(base);
        } catch (Exception e) {
            Log.w(TAG, "", e);
        }
    }
}
In your Android project, override the attachBaseContext() method in the Application class and call FeatureCompat.install to initialize the Dynamic Ability SDK.
· Let's see the implementation of the Activity:
Code:
package com.huawei.android.dynamicfeaturesplit;
import android.app.Activity;
import android.content.Intent;
import android.content.IntentSender;
import android.os.Bundle;
import android.util.Log;
import android.view.View;
import android.widget.ProgressBar;
import android.widget.Toast;
import com.huawei.hms.feature.install.FeatureInstallManager;
import com.huawei.hms.feature.install.FeatureInstallManagerFactory;
import com.huawei.hms.feature.listener.InstallStateListener;
import com.huawei.hms.feature.model.FeatureInstallException;
import com.huawei.hms.feature.model.FeatureInstallRequest;
import com.huawei.hms.feature.model.FeatureInstallSessionStatus;
import com.huawei.hms.feature.model.InstallState;
import com.huawei.hms.feature.tasks.FeatureTask;
import com.huawei.hms.feature.tasks.listener.OnFeatureCompleteListener;
import com.huawei.hms.feature.tasks.listener.OnFeatureFailureListener;
import com.huawei.hms.feature.tasks.listener.OnFeatureSuccessListener;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Locale;
import java.util.Set;
public class SampleEntry extends Activity {
    private static final String TAG = SampleEntry.class.getSimpleName();
    private ProgressBar progressBar;
    private FeatureInstallManager mFeatureInstallManager;
    private int sessionId = 10086;

    private InstallStateListener mStateUpdateListener = new InstallStateListener() {
        @Override
        public void onStateUpdate(InstallState state) {
            Log.d(TAG, "install session state " + state);
            if (state.status() == FeatureInstallSessionStatus.REQUIRES_USER_CONFIRMATION) {
                try {
                    mFeatureInstallManager.triggerUserConfirm(state, SampleEntry.this, 1);
                } catch (IntentSender.SendIntentException e) {
                    e.printStackTrace();
                }
                return;
            }
            if (state.status() == FeatureInstallSessionStatus.REQUIRES_PERSON_AGREEMENT) {
                try {
                    mFeatureInstallManager.triggerUserConfirm(state, SampleEntry.this, 1);
                } catch (IntentSender.SendIntentException e) {
                    e.printStackTrace();
                }
                return;
            }
            if (state.status() == FeatureInstallSessionStatus.INSTALLED) {
                Log.i(TAG, "installed success ,can use new feature");
                makeToast("installed success , can test new feature ");
                return;
            }
            if (state.status() == FeatureInstallSessionStatus.UNKNOWN) {
                Log.e(TAG, "installed in unknown status");
                makeToast("installed in unknown status ");
                return;
            }
            if (state.status() == FeatureInstallSessionStatus.DOWNLOADING) {
                long process = state.bytesDownloaded() * 100 / state.totalBytesToDownload();
                Log.d(TAG, "downloading percentage: " + process);
                makeToast("downloading percentage: " + process);
                return;
            }
            if (state.status() == FeatureInstallSessionStatus.FAILED) {
                Log.e(TAG, "installed failed, errorcode : " + state.errorCode());
                makeToast("installed failed, errorcode : " + state.errorCode());
                return;
            }
        }
    };

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        progressBar = findViewById(R.id.progress_bar);
        mFeatureInstallManager = FeatureInstallManagerFactory.create(this);
    }

    @Override
    protected void onResume() {
        super.onResume();
        if (mFeatureInstallManager != null) {
            mFeatureInstallManager.registerInstallListener(mStateUpdateListener);
        }
    }

    @Override
    protected void onPause() {
        super.onPause();
        if (mFeatureInstallManager != null) {
            mFeatureInstallManager.unregisterInstallListener(mStateUpdateListener);
        }
    }

    /**
     * install feature
     *
     * @param view the view
     */
    public void installFeature(View view) {
        if (mFeatureInstallManager == null) {
            return;
        }
        // start install
        FeatureInstallRequest request = FeatureInstallRequest.newBuilder()
            .addModule("SplitSampleFeature01")
            .build();
        final FeatureTask<Integer> task = mFeatureInstallManager.installFeature(request);
        task.addOnListener(new OnFeatureSuccessListener<Integer>() {
            @Override
            public void onSuccess(Integer integer) {
                Log.d(TAG, "load feature onSuccess.session id:" + integer);
            }
        });
        task.addOnListener(new OnFeatureFailureListener<Integer>() {
            @Override
            public void onFailure(Exception exception) {
                if (exception instanceof FeatureInstallException) {
                    int errorCode = ((FeatureInstallException) exception).getErrorCode();
                    Log.d(TAG, "load feature onFailure.errorCode:" + errorCode);
                } else {
                    exception.printStackTrace();
                }
            }
        });
        task.addOnListener(new OnFeatureCompleteListener<Integer>() {
            @Override
            public void onComplete(FeatureTask<Integer> featureTask) {
                if (featureTask.isComplete()) {
                    Log.d(TAG, "complete to start install.");
                    if (featureTask.isSuccessful()) {
                        Integer result = featureTask.getResult();
                        sessionId = result;
                        Log.d(TAG, "succeed to start install. session id :" + result);
                    } else {
                        Log.d(TAG, "fail to start install.");
                        Exception exception = featureTask.getException();
                        exception.printStackTrace();
                    }
                }
            }
        });
        Log.d(TAG, "start install func end");
    }

    /**
     * start feature
     *
     * @param view the view
     */
    public void startFeature01(View view) {
        // test getInstallModules
        Set<String> moduleNames = mFeatureInstallManager.getAllInstalledModules();
        Log.d(TAG, "getInstallModules : " + moduleNames);
        if (moduleNames != null && moduleNames.contains("SplitSampleFeature01")) {
            try {
                startActivity(new Intent(this, Class.forName(
                    "com.huawei.android.dynamicfeaturesplit.splitsamplefeature01.FeatureActivity")));
            } catch (Exception e) {
                Log.w(TAG, "", e);
            }
        }
    }

    /**
     * cancel install task
     *
     * @param view the view
     */
    public void abortInstallFeature(View view) {
        Log.d(TAG, "begin abort_install : " + sessionId);
        FeatureTask<Void> task = mFeatureInstallManager.abortInstallFeature(sessionId);
        task.addOnListener(new OnFeatureCompleteListener<Void>() {
            @Override
            public void onComplete(FeatureTask<Void> featureTask) {
                if (featureTask.isComplete()) {
                    Log.d(TAG, "complete to abort_install.");
                    if (featureTask.isSuccessful()) {
                        Log.d(TAG, "succeed to abort_install.");
                    } else {
                        Log.d(TAG, "fail to abort_install.");
                        Exception exception = featureTask.getException();
                        exception.printStackTrace();
                    }
                }
            }
        });
    }

    /**
     * get install task state
     *
     * @param view the view
     */
    public void getInstallState(View view) {
        Log.d(TAG, "begin to get session state for: " + sessionId);
        FeatureTask<InstallState> task = mFeatureInstallManager.getInstallState(sessionId);
        task.addOnListener(new OnFeatureCompleteListener<InstallState>() {
            @Override
            public void onComplete(FeatureTask<InstallState> featureTask) {
                if (featureTask.isComplete()) {
                    Log.d(TAG, "complete to get session state.");
                    if (featureTask.isSuccessful()) {
                        InstallState state = featureTask.getResult();
                        Log.d(TAG, "succeed to get session state.");
                        Log.d(TAG, state.toString());
                    } else {
                        Log.e(TAG, "failed to get session state.");
                        Exception exception = featureTask.getException();
                        exception.printStackTrace();
                    }
                }
            }
        });
    }

    /**
     * get states of all install tasks
     *
     * @param view the view
     */
    public void getAllInstallStates(View view) {
        Log.d(TAG, "begin to get all session states.");
        FeatureTask<List<InstallState>> task = mFeatureInstallManager.getAllInstallStates();
        task.addOnListener(new OnFeatureCompleteListener<List<InstallState>>() {
            @Override
            public void onComplete(FeatureTask<List<InstallState>> featureTask) {
                Log.d(TAG, "complete to get session states.");
                if (featureTask.isSuccessful()) {
                    Log.d(TAG, "succeed to get session states.");
                    List<InstallState> stateList = featureTask.getResult();
                    for (InstallState state : stateList) {
                        Log.d(TAG, state.toString());
                    }
                } else {
                    Log.e(TAG, "fail to get session states.");
                    Exception exception = featureTask.getException();
                    exception.printStackTrace();
                }
            }
        });
    }

    /**
     * defer installation of features
     *
     * @param view the view
     */
    public void delayedInstallFeature(View view) {
        List<String> features = new ArrayList<>();
        features.add("SplitSampleFeature01");
        FeatureTask<Void> task = mFeatureInstallManager.delayedInstallFeature(features);
        task.addOnListener(new OnFeatureCompleteListener<Void>() {
            @Override
            public void onComplete(FeatureTask<Void> featureTask) {
                if (featureTask.isComplete()) {
                    Log.d(TAG, "complete to delayed_Install");
                    if (featureTask.isSuccessful()) {
                        Log.d(TAG, "succeed to delayed_install");
                    } else {
                        Log.d(TAG, "fail to delayed_install.");
                        Exception exception = featureTask.getException();
                        exception.printStackTrace();
                    }
                }
            }
        });
    }

    /**
     * uninstall features
     *
     * @param view the view
     */
    public void delayedUninstallFeature(View view) {
        List<String> features = new ArrayList<>();
        features.add("SplitSampleFeature01");
        FeatureTask<Void> task = mFeatureInstallManager.delayedUninstallFeature(features);
        task.addOnListener(new OnFeatureCompleteListener<Void>() {
            @Override
            public void onComplete(FeatureTask<Void> featureTask) {
                if (featureTask.isComplete()) {
                    Log.d(TAG, "complete to delayed_uninstall");
                    if (featureTask.isSuccessful()) {
                        Log.d(TAG, "succeed to delayed_uninstall");
                    } else {
                        Log.d(TAG, "fail to delayed_uninstall.");
                        Exception exception = featureTask.getException();
                        exception.printStackTrace();
                    }
                }
            }
        });
    }

    /**
     * install languages
     *
     * @param view the view
     */
    public void loadLanguage(View view) {
        if (mFeatureInstallManager == null) {
            return;
        }
        // start install
        Set<String> languages = new HashSet<>();
        languages.add("fr-FR");
        FeatureInstallRequest.Builder builder = FeatureInstallRequest.newBuilder();
        for (String lang : languages) {
            builder.addLanguage(Locale.forLanguageTag(lang));
        }
        FeatureInstallRequest request = builder.build();
        FeatureTask<Integer> task = mFeatureInstallManager.installFeature(request);
        task.addOnListener(new OnFeatureSuccessListener<Integer>() {
            @Override
            public void onSuccess(Integer result) {
                Log.d(TAG, "onSuccess callback result " + result);
            }
        });
        task.addOnListener(new OnFeatureFailureListener<Integer>() {
            @Override
            public void onFailure(Exception exception) {
                if (exception instanceof FeatureInstallException) {
                    Log.d(TAG, "onFailure callback "
                        + ((FeatureInstallException) exception).getErrorCode());
                } else {
                    Log.d(TAG, "onFailure callback ", exception);
                }
            }
        });
        task.addOnListener(new OnFeatureCompleteListener<Integer>() {
            @Override
            public void onComplete(FeatureTask<Integer> task) {
                Log.d(TAG, "onComplete callback");
            }
        });
    }

    private void makeToast(String msg) {
        Toast.makeText(this, msg, Toast.LENGTH_LONG).show();
    }
}
· We have integrated the SDK's classes and callbacks to achieve the Dynamic Ability feature in our activity class.
· Let's implement the module's activity class:
Code:
import android.app.Activity;
import android.content.Context;
import android.os.Bundle;
import android.util.Log;
import android.widget.ImageView;
import android.widget.Toast;
import com.huawei.hms.feature.dynamicinstall.FeatureCompat;

public class FeatureActivity extends Activity {
    private static final String TAG = FeatureActivity.class.getSimpleName();

    static {
        System.loadLibrary("feature-native-lib");
    }

    @Override
    protected void attachBaseContext(Context newBase) {
        super.attachBaseContext(newBase);
        try {
            FeatureCompat.install(newBase);
        } catch (Exception e) {
            Log.w(TAG, "", e);
        }
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_feature);
        ImageView mImageView = findViewById(R.id.iv_load_png);
        mImageView.setImageDrawable(getResources().getDrawable(R.mipmap.google));
        Toast.makeText(this, "from feature " + stringFromJNI(), Toast.LENGTH_LONG).show();
    }

    /**
     * String from JNI.
     *
     * @return the string
     */
    public native String stringFromJNI();
}
· In an activity of a dynamic feature module, call FeatureCompat.install to initialize the Dynamic Ability SDK.
Launch the app:
· Let’s see the result-
If you have any doubts or queries, please leave them in the comments section.

Product Visual Search – Ultimate Guide

Introduction
The HMS ML Kit product visual search service searches a pre-established product image library for products that are the same as or similar to the product in an image taken by a customer, and returns the IDs of those products and related information.
Use Case
We will capture the product image using the device camera in our shopping application.
We will show the returned product list in a RecyclerView.
Prerequisite
Java JDK 1.8 or higher is recommended.
Android Studio is recommended.
A Huawei Android device with HMS Core 4.0.0.300 or higher.
Before developing an app, you will need to register as a HUAWEI developer. Refer to Register a HUAWEI ID.
Integrate the AppGallery Connect SDK. Refer to AppGallery Connect Service Getting Started.
Implementation
1. Enable ML Kit in Manage APIs. Refer to Service Enabling.
2. Integrate the following dependency in the app-level build.gradle.
Code:
// Import the product visual search SDK.
implementation 'com.huawei.hms:ml-computer-vision-cloud:2.0.1.300'
3. Add the AGC plugin at the top of the app-level build.gradle file.
Code:
apply plugin: 'com.huawei.agconnect'
4. Add the following permissions in the manifest.
Camera permission android.permission.CAMERA: Obtains real-time images or videos from a camera.
Internet access permission android.permission.INTERNET: Accesses cloud services on the Internet.
Storage write permission android.permission.WRITE_EXTERNAL_STORAGE: Upgrades the algorithm version.
Storage read permission android.permission.READ_EXTERNAL_STORAGE: Reads photos stored on a device.
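In the manifest, the permissions above look like this (a standard AndroidManifest.xml fragment):

```xml
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
```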
5. Request the camera permission at runtime.
Code:
private void requestCameraPermission() {
final String[] permissions = new String[] {Manifest.permission.CAMERA};
if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
!= PackageManager.PERMISSION_GRANTED) {
ActivityCompat.requestPermissions(this, permissions, CAMERA_PERMISSION_CODE);
}
}
6. Add the following code to the Application class.
Code:
public class MyApplication extends Application {
@Override
public void onCreate() {
super.onCreate();
MLApplication.getInstance().setApiKey("API KEY");
}
}
The API key can be obtained either from AGC or from the integrated agconnect-services.json file.
7. Create an analyzer for product visual search.
Code:
private void initializeProductVisionSearch() {
MLRemoteProductVisionSearchAnalyzerSetting settings = new MLRemoteProductVisionSearchAnalyzerSetting.Factory()
// Set the maximum number of products that can be returned.
.setLargestNumOfReturns(16)
// Set the product set ID. (Contact [email protected] to obtain the configuration guide.)
// .setProductSetId(productSetId)
// Set the region.
.setRegion(MLRemoteProductVisionSearchAnalyzerSetting.REGION_DR_CHINA)
.create();
analyzer
= MLAnalyzerFactory.getInstance().getRemoteProductVisionSearchAnalyzer(settings);
}
8. Capture an image from the camera.
Code:
Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
startActivityForResult(intent, REQ_CAMERA_CODE);
9. Once the image has been captured, the onActivityResult() method is executed.
Code:
@Override
public void onActivityResult(int requestCode, int resultCode, Intent data) {
super.onActivityResult(requestCode, resultCode, data);
Log.d(TAG, "onActivityResult");
if (requestCode == REQ_CAMERA_CODE) {
if (resultCode == RESULT_OK) {
Bitmap bitmap = (Bitmap) data.getExtras().get("data");
if (bitmap != null) {
// Create an MLFrame object using the bitmap, which is the image data in bitmap format.
MLFrame mlFrame = new MLFrame.Creator().setBitmap(bitmap).create();
mlImageDetection(mlFrame);
}
}
}
}
private void mlImageDetection(MLFrame mlFrame) {
Task<List<MLProductVisionSearch>> task = analyzer.asyncAnalyseFrame(mlFrame);
task.addOnSuccessListener(new OnSuccessListener<List<MLProductVisionSearch>>() {
public void onSuccess(List<MLProductVisionSearch> products) {
// Processing logic for detection success.
displaySuccess(products);
}})
.addOnFailureListener(new OnFailureListener() {
public void onFailure(Exception e) {
// Processing logic for detection failure.
try {
MLException mlException = (MLException) e;
// Obtain the result code. You can process the result code and customize respective messages displayed to users.
int errorCode = mlException.getErrCode();
// Obtain the error information. You can quickly locate the fault based on the result code.
String errorMessage = mlException.getMessage();
} catch (Exception error) {
// Handle the conversion error.
}
}
});
}
private void displaySuccess(List<MLProductVisionSearch> productVisionSearchList) {
List<MLVisionSearchProductImage> productImageList = new ArrayList<>();
String productType = "";
for (MLProductVisionSearch productVisionSearch : productVisionSearchList) {
Log.d(TAG, "type: " + productVisionSearch.getType());
productType = productVisionSearch.getType();
for (MLVisionSearchProduct product : productVisionSearch.getProductList()) {
productImageList.addAll(product.getImageList());
Log.d(TAG, "custom content: " + product.getCustomContent());
}
}
StringBuffer buffer = new StringBuffer();
for (MLVisionSearchProductImage productImage : productImageList) {
String str = "ProductID: " + productImage.getProductId()
+ "\nImageID: " + productImage.getImageId()
+ "\nPossibility: " + productImage.getPossibility();
buffer.append(str);
buffer.append("\n");
}
Log.d(TAG, "display success: " + buffer.toString());
FragmentTransaction transaction = getFragmentManager().beginTransaction();
transaction.replace(R.id.main_fragment_container, new SearchResultFragment(productImageList, productType));
transaction.commit();
}
The onSuccess() callback gives us a list of MLProductVisionSearch objects, which can be used to get the product ID and image URL of each product. We can also get the product type using productVisionSearch.getType(), which returns a number that can be mapped to a category name.
10. We can map the product type to a readable name with the following code.
Code:
private String getProductType(String type) {
switch(type) {
case "0":
return "Others";
case "1":
return "Clothing";
case "2":
return "Shoes";
case "3":
return "Bags";
case "4":
return "Digital & Home appliances";
case "5":
return "Household Products";
case "6":
return "Toys";
case "7":
return "Cosmetics";
case "8":
return "Accessories";
case "9":
return "Food";
}
return "Others";
}
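An equivalent table-driven version keeps the mapping data-only, which is easier to extend and to verify. This is a sketch of an alternative, not the article's code; the class name ProductTypeMap is mine:

```java
import java.util.Map;

public class ProductTypeMap {
    // Table-driven equivalent of the switch in step 10.
    // Unknown codes fall back to "Others", matching the switch's default.
    static final Map<String, String> TYPES = Map.of(
            "0", "Others", "1", "Clothing", "2", "Shoes", "3", "Bags",
            "4", "Digital & Home appliances", "5", "Household Products",
            "6", "Toys", "7", "Cosmetics", "8", "Accessories", "9", "Food");

    static String getProductType(String type) {
        return TYPES.getOrDefault(type, "Others");
    }

    public static void main(String[] args) {
        System.out.println(getProductType("1"));  // Clothing
        System.out.println(getProductType("42")); // Others
    }
}
```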
11. Get the product ID and image URL from MLVisionSearchProductImage.
Code:
@Override
public void onBindViewHolder(ViewHolder holder, int position) {
final MLVisionSearchProductImage mlProductVisionSearch = productVisionSearchList.get(position);
holder.tvTitle.setText(mlProductVisionSearch.getProductId());
Glide.with(context)
.load(mlProductVisionSearch.getImageId())
.diskCacheStrategy(DiskCacheStrategy.ALL)
.into(holder.imageView);
}
Reference
https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides-V5/sdk-data-security-0000001050040129-V5

Creating a simple News Searcher with Huawei Search Kit

Introduction
Hello reader, in this article, I am going to demonstrate how to utilize Huawei Mobile Services (HMS) Search Kit to search for news articles from the web with customizable parameters. Also, I will show you how to use tools like auto suggestions and spellcheck capabilities provided by HMS Search Kit.
Getting Started
First, we need to follow instructions on the official website to integrate Search Kit into our app.
Getting Started
After we’re done with that, let’s start coding. First, we need to initialize Search Kit in our Application/Activity.
Kotlin:
@HiltAndroidApp
class NewsApp : Application() {
override fun onCreate() {
super.onCreate()
SearchKitInstance.init(this, YOUR_APP_ID)
}
}
Next, let’s not forget to add our Application class to the manifest. Also, to allow HTTP network requests on devices with targetSdkVersion 28 or later, we need to allow cleartext traffic. (Search Kit doesn’t support minSdkVersion below 24.)
XML:
<application
android:name=".NewsApp"
android:usesCleartextTraffic="true">
...
</application>
Acquiring Access Token
The token is used to verify a search request on the server. Search results of the request are returned only after the verification is successful. Therefore, before we implement any search functions, we need to get the Access Token first.
OAuth 2.0-based Authentication
If you scroll down, you will see a method called Client Credentials, which does not require authorization from a user. In this mode, your app can generate an access token to access Huawei public app-level APIs. Exactly what we need. I have used Retrofit to do this job.
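Under the hood, the Client Credentials call is just an HTTP POST with a form-encoded body sent to https://oauth-login.cloud.huawei.com/oauth2/v3/token. A minimal sketch of the body that Retrofit's @FormUrlEncoded fields produce (the class and method names here are mine, for illustration; the credentials are placeholders):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class TokenRequestBody {
    // Builds the x-www-form-urlencoded body for the client_credentials grant.
    static String build(String clientId, String clientSecret) {
        return "grant_type=client_credentials"
                + "&client_id=" + URLEncoder.encode(clientId, StandardCharsets.UTF_8)
                + "&client_secret=" + URLEncoder.encode(clientSecret, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // "my-app-id" / "my-app-secret" are placeholders, not real credentials.
        System.out.println(build("my-app-id", "my-app-secret"));
    }
}
```

Retrofit generates exactly this kind of body from the @Field annotations, so you never write it by hand; the sketch only shows what goes over the wire.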
Let’s create a data class that represents the token response from Huawei servers.
Kotlin:
data class TokenResponse(val access_token: String, val expires_in: Int, val token_type: String)
Then, let’s create an interface like below to generate Retrofit Service.
Kotlin:
interface TokenRequestService {
@FormUrlEncoded
@POST("oauth2/v3/token")
suspend fun getRequestToken(
@Field("grant_type") grantType: String,
@Field("client_id") clientId: String,
@Field("client_secret") clientSecret: String
): TokenResponse
}
Then, let’s create a repository class to call our API service.
Kotlin:
class NewsRepository(
private val tokenRequestService: TokenRequestService
) {
suspend fun getRequestToken() = tokenRequestService.getRequestToken(
"client_credentials",
YOUR_APP_ID,
YOUR_APP_SECRET
)
}
You can find your App ID and App secret in the console.
I have used Dagger Hilt to provide Repository for view models that need it. Here is the Repository Module class that creates the objects to be injected to view models.
Kotlin:
@InstallIn(SingletonComponent::class)
@Module
class RepositoryModule {
@Provides
@Singleton
fun provideRepository(
tokenRequestService: TokenRequestService
): NewsRepository {
return NewsRepository(tokenRequestService)
}
@Provides
@Singleton
fun providesOkHttpClient(): OkHttpClient {
return OkHttpClient.Builder().build()
}
@Provides
@Singleton
fun providesRetrofitClientForTokenRequest(okHttpClient: OkHttpClient): TokenRequestService {
val baseUrl = "https://oauth-login.cloud.huawei.com/"
return Retrofit.Builder()
.baseUrl(baseUrl)
.addCallAdapterFactory(CoroutineCallAdapterFactory())
.addConverterFactory(GsonConverterFactory.create())
.client(okHttpClient)
.build()
.create(TokenRequestService::class.java)
}
}
In order to inject our module, we need to add @HiltAndroidApp annotation to NewsApp application class. Also, add @AndroidEntryPoint to fragments that need dependency injection. Now we can use our repository in our view models.
I have created a splash fragment to get access token, because without it, none of the search functionalities would work.
Kotlin:
@AndroidEntryPoint
class SplashFragment : Fragment(R.layout.fragment_splash) {
private var _binding: FragmentSplashBinding? = null
private val binding get() = _binding!!
private val viewModel: SplashViewModel by viewModels()
override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
super.onViewCreated(view, savedInstanceState)
_binding = FragmentSplashBinding.bind(view)
lifecycleScope.launch {
viewModel.accessToken.collect {
if (it is TokenState.Success) {
findNavController().navigate(R.id.action_splashFragment_to_homeFragment)
}
if (it is TokenState.Failure) {
binding.progressBar.visibility = View.GONE
binding.tv.text = "An error occurred, check your connection"
}
}
}
}
override fun onDestroyView() {
super.onDestroyView()
_binding = null
}
}
Kotlin:
class SplashViewModel @ViewModelInject constructor(private val repository: NewsRepository) :
ViewModel() {
private var _accessToken = MutableStateFlow<TokenState>(TokenState.Loading)
var accessToken: StateFlow<TokenState> = _accessToken
init {
getRequestToken()
}
private fun getRequestToken() {
viewModelScope.launch {
try {
val token = repository.getRequestToken().access_token
SearchKitInstance.getInstance()
.setInstanceCredential(token)
SearchKitInstance.instance.newsSearcher.setTimeOut(5000)
Log.d(
TAG,
"SearchKitInstance.instance.setInstanceCredential done $token"
)
_accessToken.emit(TokenState.Success(token))
} catch (e: Exception) {
Log.e(HomeViewModel.TAG, "get token error", e)
_accessToken.emit(TokenState.Failure(e))
}
}
}
companion object {
const val TAG = "SplashViewModel"
}
}
As you can see, once we receive our access token, we call the setInstanceCredential() method with the token as the parameter. I have also set a 5-second timeout for the News Searcher. The Splash Fragment then reacts to the change in the access token flow and navigates to the home fragment while popping the splash fragment from the back stack, because we don’t want to go back there. If the token request fails, the fragment shows an error message.
Setting up Search Kit Functions
Since we have given Search Kit the token it requires, we can proceed with the rest. Let’s add three more functions to our repository.
1. getNews()
This function takes two parameters: a search term, and a page number used for pagination. NewsState is a sealed class that represents the two states of a news search request: success or failure.
Search Kit functions are synchronous, therefore we launch them in the Dispatchers.IO context so they don’t block our UI.
To start a search request, we create a CommonSearchRequest, then apply our search parameters: setQ to set the search term, setLang to set the language we want our news in (I have selected English), setSregion to set the region we want our news from (I have selected the whole world), setPs to set how many news items we want in a single page, and setPn to set which page of news we want to get.
Then we call the search() method to get a response from the server. If it is successful, we get a result of type BaseSearchResponse<List<NewsItem>>. If it is unsuccessful (for example, when there is no network connection), we get null in return; in that case the function returns the failure state.
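The setPs/setPn parameters feed straight into pagination: a paging source typically requests the next page number only when a full page came back. Here is that decision isolated as plain Java (PagingHelper and nextPageKey are my names, not Paging-library or Search Kit API):

```java
public class PagingHelper {
    static final int PAGE_SIZE = 10; // matches setPs(10) above

    // Returns the next page number to request, or null when the server
    // returned a short page, i.e. there is nothing more to load.
    static Integer nextPageKey(int currentPage, int itemsReturned) {
        return itemsReturned < PAGE_SIZE ? null : currentPage + 1;
    }

    public static void main(String[] args) {
        System.out.println(nextPageKey(1, 10)); // full page -> 2
        System.out.println(nextPageKey(2, 4));  // short page -> null
    }
}
```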
Kotlin:
class NewsRepository(
private val tokenRequestService: TokenRequestService
) {
...
suspend fun getNews(query: String, pageNumber: Int): NewsState = withContext(Dispatchers.IO) {
var newsState: NewsState
Log.i(TAG, "getting news $query $pageNumber")
val commonSearchRequest = CommonSearchRequest()
commonSearchRequest.setQ(query)
commonSearchRequest.setLang(Language.ENGLISH)
commonSearchRequest.setSregion(Region.WHOLEWORLD)
commonSearchRequest.setPs(10)
commonSearchRequest.setPn(pageNumber)
try {
val result = SearchKitInstance.instance.newsSearcher.search(commonSearchRequest)
newsState = if (result != null) {
if (result.data.size > 0) {
Log.i(TAG, "got news ${result.data.size}")
NewsState.Success(result.data)
} else {
NewsState.Error(Exception("no more news"))
}
} else {
NewsState.Error(Exception("fetch news error"))
}
} catch (e: Exception) {
newsState = NewsState.Error(e)
Log.e(TAG, "caught news search exception", e)
}
return@withContext newsState
}
suspend fun getAutoSuggestions(str: String): AutoSuggestionsState =
withContext(Dispatchers.IO) {
val autoSuggestionsState: AutoSuggestionsState
autoSuggestionsState = try {
val result = SearchKitInstance.instance.searchHelper.suggest(str, Language.ENGLISH)
if (result != null) {
AutoSuggestionsState.Success(result.suggestions)
} else {
AutoSuggestionsState.Failure(Exception("fetch suggestions error"))
}
} catch (e: Exception) {
AutoSuggestionsState.Failure(e)
}
return@withContext autoSuggestionsState
}
suspend fun getSpellCheck(str: String): SpellCheckState = withContext(Dispatchers.IO) {
val spellCheckState: SpellCheckState
spellCheckState = try {
val result = SearchKitInstance.instance.searchHelper.spellCheck(str, Language.ENGLISH)
if (result != null) {
SpellCheckState.Success(result)
} else {
SpellCheckState.Failure(Exception("fetch spellcheck error"))
}
} catch (
e: Exception
) {
SpellCheckState.Failure(e)
}
return@withContext spellCheckState
}
companion object {
const val TAG = "NewsRepository"
}
}
2. getAutoSuggestions()
Search Kit can provide search suggestions with the SearchHelper.suggest() method. It takes two parameters: a String to provide suggestions for, and a language type. If the operation is successful, it returns a result of type AutoSuggestResponse. We can access a list of SuggestObject items from the suggestions field of this AutoSuggestResponse. Each SuggestObject represents a suggestion from HMS and contains a String value.
3. getSpellCheck()
It works pretty much the same as auto suggestions. The SearchHelper.spellCheck() method takes the same two parameters as the suggest() method, but it returns a SpellCheckResponse, which has two important fields: correctedQuery and confidence. correctedQuery is what Search Kit thinks the corrected spelling should be; confidence is how confident Search Kit is about the recommendation. Confidence has three values: 0 (not confident, we should not rely on it), 1 (confident), and 2 (highly confident).
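The confidence handling used in the fragment later boils down to a small decision rule. Here it is isolated as plain Java (the class and enum names are mine; the 0/1/2 codes and the "show only for confidence > 0" rule come from the article):

```java
public class SpellCheckPolicy {
    enum Confidence { NOT_CONFIDENT, CONFIDENT, HIGHLY_CONFIDENT }

    // Map the 0/1/2 confidence codes described above.
    static Confidence fromCode(int code) {
        switch (code) {
            case 1: return Confidence.CONFIDENT;
            case 2: return Confidence.HIGHLY_CONFIDENT;
            default: return Confidence.NOT_CONFIDENT;
        }
    }

    // The UI shows the "did you mean" suggestion only when Search Kit
    // is at least somewhat confident.
    static boolean shouldShowSuggestion(int code) {
        return code > 0;
    }

    public static void main(String[] args) {
        System.out.println(fromCode(2));             // HIGHLY_CONFIDENT
        System.out.println(shouldShowSuggestion(0)); // false
    }
}
```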
Using the functions above in our app
The Home Fragment has nothing to show when it launches, because nothing has been searched yet. The user can tap the magnifier icon in the toolbar to navigate to the Search Fragment. The code for the Search Fragment and its view model is below.
Notes:
The Search View expands by default with the keyboard showing, so the user can start typing right away.
Every time the query text changes, it is emitted to a flow in the view model. It is then collected by two listeners in the fragment: the first searches for auto suggestions, the second performs a spellcheck. I did this to avoid unnecessary network calls; debounce(500) makes sure that subsequent entries while the user is typing fast (less than half a second per character) are ignored and only the last search query is used.
When the user submits the query term, the string is sent back to HomeFragment using setFragmentResult() (which is only available in the fragment-ktx library, Fragment 1.3.0-alpha04 and above).
Kotlin:
@AndroidEntryPoint
class SearchFragment : Fragment(R.layout.fragment_search) {
private var _binding: FragmentSearchBinding? = null
private val binding get() = _binding!!
private val viewModel: SearchViewModel by viewModels()
@FlowPreview
@ExperimentalCoroutinesApi
override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
super.onViewCreated(view, savedInstanceState)
_binding = FragmentSearchBinding.bind(view)
(activity as AppCompatActivity).setSupportActionBar(binding.toolbar)
setHasOptionsMenu(true)
//listen to the change in query text, trigger getSuggestions function after debouncing and filtering
lifecycleScope.launch {
viewModel.searchQuery.debounce(500).filter { s: String ->
return@filter s.length > 3
}.distinctUntilChanged().flatMapLatest { query ->
Log.d(TAG, "getting suggestions for term: $query")
viewModel.getSuggestions(query).catch {
}
}.flowOn(Dispatchers.Default).collect {
if (it is AutoSuggestionsState.Success) {
val list = it.data
Log.d(TAG, "${list.size} suggestion")
binding.chipGroup.removeAllViews()
//create a chip for each suggestion and add them to chip group
list.forEach { suggestion ->
val chip = Chip(requireContext())
chip.text = suggestion.name
chip.isClickable = true
chip.setOnClickListener {
//set fragment result to return search term to home fragment.
setFragmentResult(
"requestKey",
bundleOf("bundleKey" to suggestion.name)
)
findNavController().popBackStack()
}
binding.chipGroup.addView(chip)
}
} else if (it is AutoSuggestionsState.Failure) {
Log.e(TAG, "suggestions request error", it.exception)
}
}
}
//listen to the change in query text, trigger spellcheck function after debouncing and filtering
lifecycleScope.launch {
viewModel.searchQuery.debounce(500).filter { s: String ->
return@filter s.length > 3
}.distinctUntilChanged().flatMapLatest { query ->
Log.d(TAG, "spellcheck for term: $query")
viewModel.getSpellCheck(query).catch {
Log.e(TAG, "spellcheck request error", it)
}
}.flowOn(Dispatchers.Default).collect {
if (it is SpellCheckState.Success) {
val spellCheckResponse = it.data
val correctedStr = spellCheckResponse.correctedQuery
val confidence = spellCheckResponse.confidence
Log.d(
TAG,
"corrected query $correctedStr confidence level $confidence"
)
if (confidence > 0) {
//show spellcheck layout, and set on click listener to send corrected term to home fragment
//to be searched
binding.tvDidYouMeanToSearch.visibility = View.VISIBLE
binding.tvCorrected.visibility = View.VISIBLE
binding.tvCorrected.text = correctedStr
binding.llSpellcheck.setOnClickListener {
setFragmentResult(
"requestKey",
bundleOf("bundleKey" to correctedStr)
)
findNavController().popBackStack()
}
} else {
binding.tvDidYouMeanToSearch.visibility = View.GONE
binding.tvCorrected.visibility = View.GONE
}
} else if (it is SpellCheckState.Failure) {
Log.e(TAG, "spellcheck request error", it.exception)
}
}
}
}
override fun onCreateOptionsMenu(menu: Menu, inflater: MenuInflater) {
super.onCreateOptionsMenu(menu, inflater)
inflater.inflate(R.menu.menu_search, menu)
val searchMenuItem = menu.findItem(R.id.searchItem)
val searchView = searchMenuItem.actionView as SearchView
searchView.setIconifiedByDefault(false)
searchMenuItem.expandActionView()
searchMenuItem.setOnActionExpandListener(object : MenuItem.OnActionExpandListener {
override fun onMenuItemActionExpand(item: MenuItem?): Boolean {
return true
}
override fun onMenuItemActionCollapse(item: MenuItem?): Boolean {
findNavController().popBackStack()
return true
}
})
searchView.setOnQueryTextListener(object : SearchView.OnQueryTextListener {
override fun onQueryTextSubmit(query: String?): Boolean {
return if (query != null && query.length > 3) {
setFragmentResult("requestKey", bundleOf("bundleKey" to query))
findNavController().popBackStack()
true
} else {
Toast.makeText(requireContext(), "Search term is too short", Toast.LENGTH_SHORT)
.show()
true
}
}
override fun onQueryTextChange(newText: String?): Boolean {
viewModel.emitNewTextToSearchQueryFlow(newText ?: "")
return true
}
})
}
override fun onDestroyView() {
super.onDestroyView()
_binding = null
}
companion object {
const val TAG = "SearchFragment"
}
}
Kotlin:
class SearchViewModel @ViewModelInject constructor(private val repository: NewsRepository) :
ViewModel() {
private var _searchQuery = MutableStateFlow<String>("")
var searchQuery: StateFlow<String> = _searchQuery
fun getSuggestions(str: String): Flow<AutoSuggestionsState> {
return flow {
try {
val result = repository.getAutoSuggestions(str)
emit(result)
} catch (e: Exception) {
}
}
}
fun getSpellCheck(str: String): Flow<SpellCheckState> {
return flow {
try {
val result = repository.getSpellCheck(str)
emit(result)
} catch (e: Exception) {
}
}
}
fun emitNewTextToSearchQueryFlow(str: String) {
viewModelScope.launch {
_searchQuery.emit(str)
}
}
}
Now the HomeFragment has a search term to search for.
When the view is created, we receive the search term returned from the Search Fragment in setFragmentResultListener, search for news using this query, and submit the resulting PagingData to the RecyclerView adapter. I also made sure the same flow is returned if the new query is the same as the previous one, so no unnecessary calls are made.
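The "reuse the flow for a repeated query" idea reduces to caching one result keyed by the last query. A minimal sketch in plain Java (QueryCache is my name; in the real view model the cached value is a Flow<PagingData<NewsItem>> produced by the Pager):

```java
import java.util.function.Function;

public class QueryCache<T> {
    private String lastQuery;
    private T lastResult;
    private final Function<String, T> factory;

    public QueryCache(Function<String, T> factory) {
        this.factory = factory;
    }

    // Re-run the expensive factory only when the query actually changes;
    // a repeated query returns the cached result.
    public T get(String query) {
        if (!query.equals(lastQuery)) {
            lastQuery = query;
            lastResult = factory.apply(query);
        }
        return lastResult;
    }

    public static void main(String[] args) {
        int[] calls = {0};
        QueryCache<String> cache = new QueryCache<>(q -> {
            calls[0]++;
            return "results for " + q;
        });
        cache.get("hms");
        cache.get("hms");   // cached, factory not called again
        cache.get("petal"); // new query, factory called
        System.out.println(calls[0]); // 2
    }
}
```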
Kotlin:
@AndroidEntryPoint
class HomeFragment : Fragment(R.layout.fragment_home) {
private var _binding: FragmentHomeBinding? = null
private val binding get() = _binding!!
private val viewModel: HomeViewModel by viewModels()
private lateinit var listAdapter: NewsAdapter
private var startedLoading = false
override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
super.onViewCreated(view, savedInstanceState)
_binding = FragmentHomeBinding.bind(view)
(activity as AppCompatActivity).setSupportActionBar(binding.toolbar)
setHasOptionsMenu(true)
listAdapter = NewsAdapter(NewsAdapter.NewsComparator, onItemClicked)
binding.rv.adapter =
listAdapter.withLoadStateFooter(NewsLoadStateAdapter(listAdapter))
//if user swipe down to refresh, refresh paging adapter
binding.swipeRefreshLayout.setOnRefreshListener {
listAdapter.refresh()
}
// Listen to search term returned from Search Fragment
setFragmentResultListener("requestKey") { _, bundle ->
// We use a String here, but any type that can be put in a Bundle is supported
val result = bundle.getString("bundleKey")
binding.tv.visibility = View.GONE
if (result != null) {
binding.toolbar.subtitle = "News about $result"
lifecycleScope.launchWhenResumed {
binding.swipeRefreshLayout.isRefreshing = true
viewModel.searchNews(result).collectLatest { value: PagingData<NewsItem> ->
listAdapter.submitData(value)
}
}
}
}
//need to listen to paging adapter load state to stop swipe to refresh layout animation
//if load state contain error, show a toast.
listAdapter.addLoadStateListener {
if (it.refresh is LoadState.NotLoading && startedLoading) {
binding.swipeRefreshLayout.isRefreshing = false
} else if (it.refresh is LoadState.Error && startedLoading) {
binding.swipeRefreshLayout.isRefreshing = false
val loadState = it.refresh as LoadState.Error
val errorMsg = loadState.error.localizedMessage
Toast.makeText(requireContext(), errorMsg, Toast.LENGTH_SHORT).show()
} else if (it.refresh is LoadState.Loading) {
startedLoading = true
}
}
}
override fun onCreateOptionsMenu(menu: Menu, inflater: MenuInflater) {
super.onCreateOptionsMenu(menu, inflater)
inflater.inflate(R.menu.menu_home, menu)
}
override fun onOptionsItemSelected(item: MenuItem): Boolean {
return when (item.itemId) {
R.id.searchItem -> {
//launch search fragment when search item clicked
findNavController().navigate(R.id.action_homeFragment_to_searchFragment)
true
}
else ->
super.onOptionsItemSelected(item)
}
}
//callback function to be passed to paging adapter, used to launch news links.
private val onItemClicked = { it: NewsItem ->
val builder = CustomTabsIntent.Builder()
val customTabsIntent = builder.build()
customTabsIntent.launchUrl(requireContext(), Uri.parse(it.clickUrl))
}
override fun onDestroyView() {
super.onDestroyView()
_binding = null
}
companion object {
const val TAG = "HomeFragment"
}
}
Kotlin:
class HomeViewModel @ViewModelInject constructor(private val repository: NewsRepository) :
ViewModel() {
private var lastSearchQuery: String? = null
var lastFlow: Flow<PagingData<NewsItem>>? = null
fun searchNews(query: String): Flow<PagingData<NewsItem>> {
return if (query != lastSearchQuery) {
lastSearchQuery = query
lastFlow = Pager(PagingConfig(pageSize = 10)) {
NewsPagingDataSource(repository, query)
}.flow.cachedIn(viewModelScope)
lastFlow as Flow<PagingData<NewsItem>>
} else {
lastFlow!!
}
}
companion object {
const val TAG = "HomeViewModel"
}
}
The app also uses the Paging 3 library to provide endless scrolling for news articles, which is out of scope for this article; you may check the GitHub repo to see how to achieve pagination with Search Kit. The end result looks like the images below.
Check the repo here.
Tips
When Search Kit fails to fetch results (for example, when there is no internet connection), it returns a null object; you can manually return an exception so you can handle the error.
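That tip can be captured in a tiny helper that turns a null response into an explicit exception, so every caller handles a single failure path. This is a sketch (ResponseGuard and requireResponse are my names; the generic T stands in for response types such as BaseSearchResponse):

```java
public class ResponseGuard {
    // Throws when the response is null instead of letting the null
    // propagate silently into the UI layer.
    static <T> T requireResponse(T response, String what) {
        if (response == null) {
            throw new IllegalStateException(
                    "fetch " + what + " error (null response, no network?)");
        }
        return response;
    }

    public static void main(String[] args) {
        System.out.println(requireResponse("ok", "news"));
        try {
            requireResponse(null, "news");
        } catch (IllegalStateException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```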
Conclusion
HMS Search Kit provides easy-to-use APIs for fast, efficient, and customizable searching of web sites, images, videos, and news articles in many languages and regions. It also provides convenient features such as auto suggestions and spellchecking.
Reference
Huawei Search Kit
What other features search kit provides other than news?
any additional feature can be supported?
Can we search daily base news ?
ask011 said:
What other features search kit provides other than news?
Hello, you can refer to the documentation at https://developer.huawei.com/consum.../HMSCore-Guides/introduction-0000001055591730

Search News with Voice (Search Kit — ML Kit(ASR)— Network Kit)

Hello everyone. In this article, I will talk about the uses of Huawei Search Kit, Huawei ML Kit, and Huawei Network Kit. I have developed a demo app using these three kits to make it clearer.
What is Search Kit?
HUAWEI Search Kit fully opens Petal Search capabilities through the device-side SDK and cloud-side APIs, enabling ecosystem partners to quickly provide the optimal mobile app search experience.
What is Network Kit?
Network Kit is a basic network service suite. It incorporates Huawei’s experience in far-field network communications, and utilizes scenario-based RESTful APIs as well as file upload and download APIs. Therefore, Network Kit can provide you with easy-to-use device-cloud transmission channels featuring low latency, high throughput, and high security.
What is ML Kit — ASR?
Automatic speech recognition (ASR) can recognize speech not longer than 60s and convert the input speech into text in real time. This service uses industry-leading deep learning technologies to achieve a recognition accuracy of over 95%.
Development Steps
1. Integration
First of all, we need to create an app on AppGallery Connect and add the related HMS Core details to our project. You can access an article about those steps from the link below.
Android | Integrating Your Apps With Huawei HMS Core
Hi, this article explains you how to integrate with HMS (Huawei Mobile Services) and making AppGallery Connect Console project settings.
medium.com
2. Adding Dependencies
After HMS Core is integrated into the project and Search Kit and ML Kit are activated through the console, the required libraries should be added to the build.gradle file in the app directory as follows. The project’s minSdkVersion value must be 24, so update minSdkVersion in the same file to 24.
Groovy:
...
defaultConfig {
applicationId "com.myapps.searchappwithml"
minSdkVersion 24
targetSdkVersion 30
versionCode 1
versionName "1.0"
testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
}
...
dependencies {
...
implementation 'com.huawei.agconnect:agconnect-core:1.5.0.300'
implementation 'com.huawei.hms:network-embedded:5.0.1.301'
implementation 'com.huawei.hms:searchkit:5.0.4.303'
implementation 'com.huawei.hms:ml-computer-voice-asr-plugin:2.2.0.300'
...
}
3. Adding Permissions
XML:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
4. Application Class
When the application starts, we need to initialize the kits in the Application class. Then we need to set the Application class name in the android:name attribute in the manifest file.
Kotlin:
@HiltAndroidApp
class SearchApplication : Application(){
override fun onCreate() {
super.onCreate()
initNetworkKit()
initSearchKit()
initMLKit()
}
private fun initNetworkKit(){
NetworkKit.init(
applicationContext,
object : NetworkKit.Callback() {
override fun onResult(result: Boolean) {
if (result) {
Log.i(NETWORK_KIT_TAG, "init success")
} else {
Log.i(NETWORK_KIT_TAG, "init failed")
}
}
})
}
private fun initSearchKit(){
SearchKitInstance.init(this, APP_ID)
CoroutineScope(Dispatchers.IO).launch {
SearchKitInstance.instance.refreshToken()
}
}
private fun initMLKit() {
MLApplication.getInstance().apiKey = API_KEY
}
}
5. Getting Access Token
We need an access token to send requests to Search Kit. I used Network Kit to request the access token. Its use is very similar to other services that perform network operations.
As with other network services, there are annotations such as POST, FormUrlEncoded, Headers, and Field.
Kotlin:
interface AccessTokenService {
@POST("oauth2/v3/token")
@FormUrlEncoded
@Headers("Content-Type:application/x-www-form-urlencoded", "charset:UTF-8")
fun createAccessToken(
@Field("grant_type") grant_type: String,
@Field("client_secret") client_secret: String,
@Field("client_id") client_id: String
) : Submit<String>
}
We need to create our request structure using the RestClient class.
Kotlin:
@Module
@InstallIn(ApplicationComponent::class)
class ApplicationModule {

    companion object {
        private const val TIMEOUT: Int = 500000
        private var restClient: RestClient? = null

        fun getClient(): RestClient {
            val httpClient = HttpClient.Builder()
                .connectTimeout(TIMEOUT)
                .writeTimeout(TIMEOUT)
                .readTimeout(TIMEOUT)
                .build()
            if (restClient == null) {
                restClient = RestClient.Builder()
                    .baseUrl("https://oauth-login.cloud.huawei.com/")
                    .httpClient(httpClient)
                    .build()
            }
            return restClient!!
        }
    }
}
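The getClient() function above caches the RestClient so it is only built once. Since Kotlin is used in this article but the pattern is language-neutral, here is a hypothetical plain-Java sketch of the same lazy-singleton idea made thread-safe with double-checked locking (the Client class here is a stand-in, not the real RestClient):

```java
public class LazyClientHolder {
    // Hypothetical stand-in for the RestClient from the article.
    static class Client {
        final String baseUrl;
        Client(String baseUrl) { this.baseUrl = baseUrl; }
    }

    private static volatile Client client;

    // Double-checked locking: only one Client is ever built,
    // even when several threads call getClient() at once.
    public static Client getClient() {
        if (client == null) {
            synchronized (LazyClientHolder.class) {
                if (client == null) {
                    client = new Client("https://oauth-login.cloud.huawei.com/");
                }
            }
        }
        return client;
    }

    public static void main(String[] args) {
        System.out.println(getClient() == getClient()); // the same instance is reused
        System.out.println(getClient().baseUrl);
    }
}
```

The volatile keyword plus the second null check inside the lock is what makes the lazy initialization safe under concurrency.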
Finally, we send the request and obtain the access token.
Kotlin:
data class AccessTokenModel(
    var access_token: String,
    var expires_in: Int,
    var token_type: String
)
...
fun SearchKitInstance.refreshToken() {
    ApplicationModule.getClient().create(AccessTokenService::class.java)
        .createAccessToken(
            GRANT_TYPE,
            CLIENT_SECRET,
            CLIENT_ID
        )
        .enqueue(object : Callback<String>() {
            override fun onFailure(call: Submit<String>, t: Throwable) {
                Log.d(ACCESS_TOKEN_TAG, "getAccessTokenErr " + t.message)
            }

            override fun onResponse(
                call: Submit<String>,
                response: Response<String>
            ) {
                val convertedResponse =
                    Gson().fromJson(response.body, AccessTokenModel::class.java)
                setInstanceCredential(convertedResponse.access_token)
            }
        })
}
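The sample above refreshes the token once at startup. Since the response also carries expires_in, a natural refinement is to cache the token and request a new one only when it is about to expire. This is a hypothetical plain-Java sketch of that idea, not part of the article's code; the 60-second safety margin is my own choice.

```java
public class TokenCache {
    private String accessToken;
    private long expiryMillis; // absolute time at which the token stops being usable

    // Store a token together with its expires_in value (seconds), minus a safety margin.
    public void store(String token, int expiresInSeconds, long nowMillis) {
        this.accessToken = token;
        this.expiryMillis = nowMillis + (expiresInSeconds - 60) * 1000L;
    }

    public boolean needsRefresh(long nowMillis) {
        return accessToken == null || nowMillis >= expiryMillis;
    }

    public static void main(String[] args) {
        TokenCache cache = new TokenCache();
        System.out.println(cache.needsRefresh(0));         // true: nothing cached yet
        cache.store("abc", 3600, 0);
        System.out.println(cache.needsRefresh(1000));      // false: still valid
        System.out.println(cache.needsRefresh(3600_000L)); // true: past expiry
    }
}
```

Checking needsRefresh() before each Search Kit call avoids requesting a fresh token on every search.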
6. ML Kit (ASR) — Search Kit
Since we are using ML Kit (ASR), we first need to request microphone permission from the user. A button then starts ML Kit (ASR), which converts the user's speech into text. We pass this text to the function we created for Search Kit and display the returned results on the screen.
Here I used Search Kit's web search feature. Of course, the news, image, and video search features can be used as needed.
Kotlin:
@AndroidEntryPoint
class MainActivity : AppCompatActivity() {

    private lateinit var binding: MainBinding
    private val adapter: ResultAdapter = ResultAdapter()
    private var isPermissionGranted: Boolean = false

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        binding = MainBinding.inflate(layoutInflater)
        setContentView(binding.root)
        binding.button.setOnClickListener {
            if (isPermissionGranted) {
                startASR()
            }
        }
        binding.recycler.adapter = adapter
        val permission = arrayOf(Manifest.permission.INTERNET, Manifest.permission.RECORD_AUDIO)
        ActivityCompat.requestPermissions(this, permission, MIC_PERMISSION)
    }

    override fun onRequestPermissionsResult(
        requestCode: Int,
        permissions: Array<out String>,
        grantResults: IntArray
    ) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults)
        when (requestCode) {
            MIC_PERMISSION -> {
                // If the request is cancelled, the result arrays are empty.
                if (grantResults.isNotEmpty()
                    && grantResults[0] == PackageManager.PERMISSION_GRANTED
                    && grantResults[1] == PackageManager.PERMISSION_GRANTED) {
                    // Permission was granted.
                    Toast.makeText(this, "Permission granted", Toast.LENGTH_SHORT).show()
                    isPermissionGranted = true
                } else {
                    // Permission was denied.
                    Toast.makeText(this, "Permission denied", Toast.LENGTH_SHORT).show()
                }
                return
            }
        }
    }

    private fun startASR() {
        val intent = Intent(this, MLAsrCaptureActivity::class.java)
            .putExtra(MLAsrCaptureConstants.LANGUAGE, "en-US")
            .putExtra(MLAsrCaptureConstants.FEATURE, MLAsrCaptureConstants.FEATURE_WORDFLUX)
        startActivityForResult(intent, ASR_REQUEST_CODE)
    }

    override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
        super.onActivityResult(requestCode, resultCode, data)
        if (requestCode == ASR_REQUEST_CODE) {
            when (resultCode) {
                MLAsrCaptureConstants.ASR_SUCCESS -> if (data != null) {
                    val bundle = data.extras
                    if (bundle != null && bundle.containsKey(MLAsrCaptureConstants.ASR_RESULT)) {
                        val text = bundle.getString(MLAsrCaptureConstants.ASR_RESULT).toString()
                        performSearch(text)
                    }
                }
                MLAsrCaptureConstants.ASR_FAILURE -> if (data != null) {
                    val bundle = data.extras
                    if (bundle != null && bundle.containsKey(MLAsrCaptureConstants.ASR_ERROR_CODE)) {
                        val errorCode = bundle.getInt(MLAsrCaptureConstants.ASR_ERROR_CODE)
                        Toast.makeText(this, "Error Code $errorCode", Toast.LENGTH_LONG).show()
                    }
                    if (bundle != null && bundle.containsKey(MLAsrCaptureConstants.ASR_ERROR_MESSAGE)) {
                        val errorMsg = bundle.getString(MLAsrCaptureConstants.ASR_ERROR_MESSAGE)
                        Toast.makeText(this, "Error Message $errorMsg", Toast.LENGTH_LONG).show()
                    }
                }
                else -> {
                    Toast.makeText(this, "Failed to get data", Toast.LENGTH_LONG).show()
                }
            }
        }
    }

    private fun performSearch(query: String) {
        CoroutineScope(Dispatchers.IO).launch {
            val searchKitInstance = SearchKitInstance.instance
            val webSearchRequest = WebSearchRequest().apply {
                setQ(query)
                setLang(loadLang())
                setSregion(loadRegion())
                setPs(5)
                setPn(1)
            }
            val response = searchKitInstance.webSearcher.search(webSearchRequest)
            displayResults(response.data)
        }
    }

    private fun displayResults(data: List<WebItem>) {
        runOnUiThread {
            adapter.items.apply {
                clear()
                addAll(data)
            }
            adapter.notifyDataSetChanged()
        }
    }
}
Output
Conclusion
By using these three kits, you can easily improve the quality of your application in a short time. I hope this article was useful to you. See you in other articles.
References
Network Kit: https://developer.huawei.com/consum...s-V5/network-introduction-0000001050440045-V5
ML Kit: https://developer.huawei.com/consum...s-V5/service-introduction-0000001050040017-V5
Search Kit: https://developer.huawei.com/consum...re-Guides-V5/introduction-0000001055591730-V5

Tips for Developing a Standing up Reminder

Check this out: Are you bending like this at your desk?
Well, I am, and if you're like me, maybe we should get up and move around for a little while.
Joking aside, I know COVID-19 forced many of you to work from home. As a result, many of us have started to live a sedentary lifestyle. After reading a bunch of posts shared by my family describing how harmful sitting too long is, I decided to change this habit by developing a function that reminds me to move around, like this:
Development Overview
To develop such a function, I turned to the mobile context-awareness capabilities of HMS Core Awareness Kit. I used a time awareness capability and a behavior awareness capability to create a time barrier and a behavior detection barrier respectively, as well as a combination of the two barriers.
More specifically, these included:
Time awareness capability: TimeBarrier.duringTimePeriod(long startTimeStamp, long endSecondsMillis); defines a time barrier. If the current time falls between startTimeStamp and endSecondsMillis, the barrier status is true; otherwise, it is false.
Behavior awareness capability: BehaviorBarrier.keeping(BehaviorBarrier.BEHAVIOR_STILL); defines a behavior detection barrier. While the user stays still, the barrier status is true; if the user's status changes, for example from stationary to moving, the barrier is triggered and its status becomes false.
Barrier combination: AwarenessBarrier.and(keepStillBarrier, timePeriodBarrier) combines the two barriers with AND. When the current time is within the specified period and the user is still, the combined barrier status is true; otherwise, it is false.
It's quite straightforward, right? Let's take a deeper look into how the function is developed.
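To make the combination concrete, here is a small plain-Java simulation of the AND logic described above. It only mimics the truth table; on a device, the actual barrier evaluation is done by Awareness Kit.

```java
public class CombinedBarrierDemo {
    // Simulates the AND combination: true only when the user is still
    // AND the current time falls inside [start, end).
    static boolean combinedStatus(boolean isStill, long now, long start, long end) {
        boolean timeBarrier = now >= start && now < end;
        return isStill && timeBarrier;
    }

    public static void main(String[] args) {
        long start = 0, end = 10_000;
        System.out.println(combinedStatus(true, 5_000, start, end));  // still + in window -> true
        System.out.println(combinedStatus(false, 5_000, start, end)); // moving -> false
        System.out.println(combinedStatus(true, 12_000, start, end)); // window elapsed -> false
    }
}
```

The reminder fires on the transition to false: either the user moved, or the time window ran out while they stayed still.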
Development Procedure
Making Preparations
1. Create an Android Studio project. Put agconnect-services.json and the app signing certificate in the app's root directory. If you need to know where to obtain these two files, check the References section.
2. Configure a Maven repository address and import a plugin.
Code:
buildscript {
    repositories {
        google()
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:3.4.3'
        classpath 'com.huawei.agconnect:agcp:1.0.0.300'
    }
}
allprojects {
    repositories {
        google()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
task clean(type: Delete) {
    delete rootProject.buildDir
}
3. Open the app-level build.gradle file, add the plugin, configure the signing certificate parameters, and add necessary building dependencies.
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'

android {
    compileSdkVersion 31
    buildToolsVersion "31.0.0"
    defaultConfig {
        applicationId "com.huawei.smartlifeassistant"
        minSdkVersion 26
        targetSdkVersion 31
        versionCode 2
        versionName "2.0"
        testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
    }
    signingConfigs {
        release {
            storeFile file('Awareness.jks')
            keyAlias 'testKey'
            keyPassword 'lhw123456'
            storePassword 'lhw123456'
            v1SigningEnabled true
            v2SigningEnabled true
        }
    }
    buildTypes {
        release {
            signingConfig signingConfigs.release
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
        }
        debug {
            signingConfig signingConfigs.release
            debuggable true
        }
    }
}
dependencies {
    implementation fileTree(dir: 'libs', include: ['*.jar'])
    implementation 'androidx.appcompat:appcompat:1.1.0'
    implementation 'com.huawei.agconnect:agconnect-core:1.5.2.300'
    implementation 'com.huawei.hms:awareness:3.1.0.301'
}
4. Make sure that the app package names in agconnect-services.json and the project are the same. Then, compile the project.
Requesting Dynamic Permissions​
Code:
private static final int PERMISSION_REQUEST_CODE = 940;
private final String[] mPermissionsOnHigherVersion = new String[]{
        Manifest.permission.ACCESS_FINE_LOCATION,
        Manifest.permission.ACCESS_BACKGROUND_LOCATION,
        Manifest.permission.ACTIVITY_RECOGNITION,
        Manifest.permission.BLUETOOTH_CONNECT};
private final String[] mPermissionsOnLowerVersion = new String[]{
        Manifest.permission.ACCESS_FINE_LOCATION,
        "com.huawei.hms.permission.ACTIVITY_RECOGNITION"};

private void checkAndRequestPermissions() {
    List<String> permissionsDoNotGrant = new ArrayList<>();
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q) {
        for (String permission : mPermissionsOnHigherVersion) {
            if (ActivityCompat.checkSelfPermission(this, permission)
                    != PackageManager.PERMISSION_GRANTED) {
                permissionsDoNotGrant.add(permission);
            }
        }
    } else {
        for (String permission : mPermissionsOnLowerVersion) {
            if (ActivityCompat.checkSelfPermission(this, permission)
                    != PackageManager.PERMISSION_GRANTED) {
                permissionsDoNotGrant.add(permission);
            }
        }
    }
    if (permissionsDoNotGrant.size() > 0) {
        ActivityCompat.requestPermissions(this,
                permissionsDoNotGrant.toArray(new String[0]), PERMISSION_REQUEST_CODE);
    }
}
Check whether the dynamic permissions are granted in onCreate of the activity.
Code:
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_sedentary_reminder);
    setTitle(getString(R.string.life_assistant));
    // Check whether the dynamic permissions are granted.
    checkAndRequestPermissions();
    //...
}
Using a Broadcast to Create a PendingIntent That Is Triggered When the Barrier Status Changes, and Registering a Broadcast Receiver​
Code:
final String barrierReceiverAction = getApplication().getPackageName() + "COMBINED_BARRIER_RECEIVER_ACTION";
Intent intent = new Intent(barrierReceiverAction);
// We can also use getActivity() or getService() to create the PendingIntent,
// depending on what action you want to be triggered when the barrier status changes.
mPendingIntent = PendingIntent.getBroadcast(this, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT
        | PendingIntent.FLAG_MUTABLE);
// Register a broadcast receiver to receive the broadcast when the barrier status changes.
mBarrierReceiver = new CombinedBarrierReceiver();
registerReceiver(mBarrierReceiver, new IntentFilter(barrierReceiverAction));

final class CombinedBarrierReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        BarrierStatus barrierStatus = BarrierStatus.extract(intent);
        String label = barrierStatus.getBarrierLabel();
        int barrierPresentStatus = barrierStatus.getPresentStatus();
        if (label == null) {
            return;
        }
        switch (label) {
            case COMBINED_BEHAVIOR_TIME_BARRIER_LABEL:
                if (barrierPresentStatus == BarrierStatus.FALSE) {
                    if (System.currentTimeMillis() - lastTime >= tenSecondsMillis) {
                        alert.show();
                    }
                    updateTimeAwarenessBarrier();
                }
                break;
            default:
                break;
        }
    }
}
Registering or Deleting the Barrier Combination
Use a switch on the UI to register or delete the barrier combination.
Code:
automaticAdjustSwitch = findViewById(R.id.sedentary_reminder_switch);
automaticAdjustSwitch.setOnCheckedChangeListener(new CompoundButton.OnCheckedChangeListener() {
    @Override
    public void onCheckedChanged(CompoundButton buttonView, boolean isChecked) {
        startAutomaticAdjust(isChecked);
    }
});

private void startAutomaticAdjust(boolean isChecked) {
    if (isChecked) {
        addBarriers();
    } else {
        deleteBarriers();
    }
}

private void addBarriers() {
    keepStillBarrier = BehaviorBarrier.keeping(BehaviorBarrier.BEHAVIOR_STILL);
    updateTimeAwarenessBarrier();
}

private void updateTimeAwarenessBarrier() {
    long currentTimeStamp = System.currentTimeMillis();
    lastTime = currentTimeStamp;
    AwarenessBarrier timePeriodBarrier = TimeBarrier.duringTimePeriod(currentTimeStamp, currentTimeStamp + tenSecondsMillis);
    AwarenessBarrier combinedTimeBehaviorBarrier = AwarenessBarrier.and(keepStillBarrier, timePeriodBarrier);
    Utils.addBarrier(this, COMBINED_BEHAVIOR_TIME_BARRIER_LABEL,
            combinedTimeBehaviorBarrier, mPendingIntent);
}

private void deleteBarriers() {
    Utils.deleteBarrier(this, mPendingIntent);
}
Showing the Reminding Information
Use an AlertDialog to remind the user.
Code:
// Initialize the dialog builder.
builder = new AlertDialog.Builder(this);
// Load and configure the custom view.
final LayoutInflater inflater = getLayoutInflater();
View view_custom = inflater.inflate(R.layout.view_dialog_custom, null, false);
builder.setView(view_custom);
builder.setCancelable(false);
alert = builder.create();
view_custom.findViewById(R.id.i_kown).setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        alert.dismiss();
    }
});
And just like that, the standing up reminder function is created.
In fact, I've got some more ideas for using mobile context-awareness capabilities, such as developing a sleep reminder using the ambient light awareness capability and the time awareness capability. This reminder can notify users when it is bedtime based on a specified time and when the ambient brightness is lower than a specified value.
A schedule reminder also sounds like a good idea, which uses the time awareness capability to tell a user their schedule for a day at a specified time.
These are just some of my ideas. If you've got some other interesting inspirations for using the context-awareness capabilities, please share them in the comments section below and see how our ideas overlap.
References
>>The dangers of sitting
>>What are the risks of sitting too much?
>>Obtaining agconnect-services.json
>>Obtaining a signing certificate
