Ever seen a breathtaking landmark or scenic view while flipping through a book or magazine, and been frustrated because you don't know what it's called or where it is? Wouldn't it be great if there was an app that could tell you what you're seeing? Fortunately, HUAWEI ML Kit comes with a landmark recognition service that makes it remarkably easy to develop such an app. So let's take a look at how to use this service!
Introduction to Landmark Recognition
The landmark recognition service enables you to obtain the landmark name, landmark longitude and latitude, and even a confidence value for the input image. When you input an image for recognition, a confidence value is returned; a higher confidence value indicates that the returned landmark is more likely to be correct. You can then use this information to create a highly-personalized experience for your users. Currently, the service is capable of recognizing more than 17,000 landmarks around the world.
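To give a concrete idea of how the confidence value can drive a personalized experience, here is a minimal plain-Java sketch that keeps only results above a threshold. The LandmarkResult class and the 0.5 threshold are my own illustrative assumptions, not part of the ML Kit SDK (the real SDK returns MLRemoteLandmark objects):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical result holder standing in for the SDK's MLRemoteLandmark.
class LandmarkResult {
    final String name;
    final float confidence;
    LandmarkResult(String name, float confidence) {
        this.name = name;
        this.confidence = confidence;
    }
}

public class ConfidenceFilter {
    // Keep only results whose confidence meets the threshold.
    static List<LandmarkResult> filter(List<LandmarkResult> results, float threshold) {
        List<LandmarkResult> kept = new ArrayList<>();
        for (LandmarkResult r : results) {
            if (r.confidence >= threshold) {
                kept.add(r);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        List<LandmarkResult> results = new ArrayList<>();
        results.add(new LandmarkResult("Oriental Pearl Tower", 0.92f));
        results.add(new LandmarkResult("Unknown", 0.20f));
        // Only the high-confidence result survives the filter.
        System.out.println(filter(results, 0.5f).size()); // prints 1
    }
}
```

In a real app, you would apply the same filtering to the List<MLRemoteLandmark> delivered to the success listener before showing results to the user.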
When using landmark recognition, the device calls the on-cloud API for detection, and the detection algorithm model runs on the cloud. You'll need to ensure that the device is connected to the Internet while using this service.
Preparations
Configuring the development environment
1. Create an app in AppGallery Connect.
For details, see Getting Started with Android.
2. Enable ML Kit.
Click here for more details.
3. Download the agconnect-services.json file, which is automatically generated after the app is created. Copy it to the app directory of the Android Studio project.
4. Configure the Maven repository address for the HMS Core SDK.
5. Integrate the landmark recognition SDK.
Configure the SDK in the build.gradle file in the app directory.
Code:
// Import the landmark recognition SDK.
implementation 'com.huawei.hms:ml-computer-vision-cloud:2.0.5.304'
Add the AppGallery Connect plugin configuration as needed through either of the following methods:
Method 1: Add the following information under the declaration in the file header:
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Method 2: Add the plugin configuration in the plugins block:
Code:
plugins {
id 'com.android.application'
id 'com.huawei.agconnect'
}
Code Development
1. Obtain the camera permission to use the camera.
(Mandatory) Set the static permission.
Code:
<uses-permission android:name="android.permission.CAMERA" />
(Mandatory) Obtain the permission dynamically.
Code:
ActivityCompat.requestPermissions(
        this, new String[]{Manifest.permission.CAMERA}, 1);
2. Set the API key. This service runs on the cloud, meaning an API key is required to set the cloud authentication information for the app. This step is mandatory, and failure to complete it will result in an error being reported when the app is running.
Code:
// Set the API key to access the on-cloud services.
private void setApiKey() {
// Parse the agconnect-services.json file to obtain its information.
AGConnectServicesConfig config = AGConnectServicesConfig.fromContext(getApplication());
// Sets the API key.
MLApplication.getInstance().setApiKey(config.getString("client/api_key"));
}
3. Create a landmark analyzer through either of the following methods:
Code:
// Method 1: Use default parameter settings.
MLRemoteLandmarkAnalyzer analyzer = MLAnalyzerFactory.getInstance().getRemoteLandmarkAnalyzer();
Code:
// Method 2: Use customized parameter settings through the MLRemoteLandmarkAnalyzerSetting class.
/**
* Use custom parameter settings.
* setLargestNumOfReturns indicates the maximum number of recognition results.
* setPatternType indicates the analyzer mode.
* MLRemoteLandmarkAnalyzerSetting.STEADY_PATTERN: The value 1 indicates the stable mode.
* MLRemoteLandmarkAnalyzerSetting.NEWEST_PATTERN: The value 2 indicates the latest mode.
*/
private void initLandMarkAnalyzer() {
settings = new MLRemoteLandmarkAnalyzerSetting.Factory()
.setLargestNumOfReturns(1)
.setPatternType(MLRemoteLandmarkAnalyzerSetting.STEADY_PATTERN)
.create();
analyzer = MLAnalyzerFactory.getInstance().getRemoteLandmarkAnalyzer(settings);
}
4. Convert the image collected from the camera or album to a bitmap. This is not provided by the landmark recognition SDK, so you'll need to implement it on your own.
Code:
// Select an image.
private void selectLocalImage() {
Intent intent = new Intent(Intent.ACTION_PICK, null);
intent.setDataAndType(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, "image/*");
startActivityForResult(intent, REQUEST_SELECT_IMAGE);
}
Enable the landmark recognition service in the callback.
Code:
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
super.onActivityResult(requestCode, resultCode, data);
// Image selection succeeded.
if (requestCode == REQUEST_SELECT_IMAGE && resultCode == RESULT_OK) {
if (data != null) {
// Obtain the image URI through getData().
imageUri = data.getData();
// Implement the BitmapUtils class by yourself. Obtain the bitmap of the image with its URI.
bitmap = BitmapUtils.loadFromPath(this, imageUri, getMaxWidthOfImage(), getMaxHeightOfImage());
}
// Start landmark recognition.
startAnalyzerImg(bitmap);
}
}
5. Start landmark recognition after obtaining the bitmap of the image. As this service runs on the cloud, a poor network connection may slow down data transmission. Therefore, it's recommended that you add a mask to the bitmap prior to landmark recognition.
Code:
// Start landmark recognition.
private void startAnalyzerImg(Bitmap bitmap) {
if (imageUri == null) {
return;
}
// Add a mask.
progressBar.setVisibility(View.VISIBLE);
img_analyzer_landmark.setImageBitmap(bitmap);
// Create an MLFrame object using android.graphics.Bitmap. JPG, JPEG, PNG, and BMP images are supported. It is recommended that the image size be greater than or equal to 640 x 640 px.
MLFrame mlFrame = new MLFrame.Creator().setBitmap(bitmap).create();
Task<List<MLRemoteLandmark>> task = analyzer.asyncAnalyseFrame(mlFrame);
task.addOnSuccessListener(new OnSuccessListener<List<MLRemoteLandmark>>() {
public void onSuccess(List<MLRemoteLandmark> landmarkResults) {
progressBar.setVisibility(View.GONE);
// Called upon recognition success.
Log.d("BitMapUtils", landmarkResults.get(0).getLandmark());
}
}).addOnFailureListener(new OnFailureListener() {
public void onFailure(Exception e) {
progressBar.setVisibility(View.GONE);
// Called upon recognition failure.
// Recognition failure.
try {
MLException mlException = (MLException) e;
// Obtain the result code. You can process the result code and set a different prompt for users for each result code.
int errorCode = mlException.getErrCode();
// Obtain the error message. You can quickly locate the fault based on the result code.
String errorMessage = mlException.getMessage();
// Record the code and message of the error in the log.
Log.d("BitMapUtils", "errorCode: " + errorCode + "; errorMessage: " + errorMessage);
} catch (Exception error) {
// Handle the conversion error.
}
}
});
}
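The comment in the snippet above recommends an image size of at least 640 x 640 px. A small helper (my own sketch, not part of the SDK) can compute a uniform scale factor to bring a smaller bitmap up to that size before creating the MLFrame, e.g. via Bitmap.createScaledBitmap:

```java
public class ImageSizeHelper {
    // Compute a uniform scale factor so that both dimensions reach at least
    // minPx, without enlarging an image that is already big enough.
    static float scaleFor(int width, int height, int minPx) {
        if (width >= minPx && height >= minPx) {
            return 1.0f;
        }
        float scaleW = (float) minPx / width;
        float scaleH = (float) minPx / height;
        // Use the larger factor so the shorter side also reaches minPx.
        return Math.max(scaleW, scaleH);
    }

    public static void main(String[] args) {
        System.out.println(scaleFor(320, 480, 640)); // 2.0: upscale the short side
        System.out.println(scaleFor(800, 720, 640)); // 1.0: already large enough
    }
}
```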
Testing the App
The following illustrates how the service works, using the Oriental Pearl Tower in Shanghai and the Pyramid of Menkaure as examples:
More Information
1. Before performing landmark recognition, set the API key to configure the cloud authentication information for the app. Otherwise, an error will be reported while the app is running.
2. Landmark recognition runs on the cloud, so completion may be slow. It is recommended that you add a mask before performing landmark recognition.
References
For more details, you can go to:
ML Kit official website
ML Kit Development Documentation page, to find the documents you need
Reddit to join our developer discussion
GitHub to download ML Kit sample codes
Stack Overflow to solve any integration problems
This article is originally from HUAWEI Developer Forum
Forum link: https://forums.developer.huawei.com/forumPortal/en/home
This article is all about the integration of the HMS Safety Detect API in an Android app using MVVM and RxAndroid.
What is HMS Safety Detect API?
Safety Detect provides system integrity check (SysIntegrity), app security check (AppsCheck), malicious URL check (URLCheck), and fake user detection (UserDetect), helping you prevent security threats to your app.
Let’s create a Demo Project:
HUAWEI HMS Safety Detect integration requires the following preparations:
· Creating an AGC application.
· Creating an Android Studio project.
· Generating a signature certificate.
· Generating a signature certificate fingerprint.
· Configuring the signature certificate fingerprint.
· Adding the application package name and saving the configuration file.
· Configuring the Maven address and the AGC Gradle plug-in.
· Configuring the signature file in Android Studio.
In this article, we will implement the SysIntegrity API in a demo project using RxAndroid and MVVM.
Call the API and handle responses.
Verify the certificate chain, signature, and domain name on the server.
1. Open AppGallery Console:
1. We need to create an application inside the console.
2. We need to enable the Safety Detect API.
Go to Console > AppGallery Connect > My apps, click your app, and go to Develop > Manage APIs.
Now enable the Safety Detect API.
Download the agconnect-services.json
Move the downloaded agconnect-services.json file to the app root directory of your Android Studio project.
We need to add the HMS SDK dependency in the app-level build.gradle file
Code:
implementation 'com.huawei.hms:safetydetect:4.0.0.300'
We need to add the Maven repository inside the project-level build.gradle file
Code:
maven { url 'https://developer.huawei.com/repo/' }
We need to add two more dependencies in the app-level build.gradle file
Code:
// MVVM
implementation 'androidx.lifecycle:lifecycle-extensions:2.1.0'
// RxAndroid
implementation 'io.reactivex.rxjava2:rxjava:2.2.8'
implementation 'io.reactivex.rxjava2:rxandroid:2.1.1'
Enable Data Binding
Code:
dataBinding {
enabled = true
}
2. Let’s implement the API:
I have created the following classes.
1. SysIntegrityDataSource: Invokes the SysIntegrity API with the help of RxJava.
2. SysIntegrityViewModel: Handles the response from the SysIntegrity API and provides LiveData for view components.
3. SysIntegrityFragment: Observes the LiveData from the ViewModel class and sets the values in views such as TextViews and a Button.
Note: If you are not familiar with MVVM or RxAndroid, I suggest you go through my following articles:
· Android MyShows App — Rxandroid MVVM LiveData ViewModel DataBinding, Networking with Retrofit, Gson & Glide — Series
· Demystifying Data Binding — Android Jetpack — Series
Let’s see the implementation of SysIntegrityDataSource.java class.
Code:
public class SysIntegrityDataSource {
private static final String APP_ID = "XXXXXXXX";
private Context context;
public SysIntegrityDataSource(Context context) {
this.context = context;
}
public Single<SysIntegrityResp> executeSystemIntegrity() {
return Single.create(this::invokeSysIntegrity);
}
private void invokeSysIntegrity(SingleEmitter<SysIntegrityResp> emitter) {
byte[] nonce = ("Sample" + System.currentTimeMillis()).getBytes();
SafetyDetect.getClient(context)
.sysIntegrity(nonce, APP_ID)
.addOnSuccessListener(emitter::onSuccess)
.addOnFailureListener(emitter::onError);
}
}
invokeSysIntegrity(): This method invokes the SysIntegrity API, emits the data via onSuccess/onError, and passes it to the Single<SysIntegrityResp> observable.
executeSystemIntegrity(): This method creates the Single observable and returns the response from the invokeSysIntegrity() method.
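The callback-to-observable bridging that invokeSysIntegrity() performs can be illustrated without RxJava at all. The sketch below uses a plain CompletableFuture and an invented stand-in callback API (CallbackApi and its fixed result are assumptions for illustration, not part of the Safety Detect SDK); Single.create(emitter -> ...) applies the same idea:

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;

// A tiny stand-in for a callback-style API like SafetyDetect's sysIntegrity:
// it reports its result through success/failure callbacks, not a return value.
class CallbackApi {
    static void check(Consumer<String> onSuccess, Consumer<Exception> onError) {
        onSuccess.accept("integrity-token");
    }
}

public class CallbackBridge {
    // Bridge the callback API into a single-value async container, mirroring
    // how invokeSysIntegrity() forwards onSuccess/onError to the emitter.
    static CompletableFuture<String> execute() {
        CompletableFuture<String> future = new CompletableFuture<>();
        CallbackApi.check(future::complete, future::completeExceptionally);
        return future;
    }

    public static void main(String[] args) {
        System.out.println(execute().join()); // prints integrity-token
    }
}
```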
3. Let’s implement the ViewModel:
I have created SysIntegrityViewModel.java class.
Code:
public class SysIntegrityViewModel extends AndroidViewModel {
private final CompositeDisposable disposables = new CompositeDisposable();
private SysIntegrityDataSource sysIntegrityDataSource;
private MutableLiveData<SysIntegrityResp> systemIntegrityLiveData;
private MutableLiveData<String> error;
public SysIntegrityViewModel(Application app) {
super(app);
sysIntegrityDataSource = new SysIntegrityDataSource(app.getBaseContext());
systemIntegrityLiveData = new MutableLiveData<>();
error = new MutableLiveData<>();
}
public LiveData<SysIntegrityResp> observerSystemIntegrity() {
sysIntegrityDataSource.executeSystemIntegrity()
.subscribeOn(Schedulers.io())
.observeOn(AndroidSchedulers.mainThread())
.subscribe(new SingleObserver<SysIntegrityResp>() {
@Override
public void onSubscribe(Disposable d) {
disposables.add(d);
}
@Override
public void onSuccess(SysIntegrityResp response) {
systemIntegrityLiveData.setValue(response);
}
@Override
public void onError(Throwable e) {
error.setValue(e.getMessage());
}
});
return systemIntegrityLiveData;
}
public LiveData<String> getError() {
return error;
}
@Override
protected void onCleared() {
disposables.clear();
}
}
MutableLiveData<SysIntegrityResp> systemIntegrityLiveData: This field provides the live data and returns the value from the ViewModel to the fragment class.
observerSystemIntegrity(): Observes RxAndroid's Single observable on the main thread and sets the value in systemIntegrityLiveData. If an error occurs while observing, it is posted to MutableLiveData<String> error.
4. Let’s implement the Fragment:
I have created the SysIntegrityFragment.java class, which observes the SysIntegrity API's response and sets the values in the views.
Code:
public class SysIntegrityFragment extends Fragment {
private SysIntegrityViewModel sysIntegrityViewModel;
private FragmentSysBinding sysBinding;
public View onCreateView(@NonNull LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
sysBinding=DataBindingUtil.inflate(inflater, R.layout.fragment_sys, container, false);
sysIntegrityViewModel = ViewModelProviders.of(this).get(SysIntegrityViewModel.class);
sysBinding.btnSys.setOnClickListener(v->{
processView();
sysIntegrityViewModel.observerSystemIntegrity().observe(getViewLifecycleOwner(), this::setSystemIntegrity);
sysIntegrityViewModel.getError().observe(getViewLifecycleOwner(),this::showError);
});
return sysBinding.getRoot();
}
private void setSystemIntegrity(SysIntegrityResp response){
String jwsStr = response.getResult();
String[] jwsSplit = jwsStr.split("\\.");
String jwsPayloadStr = jwsSplit[1];
String payloadDetail = new String(Base64.decode(jwsPayloadStr.getBytes(), Base64.URL_SAFE));
try {
final JSONObject jsonObject = new JSONObject(payloadDetail);
final boolean basicIntegrity = jsonObject.getBoolean("basicIntegrity");
sysBinding.btnSys.setBackgroundResource(basicIntegrity ? R.drawable.btn_round_green : R.drawable.btn_round_red);
sysBinding.btnSys.setText(R.string.rerun);
String isBasicIntegrity = String.valueOf(basicIntegrity);
String basicIntegrityResult = "Basic Integrity: " + isBasicIntegrity;
sysBinding.txtBasicIntegrityTitle.setText(basicIntegrityResult);
if (!basicIntegrity) {
String advice = "Advice: " + jsonObject.getString("advice");
sysBinding.txtPayloadAdvice.setText(advice);
}
} catch (JSONException e) {
// Log the parsing failure instead of swallowing it silently.
Log.e("SysIntegrityFragment", "Failed to parse JWS payload", e);
}
}
private void showError(String error){
Toast.makeText(getActivity().getApplicationContext(), error, Toast.LENGTH_SHORT).show();
sysBinding.btnSys.setBackgroundResource(R.drawable.btn_round_yellow);
sysBinding.btnSys.setText(R.string.rerun);
}
private void processView() {
sysBinding.txtBasicIntegrityTitle.setText("");
sysBinding.txtPayloadBasicIntegrity.setText("");
sysBinding.btnSys.setText(R.string.processing);
sysBinding.btnSys.setBackgroundResource(R.drawable.btn_round_processing);
}
}
We instantiate the ViewModel using the ViewModelProviders factory method.
We consume the response on the button's click event.
On a successful response, we display the result in the TextViews and the button; otherwise, we show an error toast.
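The JWS handling inside setSystemIntegrity() (split on ".", Base64-URL-decode the middle segment) can be exercised in isolation. This sketch uses java.util.Base64 in place of android.util.Base64 and builds a fake, unsigned JWS purely for illustration; a real response is signed and should also be verified on the server:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwsPayload {
    // Extract and decode the middle (payload) segment of a JWS string,
    // mirroring the split/Base64 steps in setSystemIntegrity above.
    static String decodePayload(String jws) {
        String[] parts = jws.split("\\.");
        byte[] decoded = Base64.getUrlDecoder().decode(parts[1]);
        return new String(decoded, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Fake JWS: header.payload.signature, payload is base64url JSON.
        String payload = "{\"basicIntegrity\":true}";
        String encoded = Base64.getUrlEncoder().withoutPadding()
                .encodeToString(payload.getBytes(StandardCharsets.UTF_8));
        String jws = "fakeHeader." + encoded + ".fakeSignature";
        System.out.println(decodePayload(jws)); // prints {"basicIntegrity":true}
    }
}
```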
5. Let’s see the result:
Build the app and hit the run button.
Click Run Detection. Case 1: Success. Case 2: SDK error. Case 3: Integrity false (rooted device).
I hope you have learnt something new today. If you have any query regarding this article, please feel free to post any comments.
Any questions about this, you can try to acquire answers from HUAWEI Developer Forum.
These days, mobile devices are part of our lives. We perform many operations on our mobile phones, such as making payments, logging in to social media accounts, and checking our bank accounts.
These are operations that need a high security level. If our device has malicious apps or similar threats, our accounts will be at risk and we may suffer financial and personal damages.
In this article, I will talk about how to improve app security by using HMS Safety Detect Kit.
To do that, I have developed a simple secure web browser app, because in a web browser we may visit bank websites, log in to social media, and make payments with our credit/bank card information. We wouldn't like our information to be stolen.
App Preparations
I use Koin framework for dependency injection in my application.
To use the Koin framework in our application, we should add 3 dependencies. Below, you can find the dependencies which you need to add to the app-level build.gradle file.
Code:
def koinVersion = "2.2.0-rc-4"
dependencies {
....
// Koin for Android
implementation "org.koin:koin-android:$koinVersion"
// Koin Android Scope feature
implementation "org.koin:koin-android-scope:$koinVersion"
// Koin Android ViewModel feature
implementation "org.koin:koin-android-viewmodel:$koinVersion"
}
After we have implemented the Koin dependencies, we need to create our modules which we will add in our application class.
We will get the necessary objects with the help of these modules. I prefer to define separate module files for different purposes.
Code:
val applicationModule = module {
single(named("appContext")){ androidApplication().applicationContext }
factory { HmsHelper() }
factory { SystemHelper() }
}
Code:
val dataModule = module {
factory<ErrorItem>(named("HmsNotAvailable")) { ErrorItem(
icon = ContextCompat.getDrawable(get(named("appContext")), R.drawable.huawei)!!,
title = androidContext().getString(R.string.hms_not_available),
message = androidContext().getString(R.string.download_hms_core)) }
factory<ErrorItem>(named("DeviceNotSecure")) { ErrorItem(
icon = ContextCompat.getDrawable(get(named("appContext")), R.drawable.ic_device_not_secure)!!,
title = androidContext().getString(R.string.device_not_secure),
message = androidContext().getString(R.string.device_not_secure_message)) }
factory<ErrorItem>(named("MaliciousApps")) { ErrorItem(
icon = ContextCompat.getDrawable(get(named("appContext")), R.drawable.ic_malicious_apps)!!,
title = androidContext().getString(R.string.device_not_secure),
message = androidContext().getString(R.string.malicious_apps_message)) }
}
Code:
val viewModelModule = module {
viewModel { SplashViewModel() }
}
After we have defined our modules, we need to set up Koin in our application class.
While starting Koin, we should add the modules which we defined above, and if we want to use the app context, we should set the androidContext value.
XML:
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.berkberber.hms_securewebbrowser">
....
<application
android:name=".SecureWebBrowserApp"
....
>
....
</application>
</manifest>
Code:
class SecureWebBrowserApp: Application(){
override fun onCreate() {
super.onCreate()
setup()
}
private fun setupKoin(){
startKoin {
androidContext(this@SecureWebBrowserApp)
modules(
applicationModule,
viewModelModule,
dataModule
)
}
}
private fun setup(){
setupKoin()
}
}
To get more information about the app and to see how I used other components such as the Navigation Component, MVVM, etc., you can visit my GitHub repository.
HMS Safety Detect
Safety Detect Kit helps us to improve the security level of our apps. There are 5 different APIs we can use with HMS Safety Detect Kit.
SysIntegrity API: Helps us check device security. We can determine whether the device has been rooted.
AppsCheck API: Helps us determine and list malicious apps installed on the device.
URLCheck API: Helps us check whether websites are safe.
UserDetect API: Helps us determine whether a user is fake.
WifiDetect API: Helps us check whether the Wi-Fi network the device is connected to is secure.
Note: UserDetect API is available outside of Chinese mainland. WifiDetect API is available only in the Chinese mainland.
In this article, I focus on app security. So, I used the SysIntegrity API and the AppsCheck API, and I will give you information about these APIs.
Checking whether HMS is available on the device (optional)
We will use Safety Detect Kit in our application. Safety Detect Kit requires HMS Core to be installed on the device.
We don't have to perform this check, but if the device doesn't have HMS, we can't use the Safety Detect Kit. That's why I recommend checking HMS Core availability on the device; if the device doesn't have HMS, it is better to show an error screen to the user.
To check HMS availability we need to add base HMS dependency to our app-level build.gradle file.
To check whether the device has HMS support, we can write a very basic function called isHmsAvailable().
Code:
def hmsBaseVersion = "5.0.3.300"
dependencies {
...
// HMS Base
implementation "com.huawei.hms:base:${hmsBaseVersion}"
}
Code:
class HmsHelper: KoinComponent{
private val appContext: Context by inject(named("appContext"))
fun isHmsAvailable(): Boolean {
val isAvailable = HuaweiApiAvailability.getInstance().isHuaweiMobileNoticeAvailable(appContext)
return (ConnectionResult.SUCCESS == isAvailable)
}
}
If this function returns true, the device has HMS support and we can start our application.
If this function returns false, the device doesn't have HMS support and we shouldn't start our application; we may show an error screen to the user.
SysIntegrity API
The SysIntegrity API helps us check whether the user's device is secure. If the device has been rooted, the SysIntegrity API will tell us that the device is not secure.
To check the device security, we can call our isDeviceSecure() function.
As you can see, this function creates a nonce value with an algorithm and passes that value to the checkDeviceSecurity() function.
You may ask what the algorithm value used as "Constants.SAFETY_DETECT_ALGORITHM" is. You can define this algorithm value as shown below:
Code:
object Constants{
const val BASIC_INTEGRITY = "basicIntegrity"
const val SAFETY_DETECT_ALGORITHM = "SHA1PRNG"
}
As you can see, we have defined two different values, which we will use while checking device security.
You already know where the SAFETY_DETECT_ALGORITHM value is used.
We will use the BASIC_INTEGRITY value to read the device security status from the JSON payload.
If this value is true, the user's device is secure.
If it is false, the device is not secure or has been rooted.
Code:
object SafetyDetectService : KoinComponent {
private val appContext: Context by inject(named("appContext"))
private val client: SafetyDetectClient = SafetyDetect.getClient(appContext)
fun isDeviceSecure(serviceListener: IServiceListener<Boolean>) {
val nonce = ByteArray(24)
try {
val random: SecureRandom = if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O)
SecureRandom.getInstanceStrong()
else
SecureRandom.getInstance(Constants.SAFETY_DETECT_ALGORITHM)
random.nextBytes(nonce)
} catch (error: NoSuchAlgorithmException) {
serviceListener.onError(ErrorType.NO_SUCH_OBJECT)
// Stop here: without a valid nonce the integrity check should not run.
return
}
checkDeviceSecurity(nonce, serviceListener)
}
private fun checkDeviceSecurity(nonce: ByteArray, serviceListener: IServiceListener<Boolean>){
client.sysIntegrity(nonce, BuildConfig.APP_ID)
.addOnSuccessListener { sysIntegrityResp ->
SafetyDetectHelper.getPayloadDetailAsJson(sysIntegrityResp)?.let { jsonObject ->
serviceListener.onSuccess(jsonObject.getBoolean(Constants.BASIC_INTEGRITY))
} ?: kotlin.run {
serviceListener.onError(ErrorType.SERVICE_FAILURE)
}
}
.addOnFailureListener {
serviceListener.onError(ErrorType.SERVICE_FAILURE)
}
}
}
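The nonce generation at the start of isDeviceSecure() can be sketched on its own. The sketch is in plain Java rather than Kotlin so it runs outside Android; "SHA1PRNG" is the value the article stores in SAFETY_DETECT_ALGORITHM, and the 24-byte length matches the code above:

```java
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;

public class NonceGenerator {
    // Produce a 24-byte random nonce, as isDeviceSecure() does.
    static byte[] newNonce() throws NoSuchAlgorithmException {
        byte[] nonce = new byte[24];
        SecureRandom random = SecureRandom.getInstance("SHA1PRNG");
        random.nextBytes(nonce);
        return nonce;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(newNonce().length); // prints 24
    }
}
```

Each SysIntegrity request should use a fresh nonce so the signed response cannot be replayed.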
As mentioned above, we need to get a JSON object from the SysIntegrityResp object returned by the SysIntegrity API. To do that, we can define a helper object and put all the JSON-related operations there.
As you can see below, we pass a SysIntegrityResp object as a parameter, and with the help of this function we get a JSON object describing our device security.
Code:
object SafetyDetectHelper {
fun getPayloadDetailAsJson(sysIntegrityResp: SysIntegrityResp): JSONObject? {
val jwsStr = sysIntegrityResp.result
val jwsSplit = jwsStr.split(".").toTypedArray()
val jwsPayloadStr = jwsSplit[1]
val payloadDetail = String(Base64.decode(
jwsPayloadStr.toByteArray(StandardCharsets.UTF_8), Base64.URL_SAFE),
StandardCharsets.UTF_8)
return try {
JSONObject(payloadDetail)
}catch (jsonError: JSONException){
null
}
}
}
If the device is secure, we can proceed with whatever we need to do next. If the device is not secure, we should show an error screen and not let the user start our application.
AppsCheck API
The AppsCheck API helps us determine malicious apps on the user's device. If the device has malicious apps, we will not let the user start our application, for the user's own security.
The getMaliciousAppsList() function gives us a list of malicious apps, and it uses the MaliciousAppsData class defined by Huawei as a model class.
This API returns a response object that holds the malicious apps list. If there are no malicious apps on the device, we can return null and let the user use our application.
But if there are malicious apps, we shouldn't let the user start our application, and we can show an error screen. If we would like to, we can also list the malicious apps for the user.
Note: It is better to list the malicious apps and let the user delete these applications from the device. That is what I am doing in my app. Also, if we would like to do more operations on malicious apps, we can define our own class, as described below.
Code:
object SafetyDetectService : KoinComponent {
private val appContext: Context by inject(named("appContext"))
private val client: SafetyDetectClient = SafetyDetect.getClient(appContext)
fun checkMaliciousApps(serviceListener: IServiceListener<ArrayList<MaliciousApps>?>){
client.maliciousAppsList
.addOnSuccessListener { maliciousAppsListResp ->
if(maliciousAppsListResp.rtnCode == CommonCode.OK){
val maliciousAppsList: List<MaliciousAppsData> = maliciousAppsListResp.maliciousAppsList
if(maliciousAppsList.isEmpty())
serviceListener.onSuccess(null)
else{
val maliciousApps = arrayListOf<MaliciousApps>()
for(maliciousApp in maliciousAppsList){
maliciousApp.apply {
maliciousApps.add(MaliciousApps(packageName = apkPackageName,
sha256 = apkSha256,
apkCategory = apkCategory))
}
}
serviceListener.onSuccess(maliciousApps)
}
}
}
.addOnFailureListener {
serviceListener.onError(ErrorType.SERVICE_FAILURE)
}
}
}
If we would like to do more operations, such as getting the app icon, app name, etc., we can define our own data class.
I defined my own data class as shown below to do more specific operations with malicious apps.
Code:
data class MaliciousApps(
val packageName: String,
val sha256: String,
val apkCategory: Int
): KoinComponent{
private val appContext: Context = get(named("appContext"))
private val systemHelper: SystemHelper = get()
fun getAppIcon(): Drawable = systemHelper.getAppIconByPackageName(packageName)
fun getAppName(): String = systemHelper.getAppNameByPackageName(packageName)
fun getThreatDescription(): String {
return when(apkCategory){
1 -> appContext.getString(R.string.risky_app_description)
2 -> appContext.getString(R.string.virus_app_description)
else -> ""
}
}
}
Here I am just using the same values as Huawei's MaliciousAppsData class, but I added my own functions to get the app icon, app name, and threat description.
To get more information about an application by its package name, we can define a new class called SystemHelper and do these operations there.
Code:
class SystemHelper: KoinComponent {
private val appContext: Context by inject(named("appContext"))
/**
* Getting application information by package name
* @param packageName: Package name of the app that we want to get information about
* @return ApplicationInfo class to get app icons, app names and etc. by package name
*/
private fun getAppByPackageName(packageName: String): ApplicationInfo{
return appContext.packageManager.getApplicationInfo(packageName, 0)
}
/**
* Getting application icon by package name
* @param packageName: Package name of the app which we want to get icon
* @return Icon of the application as drawable
*/
fun getAppIconByPackageName(packageName: String): Drawable{
val app = getAppByPackageName(packageName)
return appContext.packageManager.getApplicationIcon(app)
}
/**
* Getting application name by package name
* @param packageName: Package name of the app which we want to get name
* @return Name of the application as drawable
*/
fun getAppNameByPackageName(packageName: String): String{
val app = getAppByPackageName(packageName)
return appContext.packageManager.getApplicationLabel(app).toString()
}
}
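As a side note, the apkCategory mapping used in getThreatDescription() boils down to a simple lookup. This plain-Java sketch mirrors it; the description strings are my own wording, and only the category codes 1 (risky app) and 2 (virus app) come from the Safety Detect response:

```java
public class ThreatCategory {
    // Map Safety Detect apkCategory codes to a short description,
    // following the same branches as getThreatDescription() above.
    static String describe(int apkCategory) {
        switch (apkCategory) {
            case 1: return "Risky application";
            case 2: return "Virus application";
            default: return "";
        }
    }

    public static void main(String[] args) {
        System.out.println(describe(1)); // prints Risky application
        System.out.println(describe(2)); // prints Virus application
    }
}
```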
When the API finds malicious apps, we need to list these apps to the user and let the user delete them from the device.
To do that, we can use the selectedApp() function. This function takes the malicious app and asks the user to delete it.
We need to detect whether the user has accepted deleting the application. We need to start the activity for a result and listen to this result. If the user really deletes the application, we need to remove it from the list. If there are no malicious apps left on the list after removing it, we can navigate the user to our app.
Code:
override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
super.onActivityResult(requestCode, resultCode, data)
if(requestCode == DELETE_REQUEST_CODE){
when(resultCode){
Activity.RESULT_OK -> {
maliciousApps.remove(selectedMaliciousApp)
setRecyclerView()
}
Activity.RESULT_CANCELED -> {
Toast.makeText(requireContext(), requireContext().getString(R.string.should_delete_app), Toast.LENGTH_LONG).show()
}
}
}
}
private var deleteClickListener = object: DeleteClickListener{
override fun selectedApp(maliciousApp: MaliciousApps) {
val deleteIntent = Intent(Intent.ACTION_DELETE).apply {
data = Uri.parse("package:${maliciousApp.packageName}")
putExtra(Intent.EXTRA_RETURN_RESULT, true)
}
startActivityForResult(deleteIntent, DELETE_REQUEST_CODE)
}
}
To learn more about the app and examine it, you can visit my GitHub repository.
References
berkberberr/HMS-SecureWebBrowserExample: This repository is a secure web browser app which is using Huawei Mobile Services. (github.com)
Safety Detect: SysIntegrity, URLCheck, AppsCheck, UserDetect - HUAWEI Developer
4. Configure the Maven repository address for the HMS Core SDK.
5. Integrate the landmark recognition SDK.
Configure the SDK in the build.gradle file in the app directory.
Code:
// Import the landmark recognition SDK.
implementation 'com.huawei.hms:ml-computer-vision-cloud:2.0.5.304'
Add the AppGallery Connect plugin configuration as needed through either of the following methods:
Method 1: Add the following information under the declaration in the file header:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Method 2: Add the plugin configuration in the plugins block:
plugins {
    id 'com.android.application'
    id 'com.huawei.agconnect'
}
Code Development
Obtain the camera permission to use the camera.
Code:
(Mandatory) Set the static permission.
<uses-permission android:name="android.permission.CAMERA" />
(Mandatory) Obtain the dynamic permission.
ActivityCompat.requestPermissions(
        this, new String[]{Manifest.permission.CAMERA}, 1);
Set the API key. This service runs on the cloud, which means that an API key is required to set the cloud authentication information for the app. This step is a must, and failure to complete it will result in an error being reported when the app is running.
Code:
// Set the API key to access the on-cloud services.
private void setApiKey() {
    // Parse the agconnect-services.json file to obtain its information.
    AGConnectServicesConfig config = AGConnectServicesConfig.fromContext(getApplication());
    // Set the API key.
    MLApplication.getInstance().setApiKey(config.getString("client/api_key"));
}
Create a landmark analyzer through either of the following methods:
Method 1: Use the default parameter settings.
Code:
MLRemoteLandmarkAnalyzer analyzer = MLAnalyzerFactory.getInstance().getRemoteLandmarkAnalyzer();
Method 2: Use customized parameter settings through the MLRemoteLandmarkAnalyzerSetting class.
Code:
/**
 * Use custom parameter settings.
 * setLargestNumOfReturns indicates the maximum number of recognition results.
 * setPatternType indicates the analyzer mode.
 * MLRemoteLandmarkAnalyzerSetting.STEADY_PATTERN: The value 1 indicates the stable mode.
 * MLRemoteLandmarkAnalyzerSetting.NEWEST_PATTERN: The value 2 indicates the latest mode.
 */
private void initLandMarkAnalyzer() {
    settings = new MLRemoteLandmarkAnalyzerSetting.Factory()
            .setLargestNumOfReturns(1)
            .setPatternType(MLRemoteLandmarkAnalyzerSetting.STEADY_PATTERN)
            .create();
    analyzer = MLAnalyzerFactory.getInstance().getRemoteLandmarkAnalyzer(settings);
}
Convert the image collected from the camera or album to a bitmap. This is not provided by the landmark recognition SDK, so you'll need to implement it on your own.
Code:
// Select an image.
private void selectLocalImage() {
    Intent intent = new Intent(Intent.ACTION_PICK, null);
    intent.setDataAndType(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, "image/*");
    startActivityForResult(intent, REQUEST_SELECT_IMAGE);
}
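The code above only picks the image; as noted, BitmapUtils itself is left for you to implement. Its core is choosing a BitmapFactory inSampleSize so the decoded bitmap stays within the requested bounds. Below is a minimal sketch of that calculation (the class name mirrors the snippet that uses it, but this is illustrative code, not part of the SDK); the commented lines show how it would plug into a two-pass BitmapFactory decode:

```java
// Illustrative helper (not part of the ML Kit SDK): compute the
// BitmapFactory inSampleSize for bounded decoding.
public class BitmapUtils {
    // Largest power-of-two factor such that the decoded image stays at or
    // above maxWidth x maxHeight (the standard Android down-sampling pattern,
    // which avoids shrinking below the requested bounds).
    public static int calculateInSampleSize(int srcWidth, int srcHeight,
                                            int maxWidth, int maxHeight) {
        int inSampleSize = 1;
        while (srcWidth / (inSampleSize * 2) >= maxWidth
                && srcHeight / (inSampleSize * 2) >= maxHeight) {
            inSampleSize *= 2;
        }
        return inSampleSize;
    }

    // loadFromPath(context, uri, maxW, maxH) would then decode in two passes:
    //   BitmapFactory.Options opts = new BitmapFactory.Options();
    //   opts.inJustDecodeBounds = true;   // first pass: read dimensions only
    //   BitmapFactory.decodeStream(context.getContentResolver().openInputStream(uri), null, opts);
    //   opts.inSampleSize = calculateInSampleSize(opts.outWidth, opts.outHeight, maxW, maxH);
    //   opts.inJustDecodeBounds = false;  // second pass: decode down-sampled pixels
    //   return BitmapFactory.decodeStream(context.getContentResolver().openInputStream(uri), null, opts);
}
```

Keeping the decoded bitmap near the recommended 640 x 640 px also shortens the upload to the cloud service.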
Enable the landmark recognition service in the callback.
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    // Image selection succeeded.
    if (requestCode == REQUEST_SELECT_IMAGE && resultCode == RESULT_OK) {
        if (data != null) {
            // Obtain the image URI through getData().
            imageUri = data.getData();
            // Implement the BitmapUtils class by yourself. Obtain the bitmap of the image with its URI.
            bitmap = BitmapUtils.loadFromPath(this, imageUri, getMaxWidthOfImage(), getMaxHeightOfImage());
        }
        // Start landmark recognition.
        startAnalyzerImg(bitmap);
    }
}
Start landmark recognition after obtaining the bitmap of the image. Since this service runs on the cloud, if the network status is poor, data transmission can be slow. Therefore, it's recommended that you add a mask to the bitmap prior to landmark recognition.
Code:
// Start landmark recognition.
private void startAnalyzerImg(Bitmap bitmap) {
    if (imageUri == null) {
        return;
    }
    // Add a mask.
    progressBar.setVisibility(View.VISIBLE);
    img_analyzer_landmark.setImageBitmap(bitmap);
    // Create an MLFrame object using android.graphics.Bitmap. JPG, JPEG, PNG, and BMP images are supported. It is recommended that the image size be greater than or equal to 640 x 640 px.
    MLFrame mlFrame = new MLFrame.Creator().setBitmap(bitmap).create();
    Task<List<MLRemoteLandmark>> task = analyzer.asyncAnalyseFrame(mlFrame);
    task.addOnSuccessListener(new OnSuccessListener<List<MLRemoteLandmark>>() {
        @Override
        public void onSuccess(List<MLRemoteLandmark> landmarkResults) {
            progressBar.setVisibility(View.GONE);
            // Called upon recognition success.
            Log.d("BitMapUtils", landmarkResults.get(0).getLandmark());
        }
    }).addOnFailureListener(new OnFailureListener() {
        @Override
        public void onFailure(Exception e) {
            progressBar.setVisibility(View.GONE);
            // Called upon recognition failure.
            try {
                MLException mlException = (MLException) e;
                // Obtain the result code. You can process the result code and customize respective messages displayed to users.
                int errorCode = mlException.getErrCode();
                // Obtain the error information. You can quickly locate the fault based on the result code.
                String errorMessage = mlException.getMessage();
                // Record the code and message of the error in the log.
                Log.d("BitMapUtils", "errorCode: " + errorCode + "; errorMessage: " + errorMessage);
            } catch (Exception error) {
                // Handle the conversion error.
            }
        }
    });
}
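The success callback above only logs the first landmark's name, but as the introduction notes, each result also carries a confidence value and coordinates (via accessors such as getPossibility() and getPositionInfos() in the ML Kit result model; verify these against your SDK version). If you want to show the user more than the name, a small plain-Java formatting helper like this hypothetical one keeps the callback tidy:

```java
import java.util.Locale;

// Hypothetical helper: turn one landmark result into a display string.
public class LandmarkFormatter {
    // Call site sketch (assumed accessors):
    //   format(landmark.getLandmark(), landmark.getPossibility(), pos.getLat(), pos.getLng())
    public static String format(String name, float possibility, double lat, double lng) {
        return String.format(Locale.US, "%s (%.0f%%) @ %.4f, %.4f",
                name, possibility * 100, lat, lng);
    }
}
```

A string like "Oriental Pearl Tower (92%) @ 31.2397, 121.4994" can then be bound directly to a TextView.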
Testing the App
The following illustrates how the service works, using the Oriental Pearl Tower in Shanghai and the Pyramid of Menkaure as examples:
More Information
1. Before performing landmark recognition, set the API key to configure the cloud authentication information for the app. Otherwise, an error will be reported while the app is running.
2. Landmark recognition runs on the cloud, so it may take some time to complete. It is recommended that you display the mask before performing landmark recognition.
3. If you are interested in other ML Kit services, feel free to check out our official materials.
To learn more, please visit:
HUAWEI Developers official website
Development Guide
Reddit to join developer discussions
GitHub or Gitee to download the demo and sample code
Stack Overflow to solve integration problems
Follow our official account for the latest HMS Core-related news and updates.
Original Source
Introduction
The API retrieves table structure as well as the text in each cell, and can also recognize merged cells. It supports tables with clear, unbroken lines, but not tables with crooked lines or cells separated only by a colored background. Currently, the API supports recognition from printed materials and snapshots of meeting slides, but it does not work on screenshots or photos of Excel sheets or other table-editing software.
Here, the image resolution should be higher than 720p (1280×720 px), and the aspect ratio (length-to-width ratio) should be lower than 2:1.
In this article, we will learn how to integrate the Huawei HiAI Table Recognition service into an Android application. This service helps us extract table content from images.
Software requirements
1. Any operating system (MacOS, Linux and Windows).
2. Any IDE with Android SDK installed (IntelliJ, Android Studio).
3. HiAI SDK.
4. Minimum API Level 23 is required.
5. Requires EMUI 9.0.0 or later devices.
6. Requires a Kirin 990/985/980/970/825Full/820Full/810Full/720Full/710Full processor.
How to integrate Table recognition.
1. Configure the application on the AGC.
2. Apply for HiAI Engine Library.
3. Client application development process.
Configure application on the AGC
Follow the steps.
Step 1: Register a developer account in AppGallery Connect. If you are already a developer, ignore this step.
Step 2: Create an app by referring to Creating a Project and Creating an App in the Project.
Step 3: Set the data storage location based on the current location.
Step 4: Generating a Signing Certificate Fingerprint.
Step 5: Configuring the Signing Certificate Fingerprint.
Step 6: Download your agconnect-services.json file, paste it into the app root directory.
Apply for HiAI Engine Library
What is Huawei HiAI?
HUAWEI HiAI is a mobile terminal-oriented artificial intelligence (AI) computing platform that opens up three layers of capabilities: service capabilities, application capabilities, and chip capabilities. This three-layer open platform, integrating terminals, chips, and the cloud, brings a more extraordinary experience to users and developers.
How to apply for HiAI Engine?
Follow the steps
Step 1: Navigate to this URL, choose App Service > Development and click HUAWEI HiAI.
Step 2: Click Apply for HUAWEI HiAI kit.
Step 3: Enter required information like Product name and Package name, click Next button.
Step 4: Verify the application details and click Submit button.
Step 5: Click the Download SDK button to open the SDK list.
Step 6: Unzip the downloaded SDK and add it to your Android project under the libs folder.
Step 7: Add the JAR/AAR dependencies to the app-level build.gradle file.
implementation fileTree(include: ['*.aar', '*.jar'], dir: 'libs')
implementation 'com.google.code.gson:gson:2.8.6'

repositories {
    flatDir {
        dirs 'libs'
    }
}
Client application development process
Follow the steps.
Step 1: Create an Android application in the Android studio (Any IDE which is your favorite).
Step 2: Add the App level Gradle dependencies. Choose inside project Android > app > build.gradle.
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Root level gradle dependencies.
Code:
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
Step 3: Add permissions in AndroidManifest.xml.
XML:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.CAMERA" />
Step 4: Build application.
Java:
import android.Manifest;
import android.content.Intent;
import android.graphics.Bitmap;
import android.provider.MediaStore;
import android.support.annotation.Nullable;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;
import android.widget.ImageView;
import android.widget.TableLayout;
import android.widget.TextView;
import android.widget.Toast;
import com.huawei.hiai.vision.common.ConnectionCallback;
import com.huawei.hiai.vision.common.VisionBase;
import com.huawei.hiai.vision.common.VisionImage;
import com.huawei.hiai.vision.image.sr.ImageSuperResolution;
import com.huawei.hiai.vision.text.TableDetector;
import com.huawei.hiai.vision.visionkit.text.config.VisionTableConfiguration;
import com.huawei.hiai.vision.visionkit.text.table.Table;
import com.huawei.hiai.vision.visionkit.text.table.TableCell;
import com.huawei.hiai.vision.visionkit.text.table.TableContent;
import java.util.List;
public class TableRecognition extends AppCompatActivity {
    private boolean isConnection = false;
    private int REQUEST_CODE = 101;
    private int REQUEST_PHOTO = 100;
    private Bitmap bitmap;
    private Button btnImage;
    private ImageView originalImage;
    private ImageView conversionImage;
    private TextView textView;
    private TextView tableContentText;
    private final String[] permission = {
            Manifest.permission.CAMERA,
            Manifest.permission.WRITE_EXTERNAL_STORAGE,
            Manifest.permission.READ_EXTERNAL_STORAGE};
    private ImageSuperResolution resolution;
    private TableLayout tableLayout;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_table_recognition);
        requestPermissions(permission, REQUEST_CODE);
        initializeVisionBase();
        originalImage = findViewById(R.id.super_origin);
        conversionImage = findViewById(R.id.super_image);
        textView = findViewById(R.id.text);
        tableContentText = findViewById(R.id.content_text);
        btnImage = findViewById(R.id.btn_album);
        tableLayout = findViewById(R.id.tableLayout);
        btnImage.setOnClickListener(v -> {
            selectImage();
            tableLayout.removeAllViews();
        });
    }

    private void initializeVisionBase() {
        VisionBase.init(this, new ConnectionCallback() {
            @Override
            public void onServiceConnect() {
                isConnection = true;
                doesDeviceSupportTableRecognition();
            }

            @Override
            public void onServiceDisconnect() {
            }
        });
    }

    private void doesDeviceSupportTableRecognition() {
        resolution = new ImageSuperResolution(this);
        int support = resolution.getAvailability();
        if (support == 0) {
            Toast.makeText(this, "Device supports HiAI Image super resolution service", Toast.LENGTH_SHORT).show();
        } else {
            Toast.makeText(this, "Device doesn't support HiAI Image super resolution service", Toast.LENGTH_SHORT).show();
        }
    }

    public void selectImage() {
        Intent intent = new Intent(Intent.ACTION_PICK);
        intent.setType("image/*");
        startActivityForResult(intent, REQUEST_PHOTO);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (resultCode == RESULT_OK) {
            if (data != null && requestCode == REQUEST_PHOTO) {
                try {
                    bitmap = MediaStore.Images.Media.getBitmap(getContentResolver(), data.getData());
                    originalImage.setImageBitmap(bitmap);
                    if (isConnection) {
                        extractTableFromTheImage();
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    }

    private void extractTableFromTheImage() {
        tableContentText.setVisibility(View.VISIBLE);
        TableDetector mTableDetector = new TableDetector(this);
        VisionImage image = VisionImage.fromBitmap(bitmap);
        VisionTableConfiguration mTableConfig = new VisionTableConfiguration.Builder()
                .setAppType(VisionTableConfiguration.APP_NORMAL)
                .setProcessMode(VisionTableConfiguration.MODE_OUT)
                .build();
        mTableDetector.setVisionConfiguration(mTableConfig);
        mTableDetector.prepare();
        Table table = new Table();
        int mResult_code = mTableDetector.detect(image, table, null);
        if (mResult_code == 0) {
            int count = table.getTableCount();
            List<TableContent> tc = table.getTableContent();
            StringBuilder sbTableCell = new StringBuilder();
            List<TableCell> tableCell = tc.get(0).getBody();
            for (TableCell c : tableCell) {
                List<String> words = c.getWord();
                StringBuilder sb = new StringBuilder();
                for (String s : words) {
                    sb.append(s).append(",");
                }
                String cell = c.getStartRow() + ":" + c.getEndRow() + ": " + c.getStartColumn() + ":" +
                        c.getEndColumn() + "; " + sb.toString();
                sbTableCell.append(cell).append("\n");
            }
            tableContentText.setText("Count = " + count + "\n\n" + sbTableCell.toString());
        }
    }
}
Result
Tips and Tricks
The recommended image resolution is larger than 720p.
Recognition of multiple tables is currently not supported.
If you take an image or video from the camera or gallery, make sure your app has camera and storage permissions.
Add the downloaded huawei-hiai-vision-ove-10.0.4.307.aar and huawei-hiai-pdk-1.0.0.aar files to the libs folder.
Check that the dependencies are added properly.
The latest HMS Core APK is required.
The minimum SDK is 21; otherwise you will get a manifest merge issue.
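Since the service rejects images that are too small or too elongated, it can be worth validating a bitmap's dimensions before calling the detector. Below is a minimal pre-check sketch, assuming the 1280 x 720 minimum resolution and below-2:1 aspect-ratio constraints described earlier (the class and method names are illustrative, not part of the HiAI SDK):

```java
// Illustrative pre-check for the Table Recognition input constraints.
public class TableImageCheck {
    // True when the image is at least 1280 x 720 (in either orientation)
    // and its long-side to short-side ratio is below 2:1.
    public static boolean isSupported(int width, int height) {
        int longSide = Math.max(width, height);
        int shortSide = Math.min(width, height);
        return longSide >= 1280 && shortSide >= 720 && longSide < 2 * shortSide;
    }
}
```

Calling this with bitmap.getWidth() and bitmap.getHeight() before extractTableFromTheImage() lets you show a friendly message instead of a failed detection.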
Conclusion
In this article, we extracted table content from an image, whether for further statistical analysis or simply for editing. This works for tables with a clear and simple structure. We have learnt the following concepts:
1. Introduction to table recognition
2. How to integrate table recognition using Huawei HiAI
3. How to apply for Huawei HiAI
4. How to build the application
Reference
Table Recognition
Apply for Huawei HiAI
Happy coding
To create is human nature. It is this urge that has driven the rapid growth of self-media. But wherever content is created, it is at risk of being copied or stolen, which is why regulators, content platforms, and creators are trying to crack down on plagiarism and protect the rights of creators.
As a solution to this challenge, DCI Kit, developed by Huawei and Copyright Protection Center of China (CPCC), safeguards digital works' copyright by leveraging technologies such as blockchain and big data. It now offers capabilities like DCI user registration, copyright registration, and copyright safeguarding. Information about successfully registered works (including their DCI codes) will be stored in the blockchain, ensuring that all copyright information is reliable and traceable. In this respect, DCI Kit offers all-round copyright protection for creators anywhere.
Effects
After a DCI user initiates a request to register copyright for a work, CPCC will record the copyright-related information and issue a DCI code for the registered work. With blockchain and big data technologies, DCI Kit frees creators from the tedious process of registering for copyright protection, helping maximize the copyright value.
Development Preparations
1. Configuring the Build Dependency for the DCI SDK
Add build dependencies on the DCI SDK in the dependencies block in the app-level build.gradle file.
Code:
// Add DCI SDK dependencies.
implementation 'com.huawei.hms:dci:3.0.1.300'
2. Configuring AndroidManifest.xml
Open the AndroidManifest.xml file in the main folder. Add the following information before <application> to apply for the storage read and write permissions and Internet access permission as needed.
Code:
<!-- Permission to write data into and read data from storage. -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<!-- Permission to access the Internet. -->
<uses-permission android:name="android.permission.INTERNET" />
Development Procedure
1. Initializing the DCI SDK
Initialize the DCI SDK in the onCreate() method of Application.
Code:
@Override
public void onCreate() {
    super.onCreate();
    // Initialize the DCI SDK.
    HwDciPublicClient.initApplication(this);
}
2. Registering a User as the DCI User
Code:
// Obtain the OpenID and access token through Account Kit.
AccountAuthParams authParams = new AccountAuthParamsHelper(AccountAuthParams.DEFAULT_AUTH_REQUEST_PARAM)
        .setAccessToken()
        .setProfile()
        .createParams();
AccountAuthService service = AccountAuthManager.getService(activity, authParams);
Task<AuthAccount> mTask = service.silentSignIn();
mTask.addOnSuccessListener(new OnSuccessListener<AuthAccount>() {
    @Override
    public void onSuccess(AuthAccount authAccount) {
        // Obtain the OpenID.
        String hmsOpenId = authAccount.getOpenId();
        // Obtain the access token.
        String hmsAccessToken = authAccount.getAccessToken();
    }
});

// Set the input parameters.
ParamsInfoEntity paramsInfoEntity = new ParamsInfoEntity();
// Pass the app ID obtained from AppGallery Connect.
paramsInfoEntity.setHmsAppId(hmsAppId);
// Pass the OpenID.
paramsInfoEntity.setHmsOpenId(hmsOpenId);
// hmsPushToken: push token provided by Push Kit. If you do not integrate Push Kit, do not pass this value.
paramsInfoEntity.setHmsPushToken(hmsPushToken);
// Pass the access token.
paramsInfoEntity.setHmsToken(hmsAccessToken);
// Customize the returned code, which is used to check whether the result belongs to your request.
int myRequestCode = 1;
// Launch the user registration screen.
HwDciPublicClient.registerDciAccount(activity, paramsInfoEntity, myRequestCode);

// After the registration is complete, the registration result can be obtained from onActivityResult.
@Override
protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode != myRequestCode || resultCode != RESULT_OK || data == null) {
        return;
    }
    int code = data.getIntExtra(HwDciConstant.DCI_REGISTER_RESULT_CODE, 0);
    if (code == 200) {
        // A DCI UID is returned if the DCI user registration is successful.
        AccountInfoEntity accountInfoEntity = data.getParcelableExtra(HwDciConstant.DCI_ACCOUNT_INFO_KEY);
        String dciUid = accountInfoEntity.getUserId();
    } else {
        // Process the failure based on the code if the DCI user registration fails.
    }
}
3. Registering Copyright for a Work
Pass information related to the work by calling applyDciCode of HwDciPublicClient to register its copyright.
Code:
paramsInfoEntity.setDciUid(dciUid);
paramsInfoEntity.setHmsAppId(hmsAppId);
paramsInfoEntity.setHmsOpenId(hmsOpenId);
paramsInfoEntity.setHmsToken(hmsToken);
// Obtain the local path for storing the digital work.
String imageFilePath = imageFile.getAbsolutePath();
// Obtain the name of the city where the user is now located.
String local = "Beijing";
// Obtain the digital work creation time, which is displayed as a Unix timestamp. The current time is used as an example.
long currentTime = System.currentTimeMillis();
// Call the applyDciCode method.
HwDciPublicClient.applyDciCode(paramsInfoEntity, imageFilePath, local, currentTime, new HwDciClientCallBack<String>() {
    @Override
    public void onSuccess(String workId) {
        // After the copyright registration request is submitted, save workId locally, which will be used to query the registration result.
    }

    @Override
    public void onFail(int code, String msg) {
        // Failed to submit the request for copyright registration.
    }
});
4. Querying the Copyright Registration Result
Call queryWorkDciInfo of HwDciPublicClient to check the copyright registration result according to the returned code. If the registration is successful, obtain the DCI code issued for the work.
Code:
ParamsInfoEntity paramsInfoEntity = new ParamsInfoEntity();
paramsInfoEntity.setDciUid(dciUid);
paramsInfoEntity.setHmsAppId(hmsAppId);
paramsInfoEntity.setHmsOpenId(hmsOpenId);
paramsInfoEntity.setHmsToken(hmsToken);
paramsInfoEntity.setWorkId(workId);
HwDciPublicClient.queryWorkDciInfo(paramsInfoEntity, new HwDciClientCallBack<WorkDciInfoEntity>() {
    @Override
    public void onSuccess(WorkDciInfoEntity result) {
        if (result == null) {
            return;
        }
        // Check the copyright registration result based on the returned status code. 0 indicates that the registration is being processed, 1 indicates that the registration is successful, and 2 indicates that the registration failed.
        if (result.getRegistrationStatus() == 1) {
            // If the copyright registration is successful, a DCI code will be returned.
            mDciCode = result.getDciCode();
        } else if (result.getRegistrationStatus() == 0) {
            // The copyright registration is being processed.
        } else {
            // If the copyright registration fails, a failure cause will be returned.
            String message = result.getMessage();
        }
    }

    @Override
    public void onFail(int code, String msg) {
        // Query failed.
    }
});
5. Adding a DCI Icon for a Digital Work
Call addDciWatermark of HwDciPublicClient to add a DCI icon for the work whose copyright has been successfully registered. The icon serves as an identifier, indicating that the work copyright has been registered.
Code:
// Pass the local path of the digital work that requires a DCI icon.
String imageFilePath = imageFile.getAbsolutePath();
HwDciPublicClient.addDciWatermark(imageFilePath, new HwDciClientCallBack<String>() {
    @Override
    public void onSuccess(String imageBase64String) {
        // After the DCI icon is successfully added, the digital work is returned as a Base64-encoded character string.
    }

    @Override
    public void onFail(int code, String msg) {
        // Failed to add the DCI icon.
    }
});
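The onSuccess callback hands back the watermarked work as a Base64-encoded string. To use it as an image, you will typically decode it back to raw bytes and, on Android, pass those to BitmapFactory.decodeByteArray or write them to a file. A plain-Java sketch of the decoding step (the helper name is illustrative; java.util.Base64 requires API level 26 on Android, and on older levels android.util.Base64 is the equivalent):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;

// Illustrative helper: turn the Base64 string from onSuccess back into image bytes.
public class DciImageDecoder {
    // Decode the watermarked work into raw image bytes (e.g. JPEG data).
    public static byte[] decode(String imageBase64String) {
        return Base64.getDecoder().decode(imageBase64String);
    }

    // Persist the decoded bytes, e.g. as a .jpg next to the original work.
    public static void saveTo(String imageBase64String, Path target) throws IOException {
        Files.write(target, decode(imageBase64String));
    }
}
```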
Source Code
To obtain the source code, please visit the GitHub sample code repo.