Image Kit: Developing the Image Vision Service - Huawei Developers

Overview
The Image Vision service provides you with 24 color filters, which are easy to use.
Use Cases
An app needs to beautify or retouch images.
Development Procedure
The Image Vision service renders the images you provide and returns them to your app.
1. Import the Image Vision service packages.
Code:
import com.huawei.hms.image.vision.*;
import com.huawei.hms.image.vision.bean.ImageVisionResult;
2. Construct an ImageVision instance. The instance will be used to perform related operations such as obtaining the filter effect.
Code:
// Obtain an ImageVisionImpl instance.
ImageVisionImpl imageVisionAPI = ImageVision.getInstance(this);
3. Initialize the service. To call setVisionCallBack during service initialization, your app must implement the ImageVision.VisionCallBack API and override its onSuccess(int successCode) and onFailure(int errorCode) methods. If the ImageVision instance is successfully obtained, onSuccess is called and the Image Vision service is then initialized. If the instance fails to be obtained, onFailure is called and an error code is returned. Your app can use the Image Vision service only after verification succeeds, that is, when initCode is 0, indicating that initialization was successful.
Code:
imageVisionAPI.setVisionCallBack(new ImageVision.VisionCallBack() {
@Override
public void onSuccess(int successCode) {
int initCode = imageVisionAPI.init(context, authJson);
...
}
@Override
public void onFailure(int errorCode) {
...
}
});
Description of authJson parameters:
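A minimal sketch of assembling authJson in code, assuming the standard Image Kit field names (projectId, appId, authApiKey, clientSecret, clientId, token); every value below is a placeholder:
Code:
// Requires org.json.JSONObject; put() throws JSONException.
JSONObject authJson = new JSONObject();
try {
    authJson.put("projectId", "your projectId");
    authJson.put("appId", "your appId");
    authJson.put("authApiKey", "your api_key");
    authJson.put("clientSecret", "your clientSecret");
    authJson.put("clientId", "your clientId");
    authJson.put("token", "your token");
} catch (JSONException e) {
    // Handle the build failure.
}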
You can find the preceding parameters except token in the agconnect-services.json file.
4. Construct a jsonObject object.
Description of requestJson and imageBitmap parameters:
Description of requestJson parameters:
Description of taskJson parameters:
For details, see the description of authJson parameters in step 3.
filterType mapping:
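As a hedged sketch of how these pieces fit together (field names follow the Image Kit request structure; the values are placeholders, and filterType takes one of the 24 mapping codes):
Code:
JSONObject taskJson = new JSONObject();
JSONObject requestJson = new JSONObject();
try {
    taskJson.put("intensity", "1");    // filter strength
    taskJson.put("filterType", "1");   // mapping code of one of the 24 color filters
    taskJson.put("compressRate", "1"); // compression rate of the returned image
    requestJson.put("requestId", "1");
    requestJson.put("taskJson", taskJson);
    requestJson.put("authJson", authJson); // authJson from step 3
} catch (JSONException e) {
    // Handle the build failure.
}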
5. Obtain the rendered image.
When calling the getColorFilter() API, you need to specify the bitmap image to be rendered and select the required filter. In addition, you need to pass the authentication information. Your app can use the service only after it is successfully authenticated. On receiving the request from your app, the filter service parses the image and returns the rendered view to the app.
Note: Call the API in a subthread rather than the main thread. If the image cannot be displayed during testing, it is recommended that you turn off hardware acceleration.
Code:
//Obtain the rendering result from visionResult.
ImageVisionResult visionResult = imageVisionAPI.getColorFilter(requestJson, imageBitmap);
Description of visionResult parameters:
Description of responseJson parameters:
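A minimal sketch of consuming visionResult; the accessor names follow the Image Kit API reference and should be verified against your SDK version:
Code:
int resultCode = visionResult.getResultCode(); // 0 indicates a successful rendering
Bitmap rendered = visionResult.getImage();     // the rendered bitmap
if (resultCode == 0 && rendered != null) {
    // Display the result on the UI thread.
    runOnUiThread(() -> imageView.setImageBitmap(rendered));
}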
To turn off hardware acceleration, configure the AndroidManifest.xml file as follows:
Code:
<application
...
android:hardwareAccelerated="false"
...>
</application>
6. Stop the Image Vision service.
If you do not need to use filters any longer, call the imageVisionAPI.stop() API to stop the Image Vision service. If the returned stopCode is 0, the service is successfully stopped.
Code:
if (null != imageVisionAPI) {
int stopCode = imageVisionAPI.stop();
}

Hi,
I am facing an issue while integrating the Image Vision API:
2020-10-08 18:21:49.971 3143-3143/com.huawei.sujith.imageeditor E/ImageVisionImpl: ClassNotFoundException: Didn't find class "com.huawei.hms.image.visionkit.dynamic.FunctionCreator" on path: DexPathList[[zip file "/data/user_de/152/com.huawei.android.hsf/modules/external/huawei_module_imagevision/10003301/com.huawei.hms.image.visionkit-10003301.apk"],nativeLibraryDirectories=[/data/user_de/152/com.huawei.android.hsf/modules/external/huawei_module_imagevision/10003301/com.huawei.hms.image.visionkit-10003301.apk!/lib/arm64-v8a, /system/lib64, /hw_product/lib64, /system/product/lib64]]

Related

Quick integration for Image Kit Filter functionality

Introduction
HUAWEI Image Kit brings smart image editing, design, and animation capabilities to your application. Its Image Vision service provides 24 unique color filters, 9 smart layouts, image theme tagging, text art, image cropping, and many more functionalities. It also provides the Image Render service with five basic and nine advanced animation capabilities.
It provides two SDKs: the Image Vision SDK and the Image Render SDK.
Image Vision service APIs implement functions such as filters, smart layout, stickers, theme tagging, and image cropping.
Image Render service APIs implement basic and advanced animation effects.
Software Requirements
Java JDK 1.8 or later
Android API Level 26 or higher
EMUI 8.1 or later (applicable to the SDK, but not the fallback-SDK)
HMS Core (APK) 4.0.2.300 or later (applicable to the SDK, but not the fallback-SDK)
Functions
Basic Animations
Advanced Animations
Image Editing
Integration Preparations
To integrate HUAWEI Image Kit, you must complete the following preparations:
Register as a developer.
Create an Android Studio project.
Generate a signing certificate fingerprint.
Configure the signing certificate fingerprint on AG Console.
Integration Steps
In this article, we are going to implement the filter service, one of the five major functionalities provided by Image Vision and the one most commonly used by image editing applications.
First, we need to add the Image Vision SDK dependencies to the app-level build.gradle file.
Code:
dependencies {
...
implementation 'com.huawei.hms:image-vision:1.0.3.301'
implementation 'com.huawei.hms:image-vision-fallback:1.0.3.301'
...
}
Note: Image Kit works only on a device running Android 8.0 or later.
To initialize the service, call setVisionCallBack; your app must implement the ImageVision.VisionCallBack API and override its onSuccess(int successCode) and onFailure(int errorCode) methods.
Code:
ImageVisionImpl imageVisionAPI = ImageVision.getInstance(this);
imageVisionAPI.setVisionCallBack(new ImageVision.VisionCallBack() {
@Override
public void onSuccess(int successCode) {
int initCode = imageVisionAPI.init(context, authJson);
...
}
@Override
public void onFailure(int errorCode) {
...
}
});
After initializing the service, select an image from the gallery:
Code:
public static void getByAlbum(Activity act) {
Intent getAlbum = new Intent(Intent.ACTION_GET_CONTENT);
String[] mimeTypes = {"image/jpeg", "image/png", "image/webp"};
getAlbum.putExtra(Intent.EXTRA_MIME_TYPES, mimeTypes);
getAlbum.setType("image/*");
getAlbum.addCategory(Intent.CATEGORY_OPENABLE);
act.startActivityForResult(getAlbum, 801);
}
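A minimal sketch of receiving the picked image in onActivityResult and decoding it into the bitmap later passed to the filter API; it assumes an Activity with an imageBitmap field (801 matches the request code above, and android.graphics.BitmapFactory plus java.io imports are required):
Code:
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == 801 && resultCode == RESULT_OK && data != null && data.getData() != null) {
        try (InputStream in = getContentResolver().openInputStream(data.getData())) {
            // Decode the selected image into the bitmap used by the filter API.
            imageBitmap = BitmapFactory.decodeStream(in);
        } catch (IOException e) {
            // Handle the read failure.
        }
    }
}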
Note: The image size should not exceed 15 MB, the resolution should not exceed 8000 x 8000 pixels, and the aspect ratio should be between 1:3 and 3:1.
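A minimal sketch of checking a picked bitmap against the resolution and aspect ratio limits above (the 15 MB limit applies to the source file and is not checked here):
Code:
// Validate resolution (<= 8000 x 8000) and aspect ratio (between 1:3 and 3:1).
private boolean isSupported(Bitmap bmp) {
    int w = bmp.getWidth();
    int h = bmp.getHeight();
    float ratio = (float) w / h;
    return w <= 8000 && h <= 8000 && ratio >= 1f / 3f && ratio <= 3f;
}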
The Filter Service provides 24 different color filters, each with its own mapping code. Image Vision returns a bitmap rendered by the selected filter.
Make sure to call the getColorFilter() API in a background thread. When calling the filter API, specify the bitmap of the image to be processed and the filter effect, then set the filtered image on the ImageView.
Code:
//Obtain the rendering result from visionResult.
new Thread(new Runnable() {
    @Override
    public void run() {
        ImageVisionResult visionResult = imageVisionAPI.getColorFilter(requestJson, imageBitmap);
        // getImage() returns the rendered bitmap (per the Image Kit API reference).
        final Bitmap rendered = visionResult.getImage();
        runOnUiThread(new Runnable() {
            @Override
            public void run() {
                imageView.setImageBitmap(rendered);
            }
        });
    }
}).start();
After getting the filtered result, if you no longer need filters, call the imageVisionAPI.stop() API to stop the Image Vision service. If the returned stopCode is 0, the service has been stopped successfully.
Code:
if (null != imageVisionAPI) {
int stopCode = imageVisionAPI.stop();
}
For more details, check https://forums.developer.huawei.com/forumPortal/en/topic/0203429680523150041

How Can I Quickly Integrate Remote Configuration to a Web App?

Recently, I found that Remote Configuration of AppGallery Connect now supports the web platform, which I’ve long been waiting for. Previously, it was only supported on Android, but now it can be applied to web apps. For details about the integration demo, visit GitHub.
Integration Steps
1. Enable the service.
a) Sign in to AppGallery Connect and create a web app.
b) Enable Remote Configuration.
c) Click New parameter and set parameters.
2. Integrate the SDK.
a) Run the following command:
Code:
npm install --save @agconnect/remoteconfig
3. Implement the functions.
a) Obtain the local configuration parameters.
Create a local configuration map in the Vue project.
Apply the local settings.
JavaScript:
export function applyDefault(map) {
return agconnect.remoteConfig().applyDefault(map);
}
b) Call the fetch API to fetch parameter values from the cloud.
JavaScript:
export async function fetch() {
return agconnect.remoteConfig().fetch().then(() => {
return Promise.resolve();
}).catch((err) => {
return Promise.reject(err);
});
}
c) Apply the parameter values locally, either immediately or upon the next launch.
1. Applying the parameter values immediately:
Call the apply API.
JavaScript:
export function apply() {
return agconnect
.remoteConfig().apply().then((res) => {
return Promise.resolve(res);
}
).catch(error => {
return Promise.reject(error);
});
}
2. Applying the parameter values upon the next launch:
Call the applyLastFetch API to apply the last fetched parameter values.
JavaScript:
// Load configurations.
export function applyLastLoad() {
return agconnect
.remoteConfig().loadLastFetched().then(async (res) => {
if (res) {
await agconnect.remoteConfig().apply(res);
}
return Promise.resolve(res);
}
).catch(error => {
return Promise.reject(error);
});
}
d) Obtain all parameter values from local and cloud sources.
Call the getMergedAll API to obtain all parameter values.
JavaScript:
export function getMergedAll() {
return agconnect.remoteConfig().getMergedAll();
}
e) Call the clearAll API to clear all parameter values.
JavaScript:
export function clearAll() {
agconnect.remoteConfig().clearAll();
}
References:
Demo:
https://github.com/AppGalleryConnect/agc-demos/tree/main/Web/agc-romoteconfig-demo-javascript
Remote Configuration Development Guide:
https://developer.huawei.com/consum...-remoteconfig-web-getstarted-0000001056501223
Well explained. Using Remote Configuration, can we change UI elements?
What is the basic use of Remote Config in web?
sujith.e said:
Well explained. Using Remote Configuration, can we change UI elements?
Yes, that is convenient for developers.
ask011 said:
What is the basic use of Remote Config in web?
You can flexibly modify the behavior and appearance of your app by changing cloud parameter values.

Animate a Picture with the Moving Picture Capability

Ever wondered how to animate a static image? The moving picture capability of Video Editor Kit has the answer. It adds authentic facial expressions to images of faces by leveraging AI algorithms such as face detection, face key point detection, facial expression feature extraction, and facial expression animation.
Impressive stuff, right? Let's move on and see how this capability can be integrated.
Integration Procedure
Preparations
For details, please check the official document.
Configuring a Video Editing Project
1. Set the app authentication information.
You can set the information through an API key or access token.
Use the setAccessToken method to set an access token during initialization when the app is started. The access token needs to be set only once.
Code:
MediaApplication.getInstance().setAccessToken("your access token");
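Alternatively, an API key can be set once during initialization; a hedged sketch mirroring the access token call:
Code:
MediaApplication.getInstance().setApiKey("your ApiKey");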
2. Set a License ID.
This ID is used to manage your usage quotas, so ensure that the ID is unique.
Code:
MediaApplication.getInstance().setLicenseId("License ID");
2.1 Initialize the running environment for HuaweiVideoEditor.
When creating a video editing project, first create a HuaweiVideoEditor object and initialize its running environment. When exiting a video editing project, release the HuaweiVideoEditor object.
Create a HuaweiVideoEditor object
Code:
HuaweiVideoEditor editor = HuaweiVideoEditor.create(getApplicationContext());
Specify the position for the preview area.
This area renders video images and is implemented by creating a SurfaceView in the fundamental capability SDK. Ensure that the preview area position in your app is specified before creating this area.
Code:
<LinearLayout
android:id="@+id/video_content_layout"
android:layout_width="0dp"
android:layout_height="0dp"
android:background="@color/video_edit_main_bg_color"
android:gravity="center"
android:orientation="vertical" />
// Specify the preview area position.
LinearLayout mSdkPreviewContainer = view.findViewById(R.id.video_content_layout);
// Set the layout of the preview area.
editor.setDisplay(mSdkPreviewContainer);
Initialize the running environment. If the license verification fails, LicenseException is thrown.
After the HuaweiVideoEditor object is created, it does not yet occupy any system resources. You need to choose the time to initialize its running environment; the necessary threads and timers are then created in the fundamental capability SDK.
Code:
try {
editor.initEnvironment();
} catch (LicenseException error) {
SmartLog.e(TAG, "initEnvironment failed: " + error.getErrorMsg());
finish();
return;
}
2.2 Add a video or image.
Create a video lane and add a video or image to the lane using the file path.
Code:
// Obtain the HVETimeLine object.
HVETimeLine timeline = editor.getTimeLine();
// Create a video lane.
HVEVideoLane videoLane = timeline.appendVideoLane();
// Add a video to the end of the video lane.
HVEVideoAsset videoAsset = videoLane.appendVideoAsset("test.mp4");
// Add an image to the end of the video lane.
HVEImageAsset imageAsset = videoLane.appendImageAsset("test.jpg");
Integrating the Moving Picture Capability
Code:
// Add the moving picture effect.
videoAsset.addFaceReenactAIEffect(new HVEAIProcessCallback() {
@Override
public void onProgress(int progress) {
// Handling progress.
}
@Override
public void onSuccess() {
// Handling success.
}
@Override
public void onError(int errorCode, String errorMessage) {
// Handling failure.
}
});
// Remove the moving picture effect.
videoAsset.removeFaceReenactAIEffect();
Result
This article presents the moving picture capability of Video Editor Kit. For more, check here.
To learn more, please visit:
>> HUAWEI Developers official website
>> Development Guide
>> Reddit to join developer discussions
>> GitHub to download the sample code
>> Stack Overflow to solve integration problems
Follow our official account for the latest HMS Core-related news and updates.

Solution to Creating an Image Classifier

I don't know if it's the same for you, but I always get frustrated when sorting through my phone's album. It seems to take forever before I can find the image that I want to use. As a coder, I can't help but wonder if there's a solution for this. Is there a way to organize an entire album? Well, let's take a look at how to develop an image classifier using a service called image classification.
Development Preparations
1. Configure the Maven repository address for the SDK to be used.
Java:
repositories {
    maven { url 'https://developer.huawei.com/repo/' }
}
2. Integrate the image classification SDK.
Java:
dependencies {
    // Import the base SDK.
    implementation 'com.huawei.hms:ml-computer-vision-classification:3.3.0.300'
    // Import the image classification model package.
    implementation 'com.huawei.hms:ml-computer-vision-image-classification-model:3.3.0.300'
}
Project Configuration
1. Set the authentication information for the app.
This information can be set through an API key or access token.
Use the setAccessToken method to set an access token during app initialization. This needs to be set only once.
Java:
MLApplication.getInstance().setAccessToken("your access token");
Or, use setApiKey to set an API key during app initialization. This needs to be set only once.
Java:
MLApplication.getInstance().setApiKey("your ApiKey");
2. Create an image classification analyzer in on-device static image detection mode.
Java:
// Method 1: Use customized parameter settings for device-based recognition.
MLLocalClassificationAnalyzerSetting setting =
new MLLocalClassificationAnalyzerSetting.Factory()
.setMinAcceptablePossibility(0.8f)
.create();
MLImageClassificationAnalyzer analyzer = MLAnalyzerFactory.getInstance().getLocalImageClassificationAnalyzer(setting);
// Method 2: Use default parameter settings for on-device recognition.
MLImageClassificationAnalyzer analyzer = MLAnalyzerFactory.getInstance().getLocalImageClassificationAnalyzer();
3. Create an MLFrame object.
Java:
// Create an MLFrame object using the bitmap which is the image data in bitmap format. JPG, JPEG, PNG, and BMP images are supported. It is recommended that the image dimensions be greater than or equal to 112 x 112 px.
MLFrame frame = MLFrame.fromBitmap(bitmap);
4. Call asyncAnalyseFrame to classify images.
Java:
Task<List<MLImageClassification>> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<List<MLImageClassification>>() {
@Override
public void onSuccess(List<MLImageClassification> classifications) {
// Recognition success.
// Callback when the MLImageClassification list is returned, to obtain information like image categories.
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
// Recognition failure.
try {
MLException mlException = (MLException)e;
// Obtain the result code. You can process the result code and customize relevant messages displayed to users.
int errorCode = mlException.getErrCode();
// Obtain the error message. You can quickly locate the fault based on the result code.
String errorMessage = mlException.getMessage();
} catch (Exception error) {
// Handle the conversion error.
}
}
});
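Inside onSuccess, each result can be read as follows (a minimal sketch; getName() and getPossibility() are the documented accessors of MLImageClassification):
Java:
for (MLImageClassification classification : classifications) {
    String category = classification.getName();          // category label of the image
    float possibility = classification.getPossibility(); // confidence between 0 and 1
    // Group the image into an album keyed by "category" here.
}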
5. Stop the analyzer after recognition is complete.
Java:
try {
if (analyzer != null) {
analyzer.stop();
}
} catch (IOException e) {
// Exception handling.
}
Demo
Remarks
The image classification capability supports the on-device static image detection mode, on-cloud static image detection mode, and camera stream detection mode. The demo here illustrates only the first mode.
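For reference, a hedged sketch of creating the analyzer in on-cloud static image detection mode instead (class and method names per the ML Kit API; verify them against your SDK version):
Java:
// On-cloud image classification analyzer with a confidence threshold.
MLRemoteClassificationAnalyzerSetting cloudSetting =
        new MLRemoteClassificationAnalyzerSetting.Factory()
                .setMinAcceptablePossibility(0.8f)
                .create();
MLImageClassificationAnalyzer cloudAnalyzer =
        MLAnalyzerFactory.getInstance().getRemoteImageClassificationAnalyzer(cloudSetting);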
I came up with a bunch of application scenarios for image classification, for example:
Education apps: with the help of image classification, users can categorize images taken over a period into different albums.
Travel apps: images can be classified according to where they were taken or by the objects in them.
File sharing apps: users can upload and share images by image category.
References
>>Image Classification Development Guide
>>Reddit to join developer discussions
>>GitHub to download the sample code
>>Stack Overflow to solve integration problems

FAQs About Integrating HMS Core Ads Kit

Ads Kit offers the Publisher Service to help developers obtain high-quality ads through resource integration, backed by a vast Huawei device user base; it also provides the Identifier Service for advertisers to deliver personalized ads and attribute conversions.
Now, let's share some problems that developers often encounter when integrating the kit for your reference.
1. What can I do if the banner ad dimensions zoom in when the device screen orientation is switched from portrait to landscape?
Solution:
Fix the width or height of BannerView. For example, the height of the banner ad is fixed in the following code:
Code:
<com.huawei.hms.ads.banner.BannerView
android:id="@+id/hw_banner_view"
android:layout_width="match_parent"
android:layout_height="45dp"
android:layout_alignParentBottom="true"
android:layout_centerHorizontal="true" />
[Figure: a banner ad displayed normally]
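Equivalently, the banner can be configured and loaded in code with a fixed ad size; a minimal sketch using the documented banner test ad unit ID:
Code:
BannerView bannerView = findViewById(R.id.hw_banner_view);
bannerView.setAdId("testw6vs28auh3"); // test ID; replace with a formal one before release
bannerView.setBannerAdSize(BannerAdSize.BANNER_SIZE_320_50); // fixed 320 x 50 size
bannerView.loadAd(new AdParam.Builder().build());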
2. How do I request more than one native ad at a time?
You can call loadAds() to request multiple native ads.
A request initiated by loadAds() carries two parameters: AdParam and maxAdsNum. The maxAdsNum parameter specifies the maximum number of ads to be loaded (up to 5). The number of returned ads will be less than or equal to the number requested, and the returned ads are all different. Sample code is as follows:
Code:
nativeAdLoader.loadAds(new AdParam.Builder().build(), 5);
After ads are successfully loaded, the SDK calls the onNativeAdLoaded() method of NativeAd.NativeAdLoadedListener multiple times to return a NativeAd object each time. After all the ads are returned, the SDK calls the onAdLoaded() method of AdListener to send a request success notification. If no ad is loaded, the SDK calls the onAdFailed() method of AdListener.
In the sample code below, testy63txaom86 is a test ad unit ID, which should be replaced with a formal one before app release.
Code:
NativeAdLoader.Builder builder = new NativeAdLoader.Builder(this, "testy63txaom86");
NativeAdLoader nativeAdLoader = builder.setNativeAdLoadedListener(new NativeAd.NativeAdLoadedListener() {
@Override
public void onNativeAdLoaded(NativeAd nativeAd) {
// Called each time an ad is successfully loaded.
...
}
}).setAdListener(new AdListener() {
@Override
public void onAdLoaded() {
// Called when all the ads are successfully returned.
...
}
@Override
public void onAdFailed(int errorCode) {
// Called when all ads fail to be loaded.
...
}
}).build();
nativeAdLoader.loadAds(new AdParam.Builder().build(), 5);
Note: Before reusing NativeAdLoader to load ads, ensure that the previous request is complete.
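A minimal sketch of that guard, assuming the loader exposes isLoading() as described in the Ads SDK:
Code:
// Only start a new batch request once the previous one has finished.
if (!nativeAdLoader.isLoading()) {
    nativeAdLoader.loadAds(new AdParam.Builder().build(), 5);
}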
3. What can I do if "onMediaError : -1" is reported when a roll ad is played in rolling mode in an app?
After the roll ad is played for the first time, the error code -1 is returned when the roll ad is played again.
Cause analysis:
1. The network is unavailable.
2. The roll ad is not released after the playback is complete. As a result, the error code -1 is returned during the next playback.
Solution:
1. Check the network. To allow apps to use cleartext HTTP and HTTPS traffic on devices with targetSdkVersion 28 or later, configure the following information in the AndroidManifest.xml file:
Code:
<application
...
android:usesCleartextTraffic="true"
>
...
</application>
2. Release the roll ad in onMediaCompletion() of InstreamMediaStateListener. The roll ad needs to be released each time playback is complete.
Code:
public void onMediaCompletion(int playTime) {
updateCountDown(playTime);
removeInstream();
playVideo();
}
private void removeInstream() {
if (null != instreamView) {
instreamView.onClose();
instreamView.destroy();
instreamContainer.removeView(instreamView);
instreamContainer.setVisibility(View.GONE);
instreamAds.clear();
}
}
To learn more, please visit:
>> Ads Kit
>> Development Guide of Ads Kit
