HUAWEI Scene Kit to add 3D Objects on Android app - Huawei Developers

For more articles like this, visit the HUAWEI Developer Forum.
HUAWEI Scene Kit
-Lightweight rendering engine that features high performance and low consumption.
-Provides advanced descriptive APIs for you to edit, operate, and render 3D materials.
-Adopts physically based rendering (PBR) pipelines to achieve realistic rendering effects.
-Easily loads and displays complicated 3D objects on Android phones.
-Can be applied to AR virtual fitting rooms, 3D virtual galleries for art, VR remote teaching, and more.
Supported Devices:
HMS Core 4.0.2.300 or later
Android 8.0/EMUI 8.0 or later
Devices must support the Vulkan graphics API.
Supported Formats:
1. Rendered materials: glTF, GLB
2. glTF textures: PNG, JPEG
3. Skybox materials: DDS (cube map)
Implementation Process
1. Add build dependencies in the dependencies section.
Code:
implementation 'com.huawei.scenekit:sdk:1.5.0.300'
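Note that the Scene Kit SDK is distributed through the Huawei Maven repository, so the repository address must already be configured in the project-level build.gradle (a minimal sketch, matching the configuration shown in the other articles below):
Code:
allprojects {
    repositories {
        maven { url 'https://developer.huawei.com/repo/' }
    }
}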
2. Create a SampleView that inherits from SceneView.
(SceneView inherits from SurfaceView and overrides methods including surfaceCreated, surfaceChanged, surfaceDestroyed, onTouchEvent, and onDraw.)
Code:
public class SampleView extends SceneView {
    public SampleView(Context context) {
        super(context);
    }

    public SampleView(Context context, AttributeSet attributeSet) {
        super(context, attributeSet);
    }
}
3. Override the surfaceCreated method in SampleView: call the super method, then load the 3D scene materials, skybox materials, and lighting maps.
Code:
@Override
public void surfaceCreated(SurfaceHolder holder) {
    super.surfaceCreated(holder);
    // Loads the model of a scene by reading files from assets.
    loadScene("scene.gltf");
    // Loads skybox materials by reading files from assets.
    loadSkyBox("skyboxTexture.dds");
    // Loads specular maps by reading files from assets.
    loadSpecularEnvTexture("specularEnvTexture.dds");
    // Loads diffuse maps by reading files from assets.
    loadDiffuseEnvTexture("diffuseEnvTexture.dds");
}
(The .gltf and .dds files are stored in the assets directory, which is inside the app directory.)
4. You may also override the other surface lifecycle methods as needed.
Code:
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
    super.surfaceChanged(holder, format, width, height);
}

@Override
public void surfaceDestroyed(SurfaceHolder holder) {
    super.surfaceDestroyed(holder);
}

@Override
public boolean onTouchEvent(MotionEvent motionEvent) {
    return super.onTouchEvent(motionEvent);
}

@Override
public void onDraw(Canvas canvas) {
    super.onDraw(canvas);
}
5. Create a SampleActivity that inherits from Activity, and call setContentView in the onCreate method to load the layout containing the SampleView.
Code:
public class SampleActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_sample);
    }
}
6. In activity_sample.xml, declare the SampleView:
Code:
<com.huawei.huaweitextocr.huawei.SampleView
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />
Reference Link:
https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides-V5/client-dev-0000001050162137-V5

Related

Android: How to Develop an ID Photo DIY Applet in 30 Min

The source code is shared below, and it's quite friendly to developers who are new to Android. The whole process takes only 30 minutes.
This is application-level development, so we won't go through the image segmentation algorithm itself. I used Huawei ML Kit, which provides the image segmentation capability, to develop this app. You will learn how to quickly develop an ID photo DIY applet using this SDK.
Background
I don't know if you have had a similar experience: all of a sudden, your school or company asks for a one-inch or two-inch head photo of you, for a passport or student card application that has strict requirements on the photo's background color. Many people don't have time to visit a photo studio, or the photos they already have don't meet the background requirements. I had such an experience myself. My school asked for a passport photo while the campus photo studio was closed, so I hurriedly took a photo with my phone, using a bedspread as the background to deal with it. As a result, I was scolded by the teacher.
Many years later, ML Kit's machine learning offers an image segmentation function. Using this SDK to develop a small ID photo DIY applet could perfectly resolve that old embarrassment.
Here is the demo for the result.
How effective is it? Is it great? You just need to write a small program to quickly achieve it!
Core Tip: This SDK is free, and all Android models are covered!
ID Photo Development in Practice
1. Preparation
1.1 Add the Huawei Maven Repository in the Project-Level Gradle
Open the Android Studio project-level build.gradle file.
Add the following Maven addresses:
Code:
buildscript {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}
allprojects {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}
1.2 Add SDK Dependencies in the App-Level build.gradle
Import the base SDK and the human body segmentation model package:
Code:
dependencies {
    implementation 'com.huawei.hms:ml-computer-vision:1.0.2.300'
    implementation 'com.huawei.hms:ml-computer-vision-image-segmentation-body-model:1.0.2.301'
}
1.3 Add the Model in the AndroidManifest.xml File
To enable the app to automatically update the latest machine learning model to the user's device after the user installs your app from the Huawei AppGallery, add the following statement to the AndroidManifest.xml file:
Code:
<manifest ...>
    <application ...>
        <meta-data
            android:name="com.huawei.hms.ml.DEPENDENCY"
            android:value="imgseg" />
    </application>
</manifest>
1.4 Apply for Camera and Storage Permissions in the AndroidManifest.xml File
Code:
<!-- Storage permission -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<!-- Camera permission -->
<uses-permission android:name="android.permission.CAMERA" />
2. Two Key Steps of Code Development
2.1 Dynamic Permission Application
Code:
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    if (!allPermissionsGranted()) {
        getRuntimePermissions();
    }
}

@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
        @NonNull int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    if (requestCode != PERMISSION_REQUESTS) {
        return;
    }
    boolean isNeedShowDiag = false;
    for (int i = 0; i < permissions.length; i++) {
        if (permissions[i].equals(Manifest.permission.READ_EXTERNAL_STORAGE)
                && grantResults[i] != PackageManager.PERMISSION_GRANTED) {
            isNeedShowDiag = true;
        }
    }
    if (isNeedShowDiag && !ActivityCompat.shouldShowRequestPermissionRationale(this, Manifest.permission.CALL_PHONE)) {
        AlertDialog dialog = new AlertDialog.Builder(this)
                .setMessage(getString(R.string.camera_permission_rationale))
                .setPositiveButton(getString(R.string.settings), new DialogInterface.OnClickListener() {
                    @Override
                    public void onClick(DialogInterface dialog, int which) {
                        // Open the settings page of this app by package name.
                        Intent intent = new Intent(Settings.ACTION_APPLICATION_DETAILS_SETTINGS);
                        intent.setData(Uri.parse("package:" + getPackageName()));
                        startActivityForResult(intent, 200);
                    }
                })
                .setNegativeButton(getString(R.string.cancel), new DialogInterface.OnClickListener() {
                    @Override
                    public void onClick(DialogInterface dialog, int which) {
                        finish();
                    }
                }).create();
        dialog.show();
    }
}
2.2 Creating an Image Segmentation Detector
The image segmentation detector can be created through the image segmentation detection configurator MLImageSegmentationSetting.
Code:
MLImageSegmentationSetting setting = new MLImageSegmentationSetting.Factory()
        .setAnalyzerType(MLImageSegmentationSetting.BODY_SEG)
        .setExact(true)
        .create();
this.analyzer = MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer(setting);
2.3 Create "mlframe" Object through android.graphics.bitmap for Analyzer to Detect Pictures
The image segmentation detector can be created through the image segmentation detection configurator "MLImageSegmentationSetting".
MLFrame mlFrame = new MLFrame.Creator().setBitmap(this.originBitmap).create();
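(The snippet assumes this.originBitmap already holds the image to segment. As a hypothetical sketch, it could be decoded from a photo file chosen by the user:)
Code:
// Hypothetical: decode the user-selected photo into the bitmap wrapped by MLFrame.
this.originBitmap = BitmapFactory.decodeFile(photoPath);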
2.4 Call "asyncanalyseframe" Method for Image Segmentation
Code:
// 创建一个task,处理图像分割检测器返回的结果。 Task<MLImageSegmentation> task = analyzer.asyncAnalyseFrame(frame); // 异步处理图像分割检测器返回结果 Task<MLImageSegmentation> task = this.analyzer.asyncAnalyseFrame(mlFrame);
task.addOnSuccessListener(new OnSuccessListener<MLImageSegmentation>() {
@Override public void onSuccess(MLImageSegmentation mlImageSegmentationResults) {
// Transacting logic for segment success.
if (mlImageSegmentationResults != null) {
StillCutPhotoActivity.this.foreground = mlImageSegmentationResults.getForeground();
StillCutPhotoActivity.this.preview.setImageBitmap(StillCutPhotoActivity.this.foreground);
StillCutPhotoActivity.this.processedImage = ((BitmapDrawable) ((ImageView) StillCutPhotoActivity.this.preview).getDrawable()).getBitmap();
StillCutPhotoActivity.this.changeBackground();
} else {
StillCutPhotoActivity.this.displayFailure();
}
}
}).addOnFailureListener(new OnFailureListener() {
@Override public void onFailure(Exception e) {
// Transacting logic for segment failure.
StillCutPhotoActivity.this.displayFailure();
return;
}
});
2.5 Change the Picture Background
Code:
this.backgroundBitmap = BitmapUtils.loadFromPath(StillCutPhotoActivity.this, id, targetedSize.first, targetedSize.second);
BitmapDrawable drawable = new BitmapDrawable(backgroundBitmap);
this.preview.setDrawingCacheEnabled(true);
this.preview.setBackground(drawable);
this.preview.setImageBitmap(this.foreground);
this.processedImage = Bitmap.createBitmap(this.preview.getDrawingCache());
this.preview.setDrawingCacheEnabled(false);
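Since the demo requests storage permission, the composited photo is presumably saved afterwards. As a minimal, hypothetical sketch (not from the original demo), the result could be written to the gallery like this:
Code:
// Hypothetical helper: save the composited ID photo to the gallery.
// Assumes the storage permission has been granted on this API level.
private void saveProcessedImage(Bitmap bitmap) {
    String title = "id_photo_" + System.currentTimeMillis();
    // insertImage is deprecated on newer Android versions but keeps this sketch short.
    MediaStore.Images.Media.insertImage(getContentResolver(), bitmap, title, "ID photo with new background");
}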
Conclusion
In this way, a small ID photo DIY applet is complete. Let's see the demo.
If you have strong hands-on skills, you can also add features such as changing suits. The source code has been uploaded to GitHub, where you are welcome to improve the function further:
https://github.com/HMS-Core/hms-ml-demo/tree/master/ID-Photo-DIY
(The project directory is ID-Photo-DIY.)
Based on the image segmentation capability, you can not only build an ID photo DIY applet, but also realize the following related functions:
1. Cut out people's portraits in daily-life photos, change the background to make interesting pictures, or blur the background to get more beautiful and artistic photos.
2. Identify elements in an image such as the sky, plants, food, cats and dogs, flowers, water, sand, buildings, and mountains, and apply targeted beautification to them, such as making the sky bluer and the water clearer.
3. Identify objects in a video stream, apply special effects to the stream, and change the background.
As for other functions, let's brainstorm together!
For a more detailed development guide, please refer to the official website of the Huawei Developer Alliance:
https://developer.huawei.com/consumer/en/doc/development/HMS-Guides/ml-introduction-4

Huawei Game Service for libGDX games

For more information like this, visit the HUAWEI Developer Forum.
Original link: https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0201320947104400266&fid=0101187876626530001
Hi folks! In this guide I’ll explain how to integrate the Huawei Game Service with gdx-gamesvcs, a libGDX extension library for Game Services.
Medium Link: https://medium.com/huawei-developers/huawei-game-service-for-libgdx-games-3b189d8b4a82
What’s gdx-gamesvcs?
gdx-gamesvcs is a libGDX extension that aims to provide a cross-platform API for Game Services.
It supports the following services:
• Google Play Games;
• Apple Game Center;
• GameJolt;
• Amazon GameCircle;
• Kongregate;
• Huawei Game Service (starting from version 1.1.0-SNAPSHOT)
For further information, please visit: https://github.com/MrStahlfelge/gdx-gamesvcs
Huawei Game Service for libGDX
It supports the following features:
- Account Kit: login, logout;
- Game Service: player info, achievements, leaderboards, game events;
- Drive Kit: save, load and delete data on Cloud
Requirements
• EMUI 3.0+ / Android 4.4+
• HMS Core 4.0.0.300+
• Android Studio 3.0+
Preparation
• Create an app in AppGallery Connect. The app type must be Game.
• Create a libGDX project.
• Generate a signature certificate.
• Generate a signature certificate fingerprint.
• Configure the signature certificate fingerprint.
• Add the app package name and save the configuration file.
• Configure the Maven repository address and AppGallery Connect gradle plug-in (see the sketch after this list).
• Configure the signature file in Android Studio.
For further information, please visit: https://developer.huawei.com/consumer/en/codelab/HMSPreparation/index.html#0
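For the Maven repository and AppGallery Connect plug-in step above, the project-level build.gradle typically ends up looking like this (a minimal sketch; the agcp version shown is an assumption borrowed from the ML Kit article below and should match your AGC setup):
Code:
buildscript {
    repositories {
        google()
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        // AppGallery Connect gradle plug-in (version is an assumption).
        classpath 'com.huawei.agconnect:agcp:1.4.1.300'
    }
}
allprojects {
    repositories {
        google()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}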
Finally coding!
Gradle:
implementation "de.golfgl.gdxgamesvcs:gdx-gamesvcs-android-huawei:1.1.0-SNAPSHOT"
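Because this is a SNAPSHOT artifact, it is presumably resolved from a snapshot repository rather than Maven Central. If Gradle cannot find the dependency, adding the Sonatype snapshots repository (an assumption based on common libGDX extension setups) should help:
Code:
allprojects {
    repositories {
        // Snapshot builds are assumed to be published to the Sonatype snapshots repository.
        maven { url 'https://oss.sonatype.org/content/repositories/snapshots/' }
    }
}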
Now, in your Activity extending the AndroidApplication class (a libGDX class) and optionally implementing IGameServiceListener (to listen to events), instantiate the HuaweiGameServicesClient and manage the lifecycle events. Like this:
Code:
public class GameServiceActivity extends AndroidApplication implements IGameServiceListener {
    private IGameServiceClient gsClient = null;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Pass true if you want to manage data on Cloud.
        this.gsClient = new HuaweiGameServicesClient(this, true);
        this.gsClient.setListener(this);
        // gdxGameSvcsApp is your ApplicationListener; config is your AndroidApplicationConfiguration.
        initialize(gdxGameSvcsApp, config);
    }

    @Override
    protected void onPause() {
        super.onPause();
        if (this.gsClient != null) {
            this.gsClient.pauseSession();
        }
    }

    @Override
    protected void onResume() {
        super.onResume();
        if (this.gsClient != null) {
            this.gsClient.resumeSession();
        }
    }

    @Override
    public void gsOnSessionActive() {
        Log.d("SESSION", "ACTIVE");
    }

    @Override
    public void gsOnSessionInactive() {
        Log.d("SESSION", "INACTIVE");
    }

    @Override
    public void gsShowErrorToUser(GsErrorType et, String msg, Throwable t) {
        Toast.makeText(this, msg, Toast.LENGTH_LONG).show();
    }
}
So? Unleash the power of the HuaweiGameServicesClient!
Code:
...
// ACCOUNT KIT
gsClient.logIn();
gsClient.logOff();

// GAME SERVICE
// Achievements
gsClient.showAchievements();
gsClient.unlockAchievement("id");
gsClient.incrementAchievement("id", increment, completionPercentage);
gsClient.fetchAchievements(new IFetchAchievementsResponseListener() {
    @Override
    public void onFetchAchievementsResponse(Array<IAchievement> achievements) {
    }
});

// Leaderboards
gsClient.showLeaderboards("id");
gsClient.submitToLeaderboard("id", score, scoreTips);
gsClient.fetchLeaderboardEntries("id", limit, isRelatedToPlayer, new IFetchLeaderBoardEntriesResponseListener() {
    @Override
    public void onLeaderBoardResponse(Array<ILeaderBoardEntry> leaderBoard) {
    }
});

// Game events
gsClient.submitEvent("id", increment);

// DRIVE KIT
gsClient.saveGameState(null, gameState, progressValue, new ISaveGameStateResponseListener() {
    @Override
    public void onGameStateSaved(boolean success, String errorCode) {
    }
});
gsClient.loadGameState(null, new ILoadGameStateResponseListener() {
    @Override
    public void gsGameStateLoaded(byte[] gameState) {
    }
});
gsClient.deleteGameState(null, new ISaveGameStateResponseListener() {
    @Override
    public void onGameStateSaved(boolean success, String errorCode) {
    }
});
...
Do You want moaAAaaRrRr?
Read my article about Huawei IAP for libGDX: Medium
https://medium.com/huawei-developers/huawei-iap-for-libgdx-games-eb5aec5662af

Introduction to AI-Empowered Image Segmentation

Image segmentation technology is gathering steam thanks to developments in multiple fields. Take the autonomous vehicle as an example: it has been developing rapidly since last year and has become a showpiece for both well-established companies and start-ups. Most of them use computer vision, which includes image segmentation, as the technical basis for self-driving cars, and it is image segmentation that allows a car to understand the situation on the road and to tell the road from the people.
Image segmentation is not only applied to autonomous vehicles, but is also used in a number of different fields, including:
Medical imaging, where it helps doctors make diagnoses and perform tests
Satellite image analysis, where it helps analyze tons of data
Media apps, where it cuts people out of videos to prevent bullet comments from obstructing them
It is a widespread application. I myself am also a fan of this technology. Recently, I've tried an image segmentation service from HMS Core ML Kit, which I found outstanding. This service has an original framework for semantic segmentation, which labels each and every pixel in an image, so the service can clearly, completely cut out something as delicate as a hair. The service also excels at processing images with different qualities and dimensions. It uses algorithms of structured learning to prevent white borders — which is a common headache of segmentation algorithms — so that the edges of the segmented image appear more natural.
I'm delighted to be able to share my experience of implementing this service here.
Preparations
First, configure the Maven repository and integrate the SDK of the service. I followed the instructions here to complete all these steps.
1. Configure the Maven repository address
Java:
buildscript {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        ...
        classpath 'com.huawei.agconnect:agcp:1.4.1.300'
    }
}

allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
2. Add build dependencies
Java:
dependencies {
    // Import the base SDK.
    implementation 'com.huawei.hms:ml-computer-vision-segmentation:2.1.0.301'
    // Import the package of the human body segmentation model.
    implementation 'com.huawei.hms:ml-computer-vision-image-segmentation-body-model:2.1.0.303'
}
3. Add the permission in the AndroidManifest.xml file.
Java:
<!-- Permission to write to external storage. -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
Development Procedure
1. Dynamically request the necessary permissions
Java:
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    if (!allPermissionsGranted()) {
        getRuntimePermissions();
    }
}

private boolean allPermissionsGranted() {
    for (String permission : getRequiredPermissions()) {
        if (!isPermissionGranted(this, permission)) {
            return false;
        }
    }
    return true;
}

private void getRuntimePermissions() {
    List<String> allNeededPermissions = new ArrayList<>();
    for (String permission : getRequiredPermissions()) {
        if (!isPermissionGranted(this, permission)) {
            allNeededPermissions.add(permission);
        }
    }
    if (!allNeededPermissions.isEmpty()) {
        ActivityCompat.requestPermissions(
                this, allNeededPermissions.toArray(new String[0]), PERMISSION_REQUESTS);
    }
}

private static boolean isPermissionGranted(Context context, String permission) {
    return ContextCompat.checkSelfPermission(context, permission) == PackageManager.PERMISSION_GRANTED;
}

private String[] getRequiredPermissions() {
    try {
        PackageInfo info =
                this.getPackageManager()
                        .getPackageInfo(this.getPackageName(), PackageManager.GET_PERMISSIONS);
        String[] ps = info.requestedPermissions;
        if (ps != null && ps.length > 0) {
            return ps;
        } else {
            return new String[0];
        }
    } catch (RuntimeException e) {
        throw e;
    } catch (Exception e) {
        return new String[0];
    }
}
2. Create an image segmentation analyzer
Java:
MLImageSegmentationSetting setting = new MLImageSegmentationSetting.Factory()
        // Set the segmentation mode to human body segmentation.
        .setAnalyzerType(MLImageSegmentationSetting.BODY_SEG)
        .create();
this.analyzer = MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer(setting);
3. Use android.graphics.Bitmap to create an MLFrame object for the analyzer to detect images
Java:
MLFrame mlFrame = new MLFrame.Creator().setBitmap(this.originBitmap).create();
4. Call asyncAnalyseFrame for image segmentation
Java:
// Create a task to process the result returned by the analyzer.
Task<MLImageSegmentation> task = this.analyzer.asyncAnalyseFrame(mlFrame);
// Asynchronously process the result returned by the analyzer.
task.addOnSuccessListener(new OnSuccessListener<MLImageSegmentation>() {
    @Override
    public void onSuccess(MLImageSegmentation mlImageSegmentationResults) {
        if (mlImageSegmentationResults != null) {
            // Obtain the human body segment cut out from the image.
            foreground = mlImageSegmentationResults.getForeground();
            preview.setImageBitmap(MainActivity.this.foreground);
        }
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Handle the detection failure here.
    }
});
5. Change the image background
Java:
// Obtain an image from the album.
backgroundBitmap = Utils.loadFromPath(this, id, targetedSize.first, targetedSize.second);
BitmapDrawable drawable = new BitmapDrawable(backgroundBitmap);
preview.setBackground(drawable);
preview.setImageBitmap(this.foreground);
Result
To learn more, please visit:
>> HUAWEI Developers official website
>> Development Guide
>> Reddit to join developer discussions
>> GitHub to download the sample code
>> Stack Overflow to solve integration problems
Follow our official account for the latest HMS Core-related news and updates.

Solution to Creating an Image Classifier

I don't know if it's the same for you, but I always get frustrated when sorting through my phone's album. It seems to take forever before I can find the image that I want to use. As a coder, I can't help but wonder if there's a solution for this. Is there a way to organize an entire album? Well, let's take a look at how to develop an image classifier using a service called image classification.
Development Preparations
1. Configure the Maven repository address for the SDK to be used.
Java:
repositories {
    maven { url 'https://developer.huawei.com/repo/' }
}
2. Integrate the image classification SDK.
Java:
dependencies {
    // Import the base SDK.
    implementation 'com.huawei.hms:ml-computer-vision-classification:3.3.0.300'
    // Import the image classification model package.
    implementation 'com.huawei.hms:ml-computer-vision-image-classification-model:3.3.0.300'
}
Project Configuration
1. Set the authentication information for the app.
This information can be set through an API key or access token.
Use the setAccessToken method to set an access token during app initialization. This needs to be set only once.
Java:
MLApplication.getInstance().setAccessToken("your access token");
Or, use setApiKey to set an API key during app initialization. This needs to be set only once.
Java:
MLApplication.getInstance().setApiKey("your ApiKey");
2. Create an image classification analyzer in on-device static image detection mode.
Java:
// Method 1: Use customized parameter settings for device-based recognition.
MLLocalClassificationAnalyzerSetting setting =
        new MLLocalClassificationAnalyzerSetting.Factory()
                .setMinAcceptablePossibility(0.8f)
                .create();
MLImageClassificationAnalyzer analyzer = MLAnalyzerFactory.getInstance().getLocalImageClassificationAnalyzer(setting);

// Method 2: Use default parameter settings for on-device recognition.
MLImageClassificationAnalyzer analyzer = MLAnalyzerFactory.getInstance().getLocalImageClassificationAnalyzer();
3. Create an MLFrame object.
Java:
// Create an MLFrame object using the bitmap which is the image data in bitmap format. JPG, JPEG, PNG, and BMP images are supported. It is recommended that the image dimensions be greater than or equal to 112 x 112 px.
MLFrame frame = MLFrame.fromBitmap(bitmap);
4. Call asyncAnalyseFrame to classify images.
Java:
Task<List<MLImageClassification>> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<List<MLImageClassification>>() {
    @Override
    public void onSuccess(List<MLImageClassification> classifications) {
        // Recognition success.
        // Callback when the MLImageClassification list is returned, to obtain information like image categories.
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Recognition failure.
        try {
            MLException mlException = (MLException) e;
            // Obtain the result code. You can process the result code and customize relevant messages displayed to users.
            int errorCode = mlException.getErrCode();
            // Obtain the error message. You can quickly locate the fault based on the result code.
            String errorMessage = mlException.getMessage();
        } catch (Exception error) {
            // Handle the conversion error.
        }
    }
});
5. Stop the analyzer after recognition is complete.
Java:
try {
    if (analyzer != null) {
        analyzer.stop();
    }
} catch (IOException e) {
    // Exception handling.
}
Demo
Remarks
The image classification capability supports the on-device static image detection mode, on-cloud static image detection mode, and camera stream detection mode. The demo here illustrates only the first mode.
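As a rough illustration of the camera stream detection mode (a minimal sketch based on ML Kit's LensEngine API; the dimensions, frame rate, and transactor logic are assumptions rather than the demo's code):
Java:
// Bind the analyzer to the camera stream via a transactor.
analyzer.setTransactor(new MLAnalyzer.MLTransactor<MLImageClassification>() {
    @Override
    public void destroy() {
    }

    @Override
    public void transactResult(MLAnalyzer.Result<MLImageClassification> results) {
        // Handle the classifications produced for each camera frame.
    }
});
// Create a LensEngine that feeds camera frames to the analyzer.
LensEngine lensEngine = new LensEngine.Creator(getApplicationContext(), analyzer)
        .setLensType(LensEngine.BACK_LENS)
        .applyDisplayDimension(1280, 720)
        .applyFps(20.0f)
        .enableAutomaticFocus(true)
        .create();
try {
    // Start the preview on an existing SurfaceView's holder.
    lensEngine.run(surfaceHolder);
} catch (IOException e) {
    lensEngine.release();
}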
I came up with a bunch of application scenarios for image classification, for example:
1. Education apps: image classification enables users to categorize the images they have taken over a period into different albums.
2. Travel apps: image classification allows such apps to classify images according to where they were taken or by the objects in them.
3. File sharing apps: image classification allows users to upload and share images by image category.
References
>>Image classification Development Guide
>>Reddit to join developer discussions
>>GitHub to download the sample code
>>Stack Overflow to solve integration problems

How to Build a Translation Function

Programmers are — or should be — voracious readers. To keep up with the latest updates in the world of software, we need to be constantly scrolling through books, forum articles, news, and more.
This process is certainly mentally enriching, but it can also be a tiring and tedious one due to one major obstacle: language. I used to struggle a lot with reading articles written in another language, because I was looking up every word that I couldn't understand in the dictionary in order to make sense of what I was reading — until I developed this.
No muss, no fuss. Just select the foreign text you don't understand and instantly translate it into a language that you want.
Now let's get to the development part. Not being much of a linguist, I knew that I would struggle to develop a translation feature for my app all on my own.
Luckily, I got a great helper — HMS Core ML Kit's translation service. It supports real-time and on-device translation, making translation possible even in the absence of an Internet connection. With the help of the translation service, language barriers become a thing of the past.
Now, I'll explain how I developed this function, using the source code for the demo above.
Development Process
Preparations
Make necessary preparations as detailed here. This includes the following:
Configure the app information.
Enable the service.
Integrate the SDK of the service.
Configure the obfuscation scripts (see the sketch after this list).
Declare necessary permissions.
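For the obfuscation step, the ProGuard configuration for HMS SDK integrations typically looks like the following (a minimal sketch drawn from Huawei's standard integration guidance; verify it against the official guide for your SDK version):
Code:
-ignorewarnings
-keepattributes *Annotation*
-keepattributes Exceptions
-keepattributes InnerClasses
-keepattributes Signature
-keep class com.huawei.hianalytics.**{*;}
-keep class com.huawei.updatesdk.**{*;}
-keep class com.huawei.hms.**{*;}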
Function Building
1. Set the app authentication information via an access token:
Code:
MLApplication.getInstance().setAccessToken("your access token");
Or an API key:
Code:
MLApplication.getInstance().setApiKey("your ApiKey");
2. Create a real-time translator.
Code:
MLLocalTranslateSetting setting = new MLLocalTranslateSetting
        .Factory()
        .setSourceLangCode(mSourceLangCode)
        .setTargetLangCode(mTargetLangCode)
        .create();
this.localTranslator = MLTranslatorFactory.getInstance().getLocalTranslator(setting);
3. Query the languages supported by the service.
Code:
MLTranslateLanguage.getCloudAllLanguages().addOnSuccessListener(new OnSuccessListener<Set<String>>() {
    @Override
    public void onSuccess(Set<String> result) {
        // Callback when the supported languages are obtained.
    }
});
4. Translate the text.
Code:
localTranslator.preparedModel(downloadStrategy, modelDownloadListener).addOnSuccessListener(new OnSuccessListener<Void>() {
    @Override
    public void onSuccess(Void aVoid) {
        final Task<String> task = localTranslator.asyncTranslate(input);
        task.addOnSuccessListener(new OnSuccessListener<String>() {
            @Override
            public void onSuccess(String text) {
                displaySuccess(text, true);
            }
        }).addOnFailureListener(new OnFailureListener() {
            @Override
            public void onFailure(Exception e) {
                displayFailure(e);
            }
        });
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        displayFailure(e);
    }
});
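The snippet above references downloadStrategy and modelDownloadListener without defining them. A minimal sketch of how they might be created (the variable names are assumptions; the calls come from ML Kit's model download API):
Code:
// Download the on-device translation model, here restricted to Wi-Fi.
MLModelDownloadStrategy downloadStrategy = new MLModelDownloadStrategy.Factory()
        .needWifi()
        .create();
// Optional callback for tracking the model download progress.
MLModelDownloadListener modelDownloadListener = new MLModelDownloadListener() {
    @Override
    public void onProcess(long alreadyDownLength, long totalLength) {
        // Update download progress UI here.
    }
};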
5. Release resources occupied by the translator when the translation is complete.
Code:
if (localTranslator != null) {
    localTranslator.stop();
}
And voila, the translation function is built.
Besides e-book readers, there are lots of other apps that can benefit greatly from having a translation function, such as travel apps, which can use the translation service to translate foreign road signs and menus for visitors. Translation is also useful for educational apps, to help users who are not familiar with the language used in the app.
That concludes my development journey for the demo e-book reader. What other ideas and suggestions do you have for using the translation function? Feel free to provide your thoughts in the comments section.
References
Why Translation is Important In A World Where English is Everywhere
ML Kit
