Images Drawn by the canvas Component Cannot Be Displayed in the Album

Symptom
After an image drawn by the canvas component is saved to the album, the image cannot be displayed in the album. Only a black background image is displayed.
The image with green background in the following figure is drawn by the canvas component.
After the image is saved to the album, the display result is as shown in the following figure.
Cause Analysis
Huawei Quick App Center creates a new ctx object each time the getContext method is called, so successive calls return different ctx objects, some of which are empty. If the save operation happens to use an empty ctx object, the saved image has no content. The code is as follows:
Code for using canvas to draw an image:
pic() {
    var test = this.$element("canvas");
    var ctx = test.getContext("2d");
    var img = new Image();
    img.src = "http://www.huawei.com/Assets/CBG/img/logo.png";
    img.onload = function () {
        console.log("Image loaded successfully");
        ctx.drawImage(img, 10, 10, 300, 300);
    }
    img.onerror = function () {
        console.log("Failed to load the image");
    }
},
Code for saving the image:
save() {
    var test = this.$element("canvas");
    test.toTempFilePath({
        fileType: "jpg",
        quality: 1.0,
        success: (data) => {
            console.log(`Canvas toTempFilePath success ${data.uri}`)
            console.log(`Canvas toTempFilePath success ${data.tempFilePath}`)
            media.saveToPhotosAlbum({
                uri: data.uri,
                success: function (ret) {
                    console.log("save success: ");
                },
                fail: function (data, code) {
                    console.log("handling fail, code=" + code);
                }
            })
        },
        fail: (data) => {
            console.log('Canvas toTempFilePath data =' + data);
        },
        complete: () => {
            console.log('Canvas toTempFilePath complete.');
        }
    })
}
Solution
Define ctx as a global variable. Before drawing, check whether it is empty and obtain a new 2D context only if it is. This way, drawing and saving always use the same context.
Optimized code:
var ctx; // Define a global variable.
pic() {
    if (!ctx) { // If ctx is empty, obtain the 2D context and assign it.
        var test = this.$element("canvas");
        var tt = test.getContext("2d");
        ctx = tt;
    }
    var img = new Image();
    img.src = "http://www.huawei.com/Assets/CBG/img/logo.png";
    img.onload = function () {
        console.log("Image loaded successfully");
        ctx.drawImage(img, 10, 10, 300, 300);
    }
    img.onerror = function () {
        console.log("Failed to load the image");
    }
}
For details about Huawei developers and HMS, visit the HUAWEI Developer Forum: forums.developer.huawei.com

Related

How to plot a route between two cities on a map?

In this short tutorial I will guide you step by step on how to plot a route between two cities (namely Toronto and Montreal, in Canada) on a map using HMS Maps Kit, retrieving the data from the Huawei Directions API.
Prerequisites
I will assume you have the following knowledge:
Android development
Kotlin
Retrofit 2 and GSON libraries
Setting up your app to use HMS Maps Kit (if you haven't, please review this quick codelab)
Without further ado, let's get started!
Showing the map
First of all, let's define our layout. It will simply be a SupportMapFragment, as follows:
Code:
<?xml version="1.0" encoding="utf-8"?>
<fragment xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/map"
    android:name="com.huawei.hms.maps.SupportMapFragment"
    android:layout_width="match_parent"
    android:layout_height="match_parent">
</fragment>
As you can see, we're using com.huawei.hms.maps.SupportMapFragment as the fragment. We save this layout as activity_route_demo.
Now we can define our activity and set the layout content to it as usual:
Code:
class RouteDemoActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_route_demo)
    }
}
If you set this activity as the main activity and run the app, it should show the map. I suggest you do so now, to troubleshoot any possible problems connecting to HMS before proceeding.
Marking route origin and destination on the map
Now we need to mark Toronto and Montreal on the map.
Code:
class RouteDemoActivity : AppCompatActivity() {

    private val TORONTO = LatLng(43.6532, -79.3832)
    private val MONTREAL = LatLng(45.5017, -73.5673)

    private lateinit var map: HuaweiMap

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_route_demo)
        (supportFragmentManager.findFragmentById(R.id.map) as SupportMapFragment).getMapAsync {
            map = it
            setMarkers()
        }
    }

    private fun setMarkers() {
        val startMarker = MarkerOptions().position(TORONTO)
        map.addMarker(startMarker)
        val endMarker = MarkerOptions().position(MONTREAL)
        map.addMarker(endMarker)
        val cameraUpdate = CameraUpdateFactory.newLatLngZoom(TORONTO, 5f)
        map.moveCamera(cameraUpdate)
    }
}
We added both markers and moved and zoomed the camera on Toronto. Note that any actions on the map must be performed after the map has been initialized, that is, inside the getMapAsync callback.
Defining data structures and the call for the Directions API
To query Huawei Directions API we will use Retrofit and Gson, so we add the corresponding dependencies:
Code:
dependencies {
    implementation 'com.squareup.retrofit2:retrofit:2.7.2'
    implementation 'com.squareup.retrofit2:converter-gson:2.7.2'
    implementation 'com.google.code.gson:gson:2.8.6'
}
We specify the base URL for the Directions API (for more details, please check the documentation):
Code:
private val BASE_URL = "https://mapapi.cloud.huawei.com/mapApi/v1/routeService/"
We define the data structures for the Directions API query:
Code:
data class RouteServiceRequest(
    @SerializedName("destination")
    val destination: Location = Location(),
    @SerializedName("origin")
    val origin: Location = Location()
)

data class Location(
    @SerializedName("lat")
    val lat: Double = 0.0,
    @SerializedName("lng")
    val lng: Double = 0.0
)
And the response:
Code:
data class Routes(
    @SerializedName("returnCode")
    val returnCode: String = "",
    @SerializedName("returnDesc")
    val returnDesc: String = "",
    @SerializedName("routes")
    val routes: List<Route> = listOf()
)

data class Route(
    @SerializedName("bounds")
    val bounds: Bounds = Bounds(),
    @SerializedName("paths")
    val paths: List<Path> = listOf()
)

data class Path(
    @SerializedName("distance")
    val distance: Int = 0,
    @SerializedName("duration")
    val duration: Int = 0,
    @SerializedName("durationInTraffic")
    val durationInTraffic: Int = 0,
    @SerializedName("endLocation")
    val endLocation: Location = Location(),
    @SerializedName("startLocation")
    val startLocation: Location = Location(),
    @SerializedName("steps")
    val steps: List<Step> = listOf()
)

data class Bounds(
    @SerializedName("northeast")
    val northeast: Location = Location(),
    @SerializedName("southwest")
    val southwest: Location = Location()
)

data class Step(
    @SerializedName("action")
    val action: String = "",
    @SerializedName("distance")
    val distance: Int = 0,
    @SerializedName("duration")
    val duration: Int = 0,
    @SerializedName("endLocation")
    val endLocation: Location = Location(),
    @SerializedName("orientation")
    val orientation: Int = 0,
    @SerializedName("polyline")
    val polyline: List<Location> = listOf(),
    @SerializedName("roadName")
    val roadName: String = "",
    @SerializedName("startLocation")
    val startLocation: Location = Location()
)
This is not the end. For more content about this operation, you can visit THIS ARTICLE in the HUAWEI Developer Forum.

Capture Image and convert to text by using Text Recognition ML Kit

For more articles like this, visit the HUAWEI Developer Forum and Medium.
How to integrate the Text Recognition ML Kit?
How to extract text from images of receipts, business cards, and documents?
1. Add the dependencies in the build.gradle file.
Code:
implementation 'com.huawei.hms:ml-computer-vision-ocr:1.0.3.300' // Import the base SDK.
implementation 'com.huawei.hms:ml-computer-vision-ocr-cn-model:1.0.3.300' // Import the Chinese and English model package.
implementation 'com.huawei.hms:ml-computer-vision-ocr-jk-model:1.0.3.300' // Import the Japanese and Korean model package.
implementation 'com.huawei.hms:ml-computer-vision-ocr-latin-model:1.0.3.300' // Import the Latin-based language model package.
2. In MainActivity, clicking btn_detect opens AndroidCameraApi to capture a photo.
Code:
btn_detect.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        Intent cameraActivity = new Intent(MainActivity.this, AndroidCameraApi.class);
        startActivity(cameraActivity);
    }
});
3. Create an AndroidCameraApi activity to capture a photo and get the result as a byte array of the image. Convert the byte array to a bitmap and save it in the SharedPreferences pref.
Code:
Intent intent = new Intent(AndroidCameraApi.this, MainActivity.class);
Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length, null);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.PNG, 100, baos);
byte[] b = baos.toByteArray();
String temp = Base64.encodeToString(b, Base64.DEFAULT);
SharedPreferences pref = getApplicationContext().getSharedPreferences("MyPref", 0); // 0 - for private mode
SharedPreferences.Editor editor = pref.edit();
editor.putString("data", temp);
editor.commit();
startActivity(intent);
4. Back in MainActivity, decode the saved Base64 string back to a bitmap and pass the bitmap to the localAnalyzer function.
Code:
SharedPreferences pref = getApplicationContext().getSharedPreferences("MyPref", 0); // 0 - for private mode
String data = pref.getString("data", null);
if (data != null) {
    byte[] encodeByte = Base64.decode(data, Base64.DEFAULT);
    Bitmap bitmap = BitmapFactory.decodeByteArray(encodeByte, 0, encodeByte.length);
    Log.d("Bitmap", bitmap.toString());
    imageView.setImageBitmap(bitmap);
    localAnalyzer(bitmap);
}
5. Pass the bitmap to localAnalyzer (on-device text recognition). Create the MLTextAnalyzer text analyzer to recognize text in images.
You can set MLLocalTextSetting to specify the languages to recognize.
Create an MLFrame object from an android.graphics.Bitmap; JPG, JPEG, PNG, and BMP formats are supported.
The recognized text is returned in the onSuccess callback.
Code:
private MLTextAnalyzer analyzer;

private void localAnalyzer(Bitmap bitmaps) {
    MLLocalTextSetting setting = new MLLocalTextSetting.Factory()
            .setOCRMode(MLLocalTextSetting.OCR_DETECT_MODE)
            .setLanguage("en")
            .create();
    this.analyzer = MLAnalyzerFactory.getInstance().getLocalTextAnalyzer(setting);
    MLFrame frame = MLFrame.fromBitmap(bitmaps);
    Task<MLText> task = this.analyzer.asyncAnalyseFrame(frame);
    task.addOnSuccessListener(new OnSuccessListener<MLText>() {
        @Override
        public void onSuccess(MLText text) {
            MainActivity.this.displaySuccess(text);
        }
    }).addOnFailureListener(new OnFailureListener() {
        @Override
        public void onFailure(Exception e) {
            MainActivity.this.displayFailure();
        }
    });
}
6. Use the returned MLText and display it in your text field (by block and line).
Code:
private void displaySuccess(MLText mlText) {
    String result = "";
    List<MLText.Block> blocks = mlText.getBlocks();
    for (MLText.Block block : blocks) {
        for (MLText.TextLine line : block.getContents()) {
            result += line.getStringValue() + "\n";
        }
    }
    this.mTextView.setText(result);
}
7. Also add the permissions in the manifest.
Code:
<uses-permission android:name="android.permission.CAMERA"/>
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
8. Text recognition from the captured image on the cloud (more accurate!).
Code:
private void remoteAnalyzer(Bitmap bitmaps) {
    MLRemoteTextSetting setting =
            new MLRemoteTextSetting.Factory()
                    .setTextDensityScene(MLRemoteTextSetting.OCR_COMPACT_SCENE)
                    .setLanguageList(new ArrayList<String>() {{
                        this.add("zh");
                        this.add("en");
                    }})
                    .setBorderType(MLRemoteTextSetting.ARC)
                    .create();
    this.analyzer = MLAnalyzerFactory.getInstance().getRemoteTextAnalyzer(setting);
    MLFrame frame = MLFrame.fromBitmap(bitmaps);
    Task<MLText> task = this.analyzer.asyncAnalyseFrame(frame);
    task.addOnSuccessListener(new OnSuccessListener<MLText>() {
        @Override
        public void onSuccess(MLText text) {
            MainActivity.this.remoteDisplaySuccess(text);
        }
    }).addOnFailureListener(new OnFailureListener() {
        @Override
        public void onFailure(Exception e) {
            MainActivity.this.displayFailure();
        }
    });
}
Reference:
https://developer.huawei.com/consumer/en/doc/HMSCore-Guides-V5/text-recognition-0000001050040053-V5
What languages are supported?
Can you please share the official document of Huawei ML Kit?
Thank you

Intermediate: How to Verify Phone Number and Anonymous Account Login using Huawei Auth Service-AGC in Unity

Introduction
In this article, we will look at how Huawei Auth Service-AGC provides a secure and reliable user authentication system for your application. Building such a system yourself is a very difficult process; with the Huawei Auth Service SDK, you only need to access the Auth Service capabilities, without implementing anything on the cloud.
Here I cover anonymous sign-in, which lets users access your app as guests; when a user signs in anonymously, Auth Service provides a unique ID to identify that user. I also cover authenticating a mobile number by verifying it through an OTP.
Overview
You need to install the Unity software, and I assume that you have prior knowledge of Unity and C#.
Hardware Requirements
A computer (desktop or laptop) running Windows 10.
A Huawei phone (with the USB cable), which is used for debugging.
Software Requirements
Java JDK 1.7 or later.
Unity software installed.
Visual Studio/Code installed.
HMS Core (APK) 4.X or later.
Integration Preparations
1. Create a project in AppGallery Connect.
2. Create Unity project.
3. Add Huawei HMS AGC Services to the project.
https://assetstore.unity.com/packag...awei-hms-agc-services-176968#version-original
4. Download and save the configuration file.
Add the agconnect-services.json file to the following directory: Assets > Plugins > Android.
5. Add the following plugin in LauncherTemplate.

apply plugin: 'com.huawei.agconnect'
6. Add the following plugin and dependencies in MainTemplate.

apply plugin: 'com.huawei.agconnect'

implementation 'com.huawei.agconnect:agconnect-auth:1.4.2.301'
implementation 'com.huawei.hms:base:5.2.0.300'
implementation 'com.huawei.hms:hwid:5.2.0.300'
7. Add the Maven repository to the buildscript and allprojects repositories, and the classpath, in BaseProjectTemplate.

maven { url 'https://developer.huawei.com/repo/' }
8. Create an empty GameObject and rename it to GameManager. Create a UI canvas with input text fields and buttons, and assign onClick events to the respective components as shown below.
9. Create the MainActivity and the GameManager script as shown below.
MainActivity.java
package com.huawei.AuthServiceDemo22;

import android.content.Intent;
import android.os.Bundle;
import android.util.Log;

import com.hw.unity.Agc.Auth.ThirdPartyLogin.LoginManager;
import com.unity3d.player.UnityPlayerActivity;
import com.huawei.agconnect.auth.AGConnectAuth;
import com.huawei.agconnect.auth.AGConnectAuthCredential;
import com.huawei.agconnect.auth.AGConnectUser;
import com.huawei.agconnect.auth.PhoneAuthProvider;
import com.huawei.agconnect.auth.SignInResult;
import com.huawei.agconnect.auth.VerifyCodeResult;
import com.huawei.agconnect.auth.VerifyCodeSettings;
import com.huawei.hmf.tasks.OnFailureListener;
import com.huawei.hmf.tasks.OnSuccessListener;
import com.huawei.hmf.tasks.Task;
import com.huawei.hmf.tasks.TaskExecutors;

public class MainActivity extends UnityPlayerActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        LoginManager.getInstance().initialize(this);
        Log.d("DDDD", " Inside onCreate ");
    }

    // Anonymous sign-in: Auth Service returns a unique UID for the guest user.
    public static void AnoniomousLogin() {
        AGConnectAuth.getInstance().signInAnonymously().addOnSuccessListener(new OnSuccessListener<SignInResult>() {
            @Override
            public void onSuccess(SignInResult signInResult) {
                AGConnectUser user = signInResult.getUser();
                String uid = user.getUid();
                Log.d("DDDD", " Login Anonymous UID : " + uid);
            }
        }).addOnFailureListener(new OnFailureListener() {
            @Override
            public void onFailure(Exception e) {
                Log.d("DDDD", " Inside ERROR " + e.getMessage());
            }
        });
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        LoginManager.getInstance().onActivityResult(requestCode, resultCode, data);
    }

    // Request an OTP verification code for the given phone number.
    public static void sendVerifCode(String phone) {
        VerifyCodeSettings settings = VerifyCodeSettings.newBuilder()
                .action(VerifyCodeSettings.ACTION_REGISTER_LOGIN)
                .sendInterval(30) // Shortest sending interval, 30-120s.
                .build();
        String countCode = "+91";
        String phoneNumber = phone;
        if (notEmptyString(countCode) && notEmptyString(phoneNumber)) {
            Task<VerifyCodeResult> task = PhoneAuthProvider.requestVerifyCode(countCode, phoneNumber, settings);
            task.addOnSuccessListener(TaskExecutors.uiThread(), new OnSuccessListener<VerifyCodeResult>() {
                @Override
                public void onSuccess(VerifyCodeResult verifyCodeResult) {
                    Log.d("DDDD", " ==>" + verifyCodeResult);
                }
            }).addOnFailureListener(TaskExecutors.uiThread(), new OnFailureListener() {
                @Override
                public void onFailure(Exception e) {
                    Log.d("DDDD", " Inside onFailure");
                }
            });
        }
    }

    static boolean notEmptyString(String string) {
        return string != null && !string.isEmpty();
    }

    // Link the phone number to the current (anonymous) user using the OTP.
    public static void linkPhone(String verifyCode1, String phone) {
        Log.d("DDDD", " verifyCode1 " + verifyCode1);
        String phoneNumber = phone;
        String countCode = "+91";
        String verifyCode = verifyCode1;
        Log.e("DDDD", " verifyCode " + verifyCode);
        AGConnectAuthCredential credential = PhoneAuthProvider.credentialWithVerifyCode(
                countCode,
                phoneNumber,
                null, // Password, can be null.
                verifyCode);
        AGConnectAuth.getInstance().getCurrentUser().link(credential).addOnSuccessListener(new OnSuccessListener<SignInResult>() {
            @Override
            public void onSuccess(SignInResult signInResult) {
                String linkedNumber = signInResult.getUser().getPhone();
                String uid = signInResult.getUser().getUid();
                Log.d("DDDD", "phone number: " + linkedNumber + ", uid: " + uid);
            }
        }).addOnFailureListener(new OnFailureListener() {
            @Override
            public void onFailure(Exception e) {
                Log.e("DDDD", "Login error, please try again, error:" + e.getMessage());
            }
        });
    }
}
GameManager.cs
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;

public class GameManager : MonoBehaviour
{
    public InputField OtpField, inputFieldPhone;
    string otp = null, phone = "";

    // Start is called before the first frame update
    void Start()
    {
        inputFieldPhone.text = "9740424108";
    }

    public void onClickButton()
    {
        phone = inputFieldPhone.text;
        using (AndroidJavaClass javaClass = new AndroidJavaClass("com.huawei.AuthServiceDemo22.MainActivity"))
        {
            javaClass.CallStatic("sendVerifCode", phone);
        }
    }

    public void LinkPhone()
    {
        otp = OtpField.text;
        Debug.Log(" OTP " + otp);
        using (AndroidJavaClass javaClass = new AndroidJavaClass("com.huawei.AuthServiceDemo22.MainActivity"))
        {
            javaClass.CallStatic("linkPhone", otp, phone);
        }
    }

    public void AnoniomousLogin()
    {
        using (AndroidJavaClass javaClass = new AndroidJavaClass("com.huawei.AuthServiceDemo22.MainActivity"))
        {
            javaClass.CallStatic("AnoniomousLogin");
        }
    }
}
10. To build the APK, choose File > Build Settings > Build. To build and run, choose File > Build Settings > Build And Run.
Result
Tips and Tricks
Add the agconnect-services.json file without fail.
Make sure the dependencies are added in the build files.
Make sure that you have enabled Auth Service in the AppGallery Connect console.
Make sure that you have enabled the required authentication modes in Auth Service.
Conclusion
We have learned how to integrate Huawei Auth Service-AGC anonymous account login and mobile number verification through OTP in Unity game development. In short, Auth Service provides a secure and reliable user authentication system for your application.
Thank you so much for reading this article; I hope it helps you.
Reference
Official documentation (service introduction): developer.huawei.com
Unity Auth Service manual: Auth Service (AGC) | HuaweiService | 1.3.4 (docs.unity.cn)
Auth Service codelab: https://developer.huawei.com/consumer/en/codelabsPortal/carddetails/AuthenticationService
Very interesting security.

Beginner: Find yoga pose using Huawei ML kit skeleton detection - Part 1

Introduction
In this article, I will explain what skeleton detection is and how it works in Android. By the end of this tutorial, we will have built Huawei skeleton detection into an Android application using Huawei ML Kit.
What is Skeleton detection?
The Huawei ML Kit skeleton detection service detects the human body and represents the orientation of a person in a graphical format. Essentially, it's a set of coordinates that can be connected to describe the position of the person. This service detects and locates key points of the human body such as the top of the head, neck, shoulders, elbows, wrists, hips, knees, and ankles. Currently, full-body and half-body static image recognition and real-time camera stream recognition are supported.
What is the use of Skeleton detection?
Naturally, everyone will ask what this is useful for. For example, if you want to develop a fitness application, you can use the coordinates from skeleton detection to check whether the user has made the exact movements during exercises, or you could develop a game about dance movements. Using this service, the ML Kit can easily tell whether the user has performed an exercise properly or not.
How does it work?
You can use skeleton detection on a static image or on a real-time camera stream. Either way, you get the coordinates of the human body, locating critical areas like the head, neck, shoulders, elbows, wrists, hips, knees, and ankles. Both methods can detect multiple human bodies.
There are two attributes to detect skeleton.
1. TYPE_NORMAL
2. TYPE_YOGA
TYPE_NORMAL: If you send the analyzer type as TYPE_NORMAL, it detects skeletal points for a normal standing posture.
TYPE_YOGA: If you send the analyzer type as TYPE_YOGA, it detects skeletal points for yoga postures.
Note: The default mode is to detect skeleton points for normal postures.
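For example, a minimal sketch of requesting yoga-specific key points when creating the analyzer; this mirrors the initAnalyzer function shown later, with the TYPE_YOGA constant passed explicitly:

// Request skeletal points for yoga postures instead of the default normal posture.
val setting = MLSkeletonAnalyzerSetting.Factory()
    .setAnalyzerType(MLSkeletonAnalyzerSetting.TYPE_YOGA)
    .create()
val analyzer = MLSkeletonAnalyzerFactory.getInstance().getSkeletonAnalyzer(setting)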
Integration of Skeleton Detection
1. Configure the application on the AGC.
2. Client application development process.
Configure application on the AGC
This step involves a couple of steps, as follows.
Step 1: We need to register as a developer account in AppGallery Connect. If you are already a developer ignore this step.
Step 2: Create an app by referring to Creating a Project and Creating an App in the Project
Step 3: Set the data storage location based on the current location.
Step 4: Enable ML Kit. Open AppGallery Connect and choose Manage API > ML Kit.
Step 5: Generating a Signing Certificate Fingerprint.
Step 6: Configuring the Signing Certificate Fingerprint.
Step 7: Download your agconnect-services.json file, paste it into the app root directory.
Client application development process
This step involves a couple of steps, as follows.
Step 1: Create an Android application in Android Studio (or any IDE you prefer).
Step 2: Add the app-level Gradle dependencies. In the project, choose Android > app > build.gradle.
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Root level gradle dependencies.
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
Step 3: Add the dependencies in build.gradle
implementation 'com.huawei.hms:ml-computer-vision-skeleton:2.0.4.300'
implementation 'com.huawei.hms:ml-computer-vision-skeleton-model:2.0.4.300'
implementation 'com.huawei.hms:ml-computer-vision-yoga-model:2.0.4.300'
To achieve the Skeleton detection example, follow the steps.
1. AGC Configuration
2. Build Android application
Step 1: AGC Configuration
1. Sign in to AppGallery Connect and select My apps.
2. Select the app in which you want to integrate the Huawei ML kit.
3. Navigate to Project Setting > Manage API > ML Kit
Step 2: Build Android application
In this example, I get an image from the gallery or camera and obtain the skeleton and joint points from the ML Kit skeleton detection.
private fun initAnalyzer(analyzerType: Int) {
    val setting = MLSkeletonAnalyzerSetting.Factory()
        .setAnalyzerType(analyzerType)
        .create()
    analyzer = MLSkeletonAnalyzerFactory.getInstance().getSkeletonAnalyzer(setting)
    imageSkeletonDetectAsync()
}

private fun initFrame(type: Int) {
    imageView.invalidate()
    val drawable = imageView.drawable as BitmapDrawable
    val originBitmap = drawable.bitmap
    val maxHeight = (imageView.parent as View).height
    val targetWidth = (imageView.parent as View).width
    // Update bitmap size so it fits the parent view.
    val scaleFactor = (originBitmap.width.toFloat() / targetWidth.toFloat())
        .coerceAtLeast(originBitmap.height.toFloat() / maxHeight.toFloat())
    val resizedBitmap = Bitmap.createScaledBitmap(
        originBitmap,
        (originBitmap.width / scaleFactor).toInt(),
        (originBitmap.height / scaleFactor).toInt(),
        true
    )
    frame = MLFrame.fromBitmap(resizedBitmap)
    initAnalyzer(type)
}

private fun imageSkeletonDetectAsync() {
    val task: Task<List<MLSkeleton>>? = analyzer?.asyncAnalyseFrame(frame)
    task?.addOnSuccessListener { results ->
        // Detection success: draw the detected skeletons on the overlay.
        val skeletons: List<MLSkeleton>? = getValidSkeletons(results)
        if (skeletons != null && skeletons.isNotEmpty()) {
            graphicOverlay?.clear()
            val skeletonGraphic = SkeletonGraphic(graphicOverlay, results)
            graphicOverlay?.add(skeletonGraphic)
        } else {
            Log.e(TAG, "async analyzer result is null.")
        }
    }?.addOnFailureListener { /* Detection failure. */ }
}

private fun stopAnalyzer() {
    if (analyzer != null) {
        try {
            analyzer?.stop()
        } catch (e: IOException) {
            Log.e(TAG, "Failed to stop analyzer: " + e.message)
        }
    }
}

override fun onDestroy() {
    super.onDestroy()
    stopAnalyzer()
}

private fun showPictureDialog() {
    val pictureDialog = AlertDialog.Builder(this)
    pictureDialog.setTitle("Select Action")
    val pictureDialogItems = arrayOf("Select image from gallery", "Capture photo from camera")
    pictureDialog.setItems(pictureDialogItems) { dialog, which ->
        when (which) {
            0 -> chooseImageFromGallery()
            1 -> takePhotoFromCamera()
        }
    }
    pictureDialog.show()
}

fun chooseImageFromGallery() {
    val galleryIntent = Intent(Intent.ACTION_PICK, MediaStore.Images.Media.EXTERNAL_CONTENT_URI)
    startActivityForResult(galleryIntent, GALLERY)
}

private fun takePhotoFromCamera() {
    val cameraIntent = Intent(MediaStore.ACTION_IMAGE_CAPTURE)
    startActivityForResult(cameraIntent, CAMERA)
}

public override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
    super.onActivityResult(requestCode, resultCode, data)
    if (requestCode == GALLERY) {
        if (data != null) {
            val contentURI = data.data
            try {
                val bitmap = MediaStore.Images.Media.getBitmap(this.contentResolver, contentURI)
                saveImage(bitmap)
                Toast.makeText(this@MainActivity, "Image Show!", Toast.LENGTH_SHORT).show()
                imageView!!.setImageBitmap(bitmap)
            } catch (e: IOException) {
                e.printStackTrace()
                Toast.makeText(this@MainActivity, "Failed", Toast.LENGTH_SHORT).show()
            }
        }
    } else if (requestCode == CAMERA) {
        val thumbnail = data!!.extras!!.get("data") as Bitmap
        imageView!!.setImageBitmap(thumbnail)
        saveImage(thumbnail)
        Toast.makeText(this@MainActivity, "Photo Show!", Toast.LENGTH_SHORT).show()
    }
}

fun saveImage(myBitmap: Bitmap): String {
    val bytes = ByteArrayOutputStream()
    myBitmap.compress(Bitmap.CompressFormat.PNG, 90, bytes)
    val wallpaperDirectory = File(
        (Environment.getExternalStorageDirectory()).toString() + IMAGE_DIRECTORY)
    Log.d("fee", wallpaperDirectory.toString())
    if (!wallpaperDirectory.exists()) {
        wallpaperDirectory.mkdirs()
    }
    try {
        Log.d("heel", wallpaperDirectory.toString())
        val f = File(wallpaperDirectory, ((Calendar.getInstance()
            .getTimeInMillis()).toString() + ".png"))
        f.createNewFile()
        val fo = FileOutputStream(f)
        fo.write(bytes.toByteArray())
        MediaScannerConnection.scanFile(this, arrayOf(f.getPath()), arrayOf("image/png"), null)
        fo.close()
        Log.d("TAG", "File Saved::--->" + f.getAbsolutePath())
        return f.getAbsolutePath()
    } catch (e1: IOException) {
        e1.printStackTrace()
    }
    return ""
}
Result
Tips and Tricks
Check that the dependencies are downloaded properly.
The latest HMS Core APK is required.
If you are taking an image from the camera or gallery, make sure your app has camera and storage permissions.
Conclusion
In this article, we have learned how to integrate Huawei ML Kit skeleton detection: what skeleton detection is, how it works, what it is used for, how to get the joint points from skeleton detection, and the detection types TYPE_NORMAL and TYPE_YOGA.
Reference
Skeleton Detection
Happy coding

Will it detect all yoga positions?
