Introduction
In this article, I will explain what skeleton detection is and how it works in Flutter. By the end of this tutorial, we will have created a Huawei skeleton detection application in Flutter using Huawei ML Kit.
What is Skeleton detection?
The Huawei ML Kit skeleton detection service detects the human body and represents the orientation of a person in a graphical format. Essentially, it is a set of coordinates that can be connected to describe the position of the person. The service detects and locates key points of the human body such as the top of the head, neck, shoulders, elbows, wrists, hips, knees, and ankles. Currently, full-body and half-body static image recognition and real-time camera stream recognition are supported.
What is the use of Skeleton detection?
Naturally, everyone will ask what it can be used for. For example, if you are developing a fitness application, you can use the coordinates from skeleton detection to check whether the user has made the exact movements during exercises, or you could develop a game based on dance movements. Using this service, the app can easily tell whether the user has performed an exercise properly or not.
How does it work?
You can use skeleton detection on a static image or on a real-time camera stream. Either way, you get the coordinates of the human body. The service looks for key areas such as the head, neck, shoulders, elbows, wrists, hips, knees, and ankles, and both methods can detect multiple human bodies.
There are two analyzer types for skeleton detection.
1. TYPE_NORMAL
2. TYPE_YOGA
TYPE_NORMAL: If you pass the analyzer type as TYPE_NORMAL, the analyzer detects skeletal points for a normal standing posture.
TYPE_YOGA: If you pass the analyzer type as TYPE_YOGA, it detects skeletal points for yoga postures.
Note: The default mode is to detect skeleton points for normal postures.
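As a quick illustration, here is a minimal sketch of switching the analyzer to yoga-posture detection. It assumes the same huawei_ml plugin API used in the full example later in this article, and that the plugin exposes a TYPE_YOGA constant alongside the TYPE_NORMAL constant used below; the image path parameter is a placeholder you would supply yourself.
Future<void> detectYogaPose(String imagePath) async {
  // Create the analyzer and its settings (same classes as in the full example below).
  final MLSkeletonAnalyzer analyzer = MLSkeletonAnalyzer();
  final MLSkeletonAnalyzerSetting setting = MLSkeletonAnalyzerSetting();
  setting.path = imagePath; // path of the image to analyze
  setting.analyzerType = MLSkeletonAnalyzerSetting.TYPE_YOGA; // yoga posture instead of the default TYPE_NORMAL
  // Run detection asynchronously, then stop the analyzer.
  final List<MLSkeleton> skeletons = await analyzer.asyncSkeletonDetection(setting);
  print("Detected ${skeletons.length} skeleton(s)");
  await analyzer.stopSkeletonDetection();
}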
Integration of Skeleton Detection
1. Configure the application on the AGC.
2. Client application development process.
Configure application on the AGC
This phase involves the following steps.
Step 1: Register as a developer account in AppGallery Connect. If you already have a developer account, skip this step.
Step 2: Create an app by referring to Creating a Project and Creating an App in the Project.
Step 3: Set the data storage location based on the current location.
Step 4: Enable ML Kit. Open AppGallery Connect and choose Manage API > ML Kit.
Step 5: Generate a Signing Certificate Fingerprint.
Step 6: Configure the Signing Certificate Fingerprint.
Step 7: Download the agconnect-services.json file and paste it into the app's root directory.
Client application development process
This phase involves the following steps.
Step 1: Create a Flutter application in Android Studio (or any IDE of your choice).
Step 2: Add the app-level Gradle dependencies. Open android > app > build.gradle inside the project.
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Root level gradle dependencies.
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
Step 3: Add the downloaded plugin to pubspec.yaml.
Step 4: Add the downloaded plugin folder outside the project directory, then declare the plugin path under dependencies in the pubspec.yaml file.
dependencies:
  flutter:
    sdk: flutter
  huawei_account:
    path: ../huawei_account/
  huawei_location:
    path: ../huawei_location/
  huawei_map:
    path: ../huawei_map/
  huawei_analytics:
    path: ../huawei_analytics/
  huawei_site:
    path: ../huawei_site/
  huawei_push:
    path: ../huawei_push/
  huawei_dtm:
    path: ../huawei_dtm/
  huawei_ml:
    path: ../huawei_ml/
  agconnect_crash: ^1.0.0
  agconnect_remote_config: ^1.0.0
  http: ^0.12.2
  camera:
  path_provider:
  path:
  image_picker:
  fluttertoast: ^7.1.6
  shared_preferences: ^0.5.12+4
To build the skeleton detection example, follow these steps.
1. AGC Configuration
2. Build Flutter application
Step 1: AGC Configuration
1. Sign in to AppGallery Connect and select My apps.
2. Select the app in which you want to integrate the Huawei ML kit.
3. Navigate to Project Setting > Manage API > ML Kit
Step 2: Build Flutter application
In this example, I pick an image from the gallery or camera and then get the skeleton and joint points from the ML Kit skeleton detection service.
import 'dart:io';
import 'package:flutter/material.dart';
import 'package:huawei_ml/huawei_ml.dart';
import 'package:huawei_ml/skeleton/ml_skeleton_analyzer.dart';
import 'package:huawei_ml/skeleton/ml_skeleton_analyzer_setting.dart';
import 'package:image_picker/image_picker.dart';
class SkeletonDetection extends StatefulWidget {
@override
_SkeletonDetectionState createState() => _SkeletonDetectionState();
}
class _SkeletonDetectionState extends State<SkeletonDetection> {
MLSkeletonAnalyzer analyzer;
MLSkeletonAnalyzerSetting setting;
List<MLSkeleton> skeletons;
double _x = 0;
double _y = 0;
double _score = 0;
@override
void initState() {
// TODO: implement initState
analyzer = new MLSkeletonAnalyzer();
setting = new MLSkeletonAnalyzerSetting();
super.initState();
}
@override
Widget build(BuildContext context) {
return Scaffold(
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
_setImageView()
],
),
),
floatingActionButton: FloatingActionButton(
onPressed: () {
_showSelectionDialog(context);
},
child: Icon(Icons.camera_alt),
),
);
}
Future<void> _showSelectionDialog(BuildContext context) {
return showDialog(
context: context,
builder: (BuildContext context) {
return AlertDialog(
title: Text("From where do you want to take the photo?"),
content: SingleChildScrollView(
child: ListBody(
children: <Widget>[
GestureDetector(
child: Text("Gallery"),
onTap: () {
_openGallery(context);
},
),
Padding(padding: EdgeInsets.all(8.0)),
GestureDetector(
child: Text("Camera"),
onTap: () {
_openCamera();
},
)
],
),
));
});
}
File imageFile;
void _openGallery(BuildContext context) async {
var picture = await ImagePicker.pickImage(source: ImageSource.gallery);
this.setState(() {
imageFile = picture;
_skeletonDetection();
});
Navigator.of(context).pop();
}
_openCamera() async {
PickedFile pickedFile = await ImagePicker().getImage(
source: ImageSource.camera,
maxWidth: 800,
maxHeight: 800,
);
if (pickedFile != null) {
imageFile = File(pickedFile.path);
this.setState(() {
imageFile = imageFile;
_skeletonDetection();
});
}
Navigator.of(context).pop();
}
Widget _setImageView() {
if (imageFile != null) {
return Image.file(imageFile, width: 500, height: 500);
} else {
return Text("Please select an image");
}
}
_skeletonDetection() async {
// Create a skeleton analyzer.
analyzer = new MLSkeletonAnalyzer();
// Configure the recognition settings.
setting = new MLSkeletonAnalyzerSetting();
setting.path = imageFile.path;
setting.analyzerType = MLSkeletonAnalyzerSetting.TYPE_NORMAL; // Normal posture.
// Get recognition result asynchronously.
List<MLSkeleton> list = await analyzer.asyncSkeletonDetection(setting);
print("Result data: "+list[0].toJson().toString());
// After the recognition ends, stop the analyzer.
bool res = await analyzer.stopSkeletonDetection();
}
}
Result
Tips and Tricks
Download the latest HMS Flutter plugin.
Check that the dependencies are downloaded properly.
The latest HMS Core APK is required.
If you are taking an image from the camera or gallery, make sure your app has camera and storage permissions.
Conclusion
In this article, we have learned how to integrate Huawei ML Kit, what skeleton detection is, how it works, what it is used for, how to get the joint points from skeleton detection, and the detection types TYPE_NORMAL and TYPE_YOGA.
Reference
Skeleton Detection
Happy coding
Introduction
In this article, we will look at how Huawei Auth Service (AGC) provides a secure and reliable user authentication system for your application. Building such a system yourself is a difficult process; with the Huawei Auth Service SDK you only need to access the Auth Service capabilities, without implementing anything on the cloud side.
Here I am covering anonymous sign-in, which lets users access your app as guests; when a user signs in anonymously, Auth Service provides a unique ID to identify that user. I am also covering mobile number authentication, verified through an OTP.
Overview
You need to install the Unity software, and I assume that you have prior knowledge of Unity and C#.
Hardware Requirements
A computer (desktop or laptop) running Windows 10.
A Huawei phone (with the USB cable), which is used for debugging.
Software Requirements
Java JDK 1.7 or later.
Unity software installed.
Visual Studio/Code installed.
HMS Core (APK) 4.X or later.
Integration Preparations
1. Create a project in AppGallery Connect.
2. Create Unity project.
3. Add Huawei HMS AGC Services to the project.
https://assetstore.unity.com/packag...awei-hms-agc-services-176968#version-original
4. Download and save the configuration file.
Add the agconnect-services.json file to the following directory: Assets > Plugins > Android.
5. Add the following plugin and dependencies in LauncherTemplate.
apply plugin: 'com.huawei.agconnect'
6. Add the following dependencies in MainTemplate.
apply plugin: 'com.huawei.agconnect'
implementation 'com.huawei.agconnect:agconnect-auth:1.4.2.301'
implementation 'com.huawei.hms:base:5.2.0.300'
implementation 'com.huawei.hms:hwid:5.2.0.300'
7. Add the Maven repository to the buildscript and allprojects repositories, and the classpath dependency, in BaseProjectTemplate.
maven { url 'https://developer.huawei.com/repo/' }
8. Create an empty GameObject and rename it GameManager, add a UI canvas with input text fields and buttons, and assign onClick events to the respective components as shown below.
MainActivity.java
package com.huawei.AuthServiceDemo22;
import android.content.Intent;
import android.os.Bundle;
import com.hw.unity.Agc.Auth.ThirdPartyLogin.LoginManager;
import com.unity3d.player.UnityPlayerActivity;
import android.util.Log;
import com.huawei.agconnect.auth.AGConnectAuth;
import com.huawei.agconnect.auth.AGConnectAuthCredential;
import com.huawei.agconnect.auth.AGConnectUser;
import com.huawei.agconnect.auth.PhoneAuthProvider;
import com.huawei.agconnect.auth.SignInResult;
import com.huawei.agconnect.auth.VerifyCodeResult;
import com.huawei.agconnect.auth.VerifyCodeSettings;
import com.huawei.hmf.tasks.OnFailureListener;
import com.huawei.hmf.tasks.OnSuccessListener;
import com.huawei.hmf.tasks.Task;
import com.huawei.hmf.tasks.TaskExecutors;
import java.util.Locale;
public class MainActivity extends UnityPlayerActivity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
LoginManager.getInstance().initialize(this);
Log.d("DDDD"," Inside onCreate ");
}
public static void AnoniomousLogin(){
AGConnectAuth.getInstance().signInAnonymously().addOnSuccessListener(new OnSuccessListener<SignInResult>() {
@Override
public void onSuccess(SignInResult signInResult) {
AGConnectUser user = signInResult.getUser();
String uid = user.getUid();
Log.d("DDDD"," Login Anonymous UID : "+uid);
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
Log.d("DDDD"," Inside ERROR "+e.getMessage());
}
});
}
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data)
{
LoginManager.getInstance().onActivityResult(requestCode, resultCode, data);
}
public static void sendVerifCode(String phone) {
VerifyCodeSettings settings = VerifyCodeSettings.newBuilder()
.action(VerifyCodeSettings.ACTION_REGISTER_LOGIN)
.sendInterval(30) // Shortest sending interval, 30–120s
.build();
String countCode = "+91";
String phoneNumber = phone;
if (notEmptyString(countCode) && notEmptyString(phoneNumber)) {
Task<VerifyCodeResult> task = PhoneAuthProvider.requestVerifyCode(countCode, phoneNumber, settings);
task.addOnSuccessListener(TaskExecutors.uiThread(), new OnSuccessListener<VerifyCodeResult>() {
@Override
public void onSuccess(VerifyCodeResult verifyCodeResult) {
Log.d("DDDD"," ==>"+verifyCodeResult);
}
}).addOnFailureListener(TaskExecutors.uiThread(), new OnFailureListener() {
@Override
public void onFailure(Exception e) {
Log.d("DDDD"," Inside onFailure");
}
});
}
}
static boolean notEmptyString(String string) {
return string != null && !string.isEmpty() && !string.equals("");
}
public static void linkPhone(String verifyCode1,String phone) {
Log.d("DDDD", " verifyCode1 "+verifyCode1);
String phoneNumber = phone;
String countCode = "+91";
String verifyCode = verifyCode1;
Log.e("DDDD", " verifyCode "+verifyCode);
AGConnectAuthCredential credential = PhoneAuthProvider.credentialWithVerifyCode(
countCode,
phoneNumber,
null, // password, can be null
verifyCode);
AGConnectAuth.getInstance().getCurrentUser().link(credential).addOnSuccessListener(new OnSuccessListener<SignInResult>() {
@Override
public void onSuccess(SignInResult signInResult) {
String phoneNumber = signInResult.getUser().getPhone();
String uid = signInResult.getUser().getUid();
Log.d("DDDD", "phone number: " + phoneNumber + ", uid: " + uid);
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
Log.e("DDDD", "Login error, please try again, error:" + e.getMessage());
}
});
}
}
GameManager.cs
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;
public class GameManager : MonoBehaviour
{
public InputField OtpField,inputFieldPhone;
string otp=null,phone="";
// Start is called before the first frame update
void Start()
{
inputFieldPhone.text = "9740424108";
}
public void onClickButton(){
phone = inputFieldPhone.text;
using (AndroidJavaClass javaClass = new AndroidJavaClass("com.huawei.AuthServiceDemo22.MainActivity"))
{
javaClass.CallStatic("sendVerifCode",phone);
}
}
public void LinkPhone(){
otp = OtpField.text;
Debug.Log(" OTP "+otp);
using (AndroidJavaClass javaClass = new AndroidJavaClass("com.huawei.AuthServiceDemo22.MainActivity"))
{
javaClass.CallStatic("linkPhone",otp,phone);
}
}
public void AnoniomousLogin(){
using (AndroidJavaClass javaClass = new AndroidJavaClass("com.huawei.AuthServiceDemo22.MainActivity"))
{
javaClass.CallStatic("AnoniomousLogin");
}
}
}
10. To build the APK, choose File > Build Settings > Build. To build and run, choose File > Build Settings > Build And Run.
Result
Tips and Tricks
Add the agconnect-services.json file without fail.
Make sure the dependencies are added in the build files.
Make sure that you enabled the Auth Service in AG-Console.
Make sure that you enabled the Authentication mode in Auth Service.
Conclusion
We have learnt how to integrate Huawei Auth Service (AGC) anonymous account sign-in and mobile number verification through OTP in Unity game development. In conclusion, Auth Service provides a secure and reliable user authentication system for your application.
Thank you so much for reading this article; I hope it helps you.
Reference
Official documentation (service introduction): developer.huawei.com
Unity Auth Service Manual: Auth Service (AGC) | HuaweiService | 1.3.4 (docs.unity.cn)
Auth Service CodeLabs: https://developer.huawei.com/consumer/en/codelabsPortal/carddetails/AuthenticationService
Very interesting security.
Introduction
In this article, I will explain what skeleton detection is and how it works in Android. By the end of this tutorial, we will have created a Huawei skeleton detection Android application using Huawei ML Kit.
What is Skeleton detection?
The Huawei ML Kit skeleton detection service detects the human body and represents the orientation of a person in a graphical format. Essentially, it is a set of coordinates that can be connected to describe the position of the person. The service detects and locates key points of the human body such as the top of the head, neck, shoulders, elbows, wrists, hips, knees, and ankles. Currently, full-body and half-body static image recognition and real-time camera stream recognition are supported.
What is the use of Skeleton detection?
Naturally, everyone will ask what it can be used for. For example, if you are developing a fitness application, you can use the coordinates from skeleton detection to check whether the user has made the exact movements during exercises, or you could develop a game based on dance movements. Using this service, the app can easily tell whether the user has performed an exercise properly or not.
How does it work?
You can use skeleton detection on a static image or on a real-time camera stream. Either way, you get the coordinates of the human body. The service looks for key areas such as the head, neck, shoulders, elbows, wrists, hips, knees, and ankles, and both methods can detect multiple human bodies.
There are two analyzer types for skeleton detection.
1. TYPE_NORMAL
2. TYPE_YOGA
TYPE_NORMAL: If you pass the analyzer type as TYPE_NORMAL, the analyzer detects skeletal points for a normal standing posture.
TYPE_YOGA: If you pass the analyzer type as TYPE_YOGA, it detects skeletal points for yoga postures.
Note: The default mode is to detect skeleton points for normal postures.
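As a quick illustration, here is a minimal Kotlin sketch of requesting yoga-posture detection. It uses the same MLSkeletonAnalyzerSetting factory as the full example later in this article and assumes the TYPE_YOGA constant described above is exposed on MLSkeletonAnalyzerSetting.
private fun createYogaAnalyzer(): MLSkeletonAnalyzer {
    // Build settings that request yoga-posture skeletal points instead of the default TYPE_NORMAL.
    val setting = MLSkeletonAnalyzerSetting.Factory()
        .setAnalyzerType(MLSkeletonAnalyzerSetting.TYPE_YOGA)
        .create()
    return MLSkeletonAnalyzerFactory.getInstance().getSkeletonAnalyzer(setting)
}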
Integration of Skeleton Detection
1. Configure the application on the AGC.
2. Client application development process.
Configure application on the AGC
This phase involves the following steps.
Step 1: Register as a developer account in AppGallery Connect. If you already have a developer account, skip this step.
Step 2: Create an app by referring to Creating a Project and Creating an App in the Project.
Step 3: Set the data storage location based on the current location.
Step 4: Enable ML Kit. Open AppGallery Connect and choose Manage API > ML Kit.
Step 5: Generate a Signing Certificate Fingerprint.
Step 6: Configure the Signing Certificate Fingerprint.
Step 7: Download the agconnect-services.json file and paste it into the app's root directory.
Client application development process
This phase involves the following steps.
Step 1: Create an Android application in Android Studio (or any IDE of your choice).
Step 2: Add the app-level Gradle dependencies. Open app > build.gradle inside the project.
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Root level gradle dependencies.
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
Step 3: Add the dependencies in build.gradle
implementation 'com.huawei.hms:ml-computer-vision-skeleton:2.0.4.300'
implementation 'com.huawei.hms:ml-computer-vision-skeleton-model:2.0.4.300'
implementation 'com.huawei.hms:ml-computer-vision-yoga-model:2.0.4.300'
To build the skeleton detection example, follow these steps.
1. AGC Configuration
2. Build Android application
Step 1: AGC Configuration
1. Sign in to AppGallery Connect and select My apps.
2. Select the app in which you want to integrate the Huawei ML kit.
3. Navigate to Project Setting > Manage API > ML Kit
Step 2: Build Android application
In this example, I pick an image from the gallery or camera and then get the skeleton and joint points from the ML Kit skeleton detection service.
private fun initAnalyzer(analyzerType: Int) {
val setting = MLSkeletonAnalyzerSetting.Factory()
.setAnalyzerType(analyzerType)
.create()
analyzer = MLSkeletonAnalyzerFactory.getInstance().getSkeletonAnalyzer(setting)
imageSkeletonDetectAsync()
}
private fun initFrame(type: Int) {
imageView.invalidate()
val drawable = imageView.drawable as BitmapDrawable
val originBitmap = drawable.bitmap
val maxHeight = (imageView.parent as View).height
val targetWidth = (imageView.parent as View).width
// Update bitmap size
val scaleFactor = (originBitmap.width.toFloat() / targetWidth.toFloat())
.coerceAtLeast(originBitmap.height.toFloat() / maxHeight.toFloat())
val resizedBitmap = Bitmap.createScaledBitmap(
originBitmap,
(originBitmap.width / scaleFactor).toInt(),
(originBitmap.height / scaleFactor).toInt(),
true
)
frame = MLFrame.fromBitmap(resizedBitmap)
initAnalyzer(type)
}
private fun imageSkeletonDetectAsync() {
val task: Task<List<MLSkeleton>>? = analyzer?.asyncAnalyseFrame(frame)
task?.addOnSuccessListener { results ->
// Detection success.
val skeletons: List<MLSkeleton>? = getValidSkeletons(results)
if (skeletons != null && skeletons.isNotEmpty()) {
graphicOverlay?.clear()
val skeletonGraphic = SkeletonGraphic(graphicOverlay, results)
graphicOverlay?.add(skeletonGraphic)
} else {
Log.e(TAG, "async analyzer result is null.")
}
}?.addOnFailureListener { /* Result failure. */ }
}
private fun stopAnalyzer() {
if (analyzer != null) {
try {
analyzer?.stop()
} catch (e: IOException) {
Log.e(TAG, "Failed for analyzer: " + e.message)
}
}
}
override fun onDestroy() {
super.onDestroy()
stopAnalyzer()
}
private fun showPictureDialog() {
val pictureDialog = AlertDialog.Builder(this)
pictureDialog.setTitle("Select Action")
val pictureDialogItems = arrayOf("Select image from gallery", "Capture photo from camera")
pictureDialog.setItems(pictureDialogItems
) { dialog, which ->
when (which) {
0 -> chooseImageFromGallery()
1 -> takePhotoFromCamera()
}
}
pictureDialog.show()
}
fun chooseImageFromGallery() {
val galleryIntent = Intent(Intent.ACTION_PICK, MediaStore.Images.Media.EXTERNAL_CONTENT_URI)
startActivityForResult(galleryIntent, GALLERY)
}
private fun takePhotoFromCamera() {
val cameraIntent = Intent(MediaStore.ACTION_IMAGE_CAPTURE)
startActivityForResult(cameraIntent, CAMERA)
}
public override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
super.onActivityResult(requestCode, resultCode, data)
if (requestCode == GALLERY)
{
if (data != null)
{
val contentURI = data!!.data
try {
val bitmap = MediaStore.Images.Media.getBitmap(this.contentResolver, contentURI)
saveImage(bitmap)
Toast.makeText(this, "Image Show!", Toast.LENGTH_SHORT).show()
imageView!!.setImageBitmap(bitmap)
}
catch (e: IOException)
{
e.printStackTrace()
Toast.makeText(this, "Failed", Toast.LENGTH_SHORT).show()
}
}
}
else if (requestCode == CAMERA)
{
val thumbnail = data!!.extras!!.get("data") as Bitmap
imageView!!.setImageBitmap(thumbnail)
saveImage(thumbnail)
Toast.makeText(this, "Photo Show!", Toast.LENGTH_SHORT).show()
}
}
fun saveImage(myBitmap: Bitmap):String {
val bytes = ByteArrayOutputStream()
myBitmap.compress(Bitmap.CompressFormat.PNG, 90, bytes)
val wallpaperDirectory = File (
(Environment.getExternalStorageDirectory()).toString() + IMAGE_DIRECTORY)
Log.d("fee", wallpaperDirectory.toString())
if (!wallpaperDirectory.exists())
{
wallpaperDirectory.mkdirs()
}
try
{
Log.d("heel", wallpaperDirectory.toString())
val f = File(wallpaperDirectory, ((Calendar.getInstance()
.getTimeInMillis()).toString() + ".png"))
f.createNewFile()
val fo = FileOutputStream(f)
fo.write(bytes.toByteArray())
MediaScannerConnection.scanFile(this, arrayOf(f.getPath()), arrayOf("image/png"), null)
fo.close()
Log.d("TAG", "File Saved::--->" + f.getAbsolutePath())
return f.getAbsolutePath()
}
catch (e1: IOException){
e1.printStackTrace()
}
return ""
}
Result
Tips and Tricks
Check that the dependencies are downloaded properly.
The latest HMS Core APK is required.
If you are taking an image from the camera or gallery, make sure your app has camera and storage permissions; a minimal runtime-permission sketch is shown below.
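For reference, here is a minimal runtime-permission sketch using the standard Android APIs; the request code value and the function name are illustrative only.
import android.Manifest
import android.app.Activity
import android.content.pm.PackageManager
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

private const val PERMISSION_REQUEST_CODE = 100 // arbitrary request code

fun requestCameraAndStoragePermissions(activity: Activity) {
    // Collect the permissions that are not yet granted and request them in one dialog flow.
    val needed = arrayOf(
        Manifest.permission.CAMERA,
        Manifest.permission.WRITE_EXTERNAL_STORAGE
    ).filter {
        ContextCompat.checkSelfPermission(activity, it) != PackageManager.PERMISSION_GRANTED
    }
    if (needed.isNotEmpty()) {
        ActivityCompat.requestPermissions(activity, needed.toTypedArray(), PERMISSION_REQUEST_CODE)
    }
}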
Conclusion
In this article, we have learned how to integrate Huawei ML Kit, what skeleton detection is, how it works, what it is used for, how to get the joint points from skeleton detection, and the detection types TYPE_NORMAL and TYPE_YOGA.
Reference
Skeleton Detection
Happy coding
Introduction
Huawei Games is a group of APIs from Huawei to simplify some basic game features like leaderboards, achievements, events and online matches.
Gaming technologies are constantly evolving. Nevertheless, a lot of core gameplay elements have remained unchanged for decades. High scores, leaderboards, quests, achievements, and multiplayer support are examples. If you are developing a game for the Android platform, you don't have to implement any of those elements manually. You can simply use the Huawei Game services APIs instead.
Features of Huawei Game services
Huawei ID sign-in
Real-name authentication
Bulletins
Achievements
Events
Leaderboard
Saved games
Player statistics
Integration of Game Service
1. Configure the application on the AGC.
2. Client application development process.
Configure application on the AGC
Follow the steps.
Step 1: Register as a developer account in AppGallery Connect. If you already have a developer account, skip this step.
Step 2: Create an app by referring to Creating a Project and Creating an App in the Project. Select Game App.
Step 3: Set the data storage location based on the current location.
Step 4: Enable Account Kit and Game Service. Open AppGallery Connect and choose Manage API > Account Kit and Game Service.
Step 5: Generate a Signing Certificate Fingerprint.
Step 6: Configure the Signing Certificate Fingerprint.
Step 7: Download the agconnect-services.json file and paste it into the app's root directory.
Client application development process
Follow the steps.
Step 1: Create a Flutter application in Android Studio (or any IDE of your choice).
Step 2: Add the app-level Gradle dependencies. Open android > app > build.gradle inside the project.
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Root level Gradle dependencies.
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
Step 3: Download the Account Kit and Game Service plugins.
Step 4: Add the downloaded plugin folders outside the project directory, then declare the plugin paths under dependencies in the pubspec.yaml file.
dependencies:
  flutter:
    sdk: flutter
  huawei_account:
    path: ../huawei_account/
  huawei_gameservice:
    path: ../huawei_gameservice/
Step 5: Build the Flutter application.
In this example, I'm building a Tic Tac Toe game application using the Huawei Game Service, and we will cover the following features in this article.
Sign In
Initialization
Getting player information
Saving player information
Sign In
void signInWithHuaweiAccount() async {
HmsAuthParamHelper authParamHelper = new HmsAuthParamHelper();
authParamHelper
..setIdToken()
..setAuthorizationCode()
..setAccessToken()
..setProfile()
..setEmail()
..setScopeList([HmsScope.openId, HmsScope.email, HmsScope.profile])
..setRequestCode(8888);
try {
final HmsAuthHuaweiId accountInfo =
await HmsAuthService.signIn(authParamHelper: authParamHelper);
setState(() {
Navigator.push(
context,
MaterialPageRoute(
builder: (context) => HomePage(),
));
});
} on Exception catch (exception) {
print(exception.toString());
print("error: " + exception.toString());
}
}
void silentSignInHuaweiAccount() async {
HmsAuthParamHelper authParamHelper = new HmsAuthParamHelper();
try {
final HmsAuthHuaweiId accountInfo =
await HmsAuthService.silentSignIn(authParamHelper: authParamHelper);
if (accountInfo.unionId != null) {
print("Open Id: ${accountInfo.openId}");
print("Display name: ${accountInfo.displayName}");
print("Profile DP: ${accountInfo.avatarUriString}");
print("Email Id: ${accountInfo.email}");
Validator()
.showToast("Signed in successful as ${accountInfo.displayName}");
}
} on Exception catch (exception) {
print(exception.toString());
print('Login_provider:Can not SignIn silently');
Validator().showToast("SCan not SignIn silently ${exception.toString()}");
}
}
Future signOut() async {
final signOutResult = await HmsAuthService.signOut();
if (signOutResult) {
Validator().showToast("Signed out successful");
/* Route route = MaterialPageRoute(builder: (context) => SignInPage());
Navigator.pushReplacement(context, route);*/
} else {
print('Login_provider:signOut failed');
}
}
Future revokeAuthorization() async {
final bool revokeResult = await HmsAuthService.revokeAuthorization();
if (revokeResult) {
Validator().showToast("Revoked Auth Successfully");
} else {
Validator().showToast('Login_provider:Failed to Revoked Auth');
}
}
Users can sign in with their Huawei Account and then play the game. The code above shows sign-in with a Huawei Account.
Initialization
Call the JosAppsClient.init method to initialize the game.
void init() async {
await JosAppsClient.init();
}
Getting player information
The code below retrieves player information such as the player ID, open ID, and player name, as well as the level at which the user is playing.
Future<void> getPlayerInformation() async {
try {
Player player = await PlayersClient.getCurrentPlayer();
print("Player ID: " + player.playerId);
print("Open ID: " + player.openId);
print("Access Token: " + player.accessToken);
print("Player Name: " + player.displayName);
setState(() {
playerName = "Player Name: "+player.displayName;
savePlayerInformation(player.playerId, player.openId);
});
} on PlatformException catch (e) {
print("Error on getGamePlayer API, Error: ${e.code}, Error Description: ${GameServiceResultCodes.getStatusCodeMessage(e.code)}");
}
}
Saving player information
Player information such as the player's level, role, and rank can be updated using AppPlayerInfo. The following code shows how to save player information. To save player information, playerId and openId are mandatory.
void savePlayerInformation(String id, String openid) async {
try {
AppPlayerInfo info = new AppPlayerInfo(
rank: "1",
role: "beginner",
area: "2",
society: "Game",
playerId: id,
openId: openid);
await PlayersClient.savePlayerInfo(info);
} on PlatformException catch (e) {
print(
"Error on SavePlayer Info API, Error: ${e.code}, Error Description: ${GameServiceResultCodes.getStatusCodeMessage(e.code)}");
}
}
Result
Tips and Tricks
Download the latest HMS Flutter plugin.
Check that the dependencies are downloaded properly.
Latest HMS Core APK is required.
Set minimum SDK 19 or later.
Conclusion
In this article, we have learnt the following:
Creating an application on AGC
Integration of account kit and game service
Sign In
Initialization
Getting player information
Saving player information
In an upcoming article, I'll cover another Game Service concept.
Reference
Game service
useful share
will it detect all yoga positions?
Introduction
In this article, I will cover live yoga pose detection. In my last article, I wrote about yoga pose detection using the Huawei ML Kit. If you have not read my previous article, refer to Beginner: Find yoga pose using Huawei ML kit skeleton detection - Part 1.
In this article, I will cover live yoga detection.
You might ask how this application helps.
Let's take an example: most people attend yoga classes, but due to COVID-19 nobody is able to attend them. Using Huawei ML Kit skeleton detection, you can record your yoga session as a video and send it to your yoga master, who can check the body joints shown in the video and explain the mistakes you made in that recorded session.
Integration of Skeleton Detection
1. Configure the application on the AGC.
2. Client application development process.
Configure application on the AGC
Follow the steps.
Step 1: Register as a developer account in AppGallery Connect. If you already have a developer account, skip this step.
Step 2: Create an app by referring to Creating a Project and Creating an App in the Project.
Step 3: Set the data storage location based on the current location.
Step 4: Enable ML Kit. Open AppGallery Connect and choose Manage API > ML Kit.
Step 5: Generate a Signing Certificate Fingerprint.
Step 6: Configure the Signing Certificate Fingerprint.
Step 7: Download the agconnect-services.json file and paste it into the app's root directory.
Client application development process
Follow the steps.
Step 1: Create an Android application in Android Studio (or any IDE of your choice).
Step 2: Add the app-level Gradle dependencies. Open app > build.gradle inside the project.
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Root level gradle dependencies.
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
Step 3: Add the dependencies in build.gradle
implementation 'com.huawei.hms:ml-computer-vision-skeleton:2.0.4.300'
implementation 'com.huawei.hms:ml-computer-vision-skeleton-model:2.0.4.300'
implementation 'com.huawei.hms:ml-computer-vision-yoga-model:2.0.4.300'
To build the skeleton detection example, follow these steps.
1. AGC Configuration
2. Build Android application
Step 1: AGC Configuration
1. Sign in to AppGallery Connect and select My apps.
2. Select the app in which you want to integrate the Huawei ML kit.
3. Navigate to Project Setting > Manage API > ML Kit
Step 2: Build Android application
In this example, I’m detecting yoga poses live using the camera.
While building the application, follow the steps.
Step 1: Create a Skeleton analyzer.
private var analyzer: MLSkeletonAnalyzer? = null
analyzer = MLSkeletonAnalyzerFactory.getInstance().skeletonAnalyzer
Step 2: Create SkeletonAnalyzerTransactor class to process the result.
import android.app.Activity
import android.util.Log
import app.dtse.hmsskeletondetection.demo.utils.SkeletonUtils
import app.dtse.hmsskeletondetection.demo.views.graphic.SkeletonGraphic
import app.dtse.hmsskeletondetection.demo.views.overlay.GraphicOverlay
import com.huawei.hms.mlsdk.common.LensEngine
import com.huawei.hms.mlsdk.common.MLAnalyzer
import com.huawei.hms.mlsdk.common.MLAnalyzer.MLTransactor
import com.huawei.hms.mlsdk.skeleton.MLSkeleton
import com.huawei.hms.mlsdk.skeleton.MLSkeletonAnalyzer
import java.util.*
class SkeletonTransactor(
private val analyzer: MLSkeletonAnalyzer,
private val graphicOverlay: GraphicOverlay,
private val lensEngine: LensEngine,
private val activity: Activity?
) : MLTransactor<MLSkeleton?> {
private val templateList: List<MLSkeleton>
private var zeroCount = 0
override fun transactResult(results: MLAnalyzer.Result<MLSkeleton?>) {
Log.e(TAG, "detect success")
graphicOverlay.clear()
val items = results.analyseList
val resultsList: MutableList<MLSkeleton?> = ArrayList()
for (i in 0 until items.size()) {
resultsList.add(items.valueAt(i))
}
if (resultsList.size <= 0) {
return
}
val similarity = 0.8f
val skeletonGraphic = SkeletonGraphic(graphicOverlay, resultsList)
graphicOverlay.addGraphic(skeletonGraphic)
graphicOverlay.postInvalidate()
val result = analyzer.caluteSimilarity(resultsList, templateList)
if (result >= similarity) {
zeroCount = if (zeroCount > 0) {
return
} else {
0
}
zeroCount++
} else {
zeroCount = 0
return
}
lensEngine.photograph(null, { bytes ->
SkeletonUtils.takePictureListener.picture(bytes)
activity?.finish()
})
}
override fun destroy() {
Log.e(TAG, "detect fail")
}
companion object {
private const val TAG = "SkeletonTransactor"
}
init {
templateList = SkeletonUtils.getTemplateData()
}
}
Step 3: Set Detection Result Processor to Bind the Analyzer.
analyzer!!.setTransactor(SkeletonTransactor(analyzer!!, overlay!!, lensEngine!!, activity))
Step 4: Create LensEngine.
lensEngine = LensEngine.Creator(context, analyzer)
.setLensType(LensEngine.BACK_LENS)
.applyDisplayDimension(1280, 720)
.applyFps(20.0f)
.enableAutomaticFocus(true)
.create()
Step 5: Open the camera.
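A minimal sketch of this step is shown below. It assumes the layout has a SurfaceView (named preview here for illustration) that hosts the camera feed; it starts the LensEngine created in Step 4 once the surface is ready, so frames are streamed to the analyzer bound in Step 3.
preview.holder.addCallback(object : SurfaceHolder.Callback {
    override fun surfaceCreated(holder: SurfaceHolder) {
        try {
            // Open the camera and start feeding preview frames to the bound analyzer.
            lensEngine!!.run(holder)
        } catch (e: IOException) {
            Log.e(TAG, "Failed to start LensEngine: " + e.message)
        }
    }
    override fun surfaceChanged(holder: SurfaceHolder, format: Int, width: Int, height: Int) {}
    override fun surfaceDestroyed(holder: SurfaceHolder) {
        // Stop the preview when the surface goes away; full cleanup is done in Step 6.
        lensEngine?.close()
    }
})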
Step 6: Release resources.
if (lensEngine != null) {
lensEngine!!.close()
}
if (lensEngine != null) {
lensEngine!!.release()
}
if (analyzer != null) {
try {
analyzer!!.stop()
} catch (e: IOException) {
Log.e(TAG, "e=" + e.message)
}
}
Result
Tips and Tricks
Check that the dependencies are downloaded properly.
The latest HMS Core APK is required.
If you are taking an image from the camera or gallery, make sure the app has camera and storage permissions.
Conclusion
In this article, we have learned how to integrate Huawei ML Kit, what skeleton detection is, how it works, what it is used for, how to get the joint points from skeleton detection, and the detection types TYPE_NORMAL and TYPE_YOGA.
Reference
Skeleton Detection
Happy coding