Introduction
In this article, I will cover live yoga pose detection. In my last article, I wrote about yoga pose detection on static images using the Huawei ML Kit. If you have not read it yet, refer to Beginner: Find yoga pose using Huawei ML kit skeleton detection - Part 1.
You may wonder how this application helps. Let's take an example: most people used to attend yoga classes, but due to COVID-19 nobody is able to attend them. Using Huawei ML Kit skeleton detection, you can record your yoga session and send the video to your yoga master, who can check the body joints shown in the video and explain the mistakes you made in that recorded session.
Integration of Skeleton Detection
1. Configure the application on the AGC.
2. Client application development process.
Configure application on the AGC
Follow the steps.
Step 1: Register as a developer in AppGallery Connect. If you are already a developer, ignore this step.
Step 2: Create an app by referring to Creating a Project and Creating an App in the Project.
Step 3: Set the data storage location based on the current location.
Step 4: Enable ML Kit. Open AppGallery Connect and choose Manage API > ML Kit.
Step 5: Generate a Signing Certificate Fingerprint.
Step 6: Configure the Signing Certificate Fingerprint.
Step 7: Download your agconnect-services.json file and paste it into the app root directory.
Client application development process
Follow the steps.
Step 1: Create an Android application in Android Studio (or any IDE of your choice).
Step 2: Add the app-level Gradle dependencies. Choose inside project Android > app > build.gradle.
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Root-level Gradle dependencies.
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
Step 3: Add the dependencies in build.gradle.
implementation 'com.huawei.hms:ml-computer-vision-skeleton:2.0.4.300'
implementation 'com.huawei.hms:ml-computer-vision-skeleton-model:2.0.4.300'
implementation 'com.huawei.hms:ml-computer-vision-yoga-model:2.0.4.300'
To achieve the Skeleton detection example, follow the steps.
1. AGC Configuration
2. Build Android application
Step 1: AGC Configuration
1. Sign in to AppGallery Connect and select My apps.
2. Select the app in which you want to integrate the Huawei ML kit.
3. Navigate to Project Setting > Manage API > ML Kit
Step 2: Build Android application
In this example, I’m detecting yoga poses live using the camera.
While building the application, follow the steps.
Step 1: Create a Skeleton analyzer.
private var analyzer: MLSkeletonAnalyzer? = null
analyzer = MLSkeletonAnalyzerFactory.getInstance().skeletonAnalyzer
Step 2: Create a SkeletonTransactor class to process the result.
import android.app.Activity
import android.util.Log
import app.dtse.hmsskeletondetection.demo.utils.SkeletonUtils
import app.dtse.hmsskeletondetection.demo.views.graphic.SkeletonGraphic
import app.dtse.hmsskeletondetection.demo.views.overlay.GraphicOverlay
import com.huawei.hms.mlsdk.common.LensEngine
import com.huawei.hms.mlsdk.common.MLAnalyzer
import com.huawei.hms.mlsdk.common.MLAnalyzer.MLTransactor
import com.huawei.hms.mlsdk.skeleton.MLSkeleton
import com.huawei.hms.mlsdk.skeleton.MLSkeletonAnalyzer
import java.util.*

class SkeletonTransactor(
    private val analyzer: MLSkeletonAnalyzer,
    private val graphicOverlay: GraphicOverlay,
    private val lensEngine: LensEngine,
    private val activity: Activity?
) : MLTransactor<MLSkeleton?> {
    // Template skeletons of the target yoga pose, loaded once.
    private val templateList: List<MLSkeleton> = SkeletonUtils.getTemplateData()
    private var zeroCount = 0

    override fun transactResult(results: MLAnalyzer.Result<MLSkeleton?>) {
        Log.e(TAG, "detect success")
        graphicOverlay.clear()
        // Copy the sparse result array into a plain list.
        val items = results.analyseList
        val resultsList: MutableList<MLSkeleton?> = ArrayList()
        for (i in 0 until items.size()) {
            resultsList.add(items.valueAt(i))
        }
        if (resultsList.isEmpty()) {
            return
        }
        // Draw the detected skeleton on the overlay.
        val skeletonGraphic = SkeletonGraphic(graphicOverlay, resultsList)
        graphicOverlay.addGraphic(skeletonGraphic)
        graphicOverlay.postInvalidate()
        // Compare the live skeleton against the pose template; 0.8f is the
        // minimum similarity required to count as a match.
        val similarity = 0.8f
        val result = analyzer.caluteSimilarity(resultsList, templateList)
        if (result >= similarity) {
            // Photograph only the first matching frame; ignore further
            // matches until the pose is lost and found again.
            if (zeroCount > 0) {
                return
            }
            zeroCount++
        } else {
            zeroCount = 0
            return
        }
        lensEngine.photograph(null) { bytes ->
            SkeletonUtils.takePictureListener.picture(bytes)
            activity?.finish()
        }
    }

    override fun destroy() {
        Log.e(TAG, "detect fail")
    }

    companion object {
        private const val TAG = "SkeletonTransactor"
    }
}
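SkeletonUtils is the author's own helper and its source is not shown in this post. Purely as an illustration, a hypothetical sketch of such a helper follows; only the member names getTemplateData and takePictureListener come from the code above, the rest is assumed.
// Hypothetical sketch of the SkeletonUtils helper used above. Only
// getTemplateData() and takePictureListener appear in the post; their
// shapes here are assumptions.
interface TakePictureListener {
    fun picture(bytes: ByteArray?)
}

object SkeletonUtils {
    // Template skeletons of the target yoga pose, captured beforehand.
    private var templateData: List<MLSkeleton> = emptyList()

    // Notified with the JPEG bytes once a matching pose is photographed.
    lateinit var takePictureListener: TakePictureListener

    fun setTemplateData(templates: List<MLSkeleton>) {
        templateData = templates
    }

    fun getTemplateData(): List<MLSkeleton> = templateData
}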
Step 3: Set the detection result processor to bind the analyzer.
analyzer!!.setTransactor(SkeletonTransactor(analyzer!!, overlay!!, lensEngine!!, activity))
Step 4: Create LensEngine.
lensEngine = LensEngine.Creator(context, analyzer)
    .setLensType(LensEngine.BACK_LENS)
    .applyDisplayDimension(1280, 720)
    .applyFps(20.0f)
    .enableAutomaticFocus(true)
    .create()
Step 5: Open the camera.
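The original post does not include a snippet for this step, so here is a minimal sketch. It assumes your layout contains a SurfaceView named surfaceView (a hypothetical name) and that lensEngine was created as in Step 4; LensEngine.run(SurfaceHolder) starts the camera preview, the same call used in the hand keypoint example later in this thread.
// A minimal sketch: start the camera once the preview surface is ready.
// surfaceView is a hypothetical view from the layout.
surfaceView.holder.addCallback(object : SurfaceHolder.Callback {
    override fun surfaceCreated(holder: SurfaceHolder) {
        try {
            // Frames start flowing to the bound analyzer.
            lensEngine!!.run(holder)
        } catch (e: IOException) {
            Log.e(TAG, "Failed to start LensEngine: " + e.message)
        }
    }

    override fun surfaceChanged(holder: SurfaceHolder, format: Int, width: Int, height: Int) {
        // No-op in this sketch.
    }

    override fun surfaceDestroyed(holder: SurfaceHolder) {
        // Resources are released in Step 6.
    }
})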
Step 6: Release resources.
if (lensEngine != null) {
    lensEngine!!.close()
}
if (lensEngine != null) {
    lensEngine!!.release()
}
if (analyzer != null) {
    try {
        analyzer!!.stop()
    } catch (e: IOException) {
        Log.e(TAG, "e=" + e.message)
    }
}
Result
Tips and Tricks
Make sure the dependencies are downloaded properly.
The latest HMS Core APK is required.
If you are taking an image from the camera or gallery, make sure the app has camera and storage permissions (a minimal runtime-permission sketch follows this list).
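The post itself does not show how to request these permissions, so here is a minimal sketch using the standard AndroidX permission API, called from inside your Activity; the request code 100 is an arbitrary placeholder.
import android.Manifest
import android.content.pm.PackageManager
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

// A minimal sketch: request camera and storage permissions at runtime
// (needed on Android 6.0 and later). Request code 100 is arbitrary.
if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
        != PackageManager.PERMISSION_GRANTED ||
    ContextCompat.checkSelfPermission(this, Manifest.permission.WRITE_EXTERNAL_STORAGE)
        != PackageManager.PERMISSION_GRANTED
) {
    ActivityCompat.requestPermissions(
        this,
        arrayOf(Manifest.permission.CAMERA, Manifest.permission.WRITE_EXTERNAL_STORAGE),
        100
    )
}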
Conclusion
In this article, we have learned how to integrate Huawei ML Kit skeleton detection: what skeleton detection is, how it works, what it is used for, how to get the joint points from skeleton detection, and the detection types TYPE_NORMAL and TYPE_YOGA.
Reference
Skeleton Detection
Happy coding
Related
Hand keypoint detection is the process of finding fingertips, knuckles and wrists in an image. Hand keypoint detection and hand gesture recognition are still challenging problems in the computer vision domain. It is really tough work to build your own model for hand keypoint detection, as it is hard to collect a large enough hand dataset and it requires expertise in this domain.
Hand keypoint detection can be used in a variety of scenarios. For example, it can be used during artistic creation: users can convert the detected hand keypoints into a 2D model and synchronize that model to a character's model to produce a vivid 2D animation. You can create a puppet animation game using this idea. Another example may be creating a rock-paper-scissors game. Or, if you take it further, you can create a sign-language-to-text conversion application. As you see, the possible usage scenarios are abundant and there is no limit to ideas.
The hand keypoint detection service is a brand-new feature in the Huawei Machine Learning Kit family. It has recently been released and it is making developers and computer vision geeks really excited! It detects 21 points of a hand and can detect up to ten hands in an image. It can detect hands in a static image or in a camera stream. Currently, it does not support scenarios where your hand is blocked by more than 50% or where you wear gloves. You don't need an internet connection, as this is an on-device capability, and what is more: it is completely free!
It wouldn't be a nice practice to only read the related documents and forget about them after a few days. So I created a simple demo application that counts fingers and tells us the number we show by hand. I strongly advise you to develop your own hand keypoint detection application alongside me. I developed the application in Android Studio in Kotlin. Now, I am going to explain to you how to build this application step by step. Don't hesitate to ask questions in the comments if you face any issues.
1. Firstly, let's create our project in Android Studio. I named my project HandKeyPointDetectionDemo. I am sure you can find better names for your application. We can create our project by selecting the Empty Activity option and then follow the steps described in this post to create and sign our project in AppGallery Connect.
2. In HUAWEI Developer AppGallery Connect, go to Develop > Manage APIs. Make sure ML Kit is activated.
3. Now we have integrated Huawei Mobile Services (HMS) into our project. Let's follow the documentation on developer.huawei.com and find the packages to add to our project. On the website, click Developer / HMS Core / AI / ML Kit. Here you will find introductory information about the services, references, SDKs to download and more. Under the ML Kit tab, follow Android / Getting Started / Integrating HMS Core SDK / Adding Build Dependencies / Integrating the Hand Keypoint Detection SDK. We can follow the guide there to add the hand detection capability to our project. We also have one meta-data tag to be added into our AndroidManifest.xml file. After the integration, your app-level build.gradle file will look like this.
Code:
apply plugin: 'com.android.application'
apply plugin: 'kotlin-android'
apply plugin: 'kotlin-android-extensions'
apply plugin: 'com.huawei.agconnect'
android {
compileSdkVersion 30
buildToolsVersion "30.0.2"
defaultConfig {
applicationId "com.demo.handkeypointdetection"
minSdkVersion 21
targetSdkVersion 30
versionCode 1
versionName "1.0"
testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
}
}
}
dependencies {
implementation fileTree(dir: "libs", include: ["*.jar"])
implementation "org.jetbrains.kotlin:kotlin-stdlib:$kotlin_version"
implementation 'androidx.core:core-ktx:1.3.1'
implementation 'androidx.appcompat:appcompat:1.2.0'
implementation 'androidx.constraintlayout:constraintlayout:2.0.1'
testImplementation 'junit:junit:4.12'
androidTestImplementation 'androidx.test.ext:junit:1.1.2'
androidTestImplementation 'androidx.test.espresso:espresso-core:3.3.0'
//AppGalleryConnect Core
implementation 'com.huawei.agconnect:agconnect-core:1.3.1.300'
// Import the base SDK.
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint:2.0.2.300'
// Import the hand keypoint detection model package.
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint-model:2.0.2.300'
}
Our project-level build.gradle file:
Code:
buildscript {
ext.kotlin_version = "1.4.0"
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
dependencies {
classpath "com.android.tools.build:gradle:4.0.1"
classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
classpath 'com.huawei.agconnect:agcp:1.3.1.300'
}
}
allprojects {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
}
task clean(type: Delete) {
delete rootProject.buildDir
}
And don’t forget to add related meta-data tags into your AndroidManifest.xml.
Code:
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.demo.handkeypointdetection">
<uses-permission android:name="android.permission.CAMERA" />
<application
...
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value= "handkeypoint"/>
</application>
</manifest>
4. I created a class named HandKeyPointDetector. This class will be called from our activity or fragment. Its init method takes two parameters: a context and a ViewGroup. We will add our views to rootLayout.
Code:
fun init(context: Context, rootLayout: ViewGroup) {
mContext = context
mRootLayout = rootLayout
addSurfaceViews()
}
5. We are going to detect hand keypoints in a camera stream, so we create one surfaceView for the camera preview and another surfaceView to draw on. The surfaceView that is going to be used as an overlay should be transparent. Then we add our views to the rootLayout passed as a parameter from our activity. Lastly, we add a SurfaceHolder.Callback to our surfaceHolder to know when it is ready.
Code:
private fun addSurfaceViews() {
val surfaceViewCamera = SurfaceView(mContext).also {
it.layoutParams = LinearLayout.LayoutParams(LinearLayout.LayoutParams.MATCH_PARENT, LinearLayout.LayoutParams.MATCH_PARENT)
mSurfaceHolderCamera = it.holder
}
val surfaceViewOverlay = SurfaceView(mContext).also {
it.layoutParams = LinearLayout.LayoutParams(LinearLayout.LayoutParams.MATCH_PARENT, LinearLayout.LayoutParams.MATCH_PARENT)
mSurfaceHolderOverlay = it.holder
mSurfaceHolderOverlay.setFormat(PixelFormat.TRANSPARENT)
mHandKeyPointTransactor.setOverlay(mSurfaceHolderOverlay)
}
mRootLayout.addView(surfaceViewCamera)
mRootLayout.addView(surfaceViewOverlay)
mSurfaceHolderCamera.addCallback(surfaceHolderCallback)
}
6. Inside our surfaceHolderCallback we override three methods: surfaceCreated, surfaceChanged and surfaceDestroyed.
Code:
private val surfaceHolderCallback = object : SurfaceHolder.Callback {
override fun surfaceCreated(holder: SurfaceHolder) {
createAnalyzer()
}
override fun surfaceChanged(holder: SurfaceHolder, format: Int, width: Int, height: Int) {
prepareLensEngine(width, height)
mLensEngine.run(holder)
}
override fun surfaceDestroyed(holder: SurfaceHolder) {
mLensEngine.release()
}
}
7. The createAnalyzer method creates MLHandKeypointAnalyzer with settings. If you want, you can also use the default settings. The scene type can be the keypoints, the rectangle around the hands, or TYPE_ALL for both. The maximum number of hand results can be up to MLHandKeypointAnalyzerSetting.MAX_HANDS_NUM, which is currently 10. As we will count the fingers of 2 hands, I set it to 2.
Code:
private fun createAnalyzer() {
val settings = MLHandKeypointAnalyzerSetting.Factory()
.setSceneType(MLHandKeypointAnalyzerSetting.TYPE_ALL)
.setMaxHandResults(2)
.create()
mAnalyzer = MLHandKeypointAnalyzerFactory.getInstance().getHandKeypointAnalyzer(settings)
mAnalyzer.setTransactor(mHandKeyPointTransactor)
}
8. LensEngine is responsible for handling camera frames for us. All we need to do is prepare it with the right dimensions according to the orientation, choose the camera we want to work with, apply the fps and so on.
Code:
private fun prepareLensEngine(width: Int, height: Int) {
val dimen1: Int
val dimen2: Int
if (mContext.resources.configuration.orientation == Configuration.ORIENTATION_LANDSCAPE) {
dimen1 = width
dimen2 = height
} else {
dimen1 = height
dimen2 = width
}
mLensEngine = LensEngine.Creator(mContext, mAnalyzer)
.setLensType(LensEngine.BACK_LENS)
.applyDisplayDimension(dimen1, dimen2)
.applyFps(5F)
.enableAutomaticFocus(true)
.create()
}
9. When you no longer need the analyzer, stop it and release resources.
Code:
fun stopAnalyzer() {
mAnalyzer.stop()
}
10. As you can see in step 7, we used mHandKeyPointTransactor. It is a custom class that we created, named HandKeyPointTransactor, which implements MLAnalyzer.MLTransactor<MLHandKeypoints>. It has two overridden methods inside: transactResult and destroy. Detected results will arrive in the transactResult method, and then we will try to find the number.
Code:
override fun transactResult(result: MLAnalyzer.Result<MLHandKeypoints>?) {
if (result == null)
return
val canvas = mOverlay?.lockCanvas() ?: return
//Clear canvas.
canvas.drawColor(0, PorterDuff.Mode.CLEAR)
//Find the number shown by our hands.
val numberString = analyzeHandsAndGetNumber(result)
//Find the middle of the canvas
val centerX = canvas.width / 2F
val centerY = canvas.height / 2F
//Draw a text that writes the number we found.
canvas.drawText(numberString, centerX, centerY, Paint().also {
it.style = Paint.Style.FILL
it.textSize = 100F
it.color = Color.GREEN
})
mOverlay?.unlockCanvasAndPost(canvas)
}
11. We will check hand by hand, and then finger by finger, to find the fingers that are up and thus the number shown.
Code:
private fun analyzeHandsAndGetNumber(result: MLAnalyzer.Result<MLHandKeypoints>): String {
    val hands = ArrayList<Hand>()
    var number = 0
    // One entry per detected hand; count each hand's raised fingers once.
    for (value in result.analyseList.valueIterator()) {
        hands.add(Hand())
        number += hands.last().createHand(value.handKeypoints).getNumber()
    }
    return number.toString()
}
For more information, you can visit https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0202369245767250343&fid=0101187876626530001
Introduction
In this article, I will explain what skeleton detection is and how it works in Android. At the end of this tutorial, we will have created Huawei skeleton detection in an Android application using Huawei ML Kit.
What is Skeleton detection?
The Huawei ML Kit skeleton detection service detects the human body and represents the orientation of a person in a graphical format. Essentially, it's a set of coordinates that can be connected to describe the position of the person. This service detects and locates key points of the human body such as the top of the head, neck, shoulders, elbows, wrists, hips, knees, and ankles. Currently, full-body and half-body static image recognition and real-time camera stream recognition are supported.
What is the use of Skeleton detection?
Definitely, everyone will have the question of what it is used for. For example, if you want to develop a fitness application, you can use the coordinates from skeleton detection to check whether the user has made the exact movements during exercises, or you could develop a game about dance movements. Using this service, you can easily understand whether the user has done the exercise properly or not.
How does it work?
You can use skeleton detection on a static image or on a real-time camera stream. Either way, you get the coordinates of the human body. Of course, it is looking out for critical areas like the head, neck, shoulders, elbows, wrists, hips, knees, and ankles. At the same time, both methods can detect multiple human bodies.
There are two analyzer types to detect the skeleton.
1. TYPE_NORMAL
2. TYPE_YOGA
TYPE_NORMAL: If you send the analyzer type as TYPE_NORMAL, it detects skeletal points for a normal standing posture.
TYPE_YOGA: If you send the analyzer type as TYPE_YOGA, it detects skeletal points for a yoga posture.
Note: The default mode is to detect skeleton points for normal postures.
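For reference, here is a minimal sketch of selecting the yoga mode; it uses the same MLSkeletonAnalyzerSetting factory that appears in the initAnalyzer function later in this post.
// A minimal sketch: build an analyzer that detects yoga postures.
// Omit setAnalyzerType() to get TYPE_NORMAL, the default.
val setting = MLSkeletonAnalyzerSetting.Factory()
    .setAnalyzerType(MLSkeletonAnalyzerSetting.TYPE_YOGA)
    .create()
val analyzer = MLSkeletonAnalyzerFactory.getInstance().getSkeletonAnalyzer(setting)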
Integration of Skeleton Detection
1. Configure the application on the AGC.
2. Client application development process.
Configure application on the AGC
This involves the following steps.
Step 1: Register as a developer in AppGallery Connect. If you are already a developer, ignore this step.
Step 2: Create an app by referring to Creating a Project and Creating an App in the Project.
Step 3: Set the data storage location based on the current location.
Step 4: Enable ML Kit. Open AppGallery Connect and choose Manage API > ML Kit.
Step 5: Generate a Signing Certificate Fingerprint.
Step 6: Configure the Signing Certificate Fingerprint.
Step 7: Download your agconnect-services.json file and paste it into the app root directory.
Client application development process
This involves the following steps.
Step 1: Create an Android application in Android Studio (or any IDE of your choice).
Step 2: Add the app-level Gradle dependencies. Choose inside project Android > app > build.gradle.
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Root-level Gradle dependencies.
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
Step 3: Add the dependencies in build.gradle.
implementation 'com.huawei.hms:ml-computer-vision-skeleton:2.0.4.300'
implementation 'com.huawei.hms:ml-computer-vision-skeleton-model:2.0.4.300'
implementation 'com.huawei.hms:ml-computer-vision-yoga-model:2.0.4.300'
To achieve the Skeleton detection example, follow the steps.
1. AGC Configuration
2. Build Android application
Step 1: AGC Configuration
1. Sign in to AppGallery Connect and select My apps.
2. Select the app in which you want to integrate the Huawei ML kit.
3. Navigate to Project Setting > Manage API > ML Kit
Step 2: Build Android application
In this example, I am taking an image from the gallery or camera and getting the skeleton and joint points from ML Kit skeleton detection.
private fun initAnalyzer(analyzerType: Int) {
val setting = MLSkeletonAnalyzerSetting.Factory()
.setAnalyzerType(analyzerType)
.create()
analyzer = MLSkeletonAnalyzerFactory.getInstance().getSkeletonAnalyzer(setting)
imageSkeletonDetectAsync()
}
private fun initFrame(type: Int) {
imageView.invalidate()
val drawable = imageView.drawable as BitmapDrawable
val originBitmap = drawable.bitmap
val maxHeight = (imageView.parent as View).height
val targetWidth = (imageView.parent as View).width
// Update bitmap size
val scaleFactor = (originBitmap.width.toFloat() / targetWidth.toFloat())
.coerceAtLeast(originBitmap.height.toFloat() / maxHeight.toFloat())
val resizedBitmap = Bitmap.createScaledBitmap(
originBitmap,
(originBitmap.width / scaleFactor).toInt(),
(originBitmap.height / scaleFactor).toInt(),
true
)
frame = MLFrame.fromBitmap(resizedBitmap)
initAnalyzer(type)
}
private fun imageSkeletonDetectAsync() {
val task: Task<List<MLSkeleton>>? = analyzer?.asyncAnalyseFrame(frame)
task?.addOnSuccessListener { results ->
// Detection success.
val skeletons: List<MLSkeleton>? = getValidSkeletons(results)
if (skeletons != null && skeletons.isNotEmpty()) {
graphicOverlay?.clear()
val skeletonGraphic = SkeletonGraphic(graphicOverlay, results)
graphicOverlay?.add(skeletonGraphic)
} else {
Log.e(TAG, "async analyzer result is null.")
}
}?.addOnFailureListener { /* Result failure. */ }
}
private fun stopAnalyzer() {
if (analyzer != null) {
try {
analyzer?.stop()
} catch (e: IOException) {
Log.e(TAG, "Failed for analyzer: " + e.message)
}
}
}
override fun onDestroy() {
super.onDestroy()
stopAnalyzer()
}
private fun showPictureDialog() {
val pictureDialog = AlertDialog.Builder(this)
pictureDialog.setTitle("Select Action")
val pictureDialogItems = arrayOf("Select image from gallery", "Capture photo from camera")
pictureDialog.setItems(pictureDialogItems
) { dialog, which ->
when (which) {
0 -> chooseImageFromGallery()
1 -> takePhotoFromCamera()
}
}
pictureDialog.show()
}
fun chooseImageFromGallery() {
val galleryIntent = Intent(Intent.ACTION_PICK, MediaStore.Images.Media.EXTERNAL_CONTENT_URI)
startActivityForResult(galleryIntent, GALLERY)
}
private fun takePhotoFromCamera() {
val cameraIntent = Intent(MediaStore.ACTION_IMAGE_CAPTURE)
startActivityForResult(cameraIntent, CAMERA)
}
public override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
    super.onActivityResult(requestCode, resultCode, data)
    if (requestCode == GALLERY) {
        if (data != null) {
            val contentURI = data.data
            try {
                val bitmap = MediaStore.Images.Media.getBitmap(this.contentResolver, contentURI)
                saveImage(bitmap)
                Toast.makeText(this, "Image Show!", Toast.LENGTH_SHORT).show()
                imageView!!.setImageBitmap(bitmap)
            } catch (e: IOException) {
                e.printStackTrace()
                Toast.makeText(this, "Failed", Toast.LENGTH_SHORT).show()
            }
        }
    } else if (requestCode == CAMERA) {
        val thumbnail = data!!.extras!!.get("data") as Bitmap
        imageView!!.setImageBitmap(thumbnail)
        saveImage(thumbnail)
        Toast.makeText(this, "Photo Show!", Toast.LENGTH_SHORT).show()
    }
}
fun saveImage(myBitmap: Bitmap):String {
val bytes = ByteArrayOutputStream()
myBitmap.compress(Bitmap.CompressFormat.PNG, 90, bytes)
val wallpaperDirectory = File (
(Environment.getExternalStorageDirectory()).toString() + IMAGE_DIRECTORY)
Log.d("fee", wallpaperDirectory.toString())
if (!wallpaperDirectory.exists())
{
wallpaperDirectory.mkdirs()
}
try
{
Log.d("heel", wallpaperDirectory.toString())
val f = File(wallpaperDirectory, ((Calendar.getInstance()
.getTimeInMillis()).toString() + ".png"))
f.createNewFile()
val fo = FileOutputStream(f)
fo.write(bytes.toByteArray())
MediaScannerConnection.scanFile(this, arrayOf(f.getPath()), arrayOf("image/png"), null)
fo.close()
Log.d("TAG", "File Saved::--->" + f.getAbsolutePath())
return f.getAbsolutePath()
}
catch (e1: IOException){
e1.printStackTrace()
}
return ""
}
Result
Tips and Tricks
Make sure the dependencies are downloaded properly.
The latest HMS Core APK is required.
If you are taking an image from the camera or gallery, make sure your app has camera and storage permissions.
Conclusion
In this article, we have learned how to integrate Huawei ML Kit skeleton detection: what skeleton detection is, how it works, what it is used for, how to get the joint points from skeleton detection, and the detection types TYPE_NORMAL and TYPE_YOGA.
Reference
Skeleton Detection
Happy coding
Introduction
In this article, I will explain what skeleton detection is and how it works in Flutter. At the end of this tutorial, we will have created Huawei skeleton detection in a Flutter application using Huawei ML Kit.
What is Skeleton detection?
The Huawei ML Kit skeleton detection service detects the human body and represents the orientation of a person in a graphical format. Essentially, it's a set of coordinates that can be connected to describe the position of the person. This service detects and locates key points of the human body such as the top of the head, neck, shoulders, elbows, wrists, hips, knees, and ankles. Currently, full-body and half-body static image recognition and real-time camera stream recognition are supported.
What is the use of Skeleton detection?
Definitely, everyone will have the question of what it is used for. For example, if you want to develop a fitness application, you can use the coordinates from skeleton detection to check whether the user has made the exact movements during exercises, or you could develop a game about dance movements. Using this service, you can easily understand whether the user has done the exercise properly or not.
How does it work?
You can use skeleton detection on a static image or on a real-time camera stream. Either way, you get the coordinates of the human body. Of course, it is looking out for critical areas like the head, neck, shoulders, elbows, wrists, hips, knees, and ankles. At the same time, both methods can detect multiple human bodies.
There are two analyzer types to detect the skeleton.
1. TYPE_NORMAL
2. TYPE_YOGA
TYPE_NORMAL: If you send the analyzer type as TYPE_NORMAL, it detects skeletal points for a normal standing posture.
TYPE_YOGA: If you send the analyzer type as TYPE_YOGA, it detects skeletal points for a yoga posture.
Note: The default mode is to detect skeleton points for normal postures.
Integration of Skeleton Detection
1. Configure the application on the AGC.
2. Client application development process.
Configure application on the AGC
This involves the following steps.
Step 1: Register as a developer in AppGallery Connect. If you are already a developer, ignore this step.
Step 2: Create an app by referring to Creating a Project and Creating an App in the Project.
Step 3: Set the data storage location based on the current location.
Step 4: Enable ML Kit. Open AppGallery Connect and choose Manage API > ML Kit.
Step 5: Generate a Signing Certificate Fingerprint.
Step 6: Configure the Signing Certificate Fingerprint.
Step 7: Download your agconnect-services.json file and paste it into the app root directory.
Client application development process
This involves the following steps.
Step 1: Create a Flutter application in Android Studio (or any IDE of your choice).
Step 2: Add the app-level Gradle dependencies. Choose inside project Android > app > build.gradle.
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Root-level Gradle dependencies.
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
Step 3: Add the downloaded plugin to pubspec.yaml.
Step 4: Place the downloaded plugin folder outside the project directory and declare the plugin path in the pubspec.yaml file under dependencies.
dependencies:
  flutter:
    sdk: flutter
  huawei_account:
    path: ../huawei_account/
  huawei_location:
    path: ../huawei_location/
  huawei_map:
    path: ../huawei_map/
  huawei_analytics:
    path: ../huawei_analytics/
  huawei_site:
    path: ../huawei_site/
  huawei_push:
    path: ../huawei_push/
  huawei_dtm:
    path: ../huawei_dtm/
  huawei_ml:
    path: ../huawei_ml/
  agconnect_crash: ^1.0.0
  agconnect_remote_config: ^1.0.0
  http: ^0.12.2
  camera:
  path_provider:
  path:
  image_picker:
  fluttertoast: ^7.1.6
  shared_preferences: ^0.5.12+4
To achieve the Skeleton detection example, follow the steps.
1. AGC Configuration
2. Build Flutter application
Step 1: AGC Configuration
1. Sign in to AppGallery Connect and select My apps.
2. Select the app in which you want to integrate the Huawei ML kit.
3. Navigate to Project Setting > Manage API > ML Kit
Step 2: Build Flutter application
In this example, I am taking an image from the gallery or camera and getting the skeleton and joint points from ML Kit skeleton detection.
import 'dart:io';
import 'package:flutter/material.dart';
import 'package:huawei_ml/huawei_ml.dart';
import 'package:huawei_ml/skeleton/ml_skeleton_analyzer.dart';
import 'package:huawei_ml/skeleton/ml_skeleton_analyzer_setting.dart';
import 'package:image_picker/image_picker.dart';
class SkeletonDetection extends StatefulWidget {
@override
_SkeletonDetectionState createState() => _SkeletonDetectionState();
}
class _SkeletonDetectionState extends State<SkeletonDetection> {
MLSkeletonAnalyzer analyzer;
MLSkeletonAnalyzerSetting setting;
List<MLSkeleton> skeletons;
double _x = 0;
double _y = 0;
double _score = 0;
@override
void initState() {
// TODO: implement initState
analyzer = new MLSkeletonAnalyzer();
setting = new MLSkeletonAnalyzerSetting();
super.initState();
}
@override
Widget build(BuildContext context) {
return Scaffold(
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
_setImageView()
],
),
),
floatingActionButton: FloatingActionButton(
onPressed: () {
_showSelectionDialog(context);
},
child: Icon(Icons.camera_alt),
),
);
}
Future<void> _showSelectionDialog(BuildContext context) {
return showDialog(
context: context,
builder: (BuildContext context) {
return AlertDialog(
title: Text("From where do you want to take the photo?"),
content: SingleChildScrollView(
child: ListBody(
children: <Widget>[
GestureDetector(
child: Text("Gallery"),
onTap: () {
_openGallery(context);
},
),
Padding(padding: EdgeInsets.all(8.0)),
GestureDetector(
child: Text("Camera"),
onTap: () {
_openCamera();
},
)
],
),
));
});
}
File imageFile;
void _openGallery(BuildContext context) async {
var picture = await ImagePicker.pickImage(source: ImageSource.gallery);
this.setState(() {
imageFile = picture;
_skeletonDetection();
});
Navigator.of(context).pop();
}
_openCamera() async {
PickedFile pickedFile = await ImagePicker().getImage(
source: ImageSource.camera,
maxWidth: 800,
maxHeight: 800,
);
if (pickedFile != null) {
imageFile = File(pickedFile.path);
this.setState(() {
imageFile = imageFile;
_skeletonDetection();
});
}
Navigator.of(context).pop();
}
Widget _setImageView() {
if (imageFile != null) {
return Image.file(imageFile, width: 500, height: 500);
} else {
return Text("Please select an image");
}
}
_skeletonDetection() async {
// Create a skeleton analyzer.
analyzer = new MLSkeletonAnalyzer();
// Configure the recognition settings.
setting = new MLSkeletonAnalyzerSetting();
setting.path = imageFile.path;
setting.analyzerType = MLSkeletonAnalyzerSetting.TYPE_NORMAL; // Normal posture.
// Get recognition result asynchronously.
List<MLSkeleton> list = await analyzer.asyncSkeletonDetection(setting);
print("Result data: "+list[0].toJson().toString());
// After the recognition ends, stop the analyzer.
bool res = await analyzer.stopSkeletonDetection();
}
}
Result
Tips and Tricks
Download the latest HMS Flutter plugin.
Make sure the dependencies are downloaded properly.
The latest HMS Core APK is required.
If you are taking an image from the camera or gallery, make sure your app has camera and storage permissions.
Conclusion
In this article, we have learned how to integrate Huawei ML Kit skeleton detection: what skeleton detection is, how it works, what it is used for, how to get the joint points from skeleton detection, and the detection types TYPE_NORMAL and TYPE_YOGA.
Reference
Skeleton Detection
Happy coding
Introduction
Huawei Games is a group of APIs from Huawei to simplify some basic game features like leaderboards, achievements, events and online matches.
Gaming technologies are constantly evolving. Nevertheless, a lot of core gameplay elements have remained unchanged for decades. High scores, leaderboards, quests, achievements, and multiplayer support are examples. If you are developing a game for the Android platform, you don't have to implement any of those elements manually. You can simply use the Huawei Game services APIs instead.
Features of Huawei Game services
Huawei ID sign-in
Real-name authentication
Bulletins
Achievements
Events
Leaderboard
Saved games
Player statistics
Integration of Game Service
1. Configure the application on the AGC.
2. Client application development process.
Configure application on the AGC
Follow the steps.
Step 1: Register as a developer in AppGallery Connect. If you are already a developer, ignore this step.
Step 2: Create an app by referring to Creating a Project and Creating an App in the Project. Select Game App.
Step 3: Set the data storage location based on the current location.
Step 4: Enable Account Kit and Game Service. Open AppGallery Connect and choose Manage API > Account Kit and Game Service.
Step 5: Generate a Signing Certificate Fingerprint.
Step 6: Configure the Signing Certificate Fingerprint.
Step 7: Download your agconnect-services.json file and paste it into the app root directory.
Client application development process
Follow the steps.
Step 1: Create a Flutter application in Android Studio (or any IDE of your choice).
Step 2: Add the app-level Gradle dependencies. Choose inside project Android > app > build.gradle.
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Root-level Gradle dependencies.
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
Step 3: Download the Account Kit and Game Service plugins.
Step 4: Place the downloaded plugin folders outside the project directory and declare the plugin paths in the pubspec.yaml file under dependencies.
dependencies:
  flutter:
    sdk: flutter
  huawei_account:
    path: ../huawei_account/
  huawei_gameservice:
    path: ../huawei_gameservice/
Step 5: Build the Flutter application.
In this example, I'm building a Tic Tac Toe game application using the Huawei Game Service, and we will see the following features in this article.
Sign In
Initialization
Getting player information
Saving player information
Sign In
void signInWithHuaweiAccount() async {
HmsAuthParamHelper authParamHelper = new HmsAuthParamHelper();
authParamHelper
..setIdToken()
..setAuthorizationCode()
..setAccessToken()
..setProfile()
..setEmail()
..setScopeList([HmsScope.openId, HmsScope.email, HmsScope.profile])
..setRequestCode(8888);
try {
final HmsAuthHuaweiId accountInfo =
await HmsAuthService.signIn(authParamHelper: authParamHelper);
setState(() {
Navigator.push(
context,
MaterialPageRoute(
builder: (context) => HomePage(),
));
});
} on Exception catch (exception) {
print(exception.toString());
print("error: " + exception.toString());
}
}
void silentSignInHuaweiAccount() async {
HmsAuthParamHelper authParamHelper = new HmsAuthParamHelper();
try {
final HmsAuthHuaweiId accountInfo =
await HmsAuthService.silentSignIn(authParamHelper: authParamHelper);
if (accountInfo.unionId != null) {
print("Open Id: ${accountInfo.openId}");
print("Display name: ${accountInfo.displayName}");
print("Profile DP: ${accountInfo.avatarUriString}");
print("Email Id: ${accountInfo.email}");
Validator()
.showToast("Signed in successful as ${accountInfo.displayName}");
}
} on Exception catch (exception) {
print(exception.toString());
print('Login_provider:Can not SignIn silently');
Validator().showToast("SCan not SignIn silently ${exception.toString()}");
}
}
Future signOut() async {
final signOutResult = await HmsAuthService.signOut();
if (signOutResult) {
Validator().showToast("Signed out successful");
/* Route route = MaterialPageRoute(builder: (context) => SignInPage());
Navigator.pushReplacement(context, route);*/
} else {
print('Login_provider:signOut failed');
}
}
Future revokeAuthorization() async {
final bool revokeResult = await HmsAuthService.revokeAuthorization();
if (revokeResult) {
Validator().showToast("Revoked Auth Successfully");
} else {
Validator().showToast('Login_provider:Failed to Revoked Auth');
}
}
The above shows sign-in with a Huawei Account; once signed in, users can play games.
Initialization
Call the JosAppsClient.init method to initialize the game.
void init() async {
  await JosAppsClient.init();
}
Getting player information
The code below gets the player information, such as playerId, openId, and player name, including which level the user is playing.
Future<void> getPlayerInformation() async {
try {
Player player = await PlayersClient.getCurrentPlayer();
print("Player ID: " + player.playerId);
print("Open ID: " + player.openId);
print("Access Token: " + player.accessToken);
print("Player Name: " + player.displayName);
setState(() {
playerName = "Player Name: "+player.displayName;
savePlayerInformation(player.playerId, player.openId);
});
} on PlatformException catch (e) {
print("Error on getGamePlayer API, Error: ${e.code}, Error Description: ${GameServiceResultCodes.getStatusCodeMessage(e.code)}");
}
}
Saving player information
Player information such as level, role, and rank can be updated using AppPlayerInfo. The following code shows saving player information.
To save player information, playerId and openId are mandatory.
void savePlayerInformation(String id, String openid) async {
try {
AppPlayerInfo info = new AppPlayerInfo(
rank: "1",
role: "beginner",
area: "2",
society: "Game",
playerId: id,
openId: openid);
await PlayersClient.savePlayerInfo(info);
} on PlatformException catch (e) {
print(
"Error on SavePlayer Info API, Error: ${e.code}, Error Description: ${GameServiceResultCodes.getStatusCodeMessage(e.code)}");
}
}
Result
Tips and Tricks
Download the latest HMS Flutter plugin.
Make sure the dependencies are downloaded properly.
The latest HMS Core APK is required.
Set minSdkVersion to 19 or later.
Conclusion
In this article, we have learned the following:
Creating an application on AGC
Integration of account kit and game service
Sign In
Initialization
Getting player information
Saving player information
In an upcoming article, I’ll come up with a new concept of game service.
Reference
Game service
useful share
will it detect all yoga positions?