Introduction
In this article, I will explain what skeleton detection is and how it works on Android. By the end of this tutorial, we will have built Huawei skeleton detection into an Android application using Huawei ML Kit.
What is Skeleton detection?
The Huawei ML Kit skeleton detection service detects the human body and represents the orientation of a person in a graphical format. Essentially, it returns a set of coordinates that can be connected to describe the position of the person. The service detects and locates key points of the human body, such as the top of the head, neck, shoulders, elbows, wrists, hips, knees, and ankles. Currently, full-body and half-body static image recognition and real-time camera stream recognition are supported.
What is the use of Skeleton detection?
You may wonder what this service is actually used for. For example, if you are developing a fitness application, you can use the coordinates from skeleton detection to check whether the user has performed the exact movements during exercises, or you could develop a game based on dance movements. Either way, the app can easily tell whether the user has performed the exercise properly.
How does it work?
You can run skeleton detection on a static image or on a real-time camera stream. Either way, you get the coordinates of the human body, covering key areas such as the head, neck, shoulders, elbows, wrists, hips, knees, and ankles. Both methods can detect multiple human bodies.
There are two analyzer types for skeleton detection.
1. TYPE_NORMAL
2. TYPE_YOGA
TYPE_NORMAL: If you set the analyzer type to TYPE_NORMAL, it detects skeletal points for a normal standing posture.
TYPE_YOGA: If you set the analyzer type to TYPE_YOGA, it detects skeletal points for yoga postures.
Note: The default mode is to detect skeleton points for normal postures.
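To make the difference concrete, here is a minimal sketch of selecting the analyzer type when building the setting. It assumes the same MLSkeletonAnalyzerSetting and MLSkeletonAnalyzerFactory classes (and their TYPE_NORMAL/TYPE_YOGA constants) used in the activity code later in this article.
// Minimal sketch: build a skeleton analyzer for yoga postures.
// Passing MLSkeletonAnalyzerSetting.TYPE_NORMAL (or omitting setAnalyzerType) keeps the default normal-posture mode.
val yogaSetting = MLSkeletonAnalyzerSetting.Factory()
    .setAnalyzerType(MLSkeletonAnalyzerSetting.TYPE_YOGA)
    .create()
val yogaAnalyzer = MLSkeletonAnalyzerFactory.getInstance().getSkeletonAnalyzer(yogaSetting)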
Integration of Skeleton Detection
1. Configure the application on the AGC.
2. Client application development process.
Configure application on the AGC
This phase involves the following steps.
Step 1: Register a developer account in AppGallery Connect. If you are already a developer, skip this step.
Step 2: Create an app by referring to Creating a Project and Creating an App in the Project.
Step 3: Set the data storage location based on the current location.
Step 4: Enable ML Kit. Open AppGallery Connect and choose Manage APIs > ML Kit.
Step 5: Generating a Signing Certificate Fingerprint.
Step 6: Configuring the Signing Certificate Fingerprint.
Step 7: Download the agconnect-services.json file and paste it into the app root directory.
Client application development process
This phase involves the following steps.
Step 1: Create an Android application in Android Studio (or any IDE you prefer).
Step 2: Add the app-level Gradle configuration in Android > app > build.gradle.
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Root-level Gradle dependencies.
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
Step 3: Add the ML Kit dependencies in the app-level build.gradle.
implementation 'com.huawei.hms:ml-computer-vision-skeleton:2.0.4.300'
implementation 'com.huawei.hms:ml-computer-vision-skeleton-model:2.0.4.300'
implementation 'com.huawei.hms:ml-computer-vision-yoga-model:2.0.4.300'
To build the skeleton detection example, follow these steps.
1. AGC Configuration
2. Build Android application
Step 1: AGC Configuration
1. Sign in to AppGallery Connect and select My apps.
2. Select the app in which you want to integrate the Huawei ML kit.
3. Navigate to Project Setting > Manage API > ML Kit
Step 2: Build Android application
In this example, I pick an image from the gallery or capture one with the camera, and then obtain the skeleton and joint points from ML Kit skeleton detection.
private fun initAnalyzer(analyzerType: Int) {
val setting = MLSkeletonAnalyzerSetting.Factory()
.setAnalyzerType(analyzerType)
.create()
analyzer = MLSkeletonAnalyzerFactory.getInstance().getSkeletonAnalyzer(setting)
imageSkeletonDetectAsync()
}
private fun initFrame(type: Int) {
imageView.invalidate()
val drawable = imageView.drawable as BitmapDrawable
val originBitmap = drawable.bitmap
val maxHeight = (imageView.parent as View).height
val targetWidth = (imageView.parent as View).width
// Update bitmap size
val scaleFactor = (originBitmap.width.toFloat() / targetWidth.toFloat())
.coerceAtLeast(originBitmap.height.toFloat() / maxHeight.toFloat())
val resizedBitmap = Bitmap.createScaledBitmap(
originBitmap,
(originBitmap.width / scaleFactor).toInt(),
(originBitmap.height / scaleFactor).toInt(),
true
)
frame = MLFrame.fromBitmap(resizedBitmap)
initAnalyzer(type)
}
private fun imageSkeletonDetectAsync() {
val task: Task<List<MLSkeleton>>? = analyzer?.asyncAnalyseFrame(frame)
task?.addOnSuccessListener { results ->
// Detection success.
val skeletons: List<MLSkeleton>? = getValidSkeletons(results)
if (skeletons != null && skeletons.isNotEmpty()) {
graphicOverlay?.clear()
val skeletonGraphic = SkeletonGraphic(graphicOverlay, results)
graphicOverlay?.add(skeletonGraphic)
} else {
Log.e(TAG, "async analyzer result is null.")
}
}?.addOnFailureListener { /* Result failure. */ }
}
private fun stopAnalyzer() {
if (analyzer != null) {
try {
analyzer?.stop()
} catch (e: IOException) {
Log.e(TAG, "Failed for analyzer: " + e.message)
}
}
}
override fun onDestroy() {
super.onDestroy()
stopAnalyzer()
}
private fun showPictureDialog() {
val pictureDialog = AlertDialog.Builder(this)
pictureDialog.setTitle("Select Action")
val pictureDialogItems = arrayOf("Select image from gallery", "Capture photo from camera")
pictureDialog.setItems(pictureDialogItems
) { dialog, which ->
when (which) {
0 -> chooseImageFromGallery()
1 -> takePhotoFromCamera()
}
}
pictureDialog.show()
}
fun chooseImageFromGallery() {
val galleryIntent = Intent(Intent.ACTION_PICK, MediaStore.Images.Media.EXTERNAL_CONTENT_URI)
startActivityForResult(galleryIntent, GALLERY)
}
private fun takePhotoFromCamera() {
val cameraIntent = Intent(MediaStore.ACTION_IMAGE_CAPTURE)
startActivityForResult(cameraIntent, CAMERA)
}
public override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
super.onActivityResult(requestCode, resultCode, data)
if (requestCode == GALLERY)
{
if (data != null)
{
val contentURI = data!!.data
try {
val bitmap = MediaStore.Images.Media.getBitmap(this.contentResolver, contentURI)
saveImage(bitmap)
Toast.makeText(this, "Image Show!", Toast.LENGTH_SHORT).show()
imageView!!.setImageBitmap(bitmap)
}
catch (e: IOException)
{
e.printStackTrace()
Toast.makeText(this, "Failed", Toast.LENGTH_SHORT).show()
}
}
}
else if (requestCode == CAMERA)
{
val thumbnail = data!!.extras!!.get("data") as Bitmap
imageView!!.setImageBitmap(thumbnail)
saveImage(thumbnail)
Toast.makeText(this, "Photo Show!", Toast.LENGTH_SHORT).show()
}
}
fun saveImage(myBitmap: Bitmap):String {
val bytes = ByteArrayOutputStream()
myBitmap.compress(Bitmap.CompressFormat.PNG, 90, bytes)
val wallpaperDirectory = File (
(Environment.getExternalStorageDirectory()).toString() + IMAGE_DIRECTORY)
Log.d("fee", wallpaperDirectory.toString())
if (!wallpaperDirectory.exists())
{
wallpaperDirectory.mkdirs()
}
try
{
Log.d("heel", wallpaperDirectory.toString())
val f = File(wallpaperDirectory, ((Calendar.getInstance()
.getTimeInMillis()).toString() + ".png"))
f.createNewFile()
val fo = FileOutputStream(f)
fo.write(bytes.toByteArray())
MediaScannerConnection.scanFile(this, arrayOf(f.getPath()), arrayOf("image/png"), null)
fo.close()
Log.d("TAG", "File Saved::--->" + f.getAbsolutePath())
return f.getAbsolutePath()
}
catch (e1: IOException){
e1.printStackTrace()
}
return ""
}
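If you also want the raw joint coordinates rather than just the overlay drawing, the sketch below shows one way to read them from the results delivered to the success listener in imageSkeletonDetectAsync(). It assumes the standard MLSkeleton/MLJoint accessors (joints, pointX, pointY, type, score) from the skeleton SDK; verify the exact names against the SDK version you integrate.
// Sketch: log the position and confidence score of every detected joint.
// Call this with the list returned by getValidSkeletons(results).
private fun logJoints(skeletons: List<MLSkeleton>) {
    for (skeleton in skeletons) {
        for (joint in skeleton.joints) {
            Log.d(TAG, "joint type=${joint.type}, x=${joint.pointX}, y=${joint.pointY}, score=${joint.score}")
        }
    }
}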
Result
Tips and Tricks
Make sure the dependencies are downloaded properly.
The latest HMS Core APK is required.
If you are taking an image from the camera or gallery, make sure your app has camera and storage permissions (see the sketch below).
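For example, a minimal runtime permission check before opening the camera or gallery could look like the following (assuming the usual android.Manifest, ContextCompat, and ActivityCompat imports; the request code and exact permission set are illustrative assumptions).
// Sketch: request camera and storage permissions at runtime (Android 6.0+).
private fun checkPermissions() {
    val permissions = arrayOf(
        Manifest.permission.CAMERA,
        Manifest.permission.WRITE_EXTERNAL_STORAGE
    )
    val missing = permissions.filter {
        ContextCompat.checkSelfPermission(this, it) != PackageManager.PERMISSION_GRANTED
    }
    if (missing.isNotEmpty()) {
        // 100 is an arbitrary request code; handle the result in onRequestPermissionsResult().
        ActivityCompat.requestPermissions(this, missing.toTypedArray(), 100)
    }
}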
Conclusion
In this article, we have learned how to integrate Huawei ML Kit skeleton detection: what it is, how it works, what it is used for, how to get the joint points from the detection results, and the two detection types, TYPE_NORMAL and TYPE_YOGA.
Reference
Skeleton Detection
Happy coding
Related
Overview
In this article, I will create a demo app that integrates HMS ML Kit and is built with the cross-platform technology Xamarin. Users can scan any object with the camera using the Object Detection and Tracking API and then check the best price and details of the object. The following object categories are supported: household products, fashion goods, food, places, plants, faces, and others.
Service Introduction
HMS ML Kit allows your apps to easily leverage Huawei's long-term proven expertise in machine learning to support diverse artificial intelligence (AI) applications throughout a wide range of industries.
A user can take a photo of an object with the camera or choose one from the gallery. The Object Detection and Tracking service then searches for the same or similar objects in the pre-established object image library and returns the IDs of those objects and related information.
We can capture an image, or choose one from the gallery, of any kind of object to buy it or check its price using machine learning. The app also shows other options so that the user can make a better buying decision.
Prerequisite
1. Xamarin Framework
2. Huawei phone
3. Visual Studio 2019
App Gallery Integration process
1. Sign In and Create or Choose a project on AppGallery Connect portal.
2. Add SHA-256 key.
3. Navigate to Project settings and download the configuration file.
4. Navigate to General Information, and then provide Data Storage location.
5. Navigate to Manage APIs and enable the APIs required by the application.
Xamarin ML Kit Setup Process
1. Download the Xamarin plugin (all the AAR and ZIP files) from the URL below:
https://developer.huawei.com/consum...Library-V1/xamarin-plugin-0000001053510381-V1
2. Open the XHms-ML-Kit-Library-Project.sln solution in Visual Studio.
3. Navigate to Solution Explorer, right-click the Jars folder, choose Add > Existing Item, and select the AAR files downloaded in Step 1.
4. Right-click each added AAR file, then choose Properties > Build Action > LibraryProjectZip.
Note: Repeat steps 3 and 4 for every AAR file.
5. Build the library to generate the DLL files.
Xamarin App Development
1. Open Visual Studio 2019 and create a new project.
2. Navigate to Solution Explorer > Project > Assets and add the JSON file.
3. Navigate to Solution Explorer > Project > Add > Add New Folder.
4. Navigate to the created folder, choose Add > Add Existing, and add all DLL files.
5. Select all DLL files.
6. Right-click them, choose Properties, and set Build Action to None.
7. Navigate to Solution Explorer > Project > References, right-click, choose Add References, then browse to the recently added folder and add all DLL files.
8. After the references are added, click OK.
ML Object Detection and Tracking API Integration
Camera stream detection
You can process camera streams, convert video frames into an MLFrame object, and detect objects using the static image detection method. If the synchronous detection API is called, you can also use the LensEngine class built into the SDK to detect objects in camera streams. The sample code is as follows:
1. Create an object analyzer.
// Create an object analyzer
// Use MLObjectAnalyzerSetting.TypeVideo for video stream detection.
// Use MLObjectAnalyzerSetting.TypePicture for static image detection.
MLObjectAnalyzerSetting setting = new MLObjectAnalyzerSetting.Factory().SetAnalyzerType(MLObjectAnalyzerSetting.TypeVideo)
.AllowMultiResults()
.AllowClassification()
.Create();
analyzer = MLAnalyzerFactory.Instance.GetLocalObjectAnalyzer(setting);
2. Create the ObjectAnalyzerTransactor class for processing detection results. This class implements the MLAnalyzer.IMLTransactor API and uses the TransactResult method in this API to obtain the detection results and implement specific services.
public class ObjectAnalyseMLTransactor : Java.Lang.Object, MLAnalyzer.IMLTransactor
{
public void Destroy()
{
}
public void TransactResult(MLAnalyzer.Result results)
{
SparseArray objectSparseArray = results.AnalyseList;
}
}
3. Set the detection result processor to bind the analyzer to the result processor.
analyzer.SetTransactor(new ObjectAnalyseMLTransactor());
4. Create an instance of the LensEngine class provided by the HMS Core ML SDK to capture dynamic camera streams and pass the streams to the analyzer.
Context context = this.ApplicationContext;
// Create LensEngine
LensEngine lensEngine = new LensEngine.Creator(context, this.analyzer).SetLensType(this.lensType)
.ApplyDisplayDimension(640, 480)
.ApplyFps(25.0f)
.EnableAutomaticFocus(true)
.Create();
5. Call the run method to start the camera and read camera streams for detection.
if (lensEngine != null)
{
try
{
preview.start(lensEngine, overlay);
}
catch (Exception e)
{
lensEngine.Release();
lensEngine = null;
}
}
6. After the detection is complete, stop the analyzer to release detection resources.
if (analyzer != null) {
analyzer.Stop();
}
if (lensEngine != null) {
lensEngine.Release();
}
LiveObjectAnalyseActivity.cs
This activity performs all the operations related to object detection and tracking with the camera.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Android;
using Android.App;
using Android.Content;
using Android.Content.PM;
using Android.OS;
using Android.Runtime;
using Android.Support.V4.App;
using Android.Support.V7.App;
using Android.Util;
using Android.Views;
using Android.Widget;
using Com.Huawei.Hms.Mlsdk;
using Com.Huawei.Hms.Mlsdk.Common;
using Com.Huawei.Hms.Mlsdk.Objects;
using HmsXamarinMLDemo.Camera;
namespace HmsXamarinMLDemo.MLKitActivities.ImageRelated.Object
{
[Activity(Label = "LiveObjectAnalyseActivity")]
public class LiveObjectAnalyseActivity : AppCompatActivity, View.IOnClickListener
{
private const string Tag = "LiveObjectAnalyseActivity";
private const int CameraPermissionCode = 1;
public const int StopPreview = 1;
public const int StartPreview = 2;
private MLObjectAnalyzer analyzer;
private LensEngine mLensEngine;
private bool isStarted = true;
private LensEnginePreview mPreview;
private GraphicOverlay mOverlay;
private int lensType = LensEngine.BackLens;
public bool mlsNeedToDetect = true;
public ObjectAnalysisHandler mHandler;
protected override void OnCreate(Bundle savedInstanceState)
{
base.OnCreate(savedInstanceState);
this.SetContentView(Resource.Layout.activity_live_object_analyse);
if (savedInstanceState != null)
{
this.lensType = savedInstanceState.GetInt("lensType");
}
this.mPreview = (LensEnginePreview)this.FindViewById(Resource.Id.object_preview);
this.mOverlay = (GraphicOverlay)this.FindViewById(Resource.Id.object_overlay);
this.CreateObjectAnalyzer();
this.FindViewById(Resource.Id.detect_start).SetOnClickListener(this);
mHandler = new ObjectAnalysisHandler(this);
// Checking Camera Permissions
if (ActivityCompat.CheckSelfPermission(this, Manifest.Permission.Camera) == Permission.Granted)
{
this.CreateLensEngine();
}
else
{
this.RequestCameraPermission();
}
}
//Request permission
private void RequestCameraPermission()
{
string[] permissions = new string[] { Manifest.Permission.Camera };
if (!ActivityCompat.ShouldShowRequestPermissionRationale(this, Manifest.Permission.Camera))
{
ActivityCompat.RequestPermissions(this, permissions, CameraPermissionCode);
return;
}
}
/// <summary>
/// Start Lens Engine on OnResume() event.
/// </summary>
protected override void OnResume()
{
base.OnResume();
this.StartLensEngine();
}
/// <summary>
/// Stop Lens Engine on OnPause() event.
/// </summary>
protected override void OnPause()
{
base.OnPause();
this.mPreview.stop();
}
/// <summary>
/// Stop analyzer on OnDestroy() event.
/// </summary>
protected override void OnDestroy()
{
base.OnDestroy();
if (this.mLensEngine != null)
{
this.mLensEngine.Release();
}
if (this.analyzer != null)
{
try
{
this.analyzer.Stop();
}
catch (Exception e)
{
Log.Info(LiveObjectAnalyseActivity.Tag, "Stop failed: " + e.Message);
}
}
}
public override void OnRequestPermissionsResult(int requestCode, string[] permissions, [GeneratedEnum] Permission[] grantResults)
{
if (requestCode != LiveObjectAnalyseActivity.CameraPermissionCode)
{
base.OnRequestPermissionsResult(requestCode, permissions, grantResults);
return;
}
if (grantResults.Length != 0 && grantResults[0] == Permission.Granted)
{
this.CreateLensEngine();
return;
}
}
protected override void OnSaveInstanceState(Bundle outState)
{
outState.PutInt("lensType", this.lensType);
base.OnSaveInstanceState(outState);
}
private void StopPreviewAction()
{
this.mlsNeedToDetect = false;
if (this.mLensEngine != null)
{
this.mLensEngine.Release();
}
if (this.analyzer != null)
{
try
{
this.analyzer.Stop();
}
catch (Exception e)
{
Log.Info("object", "Stop failed: " + e.Message);
}
}
this.isStarted = false;
}
private void StartPreviewAction()
{
if (this.isStarted)
{
return;
}
this.CreateObjectAnalyzer();
this.mPreview.release();
this.CreateLensEngine();
this.StartLensEngine();
this.isStarted = true;
}
private void CreateLensEngine()
{
Context context = this.ApplicationContext;
// Create LensEngine
this.mLensEngine = new LensEngine.Creator(context, this.analyzer).SetLensType(this.lensType)
.ApplyDisplayDimension(640, 480)
.ApplyFps(25.0f)
.EnableAutomaticFocus(true)
.Create();
}
private void StartLensEngine()
{
if (this.mLensEngine != null)
{
try
{
this.mPreview.start(this.mLensEngine, this.mOverlay);
}
catch (Exception e)
{
Log.Info(LiveObjectAnalyseActivity.Tag, "Failed to start lens engine.", e);
this.mLensEngine.Release();
this.mLensEngine = null;
}
}
}
public void OnClick(View v)
{
this.mHandler.SendEmptyMessage(LiveObjectAnalyseActivity.StartPreview);
}
private void CreateObjectAnalyzer()
{
// Create an object analyzer
// Use MLObjectAnalyzerSetting.TypeVideo for video stream detection.
// Use MLObjectAnalyzerSetting.TypePicture for static image detection.
MLObjectAnalyzerSetting setting =
new MLObjectAnalyzerSetting.Factory().SetAnalyzerType(MLObjectAnalyzerSetting.TypeVideo)
.AllowMultiResults()
.AllowClassification()
.Create();
this.analyzer = MLAnalyzerFactory.Instance.GetLocalObjectAnalyzer(setting);
this.analyzer.SetTransactor(new ObjectAnalyseMLTransactor(this));
}
public class ObjectAnalysisHandler : Android.OS.Handler
{
private LiveObjectAnalyseActivity liveObjectAnalyseActivity;
public ObjectAnalysisHandler(LiveObjectAnalyseActivity LiveObjectAnalyseActivity)
{
this.liveObjectAnalyseActivity = LiveObjectAnalyseActivity;
}
public override void HandleMessage(Message msg)
{
base.HandleMessage(msg);
switch (msg.What)
{
case LiveObjectAnalyseActivity.StartPreview:
this.liveObjectAnalyseActivity.mlsNeedToDetect = true;
//Log.d("object", "start to preview");
this.liveObjectAnalyseActivity.StartPreviewAction();
break;
case LiveObjectAnalyseActivity.StopPreview:
this.liveObjectAnalyseActivity.mlsNeedToDetect = false;
//Log.d("object", "stop to preview");
this.liveObjectAnalyseActivity.StopPreviewAction();
break;
default:
break;
}
}
}
public class ObjectAnalyseMLTransactor : Java.Lang.Object, MLAnalyzer.IMLTransactor
{
private LiveObjectAnalyseActivity liveObjectAnalyseActivity;
public ObjectAnalyseMLTransactor(LiveObjectAnalyseActivity LiveObjectAnalyseActivity)
{
this.liveObjectAnalyseActivity = LiveObjectAnalyseActivity;
}
public void Destroy()
{
}
public void TransactResult(MLAnalyzer.Result result)
{
if (!liveObjectAnalyseActivity.mlsNeedToDetect) {
return;
}
this.liveObjectAnalyseActivity.mOverlay.Clear();
SparseArray objectSparseArray = result.AnalyseList;
for (int i = 0; i < objectSparseArray.Size(); i++)
{
MLObjectGraphic graphic = new MLObjectGraphic(liveObjectAnalyseActivity.mOverlay, ((MLObject)(objectSparseArray.ValueAt(i))));
liveObjectAnalyseActivity.mOverlay.Add(graphic);
}
// When you need to implement a scene that stops after recognizing specific content
// and continues to recognize after finishing processing, refer to this code
for (int i = 0; i < objectSparseArray.Size(); i++)
{
if (((MLObject)(objectSparseArray.ValueAt(i))).TypeIdentity == MLObject.TypeFood)
{
liveObjectAnalyseActivity.mlsNeedToDetect = true;
liveObjectAnalyseActivity.mHandler.SendEmptyMessage(LiveObjectAnalyseActivity.StopPreview);
}
}
}
}
}
}
Xamarin App Build
1. Navigate to Solution Explorer > Project, right-click, and choose Archive/View Archive to generate the SHA-256 for the release build, then click Distribute.
2. Choose Distribution Channel > Ad Hoc to sign apk.
3. Choose Demo Keystore to release apk.
4. Finally here is the Result.
Tips and Tricks
1. HUAWEI ML Kit complies with GDPR requirements for data processing.
2. HUAWEI ML Kit does not support the recognition of the object distance and colour.
3. Images in PNG, JPG, JPEG, and BMP formats are supported. GIF images are not supported.
Conclusion
In this article, we have learned how to integrate HMS ML Kit in a Xamarin-based Android application. Users can easily search for objects online with the help of the Object Detection and Tracking API in this application.
Thanks for reading this article.
Be sure to like and comment on this article if you found it helpful. It means a lot to me.
References
https://developer.huawei.com/consum...n-Guides/object-detect-track-0000001052607676
Very useful article, thanks for sharing.
Very interesting article.
Does it support custom models?
Overview
In this article, I will create a demo app that integrates HMS Account Kit and Awareness Kit and is built with the cross-platform technology Xamarin. Users can easily sign in with their HUAWEI ID and get weather information for their city. I have implemented HUAWEI ID sign-in and Weather Awareness for weather forecasting.
Account Kit Service Introduction
HMS Account Kit allows you to connect to the Huawei ecosystem using your HUAWEI ID from a range of devices, such as mobile phones, tablets, and smart screens.
It provides simple, secure, and quick sign-in and authorization functions, so users do not have to enter accounts and passwords and wait for authentication.
It complies with international standards and protocols such as OAuth 2.0 and OpenID Connect, and supports two-factor authentication (password authentication and mobile number authentication) to ensure high security.
Weather Awareness Service Introduction
HMS Weather Awareness Kit provides your app with the ability to obtain contextual information, including the user's current time, location, behavior, audio device status, ambient light, weather, and nearby beacons. Your app can gain insight into the user's current situation more efficiently, making it possible to deliver a smarter, more considerate user experience.
Prerequisite
1. Xamarin Framework
2. Huawei phone
3. Visual Studio 2019
App Gallery Integration process
1. Sign In and Create or Choose a project on AppGallery Connect portal.
2. Add SHA-256 key.
3. Navigate to Project settings and download the configuration file.
4. Navigate to General Information, and then provide Data Storage location.
5. Navigate to Manage APIs and enable the APIs required by the application.
Xamarin Account Kit Setup Process
1. Download the Xamarin plugin (all the AAR and ZIP files) from the URL below:
https://developer.huawei.com/consum...y-V1/xamarin-sdk-download-0000001050768441-V1
2. Open the XHwid-5.03.302.sln solution in Visual Studio.
Xamarin Weather Awareness Kit Setup Process
1. Download Xamarin Plugin all the aar and zip files from below url:
https://developer.huawei.com/consum...Plugin-Library-V1/xamarin-0000001061535799-V1
2. Open the XAwarness-1.0.7.303.sln solution in Visual Studio.
3. Navigate to Solution Explorer, right-click the Jars folder, choose Add > Existing Item, and select the AAR files downloaded in Step 1.
4. Right-click each added AAR file, then choose Properties > Build Action > LibraryProjectZip.
Note: Repeat steps 3 and 4 for every AAR file.
5. Build the library to generate the DLL files.
Xamarin App Development
1. Open Visual Studio 2019 and create a new project.
2. Navigate to Solution Explorer > Project > Assets and add the JSON file.
3. Navigate to Solution Explorer > Project > Add > Add New Folder.
4. Navigate to the created folder, choose Add > Add Existing, and add all DLL files.
5. Right-click the DLL files, choose Properties, and set Build Action to None.
6. Navigate to Solution Explorer > Project > References, right-click, choose Add References, then browse to the recently added folder and add all DLL files.
7. After the references are added, click OK.
Account Kit Integration
Development Procedure
1. Call the HuaweiIdAuthParamsHelper.SetAuthorizationCode method to send an authorization request.
HuaweiIdAuthParams mAuthParam;
mAuthParam = new HuaweiIdAuthParamsHelper(HuaweiIdAuthParams.DefaultAuthRequestParam)
.SetProfile()
.SetAuthorizationCode()
.CreateParams();
2. Call the GetService method of HuaweiIdAuthManager to initialize the IHuaweiIdAuthService object.
IHuaweiIdAuthService mAuthManager;
mAuthManager = HuaweiIdAuthManager.GetService(this, mAuthParam);
3. Call the IHuaweiIdAuthService.SignInIntent method to bring up the HUAWEI ID authorization & sign-in screen.
StartActivityForResult(mAuthManager.SignInIntent, 8888);
4. Process the result after the authorization & sign-in is complete.
protected override void OnActivityResult(int requestCode, Result resultCode, Intent data)
{
base.OnActivityResult(requestCode, resultCode, data);
if (requestCode == 8888)
{
//login success
Task authHuaweiIdTask = HuaweiIdAuthManager.ParseAuthResultFromIntent(data);
if (authHuaweiIdTask.IsSuccessful)
{
AuthHuaweiId huaweiAccount = (AuthHuaweiId)authHuaweiIdTask.TaskResult();
Log.Info(TAG, "signIn get code success.");
Log.Info(TAG, "ServerAuthCode: " + huaweiAccount.AuthorizationCode);
}
else
{
Log.Info(TAG, "signIn failed: " +((ApiException)authHuaweiIdTask.Exception).StatusCode);
}
}
}
LoginActivity.cs
This activity performs all the operations related to signing in with HUAWEI ID.
using Android.App;
using Android.Content;
using Android.Content.PM;
using Android.OS;
using Android.Runtime;
using Android.Support.V4.App;
using Android.Support.V4.Content;
using Android.Support.V7.App;
using Android.Util;
using Android.Views;
using Android.Widget;
using Com.Huawei.Agconnect.Config;
using Com.Huawei.Hmf.Tasks;
using Com.Huawei.Hms.Common;
using Com.Huawei.Hms.Support.Hwid;
using Com.Huawei.Hms.Support.Hwid.Request;
using Com.Huawei.Hms.Support.Hwid.Result;
using Com.Huawei.Hms.Support.Hwid.Service;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace WeatherAppDemo
{
[Activity(Label = "LoginActivity", Theme = "@style/AppTheme", MainLauncher = true)]
public class LoginActivity : AppCompatActivity
{
private static String TAG = "LoginActivity";
private HuaweiIdAuthParams mAuthParam;
public static IHuaweiIdAuthService mAuthManager;
private Button btnLoginWithHuaweiId;
protected override void OnCreate(Bundle savedInstanceState)
{
base.OnCreate(savedInstanceState);
Xamarin.Essentials.Platform.Init(this, savedInstanceState);
SetContentView(Resource.Layout.login_activity);
btnLoginWithHuaweiId = FindViewById<Button>(Resource.Id.btn_huawei_id);
btnLoginWithHuaweiId.Click += delegate
{
// Write code for Huawei id button click
mAuthParam = new HuaweiIdAuthParamsHelper(HuaweiIdAuthParams.DefaultAuthRequestParam)
.SetIdToken().SetEmail()
.SetAccessToken()
.CreateParams();
mAuthManager = HuaweiIdAuthManager.GetService(this, mAuthParam);
StartActivityForResult(mAuthManager.SignInIntent, 1011);
};
checkPermission(new string[] { Android.Manifest.Permission.Internet,
Android.Manifest.Permission.AccessNetworkState,
Android.Manifest.Permission.ReadSms,
Android.Manifest.Permission.ReceiveSms,
Android.Manifest.Permission.SendSms,
Android.Manifest.Permission.BroadcastSms}, 100);
}
public void checkPermission(string[] permissions, int requestCode)
{
foreach (string permission in permissions)
{
if (ContextCompat.CheckSelfPermission(this, permission) == Permission.Denied)
{
ActivityCompat.RequestPermissions(this, permissions, requestCode);
}
}
}
public override void OnRequestPermissionsResult(int requestCode, string[] permissions, [GeneratedEnum] Android.Content.PM.Permission[] grantResults)
{
Xamarin.Essentials.Platform.OnRequestPermissionsResult(requestCode, permissions, grantResults);
base.OnRequestPermissionsResult(requestCode, permissions, grantResults);
}
protected override void AttachBaseContext(Context context)
{
base.AttachBaseContext(context);
AGConnectServicesConfig config = AGConnectServicesConfig.FromContext(context);
config.OverlayWith(new HmsLazyInputStream(context));
}
protected override void OnActivityResult(int requestCode, Result resultCode, Intent data)
{
base.OnActivityResult(requestCode, resultCode, data);
if (requestCode == 1011 || requestCode == 1022)
{
//login success
Task authHuaweiIdTask = HuaweiIdAuthManager.ParseAuthResultFromIntent(data);
if (authHuaweiIdTask.IsSuccessful)
{
AuthHuaweiId huaweiAccount = (AuthHuaweiId)authHuaweiIdTask.TaskResult();
Log.Info(TAG, "signIn get code success.");
Log.Info(TAG, "ServerAuthCode: " + huaweiAccount.AuthorizationCode);
Toast.MakeText(Android.App.Application.Context, "SignIn Success", ToastLength.Short).Show();
navigateToHomeScreen(huaweiAccount);
}
else
{
Log.Info(TAG, "signIn failed: " + ((ApiException)authHuaweiIdTask.Exception).StatusCode);
Toast.MakeText(Android.App.Application.Context, ((ApiException)authHuaweiIdTask.Exception).StatusCode.ToString(), ToastLength.Short).Show();
Toast.MakeText(Android.App.Application.Context, "SignIn Failed", ToastLength.Short).Show();
}
}
}
private void showLogoutButton()
{
/*logout.Visibility = Android.Views.ViewStates.Visible;*/
}
private void hideLogoutButton()
{
/*logout.Visibility = Android.Views.ViewStates.Gone;*/
}
private void navigateToHomeScreen(AuthHuaweiId data)
{
Intent intent = new Intent(this, typeof(MainActivity));
intent.PutExtra("name", data.DisplayName.ToString());
intent.PutExtra("email", data.Email.ToString());
intent.PutExtra("image", data.PhotoUriString.ToString());
StartActivity(intent);
Finish();
}
}
}
Weather Awareness API Integration
Assigning Permissions in the Manifest File
Before calling the weather awareness capability, assign required permissions in the manifest file.
<!-- Location permission. This permission is sensitive and needs to be dynamically applied for in the code after being declared. -->
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
Developing Capabilities
Call the weather capability API through the Capture Client object.
private async void GetWeatherStatus()
{
var weatherTask = Awareness.GetCaptureClient(this).GetWeatherByDeviceAsync();
await weatherTask;
if (weatherTask.IsCompleted && weatherTask.Result != null)
{
IWeatherStatus weatherStatus = weatherTask.Result.WeatherStatus;
WeatherSituation weatherSituation = weatherStatus.WeatherSituation;
Situation situation = weatherSituation.Situation;
string result = $"City:{weatherSituation.City.Name}\n";
result += $"Weather id is {situation.WeatherId}\n";
result += $"CN Weather id is {situation.CnWeatherId}\n";
result += $"Temperature is {situation.TemperatureC} Celsius";
result += $", {situation.TemperatureF} Fahrenheit\n";
result += $"Wind speed is {situation.WindSpeed}km/h\n";
result += $"Wind direction is {situation.WindDir}\n";
result += $"Humidity is {situation.Humidity}%";
}
else
{
var exception = weatherTask.Exception;
string errorMessage = $"{AwarenessStatusCodes.GetMessage(exception.GetStatusCode())}: {exception.Message}";
}
}
MainActivity.cs
This activity performs all the operations related to the Weather Awareness API, such as retrieving the current city's weather and other information.
using System;
using Android;
using Android.App;
using Android.OS;
using Android.Runtime;
using Android.Support.Design.Widget;
using Android.Support.V4.View;
using Android.Support.V4.Widget;
using Android.Support.V7.App;
using Android.Views;
using Com.Huawei.Hms.Kit.Awareness;
using Com.Huawei.Hms.Kit.Awareness.Status;
using Com.Huawei.Hms.Kit.Awareness.Status.Weather;
namespace WeatherAppDemo
{
[Activity(Label = "@string/app_name", Theme = "@style/AppTheme.NoActionBar")]
public class MainActivity : AppCompatActivity, NavigationView.IOnNavigationItemSelectedListener
{
protected override void OnCreate(Bundle savedInstanceState)
{
base.OnCreate(savedInstanceState);
Xamarin.Essentials.Platform.Init(this, savedInstanceState);
SetContentView(Resource.Layout.activity_main);
Android.Support.V7.Widget.Toolbar toolbar = FindViewById<Android.Support.V7.Widget.Toolbar>(Resource.Id.toolbar);
SetSupportActionBar(toolbar);
DrawerLayout drawer = FindViewById<DrawerLayout>(Resource.Id.drawer_layout);
ActionBarDrawerToggle toggle = new ActionBarDrawerToggle(this, drawer, toolbar, Resource.String.navigation_drawer_open, Resource.String.navigation_drawer_close);
drawer.AddDrawerListener(toggle);
toggle.SyncState();
NavigationView navigationView = FindViewById<NavigationView>(Resource.Id.nav_view);
navigationView.SetNavigationItemSelectedListener(this);
}
private async void GetWeatherStatus()
{
var weatherTask = Awareness.GetCaptureClient(this).GetWeatherByDeviceAsync();
await weatherTask;
if (weatherTask.IsCompleted && weatherTask.Result != null)
{
IWeatherStatus weatherStatus = weatherTask.Result.WeatherStatus;
WeatherSituation weatherSituation = weatherStatus.WeatherSituation;
Situation situation = weatherSituation.Situation;
string result = $"City:{weatherSituation.City.Name}\n";
result += $"Weather id is {situation.WeatherId}\n";
result += $"CN Weather id is {situation.CnWeatherId}\n";
result += $"Temperature is {situation.TemperatureC} Celsius";
result += $", {situation.TemperatureF} Fahrenheit\n";
result += $"Wind speed is {situation.WindSpeed}km/h\n";
result += $"Wind direction is {situation.WindDir}\n";
result += $"Humidity is {situation.Humidity}%";
}
else
{
var exception = weatherTask.Exception;
string errorMessage = $"{AwarenessStatusCodes.GetMessage(exception.GetStatusCode())}: {exception.Message}";
}
}
public override void OnBackPressed()
{
DrawerLayout drawer = FindViewById<DrawerLayout>(Resource.Id.drawer_layout);
if(drawer.IsDrawerOpen(GravityCompat.Start))
{
drawer.CloseDrawer(GravityCompat.Start);
}
else
{
base.OnBackPressed();
}
}
public override bool OnCreateOptionsMenu(IMenu menu)
{
MenuInflater.Inflate(Resource.Menu.menu_main, menu);
return true;
}
public override bool OnOptionsItemSelected(IMenuItem item)
{
int id = item.ItemId;
if (id == Resource.Id.action_settings)
{
return true;
}
return base.OnOptionsItemSelected(item);
}
public bool OnNavigationItemSelected(IMenuItem item)
{
int id = item.ItemId;
if (id == Resource.Id.nav_camera)
{
// Handle the camera action
}
else if (id == Resource.Id.nav_gallery)
{
}
else if (id == Resource.Id.nav_slideshow)
{
}
else if (id == Resource.Id.nav_manage)
{
}
else if (id == Resource.Id.nav_share)
{
}
else if (id == Resource.Id.nav_send)
{
}
DrawerLayout drawer = FindViewById<DrawerLayout>(Resource.Id.drawer_layout);
drawer.CloseDrawer(GravityCompat.Start);
return true;
}
public override void OnRequestPermissionsResult(int requestCode, string[] permissions, [GeneratedEnum] Android.Content.PM.Permission[] grantResults)
{
Xamarin.Essentials.Platform.OnRequestPermissionsResult(requestCode, permissions, grantResults);
base.OnRequestPermissionsResult(requestCode, permissions, grantResults);
}
}
}
Xamarin App Build Result
1. Navigate to Solution Explorer > Project, right-click, and choose Archive/View Archive to generate the SHA-256 for the release build, then click Distribute.
2. Choose Distribution Channel > Ad Hoc to sign apk.
3. Choose Demo keystore to release apk.
4. Build succeed and Save apk file.
5. Finally here is the result.
Tips and Tricks
1. Awareness Kit supports wearable Android devices, but HUAWEI HMS Core 4.0 is not deployed on devices other than mobile phones. Therefore, wearable devices are not supported currently.
2. Cloud capabilities are required to sense time information and weather.
3. 10012: HMS Core does not have the behaviour recognition permission.
Conclusion
In this article, we have learned how to integrate HMS Weather Awareness and Account Kit in a Xamarin-based Android application. Users can easily sign in and check the weather forecast.
Thanks for reading this article.
Be sure to like and comment on this article if you found it helpful. It means a lot to me.
References
https://developer.huawei.com/consum...lugin-Guides/sign-in-idtoken-0000001051086088
https://developer.huawei.com/consum...-Guides/service-introduction-0000001062540020
Introduction
In this article, I will explain what skeleton detection is and how it works in Flutter. By the end of this tutorial, we will have built Huawei skeleton detection into a Flutter application using Huawei ML Kit.
What is Skeleton detection?
The Huawei ML Kit skeleton detection service detects the human body and represents the orientation of a person in a graphical format. Essentially, it returns a set of coordinates that can be connected to describe the position of the person. The service detects and locates key points of the human body, such as the top of the head, neck, shoulders, elbows, wrists, hips, knees, and ankles. Currently, full-body and half-body static image recognition and real-time camera stream recognition are supported.
What is the use of Skeleton detection?
You may wonder what this service is actually used for. For example, if you are developing a fitness application, you can use the coordinates from skeleton detection to check whether the user has performed the exact movements during exercises, or you could develop a game based on dance movements. Either way, the app can easily tell whether the user has performed the exercise properly.
How does it work?
You can run skeleton detection on a static image or on a real-time camera stream. Either way, you get the coordinates of the human body, covering key areas such as the head, neck, shoulders, elbows, wrists, hips, knees, and ankles. Both methods can detect multiple human bodies.
There are two analyzer types for skeleton detection.
1. TYPE_NORMAL
2. TYPE_YOGA
TYPE_NORMAL: If you set the analyzer type to TYPE_NORMAL, it detects skeletal points for a normal standing posture.
TYPE_YOGA: If you set the analyzer type to TYPE_YOGA, it detects skeletal points for yoga postures.
Note: The default mode is to detect skeleton points for normal postures.
Integration of Skeleton Detection
1. Configure the application on the AGC.
2. Client application development process.
Configure application on the AGC
This phase involves the following steps.
Step 1: Register a developer account in AppGallery Connect. If you are already a developer, skip this step.
Step 2: Create an app by referring to Creating a Project and Creating an App in the Project.
Step 3: Set the data storage location based on the current location.
Step 4: Enable ML Kit. Open AppGallery Connect and choose Manage APIs > ML Kit.
Step 5: Generating a Signing Certificate Fingerprint.
Step 6: Configuring the Signing Certificate Fingerprint.
Step 7: Download the agconnect-services.json file and paste it into the app root directory.
Client application development process
This phase involves the following steps.
Step 1: Create a Flutter application in Android Studio (or any IDE you prefer).
Step 2: Add the app-level Gradle configuration in Android > app > build.gradle.
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Root-level Gradle dependencies.
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
Step 3: Add the downloaded plugin to pubspec.yaml.
Step 4: Place the downloaded plugin folder outside the project directory and declare the plugin path under dependencies in the pubspec.yaml file.
dependencies:
flutter:
sdk: flutter
huawei_account:
path: ../huawei_account/
huawei_location:
path: ../huawei_location/
huawei_map:
path: ../huawei_map/
huawei_analytics:
path: ../huawei_analytics/
huawei_site:
path: ../huawei_site/
huawei_push:
path: ../huawei_push/
huawei_dtm:
path: ../huawei_dtm/
huawei_ml:
path: ../huawei_ml/
agconnect_crash: ^1.0.0
agconnect_remote_config: ^1.0.0
http: ^0.12.2
camera:
path_provider:
path:
image_picker:
fluttertoast: ^7.1.6
shared_preferences: ^0.5.12+4
To build the skeleton detection example, follow these steps.
1. AGC Configuration
2. Build Flutter application
Step 1: AGC Configuration
1. Sign in to AppGallery Connect and select My apps.
2. Select the app in which you want to integrate the Huawei ML kit.
3. Navigate to Project Setting > Manage API > ML Kit
Step 2: Build Flutter application
In this example, I pick an image from the gallery or capture one with the camera, and then obtain the skeleton and joint points from ML Kit skeleton detection.
import 'dart:io';
import 'package:flutter/material.dart';
import 'package:huawei_ml/huawei_ml.dart';
import 'package:huawei_ml/skeleton/ml_skeleton_analyzer.dart';
import 'package:huawei_ml/skeleton/ml_skeleton_analyzer_setting.dart';
import 'package:image_picker/image_picker.dart';
class SkeletonDetection extends StatefulWidget {
@override
_SkeletonDetectionState createState() => _SkeletonDetectionState();
}
class _SkeletonDetectionState extends State<SkeletonDetection> {
MLSkeletonAnalyzer analyzer;
MLSkeletonAnalyzerSetting setting;
List<MLSkeleton> skeletons;
double _x = 0;
double _y = 0;
double _score = 0;
@override
void initState() {
// TODO: implement initState
analyzer = new MLSkeletonAnalyzer();
setting = new MLSkeletonAnalyzerSetting();
super.initState();
}
@override
Widget build(BuildContext context) {
return Scaffold(
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
_setImageView()
],
),
),
floatingActionButton: FloatingActionButton(
onPressed: () {
_showSelectionDialog(context);
},
child: Icon(Icons.camera_alt),
),
);
}
Future<void> _showSelectionDialog(BuildContext context) {
return showDialog(
context: context,
builder: (BuildContext context) {
return AlertDialog(
title: Text("From where do you want to take the photo?"),
content: SingleChildScrollView(
child: ListBody(
children: <Widget>[
GestureDetector(
child: Text("Gallery"),
onTap: () {
_openGallery(context);
},
),
Padding(padding: EdgeInsets.all(8.0)),
GestureDetector(
child: Text("Camera"),
onTap: () {
_openCamera();
},
)
],
),
));
});
}
File imageFile;
void _openGallery(BuildContext context) async {
var picture = await ImagePicker.pickImage(source: ImageSource.gallery);
this.setState(() {
imageFile = picture;
_skeletonDetection();
});
Navigator.of(context).pop();
}
_openCamera() async {
PickedFile pickedFile = await ImagePicker().getImage(
source: ImageSource.camera,
maxWidth: 800,
maxHeight: 800,
);
if (pickedFile != null) {
imageFile = File(pickedFile.path);
this.setState(() {
imageFile = imageFile;
_skeletonDetection();
});
}
Navigator.of(context).pop();
}
Widget _setImageView() {
if (imageFile != null) {
return Image.file(imageFile, width: 500, height: 500);
} else {
return Text("Please select an image");
}
}
_skeletonDetection() async {
// Create a skeleton analyzer.
analyzer = new MLSkeletonAnalyzer();
// Configure the recognition settings.
setting = new MLSkeletonAnalyzerSetting();
setting.path = imageFile.path;
setting.analyzerType = MLSkeletonAnalyzerSetting.TYPE_NORMAL; // Normal posture.
// Get recognition result asynchronously.
List<MLSkeleton> list = await analyzer.asyncSkeletonDetection(setting);
print("Result data: "+list[0].toJson().toString());
// After the recognition ends, stop the analyzer.
bool res = await analyzer.stopSkeletonDetection();
}
}
Result
Tips and Tricks
Download the latest HMS Flutter plugin.
Make sure the dependencies are downloaded properly.
The latest HMS Core APK is required.
If you are taking an image from the camera or gallery, make sure your app has camera and storage permissions.
Conclusion
In this article, we have learned how to integrate Huawei ML Kit skeleton detection: what it is, how it works, what it is used for, how to get the joint points from the detection results, and the two detection types, TYPE_NORMAL and TYPE_YOGA.
Reference
Skeleton Detection
Happy coding
{
"lightbox_close": "Close",
"lightbox_next": "Next",
"lightbox_previous": "Previous",
"lightbox_error": "The requested content cannot be loaded. Please try again later.",
"lightbox_start_slideshow": "Start slideshow",
"lightbox_stop_slideshow": "Stop slideshow",
"lightbox_full_screen": "Full screen",
"lightbox_thumbnails": "Thumbnails",
"lightbox_download": "Download",
"lightbox_share": "Share",
"lightbox_zoom": "Zoom",
"lightbox_new_window": "New window",
"lightbox_toggle_sidebar": "Toggle sidebar"
}
Introduction
In this article, I will explain what is Skeleton detection? How does Skeleton detection work in Android? At the end of this tutorial, we will create the Huawei Skeleton detection in an Android application using Huawei ML Kit.
What is Skeleton detection?
Huawei ML Kit Skeleton detection service detects the human body. So, represents the orientation of a person in a graphical format. Essentially, it’s a set of coordinates that can be connected to describe the position of the person. This service detects and locates key points of the human body such as the top of the head, neck, shoulders, elbows, wrists, hips, knees, and ankles. Currently, full-body and half-body static image recognition and real-time camera stream recognition are supported.
What is the use of Skeleton detection?
Definitely, everyone will have the question like what is the use of it. For example, if you want to develop a fitness application, you can understand and help the user with coordinates from skeleton detection to see if the user has made the exact movements during exercises or you could develop a game about dance movements Using this service and ML kit can understand easily whether the user has done proper excise or not.
How does it work?
You can use skeleton detection over a static image or over a real-time camera stream. Either way, you can get the coordinates of the human body. Of course, when taking them, it’s looking out for critical areas like head, neck, shoulders, elbows, wrists, hips, knees, and ankles. At the same time, both methods will detect multiple human bodies.
There are two attributes to detect skeleton.
1. TYPE_NORMAL
2. TYPE_YOGA
TYPE_NORMAL: If you send the analyzer type as TYPE_NORMAL, perceives skeletal points for normal standing position.
TYPE_YOGA: If you send the analyzer type as TYPE_YOGA, it picks up skeletal points for yoga posture.
Note: The default mode is to detect skeleton points for normal postures.
Integration of Skeleton Detection
1. Configure the application on the AGC.
2. Client application development process.
Configure application on the AGC
This step involves a couple of steps, as follows.
Step 1: We need to register as a developer account in AppGallery Connect. If you are already a developer ignore this step.
Step 2: Create an app by referring to Creating a Project and Creating an App in the Project
Step 3: Set the data storage location based on the current location.
Step 4: Enabling ML. Open AppGallery connect, choose Manage API > ML Kit
Step 5: Generating a Signing Certificate Fingerprint.
Step 6: Configuring the Signing Certificate Fingerprint.
Step 7: Download your agconnect-services.json file, paste it into the app root directory.
Client application development process
This step involves a couple of steps, as follows.
Step 1: Create an Android application in the Android studio (Any IDE which is your favorite).
Step 2: Add the App level Gradle dependencies. Choose inside project Android > app > build.gradle.
1
2apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Root level gradle dependencies.
1
2maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
Step 3: Add the dependencies in build.gradle
1
2
3implementation 'com.huawei.hms:ml-computer-vision-skeleton:2.0.4.300'
implementation 'com.huawei.hms:ml-computer-vision-skeleton-model:2.0.4.300'
implementation 'com.huawei.hms:ml-computer-vision-yoga-model:2.0.4.300'
To achieve the Skeleton detection example, follow the steps.
1. AGC Configuration
2. Build Android application
Step 1: AGC Configuration
1. Sign in to AppGallery Connect and select My apps.
2. Select the app in which you want to integrate the Huawei ML kit.
3. Navigate to Project Setting > Manage API > ML Kit
Step 2: Build Android application
In this example, I am getting image from the gallery or Camera and getting the skeleton detection and joints points from the ML kit skeleton detection.
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
private fun initAnalyzer(analyzerType: Int) {
    val setting = MLSkeletonAnalyzerSetting.Factory()
        .setAnalyzerType(analyzerType)
        .create()
    analyzer = MLSkeletonAnalyzerFactory.getInstance().getSkeletonAnalyzer(setting)
    imageSkeletonDetectAsync()
}

private fun initFrame(type: Int) {
    imageView.invalidate()
    val drawable = imageView.drawable as BitmapDrawable
    val originBitmap = drawable.bitmap
    val maxHeight = (imageView.parent as View).height
    val targetWidth = (imageView.parent as View).width
    // Scale the bitmap down so that it fits inside the parent view.
    val scaleFactor = (originBitmap.width.toFloat() / targetWidth.toFloat())
        .coerceAtLeast(originBitmap.height.toFloat() / maxHeight.toFloat())
    val resizedBitmap = Bitmap.createScaledBitmap(
        originBitmap,
        (originBitmap.width / scaleFactor).toInt(),
        (originBitmap.height / scaleFactor).toInt(),
        true
    )
    frame = MLFrame.fromBitmap(resizedBitmap)
    initAnalyzer(type)
}

private fun imageSkeletonDetectAsync() {
    val task: Task<List<MLSkeleton>>? = analyzer?.asyncAnalyseFrame(frame)
    task?.addOnSuccessListener { results ->
        // Detection success: draw the detected skeletons on the overlay.
        val skeletons: List<MLSkeleton>? = getValidSkeletons(results)
        if (skeletons != null && skeletons.isNotEmpty()) {
            graphicOverlay?.clear()
            val skeletonGraphic = SkeletonGraphic(graphicOverlay, results)
            graphicOverlay?.add(skeletonGraphic)
        } else {
            Log.e(TAG, "async analyzer result is null.")
        }
    }?.addOnFailureListener {
        // Detection failure.
        Log.e(TAG, "async analyze failed: " + it.message)
    }
}

private fun stopAnalyzer() {
    if (analyzer != null) {
        try {
            analyzer?.stop()
        } catch (e: IOException) {
            Log.e(TAG, "Failed to stop analyzer: " + e.message)
        }
    }
}

override fun onDestroy() {
    super.onDestroy()
    stopAnalyzer()
}

private fun showPictureDialog() {
    val pictureDialog = AlertDialog.Builder(this)
    pictureDialog.setTitle("Select Action")
    val pictureDialogItems = arrayOf("Select image from gallery", "Capture photo from camera")
    pictureDialog.setItems(pictureDialogItems) { _, which ->
        when (which) {
            0 -> chooseImageFromGallery()
            1 -> takePhotoFromCamera()
        }
    }
    pictureDialog.show()
}

fun chooseImageFromGallery() {
    val galleryIntent = Intent(Intent.ACTION_PICK, MediaStore.Images.Media.EXTERNAL_CONTENT_URI)
    startActivityForResult(galleryIntent, GALLERY)
}

private fun takePhotoFromCamera() {
    val cameraIntent = Intent(MediaStore.ACTION_IMAGE_CAPTURE)
    startActivityForResult(cameraIntent, CAMERA)
}

public override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
    super.onActivityResult(requestCode, resultCode, data)
    if (requestCode == GALLERY) {
        if (data != null) {
            val contentURI = data.data
            try {
                val bitmap = MediaStore.Images.Media.getBitmap(this.contentResolver, contentURI)
                saveImage(bitmap)
                Toast.makeText(this, "Image Show!", Toast.LENGTH_SHORT).show()
                imageView!!.setImageBitmap(bitmap)
            } catch (e: IOException) {
                e.printStackTrace()
                Toast.makeText(this, "Failed", Toast.LENGTH_SHORT).show()
            }
        }
    } else if (requestCode == CAMERA) {
        val thumbnail = data!!.extras!!.get("data") as Bitmap
        imageView!!.setImageBitmap(thumbnail)
        saveImage(thumbnail)
        Toast.makeText(this, "Photo Show!", Toast.LENGTH_SHORT).show()
    }
}

fun saveImage(myBitmap: Bitmap): String {
    val bytes = ByteArrayOutputStream()
    myBitmap.compress(Bitmap.CompressFormat.PNG, 90, bytes)
    val wallpaperDirectory = File(
        Environment.getExternalStorageDirectory().toString() + IMAGE_DIRECTORY)
    Log.d(TAG, wallpaperDirectory.toString())
    if (!wallpaperDirectory.exists()) {
        wallpaperDirectory.mkdirs()
    }
    try {
        val f = File(wallpaperDirectory, Calendar.getInstance().timeInMillis.toString() + ".png")
        f.createNewFile()
        val fo = FileOutputStream(f)
        fo.write(bytes.toByteArray())
        // Make the saved image visible to the gallery.
        MediaScannerConnection.scanFile(this, arrayOf(f.path), arrayOf("image/png"), null)
        fo.close()
        Log.d(TAG, "File saved: " + f.absolutePath)
        return f.absolutePath
    } catch (e1: IOException) {
        e1.printStackTrace()
    }
    return ""
}
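The snippet above is only part of the activity; it relies on a few members, constants, and helpers declared elsewhere in the listing. A minimal sketch of those declarations follows, with names and values that are assumptions rather than the original code. getValidSkeletons() and SkeletonGraphic appear to be helper classes from the sample project that filter low-confidence results and draw the joints, and imageView comes from the layout.
// Assumed declarations inside the activity; names and values are illustrative only.
companion object {
    private const val TAG = "SkeletonDetectionActivity"
    private const val GALLERY = 1                    // request code for the gallery picker
    private const val CAMERA = 2                     // request code for the camera capture
    private const val IMAGE_DIRECTORY = "/skeleton_demo"
}

private var analyzer: MLSkeletonAnalyzer? = null      // created in initAnalyzer()
private var frame: MLFrame? = null                    // built in initFrame()
private var graphicOverlay: GraphicOverlay? = null    // custom overlay view from the sample project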
Result
Tips and Tricks
Make sure all the dependencies are downloaded properly.
The latest HMS Core APK is required on the device.
If you are taking an image from the camera or gallery, make sure your app has the camera and storage permissions.
Conclusion
In this article, we have learned how to integrate the Huawei ML Kit skeleton detection service: what skeleton detection is, how it works, what it is used for, how to get the joint points from the detection result, and the two analyzer types, TYPE_NORMAL and TYPE_YOGA.
Reference
Skeleton Detection
Happy coding
Will it detect all yoga positions?
Introduction
In this article, I will cover live yoga pose detection. In my previous article, I covered yoga pose detection on static images using the Huawei ML Kit; if you have not read it yet, refer to Beginner: Find yoga pose using Huawei ML kit skeleton detection - Part 1.
You may be wondering how this kind of application helps in practice.
Let’s take an example: many people attend yoga classes, but due to COVID-19 they cannot attend them in person. Using Huawei ML Kit skeleton detection, you can record your yoga session with the detected body joints drawn on the video and send it to your yoga teacher, who can then point out the mistakes made during that session.
Integration of Skeleton Detection
1. Configure the application on the AGC.
2. Client application development process.
Configure application on the AGC
Follow the steps.
Step 1: We need to register as a developer account in AppGallery Connect. If you are already a developer ignore this step.
Step 2: Create an app by referring to Creating a Project and Creating an App in the Project
Step 3: Set the data storage location based on the current location.
Step 4: Enabling ML. Open AppGallery Connect, choose Manage API > ML Kit.
Step 5: Generating a Signing Certificate Fingerprint.
Step 6: Configuring the Signing Certificate Fingerprint.
Step 7: Download your agconnect-services.json file, paste it into the app root directory.
Client application development process
Follow the steps.
Step 1: Create an Android application in Android Studio (or any IDE of your choice).
Step 2: Add the App level Gradle dependencies. Choose inside project Android > app > build.gradle.
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Root-level Gradle dependencies.
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
Step 3: Add the dependencies in build.gradle
implementation 'com.huawei.hms:ml-computer-vision-skeleton:2.0.4.300'
implementation 'com.huawei.hms:ml-computer-vision-skeleton-model:2.0.4.300'
implementation 'com.huawei.hms:ml-computer-vision-yoga-model:2.0.4.300'
To build the Skeleton detection example, follow these steps.
1. AGC Configuration
2. Build Android application
Step 1: AGC Configuration
1. Sign in to AppGallery Connect and select My apps.
2. Select the app in which you want to integrate the Huawei ML Kit.
3. Navigate to Project Settings > Manage API > ML Kit.
Step 2: Build Android application
In this example, I detect yoga poses live from the camera stream.
While building the application, follow these steps.
Step 1: Create a Skeleton analyzer.
private var analyzer: MLSkeletonAnalyzer? = null
analyzer = MLSkeletonAnalyzerFactory.getInstance().skeletonAnalyzer
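If you want the yoga posture mode explicitly, the analyzer can also be created from an MLSkeletonAnalyzerSetting, mirroring the setting-based factory call used in Part 1. A short sketch, assuming the TYPE_YOGA constant on MLSkeletonAnalyzerSetting:
val setting = MLSkeletonAnalyzerSetting.Factory()
    .setAnalyzerType(MLSkeletonAnalyzerSetting.TYPE_YOGA)   // TYPE_NORMAL is the default
    .create()
analyzer = MLSkeletonAnalyzerFactory.getInstance().getSkeletonAnalyzer(setting)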
Step 2: Create a SkeletonTransactor class to process the result.
import android.app.Activity
import android.util.Log
import app.dtse.hmsskeletondetection.demo.utils.SkeletonUtils
import app.dtse.hmsskeletondetection.demo.views.graphic.SkeletonGraphic
import app.dtse.hmsskeletondetection.demo.views.overlay.GraphicOverlay
import com.huawei.hms.mlsdk.common.LensEngine
import com.huawei.hms.mlsdk.common.MLAnalyzer
import com.huawei.hms.mlsdk.common.MLAnalyzer.MLTransactor
import com.huawei.hms.mlsdk.skeleton.MLSkeleton
import com.huawei.hms.mlsdk.skeleton.MLSkeletonAnalyzer
import java.util.*

class SkeletonTransactor(
    private val analyzer: MLSkeletonAnalyzer,
    private val graphicOverlay: GraphicOverlay,
    private val lensEngine: LensEngine,
    private val activity: Activity?
) : MLTransactor<MLSkeleton?> {
    // Reference pose that the live frames are compared against.
    private val templateList: List<MLSkeleton> = SkeletonUtils.getTemplateData()
    private var zeroCount = 0

    override fun transactResult(results: MLAnalyzer.Result<MLSkeleton?>) {
        Log.i(TAG, "detect success")
        graphicOverlay.clear()
        val items = results.analyseList
        val resultsList: MutableList<MLSkeleton?> = ArrayList()
        for (i in 0 until items.size()) {
            resultsList.add(items.valueAt(i))
        }
        if (resultsList.size <= 0) {
            return
        }
        // Draw the detected skeleton on the camera overlay.
        val similarity = 0.8f
        val skeletonGraphic = SkeletonGraphic(graphicOverlay, resultsList)
        graphicOverlay.addGraphic(skeletonGraphic)
        graphicOverlay.postInvalidate()
        // Compare the live skeleton with the stored template pose.
        val result = analyzer.caluteSimilarity(resultsList, templateList)
        if (result >= similarity) {
            // React only to the first matching frame; skip if we already matched.
            zeroCount = if (zeroCount > 0) {
                return
            } else {
                0
            }
            zeroCount++
        } else {
            zeroCount = 0
            return
        }
        // The pose matched the template: take a picture and close the screen.
        lensEngine.photograph(null, { bytes ->
            SkeletonUtils.takePictureListener.picture(bytes)
            activity?.finish()
        })
    }

    override fun destroy() {
        Log.i(TAG, "transactor destroyed")
    }

    companion object {
        private const val TAG = "SkeletonTransactor"
    }
}
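SkeletonUtils.getTemplateData() supplies the reference pose that each live frame is compared against via caluteSimilarity(). Its implementation is not shown in the post; the sketch below illustrates one way such a template could be built, assuming the MLJoint(x, y, type, score) and MLSkeleton(joints) constructors seen in Huawei's sample code, and using made-up coordinates:
import com.huawei.hms.mlsdk.skeleton.MLJoint
import com.huawei.hms.mlsdk.skeleton.MLSkeleton

// Illustrative only: the coordinates are invented, and the constructors and
// constants are assumed to match the SDK sample usage.
object TemplateSketch {
    fun getTemplateData(): List<MLSkeleton> {
        val joints = listOf(
            MLJoint(416f, 80f, MLJoint.TYPE_HEAD_TOP, 1f),
            MLJoint(416f, 160f, MLJoint.TYPE_NECK, 1f),
            MLJoint(352f, 170f, MLJoint.TYPE_RIGHT_SHOULDER, 1f),
            MLJoint(480f, 170f, MLJoint.TYPE_LEFT_SHOULDER, 1f)
            // ... remaining joints of the reference yoga pose
        )
        return listOf(MLSkeleton(joints))
    }
}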
Step 3: Set the detection result processor to bind the analyzer.
analyzer!!.setTransactor(SkeletonTransactor(analyzer!!, overlay!!, lensEngine!!, activity))
Step 4: Create LensEngine.
lensEngine = LensEngine.Creator(context, analyzer)
    .setLensType(LensEngine.BACK_LENS)
    .applyDisplayDimension(1280, 720)
    .applyFps(20.0f)
    .enableAutomaticFocus(true)
    .create()
Step 5: Open the camera.
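The post does not include a snippet for this step. A minimal sketch, assuming the preview is rendered into a SurfaceView (surfaceView is an assumed view name) and that LensEngine.run(SurfaceHolder) is used as in the HMS samples:
try {
    // Start the camera stream and render the preview into the SurfaceView.
    lensEngine!!.run(surfaceView.holder)
} catch (e: IOException) {
    Log.e(TAG, "Failed to start lens engine: " + e.message)
    lensEngine!!.release()
    lensEngine = null
}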
Step 6: Release resources.
if (lensEngine != null) {
    lensEngine!!.close()
}
if (lensEngine != null) {
    lensEngine!!.release()
}
if (analyzer != null) {
    try {
        analyzer!!.stop()
    } catch (e: IOException) {
        Log.e(TAG, "Failed to stop analyzer: " + e.message)
    }
}
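Where these calls live is not shown in the post; a reasonable place is the activity's onDestroy(), as in the sketch below (the member names mirror the snippet above):
override fun onDestroy() {
    super.onDestroy()
    lensEngine?.close()
    lensEngine?.release()
    try {
        analyzer?.stop()
    } catch (e: IOException) {
        Log.e(TAG, "Failed to stop analyzer: " + e.message)
    }
    lensEngine = null
}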
Result
Tips and Tricks
Make sure all the dependencies are downloaded properly.
The latest HMS Core APK is required on the device.
If you are taking an image from the camera or gallery, make sure the app has the camera and storage permissions.
Conclusion
In this article, we have learned how to integrate the Huawei ML Kit skeleton detection service for a live camera stream: what skeleton detection is, how it works, what it is used for, how to get the joint points from the detection result, and the two analyzer types, TYPE_NORMAL and TYPE_YOGA.
Reference
Skeleton Detection
Happy coding