Intermediate: How to integrate Huawei Awareness Capture API into fitness app (Flutter) - Huawei Developers

Introduction
In this article, we will learn how to implement Huawei Awareness Kit features so we can easily integrate them into our fitness application. Providing dynamic and real-time information to users is an important point. In this article, I will cover Time Awareness and Weather Awareness details.
      
What is Huawei Awareness kit Service?
Huawei Awareness Kit gives your app insight into the user's current situation more efficiently, making it possible to deliver a smarter, more considerate user experience. It provides the user's current time, location, behavior, audio device status, ambient light, weather, nearby beacons, application status, and dark mode status.
Huawei Awareness Kit also strongly emphasizes low power and memory consumption when accessing these features, helping to preserve the battery life and memory usage of your apps.
To use these features, Awareness Kit has two different sections:
Capture API
The Capture API allows your app to request the current user status, such as time, location, behavior, application, dark mode, Wi-Fi, screen and headset.
1. Users’ current location.
2. The local time of an arbitrary location given, and additional information regarding that region such as weekdays, holidays etc.
3. Users’ ambient illumination levels.
4. Users’ current behavior, meaning their physical activity including walking, staying still, running, driving or cycling.
5. The weather status of the area the user is located in, inclusive of temperature, wind, humidity, and weather id and province name.
6. The audio device status, specifically the ability to detect whether headphones have been plugged in or not.
7. Beacons located nearby.
8. Wi-Fi status, that is, whether the user is connected to Wi-Fi or not.
9. The device dark mode status, which tells us whether dark mode is enabled.
Barrier API
The Barrier API allows your app to set a combination of contextual conditions. When the preset contextual conditions are met, your app will receive a notification.
Advantages
1. Converged: Multi-dimensional and evolvable awareness capabilities can be called in a unified manner.
2. Accurate: The synergy of hardware and software makes data acquisition more accurate and efficient.
3. Fast: On-chip processing of local service requests and nearby access of cloud services promise a faster service response.
4. Economical: Sharing awareness capabilities avoids separate interactions between apps and the device, reducing system resource consumption. Collaborating with the EMUI (or Magic UI) and Kirin chip, Awareness Kit can even achieve optimal performance with the lowest power consumption.
Requirements
1. Any operating system (such as macOS, Linux, or Windows).
2. Any IDE with the Flutter SDK installed (such as IntelliJ, Android Studio, or VS Code).
3. A little knowledge of Dart and Flutter.
4. A brain to think.
5. Minimum API level 24 is required.
6. EMUI 5.0 or later devices are required.
Setting up the Awareness kit
1. Before creating the application, make sure the project is connected to AppGallery Connect. For more information, check this link.
2. After that, follow the URL for cross-platform plugins and download the required plugins.
3. Enable the Awareness Kit in the Manage APIs section and add the plugin.
4. Add the required dependencies to the build.gradle file under the root folder.
Code:
// In the buildscript/allprojects repositories block:
maven {url 'http://developer.huawei.com/repo/'}
// In the buildscript dependencies block:
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
5. Add the required permissions to the AndroidManifest.xml file under app/src/main folder.
XML:
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_BACKGROUND_LOCATION" />
<uses-permission android:name="android.permission.ACTIVITY_RECOGNITION" />
<uses-permission android:name="android.permission.BLUETOOTH" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
<uses-permission android:name="android.permission.INTERNET" />
6. After completing all the above steps, you need to add the required kits’ Flutter plugins as dependencies to pubspec.yaml file. You can find all the plugins in pub.dev with the latest versions.
Code:
huawei_awareness:
path: ../huawei_awareness/
After adding them, run the flutter pub get command. Now all the plugins are ready to use.
Note: Set multiDexEnabled to true in the android/app/build.gradle file so that the app will not crash.
Use Awareness to get the Weather information
This service helps with the fitness activity: based on the weather conditions, the user can choose between an indoor or an outdoor activity.
Before calling the service, we need to request the permissions when the app launches.
Code:
@override
void initState() {
checkPermissions();
requestPermissions();
super.initState();
}
void checkPermissions() async {
locationPermission = await AwarenessUtilsClient.hasLocationPermission();
backgroundLocationPermission =
await AwarenessUtilsClient.hasBackgroundLocationPermission();
activityRecognitionPermission =
await AwarenessUtilsClient.hasActivityRecognitionPermission();
if (locationPermission &&
backgroundLocationPermission &&
activityRecognitionPermission) {
setState(() {
permissions = true;
notifyAwareness();
});
}
}
void requestPermissions() async {
if (locationPermission == false) {
bool status = await AwarenessUtilsClient.requestLocationPermission();
setState(() {
locationPermission = status;
});
}
if (backgroundLocationPermission == false) {
bool status =
await AwarenessUtilsClient.requestBackgroundLocationPermission();
setState(() {
backgroundLocationPermission = status;
});
}
if (activityRecognitionPermission == false) {
bool status =
await AwarenessUtilsClient.requestActivityRecognitionPermission();
setState(() {
activityRecognitionPermission = status;
});
checkPermissions();
}
}
Once all the permissions are enabled then we need to call the awareness service.
Code:
captureWeatherByDevice() async {
WeatherResponse response = await AwarenessCaptureClient.getWeatherByDevice();
setState(() {
List<HourlyWeather> hourlyList = response.hourlyWeather;
hourlyWeather = hourlyList[0];
switch (hourlyWeather.weatherId) {
case 1:
assetImage = "sunny.png";
break;
case 4:
assetImage = "cloudy.jpg";
break;
case 7:
assetImage = "cloudy.png";
break;
case 18:
assetImage = "rain.png";
break;
default:
assetImage = "sunny.png";
break;
}
setState(() {
timeInfoStr = timeInfoStr + ' ' + hourlyWeather.tempC.toString() + ' ' + "°C";
});
});
}
It will return a WeatherResponse instance containing information including, but not limited to, the area, temperature, humidity, and wind speed and direction. Based on the temperature, humidity, and wind speed values, we create if conditions to check whether they are within normal ranges and assign values to temporary integers accordingly. We will use these integers later when we send a notification to the device.
Use Awareness to get the Time Categories information
Awareness Kit can detect the time at the user's location, including whether it is a weekend/holiday or a workday, the time of sunrise/sunset, and other details. You can also send time-sensitive notifications based on these conditions. For example, if tomorrow is a holiday, we can notify the user so that they can plan for the day.
Code:
getTimeCategories() async {
TimeCategoriesResponse response =
await AwarenessCaptureClient.getTimeCategories();
if (response != null) {
setState(() {
List<int> categoriesList = response.timeCategories;
var categories = categoriesList[2];
switch (categories) {
case 1:
timeInfoStr = "Good Morning ❤";
break;
case 2:
timeInfoStr = "Good Afternoon ❤";
break;
case 3:
timeInfoStr = "Good Evening ❤";
break;
case 4:
timeInfoStr = "Good Night ❤";
break;
default:
timeInfoStr = "Unknown";
break;
}
});
}
}
The getTimeCategories method returns a TimeCategoriesResponse whose timeCategories field is an integer array if the call is successful.
Note: Refer to this URL for the constant values.
Demo
  
Tips and Tricks
1. Download the latest HMS Flutter plugin.
2. Set the minSdkVersion to 24 or later.
3. Do not forget to run flutter pub get after adding dependencies.
4. The latest HMS Core APK is required.
5. Refer to this URL for the supported devices list.
Conclusion
In this article, I have covered two services: Time Awareness and Weather Awareness. Based on the weather conditions, the app will suggest a few activities to the user and notify them of the temperature.
Thanks for reading! If you enjoyed this story, please click the Like button and Follow. Feel free to leave a Comment below.
Reference
Awareness Kit URL
Original Source

Related

Building Contextual Apps with Huawei Awareness Kit : Capture API

For more articles like this, you can visit the HUAWEI Developer Forum.
Hello everyone, in this article we’re going to take a look at the Awareness Kit features so we can easily put these features into our apps.
Providing dynamic and effective user experiences in the app is an important point. Huawei Awareness Kit allows this to be done quickly and economically. It has a collection of both contextual and location-based features such as the user's current time, location, behavior, audio device status, ambient light, weather, and nearby beacons.
Huawei Awareness Kit also strongly emphasizes low power and memory consumption when accessing these features, helping to preserve the battery life and memory usage of your apps.
To use these features, Awareness Kit has two different sections:
Capture API
The Capture API allows your app to request the current user status, such as time, location, behavior, and whether a headset is connected.
Barrier API
The Barrier API allows your app to set a combination of contextual conditions. When the preset contextual conditions are met, your app will receive a notification.
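Although this article focuses on the Capture API, a minimal barrier sketch may help show the idea. The snippet below registers a headset barrier that reports TRUE while headphones stay connected; the barrier label, intent action, and logging are illustrative assumptions rather than values from an official sample, and the class names are assumed to come from the com.huawei.hms.kit.awareness packages, so check them against the SDK version you use.
Code:
// Sketch: register a barrier that fires while a headset stays connected.
private fun addHeadsetBarrier(context: Context) {
    val barrierLabel = "headset keeping connected"              // arbitrary label
    val barrier = HeadsetBarrier.keeping(HeadsetStatus.CONNECTED)
    val intent = Intent("com.example.awareness.BARRIER_ACTION")  // arbitrary action
    val pendingIntent = PendingIntent.getBroadcast(context, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT)
    val request = BarrierUpdateRequest.Builder()
        .addBarrier(barrierLabel, barrier, pendingIntent)
        .build()
    Awareness.getBarrierClient(context).updateBarriers(request)
        .addOnSuccessListener { Log.i(TAG, "add headset barrier success") }
        .addOnFailureListener { e -> Log.e(TAG, "add headset barrier failed", e) }
}
// In the BroadcastReceiver registered for the same action, extract the status:
// val barrierStatus = BarrierStatus.extract(intent)
// barrierStatus.presentStatus equals BarrierStatus.TRUE when the condition is met.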
I created a sample app which I’ve open-sourced on GitHub for enthusiasts who want to use this kit.
Setting up the Awareness Kit
You’ll need to do some setup before you can use the Awareness Kit. You can follow the official documentation on how to prepare your app on AppGallery Connect.
Add the required dependencies to the build.gradle file under the app folder. We need the Awareness and Location Kit dependencies.
Code:
implementation 'com.huawei.hms:awareness:1.0.3.300'
implementation 'com.huawei.hms:location:4.0.4.300'
Add the required permissions to the AndroidManifest.xml file under app/src/main folder.
Code:
<!-- Location permission, which is sensitive. After the permission is declared, you need to dynamically apply for it in the code. -->
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" /> <!-- Location permission. If only the IP address-based time capture function is used, you can declare only this permission. -->
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" /> <!-- If your app uses the barrier capability and runs on Android 10 or later, you need to add the background location access permission, which is also a sensitive permission. -->
<uses-permission android:name="android.permission.ACCESS_BACKGROUND_LOCATION" />
<uses-permission android:name="android.permission.BLUETOOTH" /> <!-- Behavior detection permission (Android 10 or later). This permission is sensitive and needs to be dynamically applied for in the code after being declared. -->
<uses-permission android:name="android.permission.ACTIVITY_RECOGNITION" /> <!-- Behavior detection permission (Android 9). This permission is sensitive and needs to be dynamically applied for in the code after being declared. -->
<uses-permission android:name="com.huawei.hms.permission.ACTIVITY_RECOGNITION" />
Don’t forget to check permissions in the app.
Checking location permission:
Code:
if (ActivityCompat.checkSelfPermission(this, Manifest.permission.ACCESS_FINE_LOCATION) != PackageManager.PERMISSION_GRANTED) {
ActivityCompat.requestPermissions(this, arrayOf(Manifest.permission.ACCESS_FINE_LOCATION), PERMISSION_REQUEST_ACCESS_FINE_LOCATION)
return
}
Capture API
Firstly, we’re going to examine the Capture API in depth. This API allows us to request the current user status from the current environment. Using this API we can retrieve data such as:
The current local time or the time of a specified location, including whether it is a working day, a weekend, etc.
The user's current location.
The current activity behavior, such as walking, running, cycling, driving, or staying still.
Whether the device has approached, connected to, or disconnected from a registered beacon.
Whether the user currently has their headphones plugged in or unplugged.
The status of an audio device (connected or disconnected).
The illuminance of the environment, in lux, where the device is located.
The weather status based on where the user is currently located.
Let’s take a deep look at these different awarenesses that we can retrieve from the Capture API.
Time Awareness
Using the Capture API we can obtain holiday information for most countries/regions and the sunrise and sunset times of cities around the world. Some simple use cases could be found in:
A meditation app that wishes to show a notification reminding the user of their morning rest.
A gift card sending app that may wish to send virtual holiday wish cards to users.
To obtain the time categories from the Capture API we simply need to call the timeCategories method. This will return an instance of the TimeCategoriesResponse class that, if successful, will contain information about the time categories as an integer array.
Code:
private fun getTimeCategories() {
try {
// Use getTimeCategories() to get the information about the current time of the user location.
// Time information includes whether the current day is a workday or a holiday, and whether the current day is in the morning, afternoon, or evening, or at the night.
val task = Awareness.getCaptureClient(this).timeCategories
task.addOnSuccessListener { timeCategoriesResponse ->
val timeCategories = timeCategoriesResponse.timeCategories
// do anything with time categories array
}.addOnFailureListener { e ->
Toast.makeText(
applicationContext, "get Time Categories failed",
Toast.LENGTH_SHORT
).show()
Log.e(TAG, "get Time Categories failed", e)
}
} catch (e: Exception) {
logViewTv.text = "get Time Categories failed.Exception:" + e.message
}
}
Location Awareness
Using the Capture API we can get location information about where the user currently is. Some simple use cases could be found in:
A shopping app can show nearby stores based on the user's current location, if there are any.
An event app can show upcoming events based on the user's location.
To get the location from the Capture API we simply need to call the location method. This will return an instance of the LocationResponse class that, if successful, will contain information about the user's current location, including latitude, longitude, accuracy, and altitude.
Code:
private fun getLocation(){
Awareness.getCaptureClient(this).location
.addOnSuccessListener { locationResponse ->
val location: Location = locationResponse.location
val latitude = location.latitude
val longitude = location.longitude
val accuracy = location.accuracy
val altitude = location.altitude
}
.addOnFailureListener {
logViewTv.text = "get location failed:" + it.message
}
}
Behavior Awareness
We can use the Capture API to detect the current activity behavior. Some simple use cases could be found in:
A fitness app can log user activity in the background and build a graph from this data.
An ebook listening app that wishes to recommend listening to a book while the user is walking.
To get the current user behavior from the Capture API we simply need to call the behavior method. This will return an instance of the BehaviorResponse class that, if successful, will contain information about the user's current behavior.
Code:
private fun getUserBehavior(){
Awareness.getCaptureClient(this).behavior
.addOnSuccessListener { behaviorResponse ->
val behaviorStatus = behaviorResponse.behaviorStatus
val mostLikelyBehavior = behaviorStatus.mostLikelyBehavior
val mostLikelyBehaviors = behaviorStatus.probableBehavior
val mostProbableActivity = mostLikelyBehavior.type
val detectedBehavior: Long = behaviorStatus.time
val elapsedTime: Long = behaviorStatus.elapsedRealtimeMillis
}
.addOnFailureListener {
logViewTv.text = "get behavior failed: " + it.message
}
}
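The type value returned by mostLikelyBehavior is an integer constant. A small helper like the one below could turn it into a readable label; the constant names are assumed to come from the DetectedBehavior class, so verify them against the SDK version you use.
Code:
// Sketch: map a detected behavior type to a display label.
private fun behaviorLabel(type: Int): String = when (type) {
    DetectedBehavior.STILL -> "Staying still"
    DetectedBehavior.WALKING -> "Walking"
    DetectedBehavior.RUNNING -> "Running"
    DetectedBehavior.ON_BICYCLE -> "Cycling"
    DetectedBehavior.IN_VEHICLE -> "Driving"
    DetectedBehavior.ON_FOOT -> "On foot"
    else -> "Unknown"
}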
Beacon Awareness
We can use the Capture API to get registered beacons nearby. You need to register beacon devices with your project; for details, please refer to Beacon Management. Some simple use cases could be found in:
When the user enters a store, the application can show a special discount to that person.
A bus app that wishes to show push notifications about buses going in a particular direction.
To get any nearby registered beacons from the Capture API we simply need to call the getBeaconStatus method. This will return an instance of the BeaconStatusResponse class that, if successful, will contain information about nearby registered beacons.
Code:
private fun getBeacons(){
val namespace = "sample namespace"
val type = "sample type"
val content = byteArrayOf(
's'.toByte(),
'a'.toByte(),
'm'.toByte(),
'p'.toByte(),
'l'.toByte(),
'e'.toByte()
)
val filter = BeaconStatus.Filter.match(namespace, type, content)
Awareness.getCaptureClient(this).getBeaconStatus(filter)
.addOnSuccessListener { beaconStatusResponse ->
val beaconDataList = beaconStatusResponse.beaconStatus.beaconData
if (beaconDataList != null && beaconDataList.size != 0) {
var i = 1
val builder = StringBuilder()
for (beaconData in beaconDataList) {
builder.append("Beacon Data ").append(i)
builder.append(" namespace:").append(beaconData.namespace)
builder.append(",type:").append(beaconData.type)
builder.append(",content:").append(Arrays.toString(beaconData.content))
builder.append("; ")
i++
}
} else {
logViewTv.text = "no beacons match filter nearby"
}
}
.addOnFailureListener { e ->
logViewTv.text = "get beacon status failed: " + e.message
}
}
Headset Awareness
Using the Capture API we can get information about whether headphones are connected. Some simple use cases could be found in:
A music app can stop playing a song when the user disconnects their headphones.
An ebook listening app can show a notification for quick access to the app when the user connects their headphones.
To obtain the headphone status from the Capture API we simply need to call the headsetStatus method. This will return an instance of the HeadsetStatusResponse class that, if successful, will contain information about the device's current headphone status.
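A minimal sketch, following the same pattern as the other capture calls in this article (logViewTv is the same TextView used in the other snippets), might look like this:
Code:
private fun getHeadsetStatus() {
    Awareness.getCaptureClient(this).headsetStatus
        .addOnSuccessListener { headsetStatusResponse ->
            val status = headsetStatusResponse.headsetStatus.status
            logViewTv.text = "The headset is " + if (status == HeadsetStatus.CONNECTED) "connected" else "disconnected"
        }
        .addOnFailureListener { e ->
            logViewTv.text = "get headset status failed: " + e.message
        }
}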
Bluetooth Car Stereo Awareness
Using the Capture API we can get information about whether a Bluetooth car stereo is connected. Some simple use cases could be found in:
A music app that wishes to show a notification to play a song when the car stereo connects.
A radio app that wishes to offer to play a radio station when the car stereo connects.
To get the car stereo status from the Capture API we simply need to call the getBluetoothStatus method. This will return an instance of the BluetoothStatusResponse class that, if successful, will contain information about the car stereo status.
Code:
private fun getCarStereoStatus() {
val deviceType = 0 // Value 0 indicates a Bluetooth car stereo.
Awareness.getCaptureClient(this).getBluetoothStatus(deviceType)
.addOnSuccessListener { bluetoothStatusResponse ->
val bluetoothStatus = bluetoothStatusResponse.bluetoothStatus
val status = bluetoothStatus.status
val stateStr = "The Bluetooth car stereo is " + if (status == BluetoothStatus.CONNECTED) "connected" else "disconnected"
}
.addOnFailureListener { e ->
logViewTv.text = "get bluetooth status failed: " + e.message
}
}
Ambient Light Awareness
Using the Capture API we can obtain information about the ambient light. Some simple use cases could be found in:
A book reading application can adjust the screen brightness according to the ambient light to protect the user's eyesight.
A video watching application can do the same while the user watches videos.
To get the ambient light from the Capture API we simply need to call the lightIntensity method. This will return an instance of the AmbientLightResponse class that, if successful, will contain the ambient light level in lux.
Code:
private fun getAmbientLight() {
Awareness.getCaptureClient(this).lightIntensity
.addOnSuccessListener { ambientLightResponse ->
val ambientLightStatus = ambientLightResponse.ambientLightStatus
logViewTv.text = "Light intensity is " + ambientLightStatus.lightIntensity + " lux"
}
.addOnFailureListener { e ->
logViewTv.text = "get light intensity failed" + e.message
}
}
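For the screen-brightness use case mentioned above, the lux reading can be mapped to a window brightness value. The mapping below is only a rough sketch with an assumed 0-500 lux range; tune it for your own app. It is meant to run inside an Activity, like the other snippets here.
Code:
private fun applyBrightnessFromLux(lightIntensity: Float) {
    // Assumed curve: roughly 0-500 lux mapped onto a brightness factor between 0.1 and 1.0.
    val brightness = (lightIntensity / 500f).coerceIn(0.1f, 1.0f)
    val params = window.attributes
    params.screenBrightness = brightness
    window.attributes = params
}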
Weather Awareness
We can also use the Capture API to retrieve the weather status of where the user is currently located. Some simple use cases could be found in:
A music app can recommend a playlist to listen to, depending on the weather.
A fitness app can give a weather alert to users before they run outside.
To get the weather status from the Capture API we simply need to call the weatherByDevice method. This will return an instance of the WeatherStatusResponse class if successful.
Code:
private fun getWeather(){
Awareness.getCaptureClient(this).weatherByDevice
.addOnSuccessListener { weatherStatusResponse ->
val weatherStatus = weatherStatusResponse.weatherStatus
val weatherSituation = weatherStatus.weatherSituation
val situation = weatherSituation.situation
cityNameTv.text = "City: " + weatherSituation.city.name
weatherIdTv.text = "Weather id is: " + situation.weatherId
cnWeatherIdTv.text = "CN Weather id is: " + situation.cnWeatherId
temperatureCTv.text = "Temperature is: " + situation.temperatureC + "℃"
temperatureFTv.text = "Temperature is: " + situation.temperatureF + "℉"
windSpeedTv.text = "Wind speed is: " + situation.windSpeed
windDirectionTv.text = "Wind direction is: " + situation.windDir
humidityTv.text = "Humidity is : " + situation.humidity
}
.addOnFailureListener {
logViewTv.text = "get weather failed: " + it.message
}
}
Can Awareness be useful for restaurants to promote offers to their customers with a message?

Detect Face Movements With ML Kit

Detecting and tracking the user's face movements can be a very powerful feature to develop in your application. With this feature you can take input from the user's face movements and use it to trigger intents in the app.
Requirements to use ML Kit:
Android Studio 3.X or later version
JDK 1.8 or later
To be able to use HMS ML Kit, you need to integrate HMS Core into your project and also add the HMS ML Kit SDK. Add the Write External Storage and Camera permissions afterwards.
You can click this link to integrate HMS Core into your project.
After integrating HMS Core, add the HMS ML Kit dependencies in the build.gradle file in the app directory.
Code:
implementation 'com.huawei.hms:ml-computer-vision-face:2.0.5.300'
implementation 'com.huawei.hms:ml-computer-vision-face-3d-model:2.0.5.300'
Add permissions to the Manifest File:
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
Development Process:
In this article, we are going to use HMS ML Kit in order to track face movements from the user. In order to use this feature, we are going to use 3D features of ML Kit. We are going to use one activity and a helper class.
XML Structure of Activity:
XML:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<SurfaceView
android:id="@+id/surfaceViewCamera"
android:layout_width="match_parent"
android:layout_height="match_parent" />
</androidx.constraintlayout.widget.ConstraintLayout>
We are going to use a SurfaceView to show the camera preview on the phone screen. We are going to add a callback to our surfaceHolderCamera to track when it's created, changed, and destroyed.
Implementing HMS ML Kit Default View:
Code:
class MainActivity : AppCompatActivity() {
private lateinit var mAnalyzer: ML3DFaceAnalyzer
private lateinit var mLensEngine: LensEngine
private lateinit var mFaceAnalyzerTransactor: FaceAnalyzerTransactor
private var surfaceHolderCamera: SurfaceHolder? = null
private val requiredPermissions = arrayOf(Manifest.permission.CAMERA, Manifest.permission.WRITE_EXTERNAL_STORAGE)
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
if (hasPermissions(requiredPermissions))
init()
else
ActivityCompat.requestPermissions(this, requiredPermissions, 0)
}
private fun hasPermissions(permissions: Array<String>) = permissions.all {
ContextCompat.checkSelfPermission(this, it) == PackageManager.PERMISSION_GRANTED
}
override fun onRequestPermissionsResult(requestCode: Int, permissions: Array<out String>, grantResults: IntArray) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults)
if (requestCode == 0 && grantResults.isNotEmpty() && hasPermissions(requiredPermissions))
init()
}
private fun init() {
mAnalyzer = createAnalyzer()
mFaceAnalyzerTransactor = FaceAnalyzerTransactor()
mAnalyzer.setTransactor(mFaceAnalyzerTransactor)
prepareViews()
}
private fun prepareViews() {
surfaceHolderCamera = surfaceViewCamera.holder
surfaceHolderCamera?.addCallback(surfaceHolderCallback)
}
ML3DFaceAnalyzer → A face analyzer, which is used to detect 3D faces.
Lens Engine → A class with the camera initialization, frame obtaining, and logic control functions encapsulated.
FaceAnalyzerTransactor → A class for processing recognition results. This class implements MLAnalyzer.MLTransactor (A general API that needs to be implemented by a detection result processor to process detection results) and uses the transactResult method in this class to obtain the recognition results and implement specific services.
SurfaceHolder → By adding a callback to surface view, we will detect the surface changes like surface changed, created and destroyed. We will use this object callback to handle different situation in the application.
In this code block, we basically request the permissions. If the user has already granted the permissions, in the init function we create an ML3DFaceAnalyzer in order to detect 3D faces. We handle the surface changes with the surfaceHolderCallback functions. Afterwards, to process the results, we create a FaceAnalyzerTransactor object and set it as the transactor of the analyzer. After everything is set, we start to analyze the user's face via the prepareViews function.
Code:
private fun createAnalyzer(): ML3DFaceAnalyzer {
val settings = ML3DFaceAnalyzerSetting.Factory()
.setTracingAllowed(true)
.setPerformanceType(ML3DFaceAnalyzerSetting.TYPE_PRECISION)
.create()
return MLAnalyzerFactory.getInstance().get3DFaceAnalyzer(settings)
}
In this code block, we adjust the settings for our ML3DFaceAnalyzer object.
setTracingAllowed → Indicates whether to enable face tracking. Because we want to trace the face, we set this value to true.
setPerformanceType → There are two preference modes for this value: the analyzer can prioritize precision or speed when performing face detection. We chose TYPE_PRECISION in this example.
With the MLAnalyzerFactory.getInstance().get3DFaceAnalyzer(settings) method we create a 3D face analyzer object.
Handling Surface Changes
Code:
private val surfaceHolderCallback = object : SurfaceHolder.Callback {
override fun surfaceChanged(p0: SurfaceHolder, p1: Int, p2: Int, p3: Int) {
mLensEngine = createLensEngine(p2, p3) // p2 = width, p3 = height
mLensEngine.run(p0)
}
override fun surfaceDestroyed(p0: SurfaceHolder) {
mLensEngine.release()
}
override fun surfaceCreated(p0: SurfaceHolder) {
}
}
private fun createLensEngine(width: Int, height: Int): LensEngine {
val lensEngineCreator = LensEngine.Creator(this, mAnalyzer)
.applyFps(20F)
.setLensType(LensEngine.FRONT_LENS)
.enableAutomaticFocus(true)
return if (resources.configuration.orientation == Configuration.ORIENTATION_PORTRAIT) {
lensEngineCreator.let {
it.applyDisplayDimension(height, width)
it.create()
}
} else {
lensEngineCreator.let {
it.applyDisplayDimension(width, height)
it.create()
}
}
}
When the surface changes, we create a LensEngine. In the surfaceChanged callback, p2 and p3 carry the width and height of the surface (p1 is the pixel format), and we pass them to createLensEngine. We create the LensEngine with the context and the ML3DFaceAnalyzer that we have already created.
applyFps → Sets the preview frame rate (FPS) of the camera. The preview frame rate depends on the firmware capability of the camera.
setLensType → Sets the camera type, BACK_LENS for the rear camera and FRONT_LENS for the front camera. In this example, we use the front camera.
enableAutomaticFocus → Enables or disables the automatic focus function for the camera.
The surfaceChanged method is also executed whenever the screen orientation changes. We handle this in createLensEngine: the LensEngine is created with the width and height values applied according to the current orientation.
run → Starts the LensEngine. During startup, the LensEngine selects an appropriate camera based on the requested frame rate and preview image size, initializes the camera parameters, and starts an analyzer thread to analyze and process frame data.
release → Releases the resources occupied by the LensEngine. We call this whenever surfaceDestroyed is called.
Analysing Results
Code:
class FaceAnalyzerTransactor: MLTransactor<ML3DFace> {
override fun transactResult(results: MLAnalyzer.Result<ML3DFace>?) {
val items: SparseArray<ML3DFace> = results!!.analyseList
Log.i("FaceOrientation:X ",items.get(0).get3DFaceEulerX().toString())
Log.i("FaceOrientation:Y",items.get(0).get3DFaceEulerY().toString())
Log.i("FaceOrientation:Z",items.get(0).get3DFaceEulerZ().toString())
}
override fun destroy() {
}
}
We created a FaceAnalyzerTransactor object in init method. This object is used to process the results.
Because it implements the MLTransactor interface, we have to override the destroy and transactResult methods. With the transactResult method we can obtain the recognition results from the MLAnalyzer as a result list. Because there is only one face in this example, we take the first ML3DFace object from the result list.
ML3DFace represents a 3D face detected. It has features include the width, height, rotation degree, 3D coordinate points, and projection matrix.
get3DFaceEulerX() → 3D face pitch angle. A positive value indicates that the face is looking up, and a negative value indicates that the face is looking down.
get3DFaceEulerY() → 3D face yaw angle. A positive value indicates that the face turns to the right side of the image, and a negative value indicates that the face turns to the left side of the image.
get3DFaceEulerZ() → 3D face roll angle. A positive value indicates a clockwise rotation. A negative value indicates a counter-clockwise rotation.
The results of all the above methods range from -1 to +1 according to the face movement. In this example, I will demonstrate the get3DFaceEulerY() method.
Not making any head movements
In this example, the outputs were between 0.04 and 0.1 when I did not move my head. If the values are between -0.1 and 0.1, the user is probably not moving their head, or only making small movements.
Changing head direction to right
When I moved my head to the right, the outputs were generally between 0.5 and 1. So it can be said that if the values are greater than 0.5, the user has moved their head to the right.
Changing head direction to left
When I moved my head to the left, the outputs were generally between -0.5 and -1. So it can be said that if the values are less than -0.5, the user has moved their head to the left.
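Putting those observed ranges together, the yaw value can be classified with a small helper like the one below. The ±0.5 thresholds are simply the rough values observed above, not official constants, so adjust them to your own measurements.
Code:
// Sketch: classify the head direction from the yaw angle returned by get3DFaceEulerY().
private fun headDirection(eulerY: Float): String = when {
    eulerY > 0.5f -> "Right"
    eulerY < -0.5f -> "Left"
    else -> "Center"
}
// Example usage inside transactResult:
// val direction = headDirection(items.get(0).get3DFaceEulerY())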
As you can see, HMS ML Kit can detect the user's face and track face movements successfully. This project and its code are accessible from the GitHub link in the references area.
You can also implement other ML Kit features like Text-related services, Language/Voice-related services, Image-related services, Face/Body-related services, and Natural Language Processing services.
References:
ML Kit
ML Kit Documentation
Face Detection With ML Kit
Github
Very interesting feature.
Rebis said:
Very interesting feature.
I can't wait to use it.
Hi, this service will work on server side?
What is the accuracy rate in low light ?
Which permissions are required?

Intermediate: How to integrate Huawei Awareness kit Barrier API with Local Notification into fitness app (Flutter)

Introduction
In this article, we will learn how to implement Huawei Awareness Kit features with the Local Notification service to send a notification when a certain condition is met. We can create our own conditions to be observed and notify the user even when the application is not running.
The Awareness Kit is quite a comprehensive detection and event SDK. If you're developing a context-aware app for Huawei devices, this is definitely the library for you.
        
What can we do using Awareness kit?
With HUAWEI Awareness Kit, you can obtain a lot of different contextual information about the user's location, behavior, weather, current time, device status, ambient light, and audio device status, which makes it easier to provide a more refined user experience.
Basic Usage
There are quite a few awareness "modules" in this SDK: Time Awareness, Location Awareness, Behavior Awareness, Beacon Awareness, Audio Device Status Awareness, Ambient Light Awareness, and Weather Awareness. Read on to find out how and when to use them.
Each of these modules has two modes: capture, which is an on-demand information retrieval; and barrier, which triggers an action when a specified condition is met.
Use case
The Barrier API allows us to set specific barriers for specific conditions in our app, and when these conditions are satisfied, our app will be notified so we can take action. In this sample, when the user starts an activity, the app notifies them to connect a headset to listen to music while doing the activity.
Table of content
1. Project setup
2. Headset capture API
3. Headset Barrier API
4. Flutter Local notification plugin
Advantages
1. Converged: Multi-dimensional and evolvable awareness capabilities can be called in a unified manner.
2. Accurate: The synergy of hardware and software makes data acquisition more accurate and efficient.
3. Fast: On-chip processing of local service requests and nearby access of cloud services promise a faster service response.
4. Economical: Sharing awareness capabilities avoids separate interactions between apps and the device, reducing system resource consumption. Collaborating with the EMUI (or Magic UI) and Kirin chip, Awareness Kit can even achieve optimal performance with the lowest power consumption.
Requirements
1. Any operating system (such as macOS, Linux, or Windows).
2. Any IDE with the Flutter SDK installed (such as IntelliJ, Android Studio, or VS Code).
3. A little knowledge of Dart and Flutter.
4. A brain to think.
5. Minimum API level 24 is required.
6. EMUI 5.0 or later devices are required.
Setting up the Awareness kit
1. First, create a developer account in AppGallery Connect. After creating your developer account, you can create a new project and a new app. For more information, check this link.
2. To generate a signing certificate fingerprint, run the command below.
Code:
keytool -genkey -keystore <application_project_dir>\android\app\<signing_certificate_fingerprint_filename>.jks
-storepass <store_password> -alias <alias> -keypass<key_password> -keysize 2048 -keyalg RSA -validity 36500
3. The above command creates the keystore file in appdir/android/app.
4. Now we need to obtain the SHA-256 key; run the following command.
Code:
keytool -list -v -keystore <application_project_dir>\android\app\<signing_certificate_fingerprint_filename>.jks
5. Enable the Awareness Kit in the Manage APIs section and add the plugin.
6. After configuring the project, download the agconnect-services.json file and add it to your project.
7. After that, follow the URL for cross-platform plugins and download the required plugins.
  
8. The following dependencies for HMS usage need to be added to build.gradle file under the android directory.
Code:
buildscript {
ext.kotlin_version = '1.3.50'
repositories {
google()
jcenter()
maven {url 'http://developer.huawei.com/repo/'}
}
dependencies {
classpath 'com.android.tools.build:gradle:3.5.0'
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
}
}
allprojects {
repositories {
google()
jcenter()
maven {url 'http://developer.huawei.com/repo/'}
}
}
9. Then add the following line of code to the build.gradle file under the android/app directory.
Code:
apply plugin: 'com.huawei.agconnect'
10. Add the required permissions to the AndroidManifest.xml file under app>src>main folder.
Code:
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_BACKGROUND_LOCATION" />
<uses-permission android:name="android.permission.ACTIVITY_RECOGNITION" />
<uses-permission android:name="android.permission.BLUETOOTH" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
<uses-permission android:name="android.permission.INTERNET" />
11. After completing all the above steps, you need to add the required kits’ Flutter plugins as dependencies to pubspec.yaml file. You can find all the plugins in pub.dev with the latest versions.
Code:
huawei_awareness:
path: ../huawei_awareness/
12. To display local notification, we need to add flutter local notification plugin.
Code:
flutter_local_notifications: ^3.0.1+6
After adding them, run the flutter pub get command. Now all the plugins are ready to use.
Note: Set multiDexEnabled to true in the android/app/build.gradle file so that the app will not crash.
Code Implementation
Use Capture and Barrier API to get Headset status
This service helps your application remind the user, before starting an activity, to connect the headset to play music.
We need to request the runtime permissions before calling any service.
Code:
@override
void initState() {
checkPermissions();
requestPermissions();
super.initState();
}
void checkPermissions() async {
locationPermission = await AwarenessUtilsClient.hasLocationPermission();
backgroundLocationPermission =
await AwarenessUtilsClient.hasBackgroundLocationPermission();
activityRecognitionPermission =
await AwarenessUtilsClient.hasActivityRecognitionPermission();
if (locationPermission &&
backgroundLocationPermission &&
activityRecognitionPermission) {
setState(() {
permissions = true;
notifyAwareness();
});
}
}
void requestPermissions() async {
if (locationPermission == false) {
bool status = await AwarenessUtilsClient.requestLocationPermission();
setState(() {
locationPermission = status;
});
}
if (backgroundLocationPermission == false) {
bool status =
await AwarenessUtilsClient.requestBackgroundLocationPermission();
setState(() {
backgroundLocationPermission = status;
});
}
if (activityRecognitionPermission == false) {
bool status =
await AwarenessUtilsClient.requestActivityRecognitionPermission();
setState(() {
activityRecognitionPermission = status;
});
checkPermissions();
}
}
Capture API: Once all the permissions are allowed, if we want to check the headset status when the app launches, we call the Capture API using getHeadsetStatus(). This service is called only once.
Code:
checkHeadsetStatus() async {
HeadsetResponse response = await AwarenessCaptureClient.getHeadsetStatus();
setState(() {
switch (response.headsetStatus) {
case HeadsetStatus.Disconnected:
_showNotification(
"Music", "Please connect headset before start activity");
break;
}
});
log(response.toJson(), name: "captureHeadset");
}
Barrier API: If you want to set some conditions in your app, you need to use the Barrier API. This service keeps listening for events; once the conditions are satisfied, it automatically notifies the user. For example, we set a condition to play music once the activity starts; when the user connects the headset, the app automatically notifies the user that the headset is connected and music is playing.
First we need to set the barrier condition; this means the barrier will be triggered when the condition is satisfied.
Code:
String headSetBarrier = "HeadSet";
AwarenessBarrier headsetBarrier = HeadsetBarrier.keeping(
barrierLabel: headSetBarrier, headsetStatus: HeadsetStatus.Connected);
Add the barrier using updateBarriers(); this method returns whether the barrier was added or not.
Code:
bool status =await AwarenessBarrierClient.updateBarriers(barrier: headsetBarrier);
If status returns true, it means the barrier was successfully added. Now we declare a StreamSubscription to listen for events; it keeps receiving updates, and once the condition is satisfied it triggers.
Code:
if(status){
log("Headset Barrier added.");
StreamSubscription<dynamic> subscription;
subscription = AwarenessBarrierClient.onBarrierStatusStream.listen((event) {
if (mounted) {
setState(() {
switch (event.presentStatus) {
case HeadsetStatus.Connected:
_showNotification("Cool HeadSet", "Headset Connected,Want to listen some music?");
isPlaying = true;
print("Headset Status: Connected");
break;
case HeadsetStatus.Disconnected:
_showNotification("HeadSet", "Headset Disconnected, your music stopped");
print("Headset Status: Disconnected");
isPlaying = false;
break;
case HeadsetStatus.Unknown:
_showNotification("HeadSet", "Your headset Unknown");
print("Headset Status: Unknown");
isPlaying = false;
break;
}
});
}
}, onError: (error) {
log(error.toString());
});
}else{
log("Headset Barrier not added.");
}
Why do we need a local push notification?
1. We can schedule notifications.
2. No web request is required to display a local notification.
3. There is no limit on notifications per user.
4. They originate from the same device and are displayed on the same device.
5. They alert the user or remind the user to perform some task.
The flutter_local_notifications package provides the required local notification functionality. Using this package we can integrate local notifications into both Android and iOS apps.
Add the following permissions to give your app the ability to schedule notifications.
Code:
<uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED" />
<uses-permission android:name="android.permission.VIBRATE" />
Add the following receivers inside the application tag.
Code:
<receiver android:name="com.dexterous.flutterlocalnotifications.ScheduledNotificationBootReceiver">
<intent-filter>
<action android:name="android.intent.action.BOOT_COMPLETED"/>
</intent-filter>
</receiver>
<receiver android:name="com.dexterous.flutterlocalnotifications.ScheduledNotificationReceiver" />
Let’s create Flutter local notification object.
Code:
FlutterLocalNotificationsPlugin localNotification = FlutterLocalNotificationsPlugin();
@override
void initState() {
// TODO: implement initState
super.initState();
var initializationSettingsAndroid =
AndroidInitializationSettings('ic_launcher');
var initializationSettingsIOs = IOSInitializationSettings();
var initSetttings = InitializationSettings(
android: initializationSettingsAndroid, iOS: initializationSettingsIOs);
localNotification.initialize(initSetttings);
}
Now create the logic to display a simple notification.
Code:
Future _showNotification(String title, String description) async {
var androidDetails = AndroidNotificationDetails(
"channelId", "channelName", "content",
importance: Importance.high);
var iosDetails = IOSNotificationDetails();
var generateNotification =
NotificationDetails(android: androidDetails, iOS: iosDetails);
await localNotification.show(
0, title, description, generateNotification);
}
Demo
            
Tips and Tricks
1. Download the latest HMS Flutter plugin.
2. Set the minSdkVersion to 24 or later.
3. HMS Core APK 4.0.2.300 is required.
4. Currently this plugin does not support background tasks.
5. Do not forget to run flutter pub get after adding dependencies.
Conclusion
In this article, we have learned how to use Barrier API of Huawei Awareness Kit with a Local Notification to observe the changes in environmental factors even when the application is not running.
As you may notice, the permanent notification indicating that the application is running in the background is not dismissible by the user which can be annoying.
Based on the requirements, we can utilize different APIs; Huawei Awareness Kit has many other great features that we can use with foreground services in our applications.
Thanks for reading! If you enjoyed this story, please click the Like button and Follow. Feel free to leave a Comment below.
Reference
Awareness Kit URL.
Awareness Capture API Article URL.
Original Source

A Quick Introduction about How to Implement Sound Detection

For some apps, it's necessary to have a function called sound detection that can recognize sounds like knocks on the door, rings of the doorbell, and car horns. Developing such a function can be costly for small- and medium-sized developers, so what should they do in this situation?
There's no need to worry if you use the sound detection service in HUAWEI ML Kit. Integrating its SDK into your app is simple, and you can equip the app with a sound detection function that works well even when the device is not connected to the network.
Introduction to Sound Detection in HUAWEI ML Kit
This service can detect sound events online by real-time recording. The detected sound events can help you perform subsequent actions. Currently, the following types of sound events are supported: laughter, child crying, snoring, sneezing, shouting, cat meowing, dog barking, running water (such as from taps, streams, and ocean waves), car horns, doorbells, knocking on doors, fire alarms (including smoke alarms), and sirens (such as those from fire trucks, ambulances, police cars, and air defenses).
Preparations
Configuring the Development Environment
Create an app in AppGallery Connect.
For details, see Getting Started with Android.
Enable ML Kit.
Click here to get more information.
After the app is created, an agconnect-services.json file will be automatically generated. Download it and copy it to the root directory of your project.
Configure the Huawei Maven repository address.
To learn more, click here.
Integrate the sound detection SDK.
It is recommended to integrate the SDK in full SDK mode. Add build dependencies for the SDK in the app-level build.gradle file.
Code:
// Import the sound detection package.
implementation 'com.huawei.hms:ml-speech-semantics-sounddect-sdk:2.1.0.300'
implementation 'com.huawei.hms:ml-speech-semantics-sounddect-model:2.1.0.300'
Add the AppGallery Connect plugin configuration as needed using either of the following methods:
Method 1: Add the following information under the declaration in the file header:
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Method 2: Add the plugin configuration in the plugins block:
Code:
plugins {
id 'com.android.application'
id 'com.huawei.agconnect'
}
Automatically update the machine learning model.
Add the following statements to the AndroidManifest.xml file. After a user installs your app from HUAWEI AppGallery, the machine learning model is automatically updated to the user's device.
Code:
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value= "sounddect"/>
For details, go to Integrating the Sound Detection SDK.
Development Procedure
Obtain the microphone permission. If the app does not have this permission, error 12203 will be reported.
(Mandatory) Apply for the static permission.
Code:
<uses-permission android:name="android.permission.RECORD_AUDIO" />
(Mandatory) Apply for the dynamic permission.
Code:
ActivityCompat.requestPermissions(
this, new String[]{Manifest.permission.RECORD_AUDIO
}, 1);
Create an MLSoundDector object.
Code:
private static final String TAG = "MLSoundDectorDemo";
// Object of sound detection.
private MLSoundDector mlSoundDector;
// Create an MLSoundDector object and configure the callback.
private void initMLSoundDector(){
mlSoundDector = MLSoundDector.createSoundDector();
mlSoundDector.setSoundDectListener(listener);
}
Create a sound detection result callback to obtain the detection result and pass the callback to the sound detection instance.
Code:
// Create a sound detection result callback to obtain the detection result and pass the callback to the sound detection instance.
private MLSoundDectListener listener = new MLSoundDectListener() {
@Override
public void onSoundSuccessResult(Bundle result) {
// Processing logic when the detection is successful. The detection result ranges from 0 to 12, corresponding to the 13 sound types whose names start with SOUND_EVENT_TYPE. The types are defined in MLSoundDectConstants.java.
int soundType = result.getInt(MLSoundDector.RESULTS_RECOGNIZED);
Log.d(TAG,"Detection success:"+soundType);
}
@Override
public void onSoundFailResult(int errCode) {
// Processing logic for detection failure. The possible cause is that your app does not have the microphone permission (Manifest.permission.RECORD_AUDIO).
Log.d(TAG,"Detection failure"+errCode);
}
};
Note: The code above prints the type of the detected sound as an integer. In the actual situation, you can convert the integer into a data type that users can understand.
Definition for the types of detected sounds:
Code:
<string-array name="sound_dect_voice_type">
<item>laughter</item>
<item>baby crying sound</item>
<item>snore</item>
<item>sneeze</item>
<item>shout</item>
<item>cat's meow</item>
<item>dog's bark</item>
<item>running water</item>
<item>car horn sound</item>
<item>doorbell sound</item>
<item>knock</item>
<item>fire alarm sound</item>
<item>alarm sound</item>
</string-array>
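One simple way to turn the integer result into a readable label is to index into the string array above. This is just a Kotlin sketch and assumes the array order matches the 0-12 result values (a knock maps to index 10, as in the test below).
Code:
// Sketch: map the 0-12 detection result to a label from the sound_dect_voice_type array.
fun soundTypeLabel(context: Context, soundType: Int): String {
    val labels = context.resources.getStringArray(R.array.sound_dect_voice_type)
    return labels.getOrElse(soundType) { "unknown sound" }
}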
Start and stop sound detection.
Code:
@Override
public void onClick(View v) {
switch (v.getId()){
case R.id.btn_start_detect:
if (mlSoundDector != null){
boolean isStarted = mlSoundDector.start(this); // context: Context.
// If the value of isStarted is true, the detection is successfully started. If the value of isStarted is false, the detection fails to be started. (The possible cause is that the microphone is occupied by the system or another app.)
if (isStarted){
Toast.makeText(this,"The detection is successfully started.", Toast.LENGTH_SHORT).show();
}
}
break;
case R.id.btn_stop_detect:
if (mlSoundDector != null){
mlSoundDector.stop();
}
break;
}
}
Call destroy() to release resources when the sound detection page is closed.
Code:
@Override
protected void onDestroy() {
super.onDestroy();
if (mlSoundDector != null){
mlSoundDector.destroy();
}
}
Testing the App
Using the knock as an example, the output result of sound detection is expected to be 10.
Tap Start detecting and simulate a knock on the door. If you get logs as follows in Android Studio's console, it indicates that the integration of the sound detection SDK is successful.
More Information
Sound detection belongs to one of the six capability categories of ML Kit, which are related to text, language/voice, image, face/body, natural language processing, and custom model.
Sound detection is a service in the language/voice-related category.
Interested in other categories? Feel free to have a look at the HUAWEI ML Kit document.
To learn more, please visit:
HUAWEI Developers official website
Development Guide
Reddit to join developer discussions
GitHub or Gitee to download the demo and sample code
Stack Overflow to solve integration problems
Follow our official account for the latest HMS Core-related news and updates.
Original Source

Posture Recognition: Natural Interaction Brought to Life

Augmented reality (AR) provides immersive interactions by blending real and virtual worlds, making human-machine interactions more interesting and convenient than ever. A common application of AR involves placing a virtual object in the real environment, where the user is free to control or interact with the virtual object. However, there is so much more AR can do beyond that.
To make interactions easier and more immersive, many mobile app developers now allow users to control their devices without having to touch the screen, by identifying the body motions, hand gestures, and facial expressions of users in real time, and using the identified information to trigger different events in the app. For example, in an AR somatosensory game, players can trigger an action in the game by striking a pose, which spares them from having to frequently tap keys on the control console. Likewise, when shooting an image or short video, the user can apply special effects to the image or video by striking specific poses, without even having to touch the screen. In a trainer-guided health and fitness app, the system powered by AR can identify the user's real-time postures to determine whether they are doing the exercise correctly, and guide them to exercise in the correct way. All of these would be impossible without AR.
How then can an app accurately identify postures of users, to power these real time interactions?
If you are also considering developing an AR app that needs to identify user motions in real time to trigger a specific event, such as to control the interaction interface on a device or to recognize and control game operations, integrating an SDK that provides the posture recognition capability is a no brainer. Integrating this SDK will greatly streamline the development process, and allow you to focus on improving the app design and craft the best possible user experience.
HMS Core AR Engine does much of the heavy lifting for you. Its posture recognition capability accurately identifies different body postures of users in real time. After integrating this SDK, your app will be able to use both the front and rear cameras of the device to recognize six different postures from a single person in real time, and output and display the recognition results in the app.
The SDK provides basic core features that motion sensing apps will need, and enriches your AR apps with remote control and collaborative capabilities.
Here I will show you how to integrate AR Engine to implement these amazing features.
How to Develop
Requirements on the development environment:
JDK: 1.8.211 or later
Android Studio: 3.0 or later
minSdkVersion: 26 or later
targetSdkVersion: 29 (recommended)
compileSdkVersion: 29 (recommended)
Gradle version: 6.1.1 or later (recommended)
Make sure that you have downloaded the AR Engine APK from AppGallery and installed it on the device.
If you need to use multiple HMS Core kits, use the latest versions required for these kits.
Preparations​
1. Before getting started with development, register as a Huawei developer and complete identity verification on the HUAWEI Developers website. You can click here to find the detailed registration and identity verification procedure.
2. Integrate the AR Engine SDK into your development environment via the Maven repository.
3. The procedure for configuring the Maven repository address in Android Studio differs for Gradle plugin versions earlier than 7.0, Gradle plugin 7.0, and Gradle plugin 7.1 or later. Configure it according to the Gradle plugin version you use.
4. The following takes Gradle plugin 7.0 as an example:
Open the project-level build.gradle file in your Android Studio project and configure the Maven repository address.
Go to buildscript > repositories and configure the Maven repository address for the SDK.
Code:
buildscript {
    repositories {
        google()
        jcenter()
        maven { url "https://developer.huawei.com/repo/" }
    }
}
Open the project-level settings.gradle file and configure the Maven repository address for the HMS Core SDK.
Code:
dependencyResolutionManagement {
    repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS)
    repositories {
        google()
        jcenter()
        maven { url "https://developer.huawei.com/repo/" }
    }
}
5. Add the following build dependency in the dependencies block.
Code:
dependencies {
    implementation 'com.huawei.hms:arenginesdk:{version}'
}
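For completeness, make sure the android block of the same app-level build.gradle file matches the SDK versions listed in the environment requirements above. The following is a minimal sketch assuming a typical single-module project; adjust the values to your own setup.
Code:
android {
    // Values matching the recommended environment requirements.
    compileSdkVersion 29

    defaultConfig {
        minSdkVersion 26
        targetSdkVersion 29
    }
}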
App Development
1. Check whether AR Engine has been installed on the current device. If so, your app will be able to run properly. If not, you need to prompt the user to install AR Engine, for example, by redirecting the user to AppGallery. The sample code is as follows:
Code:
boolean isInstallArEngineApk = AREnginesApk.isAREngineApkReady(this);
if (!isInstallArEngineApk) {
    // ConnectAppMarketActivity.class is the activity for redirecting users to AppGallery.
    startActivity(new Intent(this, com.huawei.arengine.demos.common.ConnectAppMarketActivity.class));
    isRemindInstall = true;
}
2. Initialize an AR scene. AR Engine supports up to five scenes, including motion tracking (ARWorldTrackingConfig), face tracking (ARFaceTrackingConfig), hand recognition (ARHandTrackingConfig), human body tracking (ARBodyTrackingConfig), and image recognition (ARImageTrackingConfig).
3. Call the ARBodyTrackingConfig API to initialize the human body tracking scene.
Code:
mArSession = new ARSession(context);
ARBodyTrackingConfig config = new ARBodyTrackingConfig(mArSession);
config.setEnableItem(ARConfigBase.ENABLE_DEPTH | ARConfigBase.ENABLE_MASK);
// Configure the session information.
mArSession.configure(config);
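The AR session should also follow the activity lifecycle. The snippet below is a minimal sketch, assuming a GLSurfaceView field named mSurfaceView is used for rendering and omitting error handling; it is not the demo's exact code.
Code:
@Override
protected void onResume() {
    super.onResume();
    if (mArSession != null) {
        // Resume the session before resuming rendering.
        mArSession.resume();
        mSurfaceView.onResume();
    }
}

@Override
protected void onPause() {
    super.onPause();
    if (mArSession != null) {
        // Pause rendering first, then the session.
        mSurfaceView.onPause();
        mArSession.pause();
    }
}

@Override
protected void onDestroy() {
    super.onDestroy();
    if (mArSession != null) {
        // Release session resources when the activity is destroyed.
        mArSession.stop();
        mArSession = null;
    }
}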
4. Initialize the BodyRelatedDisplay API to render data related to the main AR type.
Code:
public interface BodyRelatedDisplay {
    void init();
    void onDrawFrame(Collection<ARBody> bodies, float[] projectionMatrix);
}
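To illustrate how this interface is typically consumed, the sketch below keeps the display implementations in a list and drives them from the render loop. The list name and wiring are illustrative assumptions, not the demo's exact code.
Code:
// Hypothetical wiring: hold all BodyRelatedDisplay implementations and call them once per frame.
private final List<BodyRelatedDisplay> mBodyRelatedDisplays = new ArrayList<>();

private void initDisplays() {
    mBodyRelatedDisplays.add(new BodySkeletonDisplay());
    mBodyRelatedDisplays.add(new BodySkeletonLineDisplay());
    for (BodyRelatedDisplay display : mBodyRelatedDisplays) {
        display.init();
    }
}

private void drawDisplays(Collection<ARBody> bodies, float[] projectionMatrix) {
    for (BodyRelatedDisplay display : mBodyRelatedDisplays) {
        display.onDrawFrame(bodies, projectionMatrix);
    }
}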
5. Initialize the BodyRenderManager class, which is used to render the body data obtained by AR Engine.
Code:
public class BodyRenderManager implements GLSurfaceView.Renderer {
    private final float[] mProjectionMatrix = new float[16];

    // Implement the onDrawFrame() method (onSurfaceCreated() and onSurfaceChanged() are omitted here).
    @Override
    public void onDrawFrame(GL10 unused) {
        ARFrame frame = mSession.update();
        ARCamera camera = frame.getCamera();
        // Obtain the projection matrix of the AR camera (near and far clipping planes are example values).
        camera.getProjectionMatrix(mProjectionMatrix, 0, 0.1f, 100.0f);
        // Obtain the set of all traceable objects of the specified type and pass ARBody.class to return the human body tracking result.
        Collection<ARBody> bodies = mSession.getAllTrackables(ARBody.class);
    }
}
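As a usage example, the renderer is typically attached to a GLSurfaceView in the activity. The sketch below assumes hypothetical field and layout ID names (mSurfaceView, mBodyRenderManager, R.id.bodySurfaceView).
Code:
// Attach the renderer to the GLSurfaceView, typically in onCreate().
mSurfaceView = findViewById(R.id.bodySurfaceView);
mSurfaceView.setEGLContextClientVersion(2);
mBodyRenderManager = new BodyRenderManager();
mSurfaceView.setRenderer(mBodyRenderManager);
mSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_CONTINUOUSLY);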
6. Initialize BodySkeletonDisplay to obtain skeleton data and pass the data to OpenGL ES, which will render the data and display it on the device screen.
Code:
public class BodySkeletonDisplay implements BodyRelatedDisplay {
    // Buffer and counter for the skeleton points found in the current frame.
    private FloatBuffer mSkeletonPoints;
    private int mPointsNum = 0;

    // Initialization method.
    @Override
    public void init() {
    }

    // Use OpenGL to update and draw the node data.
    @Override
    public void onDrawFrame(Collection<ARBody> bodies, float[] projectionMatrix) {
        for (ARBody body : bodies) {
            if (body.getTrackingState() == ARTrackable.TrackingState.TRACKING) {
                float coordinate = 1.0f;
                if (body.getCoordinateSystemType() == ARCoordinateSystemType.COORDINATE_SYSTEM_TYPE_3D_CAMERA) {
                    coordinate = DRAW_COORDINATE;
                }
                findValidSkeletonPoints(body);
                updateBodySkeleton();
                drawBodySkeleton(coordinate, projectionMatrix);
            }
        }
    }

    // Search for valid skeleton points.
    private void findValidSkeletonPoints(ARBody arBody) {
        int index = 0;
        int[] isExists;
        int validPointNum = 0;
        float[] points;
        float[] skeletonPoints;
        if (arBody.getCoordinateSystemType() == ARCoordinateSystemType.COORDINATE_SYSTEM_TYPE_3D_CAMERA) {
            isExists = arBody.getSkeletonPointIsExist3D();
            points = new float[isExists.length * 3];
            skeletonPoints = arBody.getSkeletonPoint3D();
        } else {
            isExists = arBody.getSkeletonPointIsExist2D();
            points = new float[isExists.length * 3];
            skeletonPoints = arBody.getSkeletonPoint2D();
        }
        for (int i = 0; i < isExists.length; i++) {
            if (isExists[i] != 0) {
                points[index++] = skeletonPoints[3 * i];
                points[index++] = skeletonPoints[3 * i + 1];
                points[index++] = skeletonPoints[3 * i + 2];
                validPointNum++;
            }
        }
        mSkeletonPoints = FloatBuffer.wrap(points);
        mPointsNum = validPointNum;
    }
}
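The updateBodySkeleton() and drawBodySkeleton() methods referenced above hand the collected points to OpenGL ES. The following is a generic OpenGL ES 2.0 sketch of how those points could be drawn from inside BodySkeletonDisplay; the shader sources, handle names, and the use of the coordinate value as a scale factor are illustrative assumptions, not the demo's exact shader code.
Code:
// Illustrative shaders: project each skeleton point and draw it as a fixed-size dot.
private static final String VERTEX_SHADER =
    "uniform mat4 inProjectionMatrix;\n"
        + "uniform float inCoordinateSystem;\n"
        + "attribute vec4 inPosition;\n"
        + "void main() {\n"
        + "    gl_Position = inProjectionMatrix * vec4(inPosition.xyz * inCoordinateSystem, 1.0);\n"
        + "    gl_PointSize = 20.0;\n"
        + "}";

private static final String FRAGMENT_SHADER =
    "precision mediump float;\n"
        + "void main() {\n"
        + "    gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);\n"
        + "}";

private int mProgram;
private int mPositionHandle;
private int mProjectionMatrixHandle;
private int mCoordinateSystemHandle;

// Compile and link the shaders; call this from init().
private void createProgram() {
    int vertexShader = GLES20.glCreateShader(GLES20.GL_VERTEX_SHADER);
    GLES20.glShaderSource(vertexShader, VERTEX_SHADER);
    GLES20.glCompileShader(vertexShader);
    int fragmentShader = GLES20.glCreateShader(GLES20.GL_FRAGMENT_SHADER);
    GLES20.glShaderSource(fragmentShader, FRAGMENT_SHADER);
    GLES20.glCompileShader(fragmentShader);
    mProgram = GLES20.glCreateProgram();
    GLES20.glAttachShader(mProgram, vertexShader);
    GLES20.glAttachShader(mProgram, fragmentShader);
    GLES20.glLinkProgram(mProgram);
    mPositionHandle = GLES20.glGetAttribLocation(mProgram, "inPosition");
    mProjectionMatrixHandle = GLES20.glGetUniformLocation(mProgram, "inProjectionMatrix");
    mCoordinateSystemHandle = GLES20.glGetUniformLocation(mProgram, "inCoordinateSystem");
}

// Draw the valid skeleton points as GL_POINTS.
private void drawBodySkeleton(float coordinate, float[] projectionMatrix) {
    if (mSkeletonPoints == null || mPointsNum == 0) {
        return;
    }
    GLES20.glUseProgram(mProgram);
    GLES20.glUniformMatrix4fv(mProjectionMatrixHandle, 1, false, projectionMatrix, 0);
    GLES20.glUniform1f(mCoordinateSystemHandle, coordinate);
    GLES20.glEnableVertexAttribArray(mPositionHandle);
    mSkeletonPoints.position(0);
    GLES20.glVertexAttribPointer(mPositionHandle, 3, GLES20.GL_FLOAT, false, 0, mSkeletonPoints);
    GLES20.glDrawArrays(GLES20.GL_POINTS, 0, mPointsNum);
    GLES20.glDisableVertexAttribArray(mPositionHandle);
}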
7. Obtain the skeleton point connection data and pass it to OpenGL ES, which will then render the data and display it on the device screen.
Code:
public class BodySkeletonLineDisplay implements BodyRelatedDisplay {
    // Initialization method required by BodyRelatedDisplay.
    @Override
    public void init() {
    }

    // Render the lines between body bones.
    @Override
    public void onDrawFrame(Collection<ARBody> bodies, float[] projectionMatrix) {
        for (ARBody body : bodies) {
            if (body.getTrackingState() == ARTrackable.TrackingState.TRACKING) {
                float coordinate = 1.0f;
                if (body.getCoordinateSystemType() == ARCoordinateSystemType.COORDINATE_SYSTEM_TYPE_3D_CAMERA) {
                    coordinate = COORDINATE_SYSTEM_TYPE_3D_FLAG;
                }
                updateBodySkeletonLineData(body);
                drawSkeletonLine(coordinate, projectionMatrix);
            }
        }
    }
}
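To complete the picture, updateBodySkeletonLineData() turns the skeleton connection data into pairs of line endpoints. The sketch below is an assumption-based illustration: it presumes getBodySkeletonConnection() returns pairs of skeleton point indices and that the 3D points are laid out as x, y, z triples, as in the point display above; check the AR Engine sample code for the exact logic.
Code:
// Assumption: getBodySkeletonConnection() returns index pairs (startA, endA, startB, endB, ...)
// into the skeleton point array. Each pair becomes one line segment.
private FloatBuffer mLinePoints;
private int mPointsNum = 0;

private void updateBodySkeletonLineData(ARBody arBody) {
    int[] connections = arBody.getBodySkeletonConnection();
    float[] skeletonPoints = arBody.getSkeletonPoint3D();
    float[] linePoints = new float[connections.length * 3];
    int index = 0;
    for (int connection : connections) {
        // Copy the x, y, z coordinates of each connected skeleton point.
        linePoints[index++] = skeletonPoints[3 * connection];
        linePoints[index++] = skeletonPoints[3 * connection + 1];
        linePoints[index++] = skeletonPoints[3 * connection + 2];
    }
    mLinePoints = FloatBuffer.wrap(linePoints);
    mPointsNum = connections.length;
    // These vertices can then be drawn with GLES20.glDrawArrays(GLES20.GL_LINES, 0, mPointsNum),
    // using a program similar to the one sketched for BodySkeletonDisplay.
}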
Conclusion
By blending real and virtual worlds, AR gives users the tools they need to overlay creative effects in real environments, and interact with these imaginary virtual elements. AR makes it easy to build whimsical and immersive interactions that enhance user experience. From virtual try-on, gameplay, photo and video shooting, to product launch, training and learning, and home decoration, everything is made easier and more interesting with AR.
If you are considering developing an AR app that interacts with users when they strike specific poses, like jumping, showing their palm, and raising their hands, or even more complicated motions, you will need to equip your app to accurately identify these motions in real time. The AR Engine SDK makes this possible: it equips your app to track user motions with a high degree of accuracy and then respond to those motions, easing the process of developing AR-powered apps.
References
AR Engine Development Guide
Sample Code
Software and Hardware Requirements of AR Engine Features
