For more information like this, visit the HUAWEI Developer Forum.
1 Scenario
What Can Awareness Kit Bring?
For an offline shopping and consumption app, attracting users and encouraging them to spend is a major pain point. Even if a merchant has prepared a large number of coupons, users still need to open the app to view them.
The HMS Awareness Kit provides a comprehensive intelligent push solution. It accurately determines user intentions based on a combination of multiple perception states, helping business district and store apps push discount information the moment a potential customer picks up their phone, promoting sales. Awareness Kit can also accurately identify times and scenarios that are unsuitable for pushing information, avoiding frequent notifications.
What Is Awareness Kit?
Awareness Kit gives your app the ability to obtain users' contextual information, including the current time, location, activity status (behavior), headset status, vehicle-mounted connection status, ambient light, weather, and beacon connection status. Through the barrier (fence) capability, which can run in the background, Awareness Kit sends notifications to the app, enabling it to provide accurate and considerate services for users in a timely manner. These capabilities are still being expanded, and you can combine them into a combined fence to make your app's services even more intelligent and accurate.
With the support of the Awareness Kit, the app provides the following experience for users:
In your spare time, when you casually walk into a shopping mall or linger for several minutes outside a store, the notification bar on your phone reminds you that the mall has special discounts today. Maybe that's exactly what you need.
You do not need to open the app. Instead, you can directly tap the notification button to obtain all types of coupons.
You are automatically signed in at every popular store, bringing you one step closer to exclusive discounts.
In addition, the app can recommend the most appropriate discount information based on the weather and time. Developers can also set exclusion scenarios to avoid disturbing users when the weather or time is not suitable for shopping. The app can also decide whether to push messages based on the user's activity status: for example, if a user walks through or stops in a business district, a push is appropriate; if the user is cycling or driving, pushing should be avoided.
The Awareness Kit provides the following awareness functions:
Geo-fence awareness: The fence radius can be set freely. When a user enters, leaves, or stays in a fence (barrier), the app can remind the user. It is recommended that notifications be triggered when a user stays in a geo-fence, to avoid disturbing the user.
Weather, time, and activity (behavior) status awareness: Accurately identify shopping scenarios based on the time, weather, and activity status of users near the business district, to avoid ineffective reminders.
Beacon awareness: A beacon (Bluetooth-based hardware) is installed in a store, and a notification can be sent to a user when the user enters its range. The difference between a beacon and a geo-fence lies in the size of the range: a beacon generally has a sensing range of only around 10 meters, which enables precise reminders for individual shops within a shopping mall.
Advantages of the Awareness Kit
Users do not need to start the app in advance. After they enter the geo-fence range, the app can be activated in the background to trigger notifications.
Even if the app process is killed by the system, notifications can still be received through the fence service.
Tapping the notification activates the app's UI and takes the user directly to the app's recommendation page.
Precise pushes through combined fences: invalid notifications are not sent in scenarios where users do not need them, and frequent interruptions are avoided.
2 Development Preparations
Three key steps are required to integrate the Awareness Kit. For details, see the HUAWEI Developer documentation:
AppGallery Connect configuration
Integrating the HMS Awareness SDK
Configuring Obfuscation Scripts
https://developer.huawei.com/consumer/en/doc/development/HMS-Guides/awareness-preparation
3 Key Code Development Steps
1. Create a barrier.
Code:
double latitude = 22.4943;
double longitude = 113.7436;
double radius = 100;
long timeOfDuration = 30 * 1000L;
//Create a stay geo-fence. The longitude and latitude are those of the shop in the business district. The radius is set to 100 m and the residence duration is set to 30s.
AwarenessBarrier stayBarrier = LocationBarrier.stay(latitude, longitude, radius, timeOfDuration);
//Create a behavior barrier. The barrier covers the cycling and in-vehicle behavior states.
AwarenessBarrier behaviorBarrier = BehaviorBarrier.keeping(BehaviorBarrier.BEHAVIOR_ON_BICYCLE, BehaviorBarrier.BEHAVIOR_IN_VEHICLE);
//Create a time barrier. The status of the time barrier is true in the late night time segment (21:00 to 06:00 of the next day).
AwarenessBarrier timeBarrier = TimeBarrier.inTimeCategory(TimeBarrier.TIME_CATEGORY_NIGHT);
//Combine the three barriers. The status of the combined barrier changes to true only when all of the following conditions are met: the user stays in the geo-fence for more than 30s, the user is not in the cycling or in-vehicle state, and the current time is not in the night segment.
AwarenessBarrier combinedBarrier = AwarenessBarrier.and(stayBarrier, AwarenessBarrier.not(timeBarrier), AwarenessBarrier.not(behaviorBarrier));
//Create a pending intent. When the barrier status changes, the pending intent is triggered. The following uses starting a service as an example.
Intent intent = new Intent(context, DemoService.class);
PendingIntent pendingIntent;
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
//In Android 8.0 or later, only foreground services can be started when the app is running in the background.
pendingIntent = PendingIntent.getForegroundService(context, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT);
} else {
pendingIntent = PendingIntent.getService(context, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT);
}
// Create a label for the barrier. You can query or delete the barrier based on the label.
String combinedBarrierLabel = "combined barrier label";
2. Register the barrier.
Code:
//Register the created combined barrier, together with its label and pending intent, with Awareness Kit.
Awareness.getBarrierClient(context).updateBarriers(new BarrierUpdateRequest.Builder()
        .addBarrier(combinedBarrierLabel, combinedBarrier, pendingIntent).build())
.addOnSuccessListener(aVoid -> {
// The barrier is successfully registered.
Log.i(TAG, "add barrier success");
})
.addOnFailureListener(e -> {
// The barrier fails to be registered.
Log.e(TAG, "add barrier failed");
e.printStackTrace();
});
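As mentioned in step 1, the label can later be used to delete the barrier when it is no longer needed. Below is a minimal sketch, assuming the deleteBarrier method of BarrierUpdateRequest.Builder (verify the method name against your SDK version):
Code:
Awareness.getBarrierClient(context).updateBarriers(new BarrierUpdateRequest.Builder()
        .deleteBarrier(combinedBarrierLabel).build())
        .addOnSuccessListener(aVoid -> Log.i(TAG, "delete barrier success"))
        .addOnFailureListener(e -> Log.e(TAG, "delete barrier failed", e));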
3. Create a service to listen for the barrier event.
In this example, the PendingIntent of the combined barrier starts a service, so you need to define the corresponding service to monitor the barrier status.
Code:
public class DemoService extends IntentService {
    private static final String TAG = "DemoService";
    // Must match the label used when the barrier was registered.
    private static final String combinedBarrierLabel = "combined barrier label";

    public DemoService() {
        super("AwarenessDemoService");
    }

    @Override
    public void onCreate() {
        super.onCreate();
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
            NotificationManager manager = getSystemService(NotificationManager.class);
            if (manager == null) {
                return;
            }
            String channelId = "DemoService";
            NotificationChannel channel = new NotificationChannel(channelId, getResources().getString(R.string.app_name), NotificationManager.IMPORTANCE_LOW);
            manager.createNotificationChannel(channel);
            Notification notification = new Notification.Builder(this, channelId).build();
            startForeground(1, notification);
        }
    }

    @Override
    protected void onHandleIntent(@Nullable Intent intent) {
        if (intent == null) {
            Log.e(TAG, "Intent is null");
            return;
        }
        // Barrier information is transferred through intents. Parse it using the BarrierStatus.extract method.
        BarrierStatus barrierStatus = BarrierStatus.extract(intent);
        // Obtain the label and current status of the barrier through BarrierStatus.
        String barrierLabel = barrierStatus.getBarrierLabel();
        int status = barrierStatus.getPresentStatus();
        if (status == BarrierStatus.TRUE && barrierLabel.equals(combinedBarrierLabel)) {
            // The barrier status is true: the user has stayed near the store for more than 30s,
            // is not cycling or in a vehicle, and the current time is not in the night segment.
            // In this case, the discount information can be pushed to the user.
            // Send the notification here.
        }
    }
}
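Remember that the service must also be declared in AndroidManifest.xml so that the PendingIntent can start it. A minimal declaration, assuming DemoService lives in your app package, might look like this:
Code:
<service android:name=".DemoService" />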
Demo
We made a simple demo app to demonstrate the shopping experience improvements brought by Awareness Kit.
The code is open source. Please head over to GitHub:
https://github.com/Bun-Cheung/Awa-Shopping
How does this Kit restart an app after the app process is reclaimed?
Regardless of whether the screen is on or off, the app can be restarted through the PendingIntent. Currently, the Activity, Service, and Broadcast modes are supported.
Static broadcast is recommended. The PendingIntent is triggered whenever the barrier status changes, regardless of whether the barrier is true or false. Starting an Activity or Service consumes more system resources than sending a broadcast, so you can perform a preliminary logic check in the broadcast receiver and then proceed with the subsequent logic, as shown in the sketch below.
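To illustrate, a minimal sketch of the static broadcast approach. The receiver class name is illustrative; create the PendingIntent with PendingIntent.getBroadcast instead of getService, and register the receiver statically in AndroidManifest.xml:
Code:
public class BarrierBroadcastReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        BarrierStatus barrierStatus = BarrierStatus.extract(intent);
        // Perform a lightweight preliminary check here before doing heavier work.
        if (barrierStatus.getPresentStatus() == BarrierStatus.TRUE) {
            // Follow-up logic, for example posting a notification or starting a service.
        }
    }
}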
For more articles like this, visit the HUAWEI Developer Forum.
Hello everyone, in this article we're going to take a look at the Awareness Kit features so we can easily add them to our apps.
Providing dynamic and effective user experiences in an app is important, and Huawei Awareness Kit allows this to be done quickly and economically. It offers a collection of both contextual and location-based features, such as the user's current time, location, behavior, audio device status, ambient light, weather, and nearby beacons.
Huawei Awareness Kit also places strong emphasis on power and memory consumption when these features are accessed, helping to preserve your app's battery life and memory usage.
To use these features, Awareness Kit has two different sections:
Capture API
The Capture API allows your app to request the current user status, such as time, location, behavior, and whether a headset is connected.
Barrier API
The Barrier API allows your app to set a combination of contextual conditions. When the preset contextual conditions are met, your app will receive a notification.
I created a sample app which I’ve open-sourced on GitHub for enthusiasts who want to use this kit.
Setting up the Awareness Kit
You'll need to do some setup before you can use the Awareness Kit. You can follow the official documentation on how to prepare your app on AppGallery Connect.
Add the required dependencies to the build.gradle file under the app folder. We need the Awareness and Location Kit dependencies.
Code:
implementation 'com.huawei.hms:awareness:1.0.3.300'
implementation 'com.huawei.hms:location:4.0.4.300'
Add the required permissions to the AndroidManifest.xml file under the app/src/main folder.
Code:
<!-- Location permission, which is sensitive. After the permission is declared, you need to dynamically apply for it in the code. -->
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" /> <!-- Location permission. If only the IP address-based time capture function is used, you can declare only this permission. -->
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" /> <!-- If your app uses the barrier capability and runs on Android 10 or later, you need to add the background location access permission, which is also a sensitive permission. -->
<uses-permission android:name="android.permission.ACCESS_BACKGROUND_LOCATION" />
<uses-permission android:name="android.permission.BLUETOOTH" /> <!-- Behavior detection permission (Android 10 or later). This permission is sensitive and needs to be dynamically applied for in the code after being declared. -->
<uses-permission android:name="android.permission.ACTIVITY_RECOGNITION" /> <!-- Behavior detection permission (Android 9). This permission is sensitive and needs to be dynamically applied for in the code after being declared. -->
<uses-permission android:name="com.huawei.hms.permission.ACTIVITY_RECOGNITION" />
Don't forget to check permissions in the app.
Checking location permission:
Code:
if (ActivityCompat.checkSelfPermission(this, Manifest.permission.ACCESS_FINE_LOCATION) != PackageManager.PERMISSION_GRANTED) {
ActivityCompat.requestPermissions(this, arrayOf(Manifest.permission.ACCESS_FINE_LOCATION), PERMISSION_REQUEST_ACCESS_FINE_LOCATION)
return
}
Capture API
First, we're going to examine the Capture API in depth. This API allows us to request the current user status from the current environment. Using this API, we can retrieve data such as:
The current local time or the time of a specified location, with categories such as workday, weekend, and holiday.
The user's current location.
The user's current activity behavior, such as walking, running, cycling, driving, or staying still.
Whether the device has approached, connected to, or disconnected from a registered beacon.
Whether the user currently has their headphones plugged in or unplugged.
The status of an audio device (connected or disconnected).
The illuminance of the environment where the device is located, in lux.
The weather status of the user's current location.
Let's take a deeper look at the different awareness capabilities we can use through the Capture API.
Time Awareness
Using the Capture API, we can obtain holiday information for most countries/regions and the sunrise and sunset times of cities around the world. Some simple use cases:
A meditation app that shows a notification as a morning reminder to rest.
A gift card app that sends virtual holiday cards to users.
To obtain the time categories from the Capture API, we simply need to call the timeCategories method. If successful, this returns an instance of the TimeCategoriesResponse class containing the time categories as an integer array.
Code:
private fun getTimeCategories() {
try {
// Use getTimeCategories() to get the information about the current time of the user location.
// Time information includes whether the current day is a workday or a holiday, and whether it is currently morning, afternoon, evening, or night.
val task = Awareness.getCaptureClient(this).timeCategories
task.addOnSuccessListener { timeCategoriesResponse ->
val timeCategories = timeCategoriesResponse.timeCategories
// do anything with time categories array
}.addOnFailureListener { e ->
Toast.makeText(
applicationContext, "get Time Categories failed",
Toast.LENGTH_SHORT
).show()
Log.e(TAG, "get Time Categories failed", e)
}
} catch (e: Exception) {
logViewTv.text = "get Time Categories failed.Exception:" + e.message
}
}
Location Awareness
Using the Capture API, we can get the user's current location. Some simple use cases:
A shopping app can show nearby stores based on the user's current location.
An event app can show upcoming events based on the user's location.
To get the location from the Capture API, we simply need to call the location method. If successful, this returns an instance of the LocationResponse class containing the user's current location, including latitude, longitude, accuracy, and altitude.
Code:
private fun getLocation(){
Awareness.getCaptureClient(this).location
.addOnSuccessListener { locationResponse ->
val location: Location = locationResponse.location
val latitude = location.latitude
val longitude = location.longitude
val accuracy = location.accuracy
val altitude = location.altitude
}
.addOnFailureListener {
logViewTv.text = "get location failed:" + it.message
}
}
Behavior Awareness
We can use the Capture API to detect the user's current activity behavior. Some simple use cases:
A fitness app can log user activity in the background and build a graph from this data.
An audiobook app can recommend listening to a book when the user is walking.
To get the current user behavior from the Capture API, we simply need to call the behavior method. If successful, this returns an instance of the BehaviorResponse class containing information about the user's current activity.
Code:
private fun getUserBehavior(){
Awareness.getCaptureClient(this).behavior
.addOnSuccessListener { behaviorResponse ->
val behaviorStatus = behaviorResponse.behaviorStatus
val mostLikelyBehavior = behaviorStatus.mostLikelyBehavior
val probableBehaviors = behaviorStatus.probableBehavior
val mostProbableActivity = mostLikelyBehavior.type
val detectionTime: Long = behaviorStatus.time
val elapsedTime: Long = behaviorStatus.elapsedRealtimeMillis
}
.addOnFailureListener {
logViewTv.text = "get behavior failed: " + it.message
}
}
Beacon Awareness
We can use the Capture API to detect registered beacons nearby. You need to register beacon devices in your project; for details, please refer to Beacon Management. Some simple use cases:
When the user enters a store, the app can show them a special discount.
A bus app can show push notifications about buses going in a given direction.
To get nearby registered beacons from the Capture API, we simply need to call the getBeaconStatus method. If successful, this returns an instance of the BeaconStatusResponse class containing information about nearby registered beacons.
Code:
private fun getBeacons(){
val namespace = "sample namespace"
val type = "sample type"
val content = "sample".toByteArray()
val filter = BeaconStatus.Filter.match(namespace, type, content)
Awareness.getCaptureClient(this).getBeaconStatus(filter)
.addOnSuccessListener { beaconStatusResponse ->
val beaconDataList = beaconStatusResponse.beaconStatus.beaconData
if (beaconDataList != null && beaconDataList.size != 0) {
var i = 1
val builder = StringBuilder()
for (beaconData in beaconDataList) {
builder.append("Beacon Data ").append(i)
builder.append(" namespace:").append(beaconData.namespace)
builder.append(",type:").append(beaconData.type)
builder.append(",content:").append(Arrays.toString(beaconData.content))
builder.append("; ")
i++
}
logViewTv.text = builder.toString()
} else {
logViewTv.text = "no beacons match filter nearby"
}
}
.addOnFailureListener { e ->
logViewTv.text = "get beacon status failed: " + e.message
}
}
Headset Awareness
Using the Capture API, we can get information about whether headphones are connected. Some simple use cases:
A music app can stop playback when the user disconnects their headphones.
An audiobook app can show a notification for quick access when the user connects their headphones.
To obtain the headphone status from the Capture API, we simply need to call the headsetStatus method. If successful, this returns an instance of the HeadsetStatusResponse class containing the device's current headphone status.
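For illustration, a minimal sketch of this call (shown in Java; the accessor names below follow the standard HeadsetStatusResponse API, but verify them against your SDK version):
Code:
Awareness.getCaptureClient(context).getHeadsetStatus()
        .addOnSuccessListener(headsetStatusResponse -> {
            HeadsetStatus headsetStatus = headsetStatusResponse.getHeadsetStatus();
            if (headsetStatus.getStatus() == HeadsetStatus.CONNECTED) {
                // Headphones are connected; for example, offer to resume playback.
            }
        })
        .addOnFailureListener(e -> Log.e(TAG, "get headset status failed", e));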
Bluetooth Car Stereo Awareness
Using the Capture API, we can get information about whether a Bluetooth car stereo is connected. Some simple use cases:
A music app that offers to play a song when the car stereo connects.
A radio app that offers to play the radio when the car stereo connects.
To get the car stereo status from the Capture API, we simply need to call the getBluetoothStatus method. If successful, this returns an instance of the BluetoothStatusResponse class containing the car stereo status.
Code:
private fun getCarStereoStatus() {
val deviceType = 0 // Value 0 indicates a Bluetooth car stereo.
Awareness.getCaptureClient(this).getBluetoothStatus(deviceType)
.addOnSuccessListener { bluetoothStatusResponse ->
val bluetoothStatus = bluetoothStatusResponse.bluetoothStatus
val status = bluetoothStatus.status
val stateStr = "The Bluetooth car stereo is " + if (status == BluetoothStatus.CONNECTED) "connected" else "disconnected"
}
.addOnFailureListener { e ->
logViewTv.text = "get bluetooth status failed: " + e.message
}
}
Ambient Light Awareness
Using the Capture API, we can obtain information about the ambient light. Some simple use cases:
A book reading app can adjust the screen brightness according to the ambient light to protect the user's eyesight.
A video watching app can do the same while the user watches videos.
To get the ambient light level from the Capture API, we simply need to call the lightIntensity method. If successful, this returns an instance of the AmbientLightResponse class containing the ambient light level in lux.
Code:
private fun getAmbientLight() {
Awareness.getCaptureClient(this).lightIntensity
.addOnSuccessListener { ambientLightResponse ->
val ambientLightStatus = ambientLightResponse.ambientLightStatus
logViewTv.text = "Light intensity is " + ambientLightStatus.lightIntensity + " lux"
}
.addOnFailureListener { e ->
logViewTv.text = "get light intensity failed" + e.message
}
}
Weather Awareness
We can also use the Capture API to retrieve the weather status of the user's current location. Some simple use cases:
A music app can recommend a playlist to listen to, depending on the weather.
A fitness app can give users a weather alert before they go running outside.
To get the weather status from the Capture API, we simply need to call the weatherByDevice method. If successful, this returns an instance of the WeatherStatusResponse class.
Code:
private fun getWeather(){
Awareness.getCaptureClient(this).weatherByDevice
.addOnSuccessListener { weatherStatusResponse ->
val weatherStatus = weatherStatusResponse.weatherStatus
val weatherSituation = weatherStatus.weatherSituation
val situation = weatherSituation.situation
cityNameTv.text = "City: " + weatherSituation.city.name
weatherIdTv.text = "Weather id is: " + situation.weatherId
cnWeatherIdTv.text = "CN Weather id is: " + situation.cnWeatherId
temperatureCTv.text = "Temperature is: " + situation.temperatureC + "℃"
temperatureFTv.text = "Temperature is: " + situation.temperatureF + "℉"
windSpeedTv.text = "Wind speed is: " + situation.windSpeed
windDirectionTv.text = "Wind direction is: " + situation.windDir
humidityTv.text = "Humidity is : " + situation.humidity
}
.addOnFailureListener {
logViewTv.text = "get weather failed: " + it.message
}
}
Awareness Kit can also be useful for restaurants that want to reach their customers with a promotional message.
Introduction
In this article, we will learn how to implement Huawei Awareness Kit features so we can easily integrate them into our fitness application. Providing dynamic, real-time information to users is important. In this article, I will cover Time Awareness and Weather Awareness in detail.
What is Huawei Awareness kit Service?
Huawei Awareness Kit helps your app gain insight into a user's current situation more efficiently, making it possible to deliver a smarter, more considerate user experience. It provides the user's current time, location, behavior, audio device status, ambient light, weather, nearby beacons, application status, and dark mode status.
Huawei Awareness Kit also places strong emphasis on power and memory consumption when these features are accessed, helping to preserve your app's battery life and memory usage.
To use these features, Awareness Kit has two different sections:
Capture API
The Capture API allows your app to request the current user status, such as time, location, behavior, application, dark mode, Wi-Fi, screen and headset.
1. The user's current location.
2. The local time of a given location, plus additional information about that region, such as weekdays and holidays.
3. The user's ambient illumination level.
4. The user's current behavior, meaning their physical activity, including walking, staying still, running, driving, or cycling.
5. The weather status of the area the user is located in, including temperature, wind, humidity, weather ID, and province name.
6. The audio device status, specifically whether headphones are plugged in or not.
7. Beacons located nearby.
8. The Wi-Fi status, that is, whether the user is connected to Wi-Fi.
9. The device's dark mode status.
Barrier API
The Barrier API allows your app to set a combination of contextual conditions. When the preset contextual conditions are met, your app will receive a notification.
Advantages
1. Converged: Multi-dimensional and evolvable awareness capabilities can be called in a unified manner.
2. Accurate: The synergy of hardware and software makes data acquisition more accurate and efficient.
3. Fast: On-chip processing of local service requests and nearby access of cloud services promise a faster service response.
4. Economical: Sharing awareness capabilities avoids separate interactions between apps and the device, reducing system resource consumption. Collaborating with the EMUI (or Magic UI) and Kirin chip, Awareness Kit can even achieve optimal performance with the lowest power consumption.
Requirements
1. Any operating system (e.g., macOS, Linux, or Windows).
2. Any IDE with the Flutter SDK installed (e.g., IntelliJ, Android Studio, or VS Code).
3. A little knowledge of Dart and Flutter.
4. A brain to think with.
5. Minimum API level 24 is required.
6. An EMUI 5.0 or later device is required.
Setting up the Awareness kit
1. Before you start creating the application, make sure the project is connected to AppGallery Connect. For more information, check this link.
2. After that, follow the URL for the cross-platform plugins and download the required plugins.
3. Enable Awareness Kit in the Manage APIs section and add the plugin.
4. Add the required dependencies to the build.gradle file under the root folder.
Code:
maven {url 'http://developer.huawei.com/repo/'}
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
5. Add the required permissions to the AndroidManifest.xml file under the app/src/main folder.
Code:
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_BACKGROUND_LOCATION" />
<uses-permission android:name="android.permission.ACTIVITY_RECOGNITION" />
<uses-permission android:name="android.permission.BLUETOOTH" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
<uses-permission android:name="android.permission.INTERNET" />
6. After completing all the above steps, add the required kits' Flutter plugins as dependencies to the pubspec.yaml file. You can find all the plugins on pub.dev with the latest versions.
Code:
huawei_awareness:
path: ../huawei_awareness/
After adding them, run the flutter pub get command. Now all the plugins are ready to use.
Note: Set multiDexEnabled to true in the build.gradle file under the android/app directory, so that the app will not crash.
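For reference, a sketch of where this setting goes, inside the defaultConfig block of android/app/build.gradle:
Code:
defaultConfig {
    // ...
    multiDexEnabled true
}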
Use Awareness to get the Weather information
This service helps with fitness activities: based on the weather conditions, the user can choose between an indoor and an outdoor activity.
Before calling the service, we need to request the permissions when the app launches.
Code:
@override
void initState() {
checkPermissions();
requestPermissions();
super.initState();
}
void checkPermissions() async {
locationPermission = await AwarenessUtilsClient.hasLocationPermission();
backgroundLocationPermission =
await AwarenessUtilsClient.hasBackgroundLocationPermission();
activityRecognitionPermission =
await AwarenessUtilsClient.hasActivityRecognitionPermission();
if (locationPermission &&
backgroundLocationPermission &&
activityRecognitionPermission) {
setState(() {
permissions = true;
notifyAwareness();
});
}
}
void requestPermissions() async {
if (locationPermission == false) {
bool status = await AwarenessUtilsClient.requestLocationPermission();
setState(() {
locationPermission = status;
});
}
if (backgroundLocationPermission == false) {
bool status =
await AwarenessUtilsClient.requestBackgroundLocationPermission();
setState(() {
backgroundLocationPermission = status;
});
}
if (activityRecognitionPermission == false) {
bool status =
await AwarenessUtilsClient.requestActivityRecognitionPermission();
setState(() {
activityRecognitionPermission = status;
});
checkPermissions();
}
}
Once all the permissions are granted, we can call the Awareness service.
Code:
captureWeatherByDevice() async {
WeatherResponse response = await AwarenessCaptureClient.getWeatherByDevice();
setState(() {
List<HourlyWeather> hourlyList = response.hourlyWeather;
hourlyWeather = hourlyList[0];
switch (hourlyWeather.weatherId) {
case 1:
assetImage = "sunny.png";
break;
case 4:
assetImage = "cloudy.jpg";
break;
case 7:
assetImage = "cloudy.png";
break;
case 18:
assetImage = "rain.png";
break;
default:
assetImage = "sunny.png";
break;
}
setState(() {
timeInfoStr = timeInfoStr + ' ' + hourlyWeather.tempC.toString() + ' ' + "°C";
});
});
}
This returns a WeatherResponse instance containing information including, but not limited to, the area, temperature, humidity, and wind speed and direction. Based on the temperature, humidity, and wind speed results, we create various if conditions to check whether those values are within normal ranges, and we assign values to temporary integers accordingly. We will use these integers later when we send a notification to the device.
Use Awareness to get the Time Categories information
Awareness Kit can detect the time at the user's location, including whether it is a weekend, holiday, or workday, the sunrise and sunset times, and other detailed information. You can also set time-sensitive notifications based on these conditions. For example, if tomorrow is a holiday, we can notify the user so they can plan for the day.
Code:
getTimeCategories() async {
TimeCategoriesResponse response =
await AwarenessCaptureClient.getTimeCategories();
if (response != null) {
setState(() {
List<int> categoriesList = response.timeCategories;
var categories = categoriesList[2];
switch (categories) {
case 1:
timeInfoStr = "Good Morning ❤";
break;
case 2:
timeInfoStr = "Good Afternoon ❤";
break;
case 3:
timeInfoStr = "Good Evening ❤";
break;
case 4:
timeInfoStr = "Good Night ❤";
break;
default:
timeInfoStr = "Unknown";
break;
}
});
}
}
If successful, the getTimeCategories method returns a TimeCategoriesResponse containing an array of time category constants.
Note: Refer to this URL for the constant values.
Demo
Tips and Tricks
1. Download the latest HMS Flutter plugin.
2. Set minSdkVersion to 24 or later.
3. Do not forget to run flutter pub get after adding dependencies.
4. The latest HMS Core (APK) is required.
5. Refer to this URL for the list of supported devices.
Conclusion
In this article, I have covered two services: Time Awareness and Weather Awareness. Based on the weather conditions, the app suggests activities to the user and notifies them of the temperature.
Thanks for reading! If you enjoyed this story, please click the Like button and Follow. Feel free to leave a Comment below.
Reference
Awareness Kit URL
Original Source
Nowadays, apps use device location to provide a wide variety of services. You may want to see a user's location to show services around them, learn the habits of users in different areas, or perform location-based actions, like a ride-hailing app calculating travel distance and trip fare.
In order to protect user privacy, however, most apps request device location only when they run in the foreground. Once switched to the background, or even with the device screen turned off, apps are usually not allowed to request device location. As a result, they cannot record the GPS track of a device, which negatively affects the user experience of core functions of many apps, such as ride-hailing, trip-sharing, and exercise, all of which need to track location in real time.
To ensure this kind of app is able to function properly, the app developer usually needs to give their app the capability to obtain device location when the app is running in the background. So, is there a service which the app can use to continuously request device location in the background? Fortunately, HMS Core Location Kit offers the background location function necessary to do just that.
In this article, I'll show you how to use the background location function to request device location continuously in the background.
Background Location Implementation Method
Consider an app running on a non-Huawei phone with the location function enabled using LocationCallback. After the app is switched to the background, it will be unable to request device location.
To enable the app to continuously request device location after it is switched to the background, the app developer can use the enableBackgroundLocation method to create a foreground service. This increases the frequency with which the app requests location updates in the background. The foreground service is the most important thing for enabling successful background location because it is a service with the highest priority and the lowest probability of being ended by the system.
Note that the background location function itself cannot request device location. It needs to be used together with the location function enabled using LocationCallback. Location data needs to be obtained using the onLocationResult(LocationResult locationResult) method in the LocationCallback object.
Background Location Notes
1. The app needs to be running on a non-Huawei phone.
2. The app must be granted permission to always obtain device location.
3. The app must not be frozen by the power saving app on the device. For example, on a vivo phone, you need to allow the app to consume power when running in the background.
Demo Testing Notes
1. It is recommended not to charge the device during the test, because the app's power consumption may not be limited while the device is charging.
2. To determine if the app is requesting device location, you can check whether the positioning icon is displayed in the device's status bar. On vivo, for example, the phone will show a positioning icon in the status bar when an app requests the phone's location. If background location is disabled for the app, the positioning icon in the status bar will disappear after the app is switched to the background. If background location is enabled for the app, however, the phone will continue to show the positioning icon in the status bar after the app is switched to the background.
Development Preparation
Before getting started with the development, you will need to integrate the Location SDK into your app. If you are using Android Studio, you can integrate the SDK via the Maven repository.
Here, I won't be describing how to integrate the SDK. You can click here to learn about the details.
Background Location Development Procedure
After integrating the SDK, perform the following steps to use the background location function:
1. Declare the background location service in the AndroidManifest.xml file.
Code:
<service
android:name="com.huawei.location.service.BackGroundService"
android:foregroundServiceType="location" />
Generally, there are three location permissions in Android:
ACCESS_COARSE_LOCATION: This is the approximate location permission, which allows an app to obtain an approximate location, accurate to city block level.
ACCESS_FINE_LOCATION: This is the precise location permission, which allows an app to obtain a precise location, more accurate than the one obtained based on the ACCESS_COARSE_LOCATION permission.
ACCESS_BACKGROUND_LOCATION: This is the background location permission, which allows an app to obtain device location when running in the background in Android 10. In Android 10 or later, this permission is a dangerous permission and needs to be dynamically applied for. In versions earlier than Android 10, an app can obtain device location regardless of whether it runs in the foreground or background, as long as it is assigned the ACCESS_COARSE_LOCATION permission.
In Android 11 or later, these three permissions cannot be dynamically applied for at the same time. The ACCESS_BACKGROUND_LOCATION permission can only be applied for after the ACCESS_COARSE_LOCATION and ACCESS_FINE_LOCATION permissions are granted.
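To illustrate the ordering requirement, a minimal sketch of the two-step request flow (request codes are arbitrary values chosen for this example):
Code:
// Step 1: request the foreground location permissions first.
if (ActivityCompat.checkSelfPermission(this, Manifest.permission.ACCESS_FINE_LOCATION)
        != PackageManager.PERMISSION_GRANTED) {
    ActivityCompat.requestPermissions(this,
            new String[]{Manifest.permission.ACCESS_COARSE_LOCATION,
                    Manifest.permission.ACCESS_FINE_LOCATION}, 1);
} else if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q
        && ActivityCompat.checkSelfPermission(this, Manifest.permission.ACCESS_BACKGROUND_LOCATION)
        != PackageManager.PERMISSION_GRANTED) {
    // Step 2: request background location only after foreground location is granted.
    ActivityCompat.requestPermissions(this,
            new String[]{Manifest.permission.ACCESS_BACKGROUND_LOCATION}, 2);
}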
2. Declare the FOREGROUND_SERVICE permission in the AndroidManifest.xml file to ensure that the background location permission can be used properly.
Note that this step is required only for Android 9 or later. As I mentioned earlier, the foreground service has the highest priority and the lowest probability of being ended by the system. This is a basis for successful background location.
Code:
<uses-permission android:name="android.permission.FOREGROUND_SERVICE" />
3. Create a Notification object.
Code:
public class NotificationUtil {
    public static final int NOTIFICATION_ID = 1;

    public static Notification getNotification(Context context) {
        Notification.Builder builder;
        Notification notification;
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
            NotificationManager notificationManager =
                    (NotificationManager) context.getSystemService(Context.NOTIFICATION_SERVICE);
            String channelId = context.getPackageName();
            NotificationChannel notificationChannel =
                    new NotificationChannel(channelId, "LOCATION", NotificationManager.IMPORTANCE_LOW);
            notificationManager.createNotificationChannel(notificationChannel);
            builder = new Notification.Builder(context, channelId);
        } else {
            builder = new Notification.Builder(context);
        }
        builder.setSmallIcon(R.drawable.ic_launcher)
                .setContentTitle("Location SDK")
                .setContentText("Running in the background")
                .setWhen(System.currentTimeMillis());
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN) {
            notification = builder.build();
        } else {
            notification = builder.getNotification();
        }
        return notification;
    }
}
4. Initialize the FusedLocationProviderClient object.
Code:
FusedLocationProviderClient mFusedLocationProviderClient = LocationServices.getFusedLocationProviderClient(this);
5. Enable the background location function.
Code:
private void enableBackgroundLocation() {
    mFusedLocationProviderClient
            .enableBackgroundLocation(NotificationUtil.NOTIFICATION_ID, NotificationUtil.getNotification(this))
            .addOnSuccessListener(new OnSuccessListener<Void>() {
                @Override
                public void onSuccess(Void aVoid) {
                    LocationLog.i(TAG, "enableBackgroundLocation onSuccess");
                }
            })
            .addOnFailureListener(new OnFailureListener() {
                @Override
                public void onFailure(Exception e) {
                    LocationLog.e(TAG, "enableBackgroundLocation onFailure:" + e.getMessage());
                }
            });
}
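As noted earlier, background location only works together with the LocationCallback-based location function. A minimal sketch of that part, with an illustrative update interval (check the HMS Location Kit API reference for the exact class and method signatures):
Code:
LocationRequest locationRequest = new LocationRequest();
locationRequest.setInterval(5000); // Illustrative interval in milliseconds.
LocationCallback locationCallback = new LocationCallback() {
    @Override
    public void onLocationResult(LocationResult locationResult) {
        if (locationResult != null) {
            // Obtain and process the device location here.
            Location location = locationResult.getLastLocation();
        }
    }
};
mFusedLocationProviderClient.requestLocationUpdates(locationRequest, locationCallback, Looper.getMainLooper());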
Congratulations, your app is now able to request device location when running in the background.
Conclusion
With the booming of mobile Internet, mobile apps have been playing a more and more important role in our daily lives. Many apps now offer location-based services to create a better user experience. However, most apps can request device location only when they run in the foreground, in order to protect user privacy. To ensure this kind of app functions properly, it is necessary for the app to receive location information while it is running in the background or the device screen is off.
My app can now do just that, thanks to the background location capability in Location Kit. If you want your app to do the same, give this capability a try and you may be surprised.
I dare say there are two types of people in this world: people who love Toy Story and people who have not watched it.
Well, this is just the opinion of a huge fan of the animation film. When I was a child, I always dreamed of having toys that could move and play with me, like my own Buzz Lightyear. Thanks to a fancy technique called rigging, I can now bring my toys to life, albeit I'm probably too old for them now.
What Is Rigging in 3D Animation and Why Do We Need It?
Put simply, rigging is a process whereby a skeleton is created for a 3D model to make it move. In other words, rigging creates a set of connected virtual bones that are used to control a 3D model.
It paves the way for animation because it enables a model to be deformed, making it moveable, which is the very reason that rigging is necessary for 3D animation.
What Is Auto Rigging
3D animation has been adopted by mobile apps in a number of fields (gaming, e-commerce, video, and more) to achieve more realistic animations than 2D.
However, this graphic technique has daunted many developers (like me) because rigging, one of its major prerequisites, is difficult and time-consuming for people unfamiliar with modeling. Specifically, most high-performing rigging solutions have many requirements: the input model should be in a standard position, seven or eight key skeletal points should be added, inverse kinematics must be added to the bones, and more.
Luckily, there are solutions that can automatically complete rigging, such as the auto rigging solution from HMS Core 3D Modeling Kit.
This capability delivers a wholly automated rigging process, requiring just a biped humanoid model that is generated using images taken from a mobile phone camera. After the model is input, auto rigging uses AI algorithms for limb rigging and generates the model skeleton and skin weights (which determine the degree to which a bone can influence a part of the mesh). Then, the capability changes the orientation and position of the skeleton so that the model can perform a range of preset actions, like walking, running, and jumping. Besides, the rigged model can also be moved according to an action generated by using motion capture technology, or be imported into major 3D engines for animation.
Lower requirements do not compromise rigging accuracy. Auto rigging is built upon hundreds of thousands of 3D model rigging data records. Thanks to some fine-tuned data records, the capability delivers ideal algorithm accuracy and generalization.
I know that words alone are no proof, so check out the animated model I've created using the capability.
Movement is smooth, making the cute panda move almost like a real one. Now I'd like to show you how I created this model and how I integrated auto rigging into my demo app.
Integration Procedure
Preparations
Before moving on to the real integration work, make the necessary preparations, which include:
Configure app information in AppGallery Connect.
Integrate the HMS Core SDK with the app project, which includes Maven repository address configuration.
Configure obfuscation scripts.
Declare necessary permissions.
Capability Integration
1. Set an access token or API key, which can be found in agconnect-services.json, during app initialization for app authentication.
Using the access token: Call setAccessToken to set an access token. This task is required only once during app initialization.
Code:
ReconstructApplication.getInstance().setAccessToken("your AccessToken");
Using the API key: Call setApiKey to set an API key. This key only needs to be set once.
Code:
ReconstructApplication.getInstance().setApiKey("your api_key");
The access token approach is recommended. If you prefer the API key, note that it is assigned to the app when the app is created in AppGallery Connect.
2. Create a 3D object reconstruction engine and initialize it. Then, create an auto rigging configurator.
Code:
// Create a 3D object reconstruction engine.
Modeling3dReconstructEngine modeling3dReconstructEngine = Modeling3dReconstructEngine.getInstance(context);
// Create an auto rigging configurator.
Modeling3dReconstructSetting setting = new Modeling3dReconstructSetting.Factory()
// Set the working mode of the engine to PICTURE.
.setReconstructMode(Modeling3dReconstructConstants.ReconstructMode.PICTURE)
// Set the task type to auto rigging.
.setTaskType(Modeling3dReconstructConstants.TaskType.AUTO_RIGGING)
.create();
3. Create a listener for the result of uploading images of an object.
Code:
private Modeling3dReconstructUploadListener uploadListener = new Modeling3dReconstructUploadListener() {
@Override
public void onUploadProgress(String taskId, double progress, Object ext) {
// Callback when the upload progress is received.
}
@Override
public void onResult(String taskId, Modeling3dReconstructUploadResult result, Object ext) {
// Callback when the upload is successful.
}
@Override
public void onError(String taskId, int errorCode, String message) {
// Callback when the upload failed.
}
};
4. Use a 3D object reconstruction configurator to initialize the task, set an upload listener for the engine created in step 1, and upload images.
Code:
// Use the configurator to initialize the task, which should be done in a sub-thread.
Modeling3dReconstructInitResult modeling3dReconstructInitResult = modeling3dReconstructEngine.initTask(setting);
String taskId = modeling3dReconstructInitResult.getTaskId();
// Set an upload listener.
modeling3dReconstructEngine.setReconstructUploadListener(uploadListener);
// Call the uploadFile API of the 3D object reconstruction engine to upload images.
modeling3dReconstructEngine.uploadFile(taskId, filePath);
5. Query the status of the auto rigging task.
Code:
// Initialize the task processing class.
Modeling3dReconstructTaskUtils modeling3dReconstructTaskUtils = Modeling3dReconstructTaskUtils.getInstance(context);
// Call queryTask in a sub-thread to query the task status.
Modeling3dReconstructQueryResult queryResult = modeling3dReconstructTaskUtils.queryTask(taskId);
// Obtain the task status.
int status = queryResult.getStatus();
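Since reconstruction and rigging run asynchronously on the cloud, the status usually needs to be polled. A minimal sketch of such a loop, where isTerminal is a hypothetical helper you would implement against the status constants in Modeling3dReconstructConstants.ProgressStatus (see the API reference for the exact names):
Code:
// Run on a worker thread; returns the final task status.
private int waitForTask(Modeling3dReconstructTaskUtils taskUtils, String taskId) throws InterruptedException {
    while (true) {
        Modeling3dReconstructQueryResult queryResult = taskUtils.queryTask(taskId);
        int status = queryResult.getStatus();
        if (isTerminal(status)) { // Hypothetical helper mapping to the kit's status constants.
            return status;
        }
        Thread.sleep(10_000); // Illustrative 10-second polling interval.
    }
}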
6. Create a listener for the result of model file download.
Code:
private Modeling3dReconstructDownloadListener modeling3dReconstructDownloadListener = new Modeling3dReconstructDownloadListener() {
@Override
public void onDownloadProgress(String taskId, double progress, Object ext) {
// Callback when download progress is received.
}
@Override
public void onResult(String taskId, Modeling3dReconstructDownloadResult result, Object ext) {
// Callback when download is successful.
}
@Override
public void onError(String taskId, int errorCode, String message) {
// Callback when download failed.
}
};
7. Pass the download listener to the 3D object reconstruction engine, to download the rigged model.
Code:
// Set download configurations.
Modeling3dReconstructDownloadConfig downloadConfig = new Modeling3dReconstructDownloadConfig.Factory()
// Set the model file format to OBJ or glTF.
.setModelFormat(Modeling3dReconstructConstants.ModelFormat.OBJ)
// Set the texture map mode to normal mode or PBR mode.
.setTextureMode(Modeling3dReconstructConstants.TextureMode.PBR)
.create();
// Set the download listener.
modeling3dReconstructEngine.setReconstructDownloadListener(modeling3dReconstructDownloadListener);
// Call downloadModelWithConfig, passing the task ID, path to which the downloaded file will be saved, and download configurations, to download the rigged model.
modeling3dReconstructEngine.downloadModelWithConfig(taskId, fileSavePath, downloadConfig);
Where to Use
Auto rigging is used in many scenarios, for example:
Gaming. The most direct way of using auto rigging is to create moveable characters in a 3D game. Or, I think we can combine it with AR to create animated models that can appear in the camera display of a mobile device, which will interact with users.
Online education. We can use auto rigging to animate 3D models of popular toys, and liven them up with dance moves, voice-overs, and nursery rhymes to create educational videos. These models can be used in educational videos to appeal to kids more.
E-commerce. Anime figurines look rather plain compared to how they behave in animes. To spice up the figurines, we can use auto rigging to animate 3D models that will look more engaging and dynamic.
Conclusion
3D animation is widely used in mobile apps, because it presents objects in a more fun and interactive way.
A key technique for creating great 3D animations is rigging. Conventional rigging requires modeling know-how and expertise, which puts off many amateur modelers.
Auto rigging is the perfect solution to this challenge because its full-automated rigging process can produce highly accurate rigged models that can be easily animated using major engines available on the market. Auto rigging not only lowers the costs and requirements of 3D model generation and animation, but also helps 3D models look more appealing.
Influencers have become increasingly important, as more and more consumers choose to purchase items online – whether on Amazon, Taobao, or one of the many other prominent e-commerce platforms. Brands and merchants have spent a lot of money finding influencers to promote their products through live streams and consumer interactions, and many purchases are made on the recommendation of a trusted influencer.
However, employing a public-facing influencer can be costly and risky. Many brands and merchants have opted instead to host live streams with their own virtual characters. This gives them more freedom to showcase their products, and widens the pool of potential on camera talent. For consumers, virtual characters can add fun and whimsy to the shopping experience.
E-commerce platforms have begun to accommodate the preference for anonymous livestreaming, by offering a range of important capabilities, such as those that allow for automatic identification, skeleton point-based motion tracking in real time (as shown in the gif), facial expression and gesture identification, copying of traits to virtual characters, a range of virtual character models for users to choose from, and natural real-world interactions.
Building these capabilities comes with its share of challenges. For example, after finally building a model that is able to translate a person's every gesture, expression, and movement into real-time parameters and apply them to a virtual character, you may find that the virtual character can't be blocked by real bodies during the livestream, which gives it a fake, ghost-like appearance. This is a problem I encountered when I developed my own e-commerce app, and it occurred because I did not occlude the bodies that appeared behind and in front of the virtual character. Fortunately, I was able to find an SDK that helped me solve this problem: HMS Core AR Engine.
This toolkit provides a range of capabilities that make it easy to incorporate AR-powered features into apps. From hit testing and motion tracking to environment mesh and image tracking, it's got just about everything you need. The human body occlusion capability was exactly what I needed at the time.
Now I'll show you how I integrated this toolkit into my app, and how helpful it's been for me.
First I registered for an account on the HUAWEI Developers website, downloaded the AR Engine SDK, and followed the step-by-step development guide to integrate the SDK. The integration process was quite simple and did not take too long. Once the integration was successful, I ran the demo on a test phone, and was amazed to see how well it worked. During livestreams my app was able to recognize and track the areas where I was located within the image, with an accuracy of up to 90%, and provided depth-related information about the area. Better yet, it was able to identify and track the profile of up to two people, and output the occlusion information and skeleton points corresponding to the body profiles in real time. With this capability, I was able to implement a lot of engaging features, for example, changing backgrounds, hiding virtual characters behind real people, and even a feature that allows the audience to interact with the virtual character through special effects. All of these features have made my app more immersive and interactive, which makes it more attractive to potential shoppers.
Demo
As shown in the gif below, the person blocks the virtual panda when walking in front of it.
How to Develop
Preparations
Registering as a developer
Before getting started, you will need to register as a Huawei developer and complete identity verification on HUAWEI Developers. You can click here to find out the detailed registration and identity verification procedure.
Creating an app
Create a project and create an app under the project. Pay attention to the following parameter settings:
Platform: Select Android.
Device: Select Mobile phone.
App category: Select App or Game.
Integrating the AR Engine SDK
Before development, integrate the AR Engine SDK into your development environment via the Maven repository.
Configuring the Maven repository address for the AR Engine SDK
The procedure for configuring the Maven repository address in Android Studio differs for Gradle plugin versions earlier than 7.0, Gradle plugin 7.0, and Gradle plugin 7.1 or later. Configure it according to your specific Gradle plugin version.
Adding build dependencies
Open the build.gradle file in the app directory of your project.
Add a build dependency in the dependencies block.
Code:
dependencies {
implementation 'com.huawei.hms:arenginesdk:{version}'
}
Open the modified build.gradle file again. You will find a Sync Now link in the upper right corner of the page. Click Sync Now and wait until synchronization is complete.
Developing Your App
Checking the Availability
Check whether AR Engine has been installed on the current device. If so, the app can run properly. If not, the app prompts the user to install AR Engine, for example, by redirecting the user to AppGallery. The code is as follows:
Code:
boolean isInstallArEngineApk = AREnginesApk.isAREngineApkReady(this);
if (!isInstallArEngineApk) {
// ConnectAppMarketActivity.class is the activity for redirecting users to AppGallery.
startActivity(new Intent(this, com.huawei.arengine.demos.common.ConnectAppMarketActivity.class));
isRemindInstall = true;
}
Create a BodyActivity object to display body bones and output human body features, so that AR Engine can recognize the human body.
Code:
public class BodyActivity extends BaseActivity {
    private BodyRendererManager mBodyRendererManager;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Initialize surfaceView. The layout ID is illustrative; mSurfaceView is assumed to be a field of BaseActivity.
        mSurfaceView = findViewById(R.id.body_surface_view);
        // Context for keeping the OpenGL ES running.
        mSurfaceView.setPreserveEGLContextOnPause(true);
        // Set the OpenGL ES version.
        mSurfaceView.setEGLContextClientVersion(2);
        // Set the EGL configuration chooser, including the number of bits of the color buffer and the number of depth bits (values illustrative).
        mSurfaceView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
        mBodyRendererManager = new BodyRendererManager(this);
        mSurfaceView.setRenderer(mBodyRendererManager);
        mSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_CONTINUOUSLY);
    }

    @Override
    protected void onResume() {
        super.onResume();
        // Initialize ARSession to manage the entire running status of AR Engine.
        if (mArSession == null) {
            mArSession = new ARSession(this.getApplicationContext());
            mArConfigBase = new ARBodyTrackingConfig(mArSession);
            mArConfigBase.setEnableItem(ARConfigBase.ENABLE_DEPTH | ARConfigBase.ENABLE_MASK);
            mArConfigBase.setFocusMode(ARConfigBase.FocusMode.AUTO_FOCUS);
            mArSession.configure(mArConfigBase);
        }
        // Pass the required parameters to setBodyMask.
        mBodyRendererManager.setBodyMask(((mArConfigBase.getEnableItem() & ARConfigBase.ENABLE_MASK) != 0) && mIsBodyMaskEnable);
        sessionResume(mBodyRendererManager);
    }
}
Create a BodyRendererManager object to render the body data obtained by AR Engine.
Code:
public class BodyRendererManager extends BaseRendererManager {
    public void drawFrame() {
        try {
            // Obtain the set of all traceable objects of the specified type.
            Collection<ARBody> bodies = mSession.getAllTrackables(ARBody.class);
            for (ARBody body : bodies) {
                if (body.getTrackingState() != ARTrackable.TrackingState.TRACKING) {
                    continue;
                }
                mBody = body;
                hasBodyTracking = true;
            }
            // Update the body recognition information displayed on the screen.
            StringBuilder sb = new StringBuilder();
            updateMessageData(sb, mBody);
            Size textureSize = mSession.getCameraConfig().getTextureDimensions();
            if (mIsWithMaskData && hasBodyTracking && mBackgroundDisplay instanceof BodyMaskDisplay) {
                ((BodyMaskDisplay) mBackgroundDisplay).onDrawFrame(mArFrame, mBody.getMaskConfidence(),
                        textureSize.getWidth(), textureSize.getHeight());
            }
            // Display the updated body information on the screen.
            mTextDisplay.onDrawFrame(sb.toString());
            for (BodyRelatedDisplay bodyRelatedDisplay : mBodyRelatedDisplays) {
                bodyRelatedDisplay.onDrawFrame(bodies, mProjectionMatrix);
            }
        } catch (ArDemoRuntimeException e) {
            LogUtil.error(TAG, "Exception on the ArDemoRuntimeException!");
        } catch (ARFatalException | IllegalArgumentException | ARDeadlineExceededException |
                ARUnavailableServiceApkTooOldException t) {
            // Log the exception.
            LogUtil.error(TAG, "Exception on drawFrame: " + t.getMessage());
        }
    }

    // Update gesture-related data for display.
    private void updateMessageData(StringBuilder sb, ARBody body) {
        if (body == null) {
            return;
        }
        float fpsResult = doFpsCalculate();
        sb.append("FPS=").append(fpsResult).append(System.lineSeparator());
        int bodyAction = body.getBodyAction();
        sb.append("bodyAction=").append(bodyAction).append(System.lineSeparator());
    }
}
Customize the camera preview class, which implements human body drawing based on a certain confidence level.
Code:
public class BodyMaskDisplay implements BaseBackGroundDisplay {}
Obtain skeleton data and pass it to OpenGL ES, which renders the data and displays it on the screen.
Code:
public class BodySkeletonDisplay implements BodyRelatedDisplay {}
Obtain skeleton point connection data and pass it to OpenGL ES, which renders the data and displays it on the screen.
Code:
public class BodySkeletonLineDisplay implements BodyRelatedDisplay {}
Conclusion
True-to-life AR live-streaming is now an essential feature in e-commerce apps, but developing this capability from scratch can be costly and time-consuming. AR Engine SDK is the best and most convenient SDK I've encountered, and it's done wonders for my app, by recognizing individuals within images with accuracy as high as 90%, and providing the detailed information required to support immersive, real-world interactions. Try it out on your own app to add powerful and interactive features that will have your users clamoring to shop more!
References
AR Engine Development Guide
Sample Code
API Reference