Build an Emoji Making App Effortlessly - Huawei Developers

Emojis are a must-have tool in today's online communications as they help add color to text-based chatting and allow users to better express the emotions behind their words. Since the number of preset emojis is always limited, many apps now allow users to create their own custom emojis to keep things fresh and exciting.
For example, in a social media app, users who do not want to show their faces when making video calls can use an animated character to protect their privacy, with their facial expressions applied to the character; in a live streaming or e-commerce app, virtual streamers with realistic facial expressions are much more likely to attract watchers; in a video or photo shooting app, users can control the facial expressions of an animated character when taking a selfie, and then share the selfie via social media; and in an educational app for kids, a cute animated character with detailed facial expressions will make online classes much more fun and engaging for students.
I myself am developing such a messaging app. When chatting with friends and wanting to express themselves in ways other than words, users of my app can take a photo to create an emoji of themselves, or of an animated character they have selected. The app will then identify users' facial expressions, and apply their facial expressions to the emoji. In this way, users are able to create an endless number of unique emojis. During the development of my app, I used the capabilities provided by HMS Core AR Engine to track users' facial expressions and convert the facial expressions into parameters, which greatly reduced the development workload. Now I will show you how I managed to do this.
Implementation
AR Engine provides apps with the ability to track and recognize facial expressions in real time, which can then be converted into facial expression parameters and used to accurately control the facial expressions of virtual characters.
Currently, AR Engine provides 64 facial expressions, including eyelid, eyebrow, eyeball, mouth, and tongue movements. It supports 21 eye-related movements, including eyeball movement and opening and closing the eyes; 28 mouth movements, including opening the mouth, puckering, pulling, or licking the lips, and moving the tongue; as well as 5 eyebrow movements, including raising or lowering the eyebrows.
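To give a sense of how these expression parameters might be consumed, here is a minimal, hypothetical sketch that drives a virtual character from the ARFace object obtained during tracking (named face in the onDrawFrame snippet later in this article). The accessor names getFaceBlendShapes(), getBlendShapeData(), and getBlendShapeCount() are assumptions based on the AR Engine face tracking API and may differ in your SDK version; virtualCharacter is your own model wrapper, not part of AR Engine.
Code:
// Hypothetical sketch: drive a virtual character with the tracked expression weights.
// getFaceBlendShapes(), getBlendShapeData(), and getBlendShapeCount() are assumed names;
// verify them against your AR Engine SDK version.
ARFaceBlendShapes blendShapes = face.getFaceBlendShapes();
float[] weights = blendShapes.getBlendShapeData(); // one value in [0, 1] per expression
for (int i = 0; i < blendShapes.getBlendShapeCount(); i++) {
    // virtualCharacter is your own rendering-side model wrapper, not part of AR Engine.
    virtualCharacter.setBlendShapeWeight(i, weights[i]);
}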
Demo
Facial expression based emoji
Development Procedure
Requirements on the Development Environment
JDK: 1.8.211 or later
Android Studio: 3.0 or later
minSdkVersion: 26 or later
targetSdkVersion: 29 (recommended)
compileSdkVersion: 29 (recommended)
Gradle version: 6.1.1 or later (recommended)
Make sure that you have downloaded the AR Engine APK from AppGallery and installed it on the device.
Test device: see Software and Hardware Requirements of AR Engine Features
If you need to use multiple HMS Core kits, use the latest versions required for these kits.
Preparations
1. Before getting started, you will need to register as a Huawei developer and complete identity verification on HUAWEI Developers. You can click here to find out the detailed registration and identity verification procedure.
2. Before development, integrate the AR Engine SDK via the Maven repository into your development environment.
3. The procedure for configuring the Maven repository address in Android Studio varies for Gradle plugin earlier than 7.0, Gradle plugin 7.0, and Gradle plugin 7.1 or later. You need to configure it according to the specific Gradle plugin version.
4. Take Gradle plugin 7.0 as an example:
Open the project-level build.gradle file in your Android Studio project and configure the Maven repository address.
Go to buildscript > repositories and configure the Maven repository address for the SDK.
Code:
buildscript {
repositories {
google()
jcenter()
maven {url "https://developer.huawei.com/repo/" }
}
}
Open the project-level settings.gradle file and configure the Maven repository address for the HMS Core SDK.
Code:
dependencyResolutionManagement {
repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS)
repositories {
google()
jcenter()
maven { url "https://developer.huawei.com/repo/" }
}
}
5. Add the following build dependency in the dependencies block.
Code:
dependencies {
implementation 'com.huawei.hms:arenginesdk:{version}'
}
App Development
1. Check whether AR Engine has been installed on the current device. If yes, your app can run properly. If not, you need to prompt the user to install it, for example, by redirecting the user to AppGallery. The sample code is as follows:
Code:
boolean isInstallArEngineApk = AREnginesApk.isAREngineApkReady(this);
if (!isInstallArEngineApk) {
// ConnectAppMarketActivity.class is the activity for redirecting users to AppGallery.
startActivity(new Intent(this, com.huawei.arengine.demos.common.ConnectAppMarketActivity.class));
isRemindInstall = true;
}
2. Create an AR scene. AR Engine supports five scenes: motion tracking (ARWorldTrackingConfig), face tracking (ARFaceTrackingConfig), hand recognition (ARHandTrackingConfig), human body tracking (ARBodyTrackingConfig), and image recognition (ARImageTrackingConfig).
The following takes creating a face tracking scene by calling ARFaceTrackingConfig as an example.
Code:
// Create an ARSession object.
mArSession = new ARSession(this);
// Select a specific Config to initialize the ARSession object based on the application scenario.
ARFaceTrackingConfig mArConfig = new ARFaceTrackingConfig(mArSession);
Set scene parameters using the config.setXXX method.
Code:
// Set the camera opening mode, which can be external or internal. The external mode can only be used in ARFace; this sample uses the external input mode.
mArConfig.setImageInputMode(ARConfigBase.ImageInputMode.EXTERNAL_INPUT_ALL);
3. Set the AR scene parameters for face tracking and start face tracking.
Code:
mArSession.configure(mArConfig);
mArSession.resume();
4. Initialize the FaceGeometryDisplay class to obtain the facial geometric data and render the data on the screen.
Code:
public class FaceGeometryDisplay {
// Initialize the OpenGL ES rendering related to face geometry, including creating the shader program.
void init(Context context) {...
}
}
5. Implement the onDrawFrame method in the FaceGeometryDisplay class, and call face.getFaceGeometry() to obtain the face mesh.
Code:
public void onDrawFrame(ARCamera camera, ARFace face) {
ARFaceGeometry faceGeometry = face.getFaceGeometry();
updateFaceGeometryData(faceGeometry);
updateModelViewProjectionData(camera, face);
drawFaceGeometry();
faceGeometry.release();
}
6. Implement updateFaceGeometryData() in the FaceGeometryDisplay class.
Pass in the face mesh data and set the facial expression parameters using OpenGL ES.
Code:
private void updateFaceGeometryData(ARFaceGeometry faceGeometry) {
FloatBuffer faceVertices = faceGeometry.getVertices();
// Obtain an array of face mesh texture coordinates, which is used together with the vertex data returned by getVertices() during rendering.
FloatBuffer textureCoordinates = faceGeometry.getTextureCoordinates();
}
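As a rough illustration of the "set facial expression parameters using OpenGL ES" step, the sketch below pushes the vertex buffer obtained above into a vertex buffer object so that drawFaceGeometry() can render it. mVerticesVbo is a hypothetical buffer ID assumed to have been created in init(); the GLES20 calls are standard Android OpenGL ES 2.0 APIs.
Code:
// Minimal sketch (inside updateFaceGeometryData): upload the face mesh vertices to a VBO.
// Uses android.opengl.GLES20; mVerticesVbo is assumed to be a buffer ID created in init().
faceVertices.rewind();
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVerticesVbo);
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER,
        faceVertices.remaining() * 4, // 4 bytes per float
        faceVertices, GLES20.GL_DYNAMIC_DRAW);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);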
7. Initialize the FaceRenderManager class to manage facial data rendering.
Code:
public class FaceRenderManager implements GLSurfaceView.Renderer {
public FaceRenderManager(Context context, Activity activity) {
mContext = context;
mActivity = activity;
}
// Set ARSession to obtain the latest data.
public void setArSession(ARSession arSession) {
if (arSession == null) {
LogUtil.error(TAG, "Set session error, arSession is null!");
return;
}
mArSession = arSession;
}
// Set ARConfigBase to obtain the configuration mode.
public void setArConfigBase(ARConfigBase arConfig) {
if (arConfig == null) {
LogUtil.error(TAG, "setArFaceTrackingConfig error, arConfig is null.");
return;
}
mArConfigBase = arConfig;
}
// Set the camera opening mode.
public void setOpenCameraOutsideFlag(boolean isOpenCameraOutsideFlag) {
isOpenCameraOutside = isOpenCameraOutsideFlag;
}
...
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
mFaceGeometryDisplay.init(mContext);
}
}
8. Implement the face tracking effect by calling methods like setArSession and setArConfigBase of FaceRenderManager in FaceActivity.
Code:
public class FaceActivity extends BaseActivity {
@Override
protected void onCreate(Bundle savedInstanceState) {
mFaceRenderManager = new FaceRenderManager(this, this);
mFaceRenderManager.setDisplayRotationManage(mDisplayRotationManager);
mFaceRenderManager.setTextView(mTextView);
glSurfaceView.setRenderer(mFaceRenderManager);
glSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_CONTINUOUSLY);
}
}
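Beyond onCreate(), the session and the GLSurfaceView should follow the activity lifecycle. The sketch below assumes the mArSession and glSurfaceView fields shown above, and that ARSession offers a pause() counterpart to resume() (as in the AR Engine samples); treat it as a starting point rather than the exact demo code.
Code:
@Override
protected void onResume() {
    super.onResume();
    if (mArSession != null) {
        mArSession.resume(); // Resume tracking when the activity returns to the foreground.
    }
    glSurfaceView.onResume();
}

@Override
protected void onPause() {
    super.onPause();
    glSurfaceView.onPause();
    if (mArSession != null) {
        mArSession.pause(); // pause() is assumed to exist alongside resume(); check your SDK version.
    }
}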
Conclusion
Emojis allow users to express their moods and excitement in a way words can't. Instead of providing users with a selection of the same old boring preset emojis that have been used a million times, you can now make your app more fun by allowing users to create emojis themselves! Users can easily create an emoji with their own smiles, simply by facing the camera, selecting an animated character they love, and smiling. With such an ability to customize emojis, users will be able to express their feelings in a more personalized and interesting manner. If you have any interest in developing such an app, AR Engine is a great choice for you. With accurate facial tracking capabilities, it is able to identify users' facial expressions in real time, convert the facial expressions into parameters, and then apply them to virtual characters. Integrating the capability can help you considerably streamline your app development process, leaving you with more time to focus on how to provide more interesting features to users and improve your app's user experience.
Reference
AR Engine Sample Code
Face Tracking Capability

Related

Create a New Paradigm for Photo Gallery Management by ML Kit

Overview
"Hey, I just took some pictures of that gorgeous view. Take a look."
"Yes, let me see."
... (a few minutes later)
"Where are they?"
"Wait a minute. There are too many pictures."
Have you experienced this type of frustration before? Finding one specific image in a gallery packed with thousands of images can be a daunting task. Wouldn't it be nice if there were a way to search for images by category, rather than having to browse through your entire album to find what you want?
Our thoughts exactly! That's why we created HUAWEI ML Kit's scene detection service, which empowers your app to build a smart album, bolstered by intelligent image classification, the result of detecting and labeling elements within images. With this service, you'll be able to locate any image in little time, and with zero hassle.
Features
ML Kit's scene detection service is able to classify and annotate images with food, flowers, plants, cats, dogs, kitchens, mountains, and washers, among a multitude of other items, and provide an enhanced user experience based on the detected information.
The service contains the following features:
Multi-scenario detection
Detects 102 scenarios, with more scenarios continually added.
High detection accuracy
Detects a wide range of objects with a high degree of accuracy.
Fast detection response
Responds in milliseconds, and continually optimizes performance.
Simple and efficient integration
Facilitates simple and cost-effective integration, with APIs and SDK packages.
Applicable Scenarios
In addition to creating smart albums, retrieving, and classifying images, the scene detection service can also automatically select corresponding filters and camera parameters to help users take better images, by detecting where the users are located.
Development Practice
1. Preparations
1.1 Configure app information in AppGallery Connect.
Before you start developing an app, configure the app information in AppGallery Connect. For details, please refer to Development Guide.
1.2 Configure the Maven repository address for the HMS Core SDK, and integrate the SDK for the service.
(1) Open the build.gradle file in the root directory of your Android Studio project.
(2) Add the AppGallery Connect plug-in and the Maven repository.
Go to allprojects > repositories and configure the Maven repository address for the HMS Core SDK.
Go to buildscript > repositories and configure the Maven repository address for the HMS Core SDK.
If the agconnect-services.json file has been added to the app, go to buildscript > dependencies and add the AppGallery Connect plug-in configuration.
Code:
buildscript {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
dependencies {
...
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
}
}
allprojects {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
}
2. Code Development
Static Image Detection
2.1 Create a scene detection analyzer instance.
Code:
// Method 1: Use default parameter settings.
MLSceneDetectionAnalyzer analyzer = MLSceneDetectionAnalyzerFactory.getInstance().getSceneDetectionAnalyzer();
// Method 2: Create a scene detection analyzer instance based on the customized configuration.
MLSceneDetectionAnalyzerSetting setting = new MLSceneDetectionAnalyzerSetting.Factory()
// Set confidence for scene detection.
.setConfidence(confidence)
.create();
MLSceneDetectionAnalyzer analyzer = MLSceneDetectionAnalyzerFactory.getInstance().getSceneDetectionAnalyzer(setting);
2.2 Create an MLFrame object by using the android.graphics.Bitmap. JPG, JPEG, PNG, and BMP images are all supported.
Code:
MLFrame frame = new MLFrame.Creator().setBitmap(bitmap).create();
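For completeness, here is one hedged way to obtain that bitmap, decoding a drawable resource with the standard Android BitmapFactory API. R.drawable.scenery is a placeholder resource name.
Code:
// Decode a sample image into a Bitmap and wrap it in an MLFrame.
Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.scenery); // placeholder resource
MLFrame frame = new MLFrame.Creator().setBitmap(bitmap).create();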
2.3 Implement scene detection.
Code:
// Method 1: Detect in synchronous mode.
SparseArray<MLSceneDetection> results = analyzer.analyseFrame(frame);
// Method 2: Detect in asynchronous mode.
Task<List<MLSceneDetection>> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<List<MLSceneDetection>>() {
public void onSuccess(List<MLSceneDetection> result) {
// Processing logic for scene detection success.
}})
.addOnFailureListener(new OnFailureListener() {
public void onFailure(Exception e) {
// Processing logic for scene detection failure.
if (e instanceof MLException) {
MLException mlException = (MLException)e;
// Obtain the error code. You can process the error code and customize respective messages displayed to users.
int errorCode = mlException.getErrCode();
// Obtain the error information. You can quickly locate the fault based on the error code.
String errorMessage = mlException.getMessage();
} else {
// Other exceptions.
}
}
});
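Inside the success callback you can then read each detected scene. The getter names below (getResult() for the scene label and getConfidence() for the confidence value) are based on the MLSceneDetection result class; verify them against your ML Kit SDK version.
Code:
// Sketch of consuming the results returned in onSuccess().
for (MLSceneDetection scene : result) {
    Log.i("SceneDetection", "scene: " + scene.getResult()
            + ", confidence: " + scene.getConfidence());
}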
2.4 Stop the analyzer and release the detection resources when the detection ends.
Code:
if (analyzer != null) {
analyzer.stop();
}
Camera Stream Detection
You can process camera streams, convert them into an MLFrame object, and detect scenarios using the static image detection method.
If the synchronous detection API is called, you can also use the LensEngine class built into the SDK to detect scenarios in camera streams. The following is the sample code:
3.1 Create a scene detection analyzer, which can only be created on the device.
Code:
MLSceneDetectionAnalyzer analyzer = MLSceneDetectionAnalyzerFactory.getInstance().getSceneDetectionAnalyzer();
3.2 Create the SceneDetectionAnalyzerTransactor class for processing detection results. This class implements the MLAnalyzer.MLTransactor<T> API and uses the transactResult method in this API to obtain the detection results and implement specific services.
Code:
public class SceneDetectionAnalyzerTransactor implements MLAnalyzer.MLTransactor<MLSceneDetection> {
@Override
public void transactResult(MLAnalyzer.Result<MLSceneDetection> results) {
SparseArray<MLSceneDetection> items = results.getAnalyseList();
// Determine detection result processing as required. Note that only the detection results are processed.
// Other detection-related APIs provided by ML Kit cannot be called.
}
@Override
public void destroy() {
// Callback method used to release resources when the detection ends.
}
}
3.3 Set the detection result processor to bind the analyzer to the result processor.
Code:
analyzer.setTransactor(new SceneDetectionAnalyzerTransactor());
// Create an instance of the LensEngine class provided by the HMS Core ML SDK to capture dynamic camera streams and pass the streams to the analyzer.
Context context = this.getApplicationContext();
LensEngine lensEngine = new LensEngine.Creator(context, this.analyzer)
.setLensType(LensEngine.BACK_LENS)
.applyDisplayDimension(1440, 1080)
.applyFps(30.0f)
.enableAutomaticFocus(true)
.create();
3.4 Call the run method to start the camera and read camera streams for detection.
Code:
// Implement other logic of the SurfaceView control by yourself.
SurfaceView mSurfaceView = findViewById(R.id.surface_view);
try {
lensEngine.run(mSurfaceView.getHolder());
} catch (IOException e) {
// Exception handling logic.
}
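Note that the camera cannot be opened without the CAMERA runtime permission. A minimal check before calling lensEngine.run(), using the standard AndroidX permission APIs, might look like this (REQUEST_CODE_CAMERA is your own request code constant):
Code:
// Request the CAMERA permission before starting the LensEngine.
if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
        != PackageManager.PERMISSION_GRANTED) {
    ActivityCompat.requestPermissions(this,
            new String[]{Manifest.permission.CAMERA}, REQUEST_CODE_CAMERA); // REQUEST_CODE_CAMERA: your own constant
    return;
}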
3.5 Stop the analyzer and release the detection resources when the detection ends.
Code:
if (analyzer != null) {
analyzer.stop();
}
if (lensEngine != null) {
lensEngine.release();
}
You'll find that the sky, plants, and mountains in all of your images will be identified in an instant. Pretty exciting stuff, wouldn't you say? Feel free to try it out yourself!
GitHub Source Code
Reference
Official website of Huawei Developers
Development Guide
HMS Core official community on Reddit
Demo and sample code
Discussions on Stack Overflow

How a Programmer Developed a Live-Streaming App with Gesture-Controlled Virtual Backgrounds

What's it like to date a programmer?
John is a Huawei programmer. His girlfriend Jenny, a teacher, has an interesting answer to that question: "Thanks to my programmer boyfriend, my course ranked among the most popular online courses at my school".
Let's go over how this came to be. Due to COVID-19, the school where Jenny taught went entirely online. Jenny, who was new to live streaming, wanted her students to experience the full immersion of traveling to Tokyo, New York, Paris, the Forbidden City, Catherine Palace, and the Louvre Museum, so that they could absorb all of the relevant geographic and historical knowledge related to those places. But how to do so?
Jenny was stuck on this issue, but John quickly came to her rescue.
After analyzing her requirements in detail, John developed a tailored online course app that brings its users an uncannily immersive experience. It enables users to change the background while live streaming. The video imagery within the app looks true-to-life, as each pixel is labeled, and the entire body image — down to a single strand of hair — is completely cut out.
Actual Effects​
{
"lightbox_close": "Close",
"lightbox_next": "Next",
"lightbox_previous": "Previous",
"lightbox_error": "The requested content cannot be loaded. Please try again later.",
"lightbox_start_slideshow": "Start slideshow",
"lightbox_stop_slideshow": "Stop slideshow",
"lightbox_full_screen": "Full screen",
"lightbox_thumbnails": "Thumbnails",
"lightbox_download": "Download",
"lightbox_share": "Share",
"lightbox_zoom": "Zoom",
"lightbox_new_window": "New window",
"lightbox_toggle_sidebar": "Toggle sidebar"
}
How to Implement
Changing live-streaming backgrounds by gesture can be realized by using image segmentation and hand gesture recognition in HUAWEI ML Kit.
The image segmentation service segments specific elements from static images or dynamic video streams, with 11 types of image elements supported: human bodies, sky scenes, plants, foods, cats and dogs, flowers, water, sand, buildings, mountains, and others.
The hand gesture recognition service offers two capabilities: hand keypoint detection and hand gesture recognition. Hand keypoint detection is capable of detecting 21 hand keypoints (including fingertips, knuckles, and wrists) and returning positions of the keypoints. The hand gesture recognition capability detects and returns the positions of all rectangular areas of the hand from images and videos, as well as the type and confidence of a gesture. This capability can recognize 14 different gestures, including the thumbs-up/down, OK sign, fist, finger heart, and number gestures from 1 to 9. Both capabilities support detection from static images and real-time video streams.
Development Process
1. Add the AppGallery Connect plugin and the Maven repository.
Code:
buildscript {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
dependencies {
...
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
}
}
allprojects {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
}
2. Integrate required services in the full SDK mode.
Code:
dependencies{
// Import the basic SDK of image segmentation.
implementation 'com.huawei.hms:ml-computer-vision-segmentation:2.0.4.300'
// Import the multiclass segmentation model package.
implementation 'com.huawei.hms:ml-computer-vision-image-segmentation-multiclass-model:2.0.4.300'
// Import the human body segmentation model package.
implementation 'com.huawei.hms:ml-computer-vision-image-segmentation-body-model:2.0.4.300'
// Import the basic SDK of hand gesture recognition.
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint:2.0.4.300'
// Import the model package of hand keypoint detection.
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint-model:2.0.4.300'
}
3. Add configurations in the file header.
Add apply plugin: 'com.huawei.agconnect' after apply plugin: 'com.android.application'.
4. Automatically update the machine learning model.
Add the following statements to the AndroidManifest.xml file:
Code:
<manifest
...
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value="imgseg,handkeypoint" />
...
</manifest>
5. Create an image segmentation analyzer.
Code:
MLImageSegmentationAnalyzer imageSegmentationAnalyzer = MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer();// Image segmentation analyzer.
MLHandKeypointAnalyzer handKeypointAnalyzer = MLHandKeypointAnalyzerFactory.getInstance().getHandKeypointAnalyzer();// Hand gesture recognition analyzer.
MLCompositeAnalyzer analyzer = new MLCompositeAnalyzer.Creator()
.add(imageSegmentationAnalyzer)
.add(handKeypointAnalyzer)
.create();
6. Create a class for processing the recognition result.
Code:
public class ImageSegmentAnalyzerTransactor implements MLAnalyzer.MLTransactor<MLImageSegmentation> {
@Override
public void transactResult(MLAnalyzer.Result<MLImageSegmentation> results) {
SparseArray<MLImageSegmentation> items = results.getAnalyseList();
// Process the recognition result as required. Note that only the detection results are processed.
// Other detection-related APIs provided by ML Kit cannot be called.
}
@Override
public void destroy() {
// Callback method used to release resources when the detection ends.
}
}
public class HandKeypointTransactor implements MLAnalyzer.MLTransactor<List<MLHandKeypoints>> {
@Override
public void transactResult(MLAnalyzer.Result<List<MLHandKeypoints>> results) {
SparseArray<List<MLHandKeypoints>> analyseList = results.getAnalyseList();
// Process the recognition result as required. Note that only the detection results are processed.
// Other detection-related APIs provided by ML Kit cannot be called.
}
@Override
public void destroy() {
// Callback method used to release resources when the detection ends.
}
}
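To connect this to the background-switching effect described earlier, the segmented foreground (the cut-out person) can be drawn on top of any background image. The sketch below is an assumption-laden illustration: it relies on MLImageSegmentation exposing a getForeground() bitmap (check your SDK version), uses the standard android.graphics Canvas APIs, and backgroundBitmap is your own image.
Code:
// Hedged sketch: composite the segmented foreground over a chosen background bitmap.
private Bitmap composeWithBackground(MLImageSegmentation segmentation, Bitmap backgroundBitmap) {
    Bitmap foreground = segmentation.getForeground(); // assumed accessor for the cut-out person
    Bitmap output = Bitmap.createBitmap(foreground.getWidth(), foreground.getHeight(),
            Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(output);
    // Stretch the background to the foreground size, then draw the person on top.
    canvas.drawBitmap(backgroundBitmap, null,
            new Rect(0, 0, foreground.getWidth(), foreground.getHeight()), null);
    canvas.drawBitmap(foreground, 0f, 0f, null);
    return output;
}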
7. Set the detection result processor to bind the analyzer to the result processor.
Code:
imageSegmentationAnalyzer.setTransactor(new ImageSegmentAnalyzerTransactor());
handKeypointAnalyzer.setTransactor(new HandKeypointTransactor());
8. Create a LensEngine object.
Code:
Context context = this.getApplicationContext();
LensEngine lensEngine = new LensEngine.Creator(context,analyzer)
// Set the front or rear camera mode. LensEngine.BACK_LENS indicates the rear camera, and LensEngine.FRONT_LENS indicates the front camera.
.setLensType(LensEngine.FRONT_LENS)
.applyDisplayDimension(1280, 720)
.applyFps(20.0f)
.enableAutomaticFocus(true)
.create();
9. Start the camera, read video streams, and start recognition.
Code:
// Implement other logics of the SurfaceView control by yourself.
SurfaceView mSurfaceView = new SurfaceView(this);
try {
lensEngine.run(mSurfaceView.getHolder());
} catch (IOException e) {
// Exception handling logic.
}
10. Stop the analyzer and release the recognition resources when recognition ends.
Code:
if (analyzer != null) {
try {
analyzer.stop();
} catch (IOException e) {
// Exception handling.
}
}
if (lensEngine != null) {
lensEngine.release();
}
For more details, check out the following resources:
Our official website
Our Development Documentation page, to find the documents you need
Experience the easy-integration process on Codelabs
GitHub to download demos and sample codes
Stack Overflow to solve any integration problem
Original Source
Can I integrate any video background?

Environment Mesh: Blend the Real with the Virtual

Augmented reality (AR) is now widely used in a diverse range of fields, to facilitate fun and immersive experiences and interactions. Many features like virtual try-on, 3D gameplay, and interior design, among many others, depend on this technology. For example, many of today's video games use AR to keep gameplay seamless and interactive. Players can create virtual characters in battle games, and make them move as if they are extensions of the player's body. With AR, characters can move and behave like real people, hiding behind a wall, for instance, to escape detection by the enemy. Another common application is adding elements like pets, friends, and objects to photos, without compromising the natural look in the image.
However, AR app development is still hindered by the so-called pass-through problem, which you may have encountered during the development. Examples include a ball moving too fast and then passing through the table, a player being unable to move even when there are no obstacles around, or a fast-moving bullet passing through and then missing its target. You may also have found that the virtual objects that your app applies to the physical world look as if they were pasted on the screen, instead of blending into the environment. This can to a large extent undermine the user experience and may lead directly to user churn. Fortunately there is environment mesh in HMS Core AR Engine, a toolkit that offers powerful AR capabilities and streamlines your app development process, to resolve these issues once and for all. After being integrated with this toolkit, your app will enjoy better perception of the 3D space in which a virtual object is placed, and perform collision detection using the reconstructed mesh. This ensures that users are able to interact with virtual objects in a highly realistic and natural manner, and that virtual characters will be able to move around 3D spaces with greater ease. Next we will show you how to implement this capability.
Demo​
Implementation
AR Engine uses real-time computation to output the environment mesh, which includes the device orientation in a real space and a 3D grid for the current camera view. AR Engine is currently supported on mobile phone models with rear ToF cameras, and only supports the scanning of static scenes. After being integrated with this toolkit, your app will be able to use environment meshes to accurately recognize the real world 3D space where a virtual character is located, and allow for the character to be placed anywhere in the space, whether it is a horizontal surface, vertical surface, or curved surface that can be reconstructed. You can use the reconstructed environment mesh to implement virtual and physical occlusion and collision detection, and even hide virtual objects behind physical ones, to effectively prevent pass-through.
Environment mesh technology has a wide range of applications. For example, it can be used to provide users with more immersive and refined virtual-reality interactions during remote collaboration, video conferencing, online courses, multi-player gaming, laser beam scanning (LBS), metaverse, and more.
Integration Procedure
Ensure that you have met the following requirements on the development environment:
JDK: 1.8.211 or later
Android Studio: 3.0 or later
minSdkVersion: 26 or later
targetSdkVersion: 29 (recommended)
compileSdkVersion: 29 (recommended)
Gradle version: 6.1.1 or later (recommended)
Make sure that you have downloaded the AR Engine APK from AppGallery and installed it on the device.
If you need to use multiple HMS Core kits, use the latest versions required for these kits.
Preparations
1. Before getting started, you will need to register as a Huawei developer and complete identity verification on the HUAWEI Developers website. You can click here to find out the detailed registration and identity verification procedure.
2. Before development, integrate the AR Engine SDK via the Maven repository into your development environment.
3. The procedure for configuring the Maven repository address in Android Studio varies for Gradle plugin earlier than 7.0, Gradle plugin 7.0, and Gradle plugin 7.1 or later. You need to configure it according to the specific Gradle plugin version.
4. The following takes Gradle plugin 7.0 as an example:
Open the project-level build.gradle file in your Android Studio project and configure the Maven repository address.
Go to buildscript > repositories and configure the Maven repository address for the SDK.
Code:
buildscript {
repositories {
google()
jcenter()
maven {url "https://developer.huawei.com/repo/" }
}
}
Open the project-level settings.gradle file and configure the Maven repository address for the HMS Core SDK.
Code:
dependencyResolutionManagement {
repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS)
repositories {
google()
jcenter()
maven { url "https://developer.huawei.com/repo/" }
}
}
5. Add the following build dependency in the dependencies block.
Code:
dependencies {
implementation 'com.huawei.hms:arenginesdk:{version}'
}
Development Procedure
1. Initialize the HitResultDisplay class to draw virtual objects based on the specified parameters.
Code:
public class HitResultDisplay implements SceneMeshComponenDisplay {
// Initialize VirtualObjectData.
VirtualObjectData mVirtualObject = new VirtualObjectData();
// Pass the context to mVirtualObject in the init method.
public void init(Context context) {
mVirtualObject.init(context);
// Pass the material attributes.
mVirtualObject.setMaterialProperties();
}
// Pass ARFrame in the onDrawFrame method to obtain the estimated lighting.
public void onDrawFrame(ARFrame arframe) {
// Obtain the estimated lighting.
ARLightEstimate le = arframe.getLightEstimate();
// Obtain the pixel intensity of the current camera field of view.
lightIntensity = le.getPixelIntensity();
// Pass data to methods in mVirtualObject.
mVirtualObject.draw(…, …, lightIntensity, …);
// Pass the ARFrame object to the handleTap method to obtain the coordinate information.
handleTap(arframe);
}
// Implement the handleTap method.
private void handleTap(ARFrame frame) {
// Call hitTest with the ARFrame object.
List<ARHitResult> hitTestResults = frame.hitTest(tap);
// Check whether a surface is hit and whether it is hit in a plane polygon.
for (int i = 0; i < hitTestResults.size(); i++) {
ARHitResult hitResultTemp = hitTestResults.get(i);
ARTrackable trackable = hitResultTemp.getTrackable();
if (trackable instanceof ARPoint
&& ((ARPoint) trackable).getOrientationMode() == ARPoint.OrientationMode.ESTIMATED_SURFACE_NORMAL) {
isHasHitFlag = true;
hitResult = hitResultTemp;
}
}
}
}
2. Initialize the SceneMeshDisplay class to render the scene mesh.
Code:
public class SceneMeshDisplay implements SceneMeshComponenDisplay {
// Implement OpenGL operations in init.
public void init(Context context) {}
// Obtain the current environment mesh in the onDrawFrame method.
public void onDrawFrame(ARFrame arframe) {
ARSceneMesh arSceneMesh = arframe.acquireSceneMesh();
// Create a method for updating data and pass arSceneMesh to the method.
updateSceneMeshData(arSceneMesh);
// Release arSceneMesh when it is no longer in use.
arSceneMesh.release();
}
// Implement this method to update data.
public void updateSceneMeshData(ARSceneMesh sceneMesh) {
// Obtain an array containing the mesh vertex coordinates of the environment mesh in the current view.
FloatBuffer meshVertices = sceneMesh.getVertices();
// Obtain an array containing the indexes of the vertices in the mesh triangle plane in the current view.
IntBuffer meshTriangleIndices = sceneMesh.getTriangleIndices();
}
}
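As a rough sketch of what updateSceneMeshData() might do with these buffers, the snippet below uploads the vertices and triangle indices into OpenGL ES buffer objects for later drawing. mVerticesVbo and mTriangleIndicesVbo are hypothetical buffer IDs assumed to have been created in init(); the GLES20 calls are standard Android APIs.
Code:
// Minimal sketch: push the environment mesh data into GL buffers (android.opengl.GLES20).
meshVertices.rewind();
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVerticesVbo);
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, meshVertices.remaining() * 4,
        meshVertices, GLES20.GL_DYNAMIC_DRAW);
meshTriangleIndices.rewind();
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, mTriangleIndicesVbo);
GLES20.glBufferData(GLES20.GL_ELEMENT_ARRAY_BUFFER, meshTriangleIndices.remaining() * 4,
        meshTriangleIndices, GLES20.GL_DYNAMIC_DRAW);
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, 0);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);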
3. Initialize the SceneMeshRenderManager class to provide render managers for external scenes, including render managers for virtual objects.
Code:
public class SceneMeshRenderManager implements GLSurfaceView.Renderer {
// Initialize the class for updating mesh data and performing rendering.
private SceneMeshDisplay mSceneMesh = new SceneMeshDisplay();
// Initialize the class for drawing virtual objects.
private HitResultDisplay mHitResultDisplay = new HitResultDisplay();
// Implement the onSurfaceCreated() method.
public void onSurfaceCreated() {
// Pass context to the mSceneMesh and mHitResultDisplay classes.
mSceneMesh.init(mContext);
mHitResultDisplay.init(mContext);
}
// Implement the onDrawFrame() method.
public void onDrawFrame() {
// Configure the camera using the ARSession object. textureId is the OpenGL texture bound to the camera preview (not shown here).
mArSession.setCameraTextureName(textureId);
ARFrame arFrame = mArSession.update();
ARCamera arCamera = arFrame.getCamera();
// Pass the data required by the SceneMeshDisplay class.
mSceneMesh.onDrawFrame(arFrame, viewmtxs, projmtxs);
}
}
4. Initialize the SceneMeshActivity class to implement display functions.
Code:
public class SceneMeshActivity extends BaseActivity {
// Provides render managers for external scenes, including those for virtual objects.
private SceneMeshRenderManager mSceneMeshRenderManager;
// Manages the entire running status of AR Engine.
private ARSession mArSession;
// Initialize some classes and objects.
protected void onCreate(Bundle savedInstanceState) {
mSceneMeshRenderManager = new SceneMeshRenderManager();
}
// Initialize ARSession in the onResume method.
protected void onResume() {
// Initialize ARSession.
mArSession = new ARSession(this.getApplicationContext());
// Create an ARWorldTrackingConfig object based on the session parameters.
ARConfigBase config = new ARWorldTrackingConfig(mArSession);
// Pass ARSession to SceneMeshRenderManager.
mSceneMeshRenderManager.setArSession(mArSession);
// Enable the mesh by calling the setEnableItem method on config.
config.setEnableItem(ARConfigBase.ENABLE_MESH | ARConfigBase.ENABLE_DEPTH);
}
}
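The onResume() above enables the mesh but stops short of actually starting the session. A minimal way to complete it, reusing only calls that appear elsewhere in this article (exception handling omitted for brevity), could be:
Code:
protected void onResume() {
    super.onResume();
    mArSession = new ARSession(this.getApplicationContext());
    ARConfigBase config = new ARWorldTrackingConfig(mArSession);
    // Enable environment mesh and depth before configuring the session.
    config.setEnableItem(ARConfigBase.ENABLE_MESH | ARConfigBase.ENABLE_DEPTH);
    mArSession.configure(config);
    mArSession.resume();
    mSceneMeshRenderManager.setArSession(mArSession);
}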
Conclusion
AR bridges the real and the virtual worlds, to make jaw-dropping interactive experiences accessible to all users. That is why so many mobile app developers have opted to build AR capabilities into their apps. Doing so can give your app a leg up over the competition.
When developing such an app, you will need to incorporate a range of capabilities, such as hand recognition, motion tracking, hit test, plane detection, and lighting estimate. Fortunately, you do not have to do any of this on your own. Integrating an SDK can greatly streamline the process, and provide your app with many capabilities that are fundamental to seamless and immersive AR interactions. If you are not sure how to deal with the pass-through issue, or your app is not good at presenting virtual objects naturally in the real world, AR Engine can do a lot of heavy lifting for you. After being integrated with this toolkit, your app will be able to better perceive the physical environments around virtual objects, and therefore give characters the freedom to move around as if they are navigating real spaces.
References
AR Engine Development Guide
Software and Hardware Requirements of AR Engine Features
AR Engine Sample Code

Posture Recognition: Natural Interaction Brought to Life

Augmented reality (AR) provides immersive interactions by blending real and virtual worlds, making human-machine interactions more interesting and convenient than ever. A common application of AR involves placing a virtual object in the real environment, where the user is free to control or interact with the virtual object. However, there is so much more AR can do beyond that.
To make interactions easier and more immersive, many mobile app developers now allow users to control their devices without having to touch the screen, by identifying the body motions, hand gestures, and facial expressions of users in real time, and using the identified information to trigger different events in the app. For example, in an AR somatosensory game, players can trigger an action in the game by striking a pose, which spares them from having to frequently tap keys on the control console. Likewise, when shooting an image or short video, the user can apply special effects to the image or video by striking specific poses, without even having to touch the screen. In a trainer-guided health and fitness app, the system powered by AR can identify the user's real-time postures to determine whether they are doing the exercise correctly, and guide them to exercise in the correct way. All of these would be impossible without AR.
How then can an app accurately identify postures of users, to power these real time interactions?
If you are also considering developing an AR app that needs to identify user motions in real time to trigger a specific event, such as to control the interaction interface on a device or to recognize and control game operations, integrating an SDK that provides the posture recognition capability is a no-brainer. Integrating this SDK will greatly streamline the development process, allowing you to focus on improving the app design and crafting the best possible user experience.
HMS Core AR Engine does much of the heavy lifting for you. Its posture recognition capability accurately identifies different body postures of users in real time. After integrating this SDK, your app will be able to use both the front and rear cameras of the device to recognize six different postures from a single person in real time, and output and display the recognition results in the app.
The SDK provides basic core features that motion sensing apps will need, and enriches your AR apps with remote control and collaborative capabilities.
Here I will show you how to integrate AR Engine to implement these amazing features.
How to Develop
Requirements on the development environment:
JDK: 1.8.211 or later
Android Studio: 3.0 or later
minSdkVersion: 26 or later
targetSdkVersion: 29 (recommended)
compileSdkVersion: 29 (recommended)
Gradle version: 6.1.1 or later (recommended)
Make sure that you have downloaded the AR Engine APK from AppGallery and installed it on the device.
If you need to use multiple HMS Core kits, use the latest versions required for these kits.
Preparations​
1. Before getting started with the development, you will need to first register as a Huawei developer and complete identity verification on the HUAWEI Developers website. You can click here to find out the detailed registration and identity verification procedure.
2. Before getting started with the development, integrate the AR Engine SDK via the Maven repository into your development environment.
3. The procedure for configuring the Maven repository address in Android Studio varies for Gradle plugin earlier than 7.0, Gradle plugin 7.0, and Gradle plugin 7.1 or later. You need to configure it according to the specific Gradle plugin version.
4. Take Gradle plugin 7.0 as an example:
Open the project-level build.gradle file in your Android Studio project and configure the Maven repository address.
Go to buildscript > repositories and configure the Maven repository address for the SDK.
Code:
buildscript {
repositories {
google()
jcenter()
maven {url "https://developer.huawei.com/repo/" }
}
}
Open the project-level settings.gradle file and configure the Maven repository address for the HMS Core SDK.
Code:
dependencyResolutionManagement {
repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS)
repositories {
google()
jcenter()
maven { url "https://developer.huawei.com/repo/" }
}
}
5. Add the following build dependency in the dependencies block.
Code:
dependencies {
implementation 'com.huawei.hms:arenginesdk:{version}'
}
App Development
1. Check whether AR Engine has been installed on the current device. If so, your app will be able to run properly. If not, you need to prompt the user to install AR Engine, for example, by redirecting the user to AppGallery. The sample code is as follows:
Code:
boolean isInstallArEngineApk = AREnginesApk.isAREngineApkReady(this);
if (!isInstallArEngineApk) {
// ConnectAppMarketActivity.class is the activity for redirecting users to AppGallery.
startActivity(new Intent(this, com.huawei.arengine.demos.common.ConnectAppMarketActivity.class));
isRemindInstall = true;
}
2. Initialize an AR scene. AR Engine supports up to five scenes, including motion tracking (ARWorldTrackingConfig), face tracking (ARFaceTrackingConfig), hand recognition (ARHandTrackingConfig), human body tracking (ARBodyTrackingConfig), and image recognition (ARImageTrackingConfig).
3. Call the ARBodyTrackingConfig API to initialize the human body tracking scene.
Code:
mArSession = new ARSession(context);
ARBodyTrackingConfig config = new ARBodyTrackingConfig(mArSession);
config.setEnableItem(ARConfigBase.ENABLE_DEPTH | ARConfigBase.ENABLE_MASK);
// Configure the session information.
mArSession.configure(config);
4. Initialize the BodyRelatedDisplay API to render data related to the main AR type.
Code:
public interface BodyRelatedDisplay {
void init();
void onDrawFrame(Collection<ARBody> bodies, float[] projectionMatrix);
}
5. Initialize the BodyRenderManager class, which is used to render the body data obtained by AR Engine.
Code:
public class BodyRenderManager implements GLSurfaceView.Renderer {
// Implement the onDrawFrame() method.
public void onDrawFrame() {
ARFrame frame = mSession.update();
ARCamera camera = frame.getCamera();
// Obtain the projection matrix of the AR camera.
camera.getProjectionMatrix();
// Obtain all trackable objects of the specified type. Passing ARBody.class returns the human body tracking result.
Collection<ARBody> bodies = mSession.getAllTrackables(ARBody.class);
}
}
6. Initialize BodySkeletonDisplay to obtain skeleton data and pass the data to OpenGL ES, which will render the data and display it on the device screen.
Code:
public class BodySkeletonDisplay implements BodyRelatedDisplay {
// Methods used in this class are as follows:
// Initialization method.
public void init() {
}
// Use OpenGL to update and draw the node data.
public void onDrawFrame(Collection<ARBody> bodies, float[] projectionMatrix) {
for (ARBody body : bodies) {
if (body.getTrackingState() == ARTrackable.TrackingState.TRACKING) {
float coordinate = 1.0f;
if (body.getCoordinateSystemType() == ARCoordinateSystemType.COORDINATE_SYSTEM_TYPE_3D_CAMERA) {
coordinate = DRAW_COORDINATE;
}
findValidSkeletonPoints(body);
updateBodySkeleton();
drawBodySkeleton(coordinate, projectionMatrix);
}
}
}
// Search for valid skeleton points.
private void findValidSkeletonPoints(ARBody arBody) {
int index = 0;
int[] isExists;
int validPointNum = 0;
float[] points;
float[] skeletonPoints;
if (arBody.getCoordinateSystemType() == ARCoordinateSystemType.COORDINATE_SYSTEM_TYPE_3D_CAMERA) {
isExists = arBody.getSkeletonPointIsExist3D();
points = new float[isExists.length * 3];
skeletonPoints = arBody.getSkeletonPoint3D();
} else {
isExists = arBody.getSkeletonPointIsExist2D();
points = new float[isExists.length * 3];
skeletonPoints = arBody.getSkeletonPoint2D();
}
for (int i = 0; i < isExists.length; i++) {
if (isExists[i] != 0) {
points[index++] = skeletonPoints[3 * i];
points[index++] = skeletonPoints[3 * i + 1];
points[index++] = skeletonPoints[3 * i + 2];
validPointNum++;
}
}
mSkeletonPoints = FloatBuffer.wrap(points);
mPointsNum = validPointNum;
}
}
7. Obtain the skeleton point connection data and pass it to OpenGL ES, which will then render the data and display it on the device screen.
Code:
public class BodySkeletonLineDisplay implements BodyRelatedDisplay {
// Render the lines between body bones.
public void onDrawFrame(Collection<ARBody> bodies, float[] projectionMatrix) {
for (ARBody body : bodies) {
if (body.getTrackingState() == ARTrackable.TrackingState.TRACKING) {
float coordinate = 1.0f;
if (body.getCoordinateSystemType() == ARCoordinateSystemType.COORDINATE_SYSTEM_TYPE_3D_CAMERA) {
coordinate = COORDINATE_SYSTEM_TYPE_3D_FLAG;
}
updateBodySkeletonLineData(body);
drawSkeletonLine(coordinate, projectionMatrix);
}
}
}
}
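The two displays above both implement the BodyRelatedDisplay interface from step 4, so the render manager can simply keep them in a list and drive them together each frame. A minimal sketch follows (using java.util collections; the field and method names are illustrative, not from the official sample):
Code:
// Hypothetical wiring inside BodyRenderManager: drive all body displays per frame.
private final List<BodyRelatedDisplay> mBodyDisplays = new ArrayList<>(
        Arrays.asList(new BodySkeletonDisplay(), new BodySkeletonLineDisplay()));

private void drawBodies(Collection<ARBody> bodies, float[] projectionMatrix) {
    for (BodyRelatedDisplay display : mBodyDisplays) {
        display.onDrawFrame(bodies, projectionMatrix);
    }
}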
Conclusion
By blending real and virtual worlds, AR gives users the tools they need to overlay creative effects in real environments, and interact with these imaginary virtual elements. AR makes it easy to build whimsical and immersive interactions that enhance user experience. From virtual try-on, gameplay, photo and video shooting, to product launch, training and learning, and home decoration, everything is made easier and more interesting with AR.
If you are considering developing an AR app that interacts with users when they strike specific poses, like jumping, showing their palm, and raising their hands, or even more complicated motions, you will need to equip your app to accurately identify these motions in real time. The AR Engine SDK makes this possible. It equips your app to track user motions with a high degree of accuracy, and then interact with those motions, easing the process of developing AR-powered apps.
References
AR Engine Development Guide
Sample Code
Software and Hardware Requirements of AR Engine Features

Must-Have Tool for Anonymous Virtual Livestreams

Influencers have become increasingly important, as more and more consumers choose to purchase items online – whether on Amazon, Taobao, or one of the many other prominent e-commerce platforms. Brands and merchants have spent a lot of money finding influencers to promote their products through live streams and consumer interactions, and many purchases are made on the recommendation of a trusted influencer.
However, employing a public-facing influencer can be costly and risky. Many brands and merchants have opted instead to host live streams with their own virtual characters. This gives them more freedom to showcase their products, and widens the pool of potential on camera talent. For consumers, virtual characters can add fun and whimsy to the shopping experience.
E-commerce platforms have begun to accommodate the preference for anonymous livestreaming, by offering a range of important capabilities, such as those that allow for automatic identification, skeleton point-based motion tracking in real time (as shown in the gif), facial expression and gesture identification, copying of traits to virtual characters, a range of virtual character models for users to choose from, and natural real-world interactions.
Building these capabilities comes with its share of challenges. For example, after finally building a model that can translate a person's every gesture, expression, and movement into real-time parameters and apply them to a virtual character, you may find that the virtual character can't be blocked by real bodies during the livestream, which gives it a fake, ghost-like appearance. This is a problem I encountered when developing my own e-commerce app, and it occurred because I did not occlude the bodies that appeared behind and in front of the virtual character. Fortunately, I found an SDK that helped me solve this problem: HMS Core AR Engine.
This toolkit provides a range of capabilities that make it easy to incorporate AR-powered features into apps. From hit testing and motion tracking to environment mesh and image tracking, it's got just about everything you need. The human body occlusion capability was exactly what I needed at the time.
Now I'll show you how I integrated this toolkit into my app, and how helpful it's been.
First I registered for an account on the HUAWEI Developers website, downloaded the AR Engine SDK, and followed the step-by-step development guide to integrate the SDK. The integration process was quite simple and did not take long. Once the integration was successful, I ran the demo on a test phone and was amazed to see how well it worked. During livestreams, my app was able to recognize and track the areas of the image where I was located, with an accuracy of up to 90%, and provide depth-related information about those areas. Better yet, it was able to identify and track the profiles of up to two people, and output the occlusion information and skeleton points corresponding to the body profiles in real time. With this capability, I was able to implement a lot of engaging features, such as background replacement, hiding virtual characters behind real people, and even letting the audience interact with the virtual character through special effects. All of these features have made my app more immersive and interactive, which makes it more attractive to potential shoppers.
Demo​As shown in the gif below, the person blocks the virtual panda when walking in front of it.
How to Develop​Preparations​Registering as a developer​Before getting started, you will need to register as a Huawei developer and complete identity verification on HUAWEI Developers. You can click here to find out the detailed registration and identity verification procedure.
Creating an app​Create a project and create an app under the project. Pay attention to the following parameter settings:
Platform: Select Android.
Device: Select Mobile phone.
App category: Select App or Game.
Integrating the AR Engine SDK​Before development, integrate the AR Engine SDK via the Maven repository into your development environment.
Configuring the Maven repository address for the AR Engine SDK​The procedure for configuring the Maven repository address in Android Studio differs for Gradle plugin versions earlier than 7.0, Gradle plugin 7.0, and Gradle plugin 7.1 or later, so configure it according to your specific Gradle plugin version.
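For reference, here is a minimal sketch of this configuration for Gradle plugin 7.0 or later, where the repository address typically goes into the project-level settings.gradle file (for earlier plugin versions, add the same Maven address to the project-level build.gradle instead; follow the official integration guide for the exact steps for your plugin version):
Code:
dependencyResolutionManagement {
    repositories {
        google()
        mavenCentral()
        // Maven repository address for the HMS Core SDK.
        maven { url 'https://developer.huawei.com/repo/' }
    }
}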
Adding build dependencies​Open the build.gradle file in the app directory of your project.
Add a build dependency in the dependencies block.
Code:
dependencies {
implementation 'com.huawei.hms:arenginesdk:{version}'
}
Open the modified build.gradle file again. You will find a Sync Now link in the upper right corner of the page. Click Sync Now and wait until synchronization is complete.
Developing Your App​Checking the Availability​Check whether AR Engine has been installed on the current device. If so, the app can run properly. If not, the app prompts the user to install AR Engine, for example, by redirecting the user to AppGallery. The code is as follows:
Code:
boolean isInstallArEngineApk = AREnginesApk.isAREngineApkReady(this);
if (!isInstallArEngineApk) {
// ConnectAppMarketActivity.class is the activity for redirecting users to AppGallery.
startActivity(new Intent(this, com.huawei.arengine.demos.common.ConnectAppMarketActivity.class));
isRemindInstall = true;
}
Create a BodyActivity object to display body skeletons and output human body features, so that AR Engine can recognize the human body.
Code:
public class BodyActivity extends BaseActivity {
private BodyRendererManager mBodyRendererManager;
protected void onCreate() {
// Initialize surfaceView.
mSurfaceView = findViewById();
// Context for keeping the OpenGL ES running.
mSurfaceView.setPreserveEGLContextOnPause(true);
// Set the OpenGL ES version.
mSurfaceView.setEGLContextClientVersion(2);
// Set the EGL configuration chooser, including for the number of bits of the color buffer and the number of depth bits.
mSurfaceView.setEGLConfigChooser(……);
mBodyRendererManager = new BodyRendererManager(this);
mSurfaceView.setRenderer(mBodyRendererManager);
mSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_CONTINUOUSLY);
}
protected void onResume() {
// Initialize ARSession to manage the entire running status of AR Engine.
if (mArSession == null) {
mArSession = new ARSession(this.getApplicationContext());
mArConfigBase = new ARBodyTrackingConfig(mArSession);
mArConfigBase.setEnableItem(ARConfigBase.ENABLE_DEPTH | ARConfigBase.ENABLE_MASK);
mArConfigBase.setFocusMode(ARConfigBase.FocusMode.AUTO_FOCUS);
mArSession.configure(mArConfigBase);
}
// Pass the required parameters to setBodyMask.
mBodyRendererManager.setBodyMask(((mArConfigBase.getEnableItem() & ARConfigBase.ENABLE_MASK) != 0) && mIsBodyMaskEnable);
sessionResume(mBodyRendererManager);
}
}
Create a BodyRendererManager object to render the body data obtained by AR Engine.
Code:
public class BodyRendererManager extends BaseRendererManager {
public void drawFrame() {
try {
// Obtain the set of all traceable objects of the specified type.
Collection<ARBody> bodies = mSession.getAllTrackables(ARBody.class);
for (ARBody body : bodies) {
if (body.getTrackingState() != ARTrackable.TrackingState.TRACKING){
continue;
}
mBody = body;
hasBodyTracking = true;
}
// Update the body recognition information displayed on the screen.
StringBuilder sb = new StringBuilder();
updateMessageData(sb, mBody);
Size textureSize = mSession.getCameraConfig().getTextureDimensions();
if (mIsWithMaskData && hasBodyTracking && mBackgroundDisplay instanceof BodyMaskDisplay) {
((BodyMaskDisplay) mBackgroundDisplay).onDrawFrame(mArFrame, mBody.getMaskConfidence(),
textureSize.getWidth(), textureSize.getHeight());
}
// Display the updated body information on the screen.
mTextDisplay.onDrawFrame(sb.toString());
for (BodyRelatedDisplay bodyRelatedDisplay : mBodyRelatedDisplays) {
bodyRelatedDisplay.onDrawFrame(bodies, mProjectionMatrix);
}
} catch (ArDemoRuntimeException e) {
LogUtil.error(TAG, "Exception on the ArDemoRuntimeException!");
} catch (ARFatalException | IllegalArgumentException | ARDeadlineExceededException |
ARUnavailableServiceApkTooOldException t) {
Log(…);
}
}
// Update body-related data for display.
private void updateMessageData(StringBuilder sb, ARBody body) {
if (body == null) {
return;
}
float fpsResult = doFpsCalculate();
sb.append("FPS=").append(fpsResult).append(System.lineSeparator());
int bodyAction = body.getBodyAction();
sb.append("bodyAction=").append(bodyAction).append(System.lineSeparator());
}
}
Customize the camera preview class, which is used to implement human body drawing based on a certain confidence level.
Code:
public class BodyMaskDisplay implements BaseBackGroundDisplay {}
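To make the idea of confidence-based drawing more concrete, here is a simplified, illustrative sketch (not the demo's actual BodyMaskDisplay code) of how the per-pixel mask confidence could be uploaded to an OpenGL ES texture so that a fragment shader can decide where the real body should cover the virtual character. The class and method names are hypothetical, and the assumption that the confidence data arrives as a single-channel ByteBuffer should be checked against the API Reference:
Code:
import android.opengl.GLES20;
import java.nio.ByteBuffer;

class MaskTextureUploader {
    private int maskTextureId = -1;

    // Upload the per-pixel body confidence into a single-channel GL texture. A fragment
    // shader can then sample this texture and hide the virtual character wherever the
    // confidence indicates that a real body is in front of it.
    void uploadMaskTexture(ByteBuffer maskConfidence, int width, int height) {
        if (maskTextureId == -1) {
            int[] ids = new int[1];
            GLES20.glGenTextures(1, ids, 0);
            maskTextureId = ids[0];
        }
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, maskTextureId);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        // One byte per pixel: the confidence that the pixel belongs to a human body.
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE, width, height, 0,
                GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, maskConfidence);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0);
    }
}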
Obtain skeleton data and pass the data to OpenGL ES, which renders the data and displays it on the screen.
Code:
public class BodySkeletonDisplay implements BodyRelatedDisplay {}
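For illustration, the sketch below shows one way skeleton coordinates could be pushed to OpenGL ES as a point list. It mirrors the onDrawFrame(bodies, mProjectionMatrix) call made in BodyRendererManager above, but the class name is hypothetical and it assumes that ARBody exposes the joint coordinates as a float array via getSkeletonPoint3D(); check the API Reference for the exact accessor and return type:
Code:
import android.opengl.GLES20;
import com.huawei.hiar.ARBody;
import com.huawei.hiar.ARTrackable;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import java.util.Collection;

public class SimpleSkeletonPointsDisplay {
    private int vbo = -1;

    public void onDrawFrame(Collection<ARBody> bodies, float[] projectionMatrix) {
        for (ARBody body : bodies) {
            if (body.getTrackingState() != ARTrackable.TrackingState.TRACKING) {
                continue;
            }
            // Assumed accessor: x, y, z coordinates for each skeleton joint.
            float[] points = body.getSkeletonPoint3D();
            FloatBuffer buffer = ByteBuffer.allocateDirect(points.length * 4)
                    .order(ByteOrder.nativeOrder())
                    .asFloatBuffer()
                    .put(points);
            buffer.position(0);
            if (vbo == -1) {
                int[] ids = new int[1];
                GLES20.glGenBuffers(1, ids, 0);
                vbo = ids[0];
            }
            // Upload the joint coordinates to a vertex buffer. A complete renderer would
            // also bind a shader program, set the vertex attribute pointer, pass
            // projectionMatrix as a uniform, and then draw the buffer with GL_POINTS.
            GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo);
            GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, points.length * 4, buffer, GLES20.GL_DYNAMIC_DRAW);
            GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
        }
    }
}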
Obtain skeleton point connection data and pass it to OpenGL ES, which renders the data and displays it on the screen.
Code:
public class BodySkeletonLineDisplay implements BodyRelatedDisplay {}
Conclusion​True-to-life AR livestreaming is now an essential feature in e-commerce apps, but developing this capability from scratch can be costly and time-consuming. AR Engine SDK is the best and most convenient SDK I've encountered, and it's done wonders for my app by recognizing individuals within images with accuracy as high as 90% and providing the detailed information required to support immersive, real-world interactions. Try it out in your own app to add powerful and interactive features that will have your users clamoring to shop more!
References​AR Engine Development Guide
Sample Code
API Reference
