Precise and Immersive AR for Interior Design - Huawei Developers

Augmented reality (AR) technologies are increasingly widespread, notably in the field of interior design, as they allow users to visualize real spaces and apply furnishings to them with remarkable ease. HMS Core AR Engine is a must-have for developers creating AR-based interior design apps, since it's easy to use, covers all the basics, and considerably streamlines the development process. It is an engine for AR apps that bridges the virtual and real worlds, delivering a brand-new, visually interactive user experience. AR Engine's motion tracking capability allows your app to output the real-time 3D coordinates of interior spaces, convert these coordinates between the real and virtual worlds, and use this information to determine the correct position of furniture. With AR Engine integrated, your app will be able to provide users with AR-based interior design features that are easy to use.
As a key component of AR Engine, the motion tracking capability bridges real and virtual worlds, by facilitating the construction of a virtual framework, tracking how the position and pose of user devices change in relation to their surroundings, and outputting the 3D coordinates of the surroundings.
About This Feature​The motion tracking capability provides a geometric link between real and virtual worlds, by tracking the changes of the device's position and pose in relation to its surroundings, and determining the conversion of coordinate systems between the real and virtual worlds. This allows virtual furnishings to be rendered from the perspective of the device user, and overlaid on images captured by the camera.
For example, in an AR-based car exhibition, virtual cars can be placed precisely in the target position, creating a virtual space that's seamlessly in sync with the real world.
The basic condition for implementing real-virtual interaction is tracking the motion of the device in real time, and updating the status of virtual objects in real time based on the motion tracking results. This means that the precision and quality of motion tracking directly affect the AR effects available on your app. Any delay or error can cause a virtual object to jitter or drift, which undermines the sense of reality and immersion offered to users by AR.
Advantages​Simultaneous localization and mapping (SLAM) 3.0, released in AR Engine 3.0, enhances motion tracking performance in the following ways:
With the 6DoF motion tracking mode, users are able to observe virtual objects in an immersive manner from different distances, directions, and angles.
Stability of virtual objects is ensured, thanks to monocular absolute trajectory error (ATE) as low as 1.6 cm.
The plane detection takes no longer than one second, facilitating plane recognition and expansion.
Integration Procedure​Logging In to HUAWEI Developers and Creating an App​This step is self-explanatory: sign in to HUAWEI Developers and create an app in AppGallery Connect.
Integrating the AR Engine SDK​1. Open the project-level build.gradle file in Android Studio, and add the Maven repository (a Gradle plugin version earlier than 7.0 is used as an example).
Go to buildscript > repositories and configure the Maven repository address for the SDK.
Go to allprojects > repositories and configure the Maven repository address for the SDK.
Code:
buildscript {
repositories {
google()
jcenter()
// Configure the Maven repository address for the HMS Core SDK.
maven {url "https://developer.huawei.com/repo/" }
}
}
allprojects {
repositories {
google()
jcenter()
// Configure the Maven repository address for the HMS Core SDK.
maven {url "https://developer.huawei.com/repo/" }
}
}
2. Open the app-level build.gradle file in your project and add the AR Engine SDK dependency in the dependencies block.
Code:
dependencies {
implementation 'com.huawei.hms:arenginesdk:3.1.0.1'
}
Code Development​1. Check whether AR Engine has been installed on the current device. If yes, your app can run properly. If not, your app should automatically redirect the user to AppGallery to install AR Engine.
Code:
private boolean arEngineAbilityCheck() {
boolean isInstallArEngineApk = AREnginesApk.isAREngineApkReady(this);
if (!isInstallArEngineApk && isRemindInstall) {
Toast.makeText(this, "Please agree to install.", Toast.LENGTH_LONG).show();
finish();
}
LogUtil.debug(TAG, "Is Install AR Engine Apk: " + isInstallArEngineApk);
if (!isInstallArEngineApk) {
startActivity(new Intent(this, com.huawei.arengine.demos.common.ConnectAppMarketActivity.class));
isRemindInstall = true;
}
return AREnginesApk.isAREngineApkReady(this);
}
2. Check permissions before running.
Configure the camera permission in the AndroidManifest.xml file.
Code:
<uses-permission android:name="android.permission.CAMERA" />
Then check and request the permission at runtime:
Code:
private static final int REQUEST_CODE_ASK_PERMISSIONS = 1;
private static final int MAX_ARRAYS = 10;
private static final String[] PERMISSIONS_ARRAYS = new String[]{Manifest.permission.CAMERA};
List<String> permissionsList = new ArrayList<>(MAX_ARRAYS);
boolean isHasPermission = true;
for (String permission : PERMISSIONS_ARRAYS) {
if (ContextCompat.checkSelfPermission(activity, permission) != PackageManager.PERMISSION_GRANTED) {
isHasPermission = false;
break;
}
}
if (!isHasPermission) {
for (String permission : PERMISSIONS_ARRAYS) {
if (ContextCompat.checkSelfPermission(activity, permission) != PackageManager.PERMISSION_GRANTED) {
permissionsList.add(permission);
}
}
ActivityCompat.requestPermissions(activity,
permissionsList.toArray(new String[permissionsList.size()]), REQUEST_CODE_ASK_PERMISSIONS);
}
3. Create an ARSession object for motion tracking by calling ARWorldTrackingConfig.
Code:
private ARSession mArSession;
private ARWorldTrackingConfig mConfig;
mArSession = new ARSession(this);
mConfig = new ARWorldTrackingConfig(mArSession);
mConfig.setCameraLensFacing(ARConfigBase.CameraLensFacing.FRONT); // Set scene parameters by calling mConfig.setXXX.
mConfig.setPowerMode(ARConfigBase.PowerMode.ULTRA_POWER_SAVING);
mArSession.configure(mConfig);
mArSession.resume();
mArSession.setCameraTextureName(mTextureDisplay.getExternalTextureId());
ARFrame arFrame = mArSession.update(); // Obtain a frame of data from ARSession.
// Set the environment texture probe and mode after the camera is initialized.
setEnvTextureData();
ARCamera arCamera = arFrame.getCamera(); // Obtain ARCamera from ARFrame. ARCamera can then be used for obtaining the camera's projection matrix to render the window.
// The size of the projection matrix is 4 x 4.
float[] projectionMatrix = new float[16];
arCamera.getProjectionMatrix(projectionMatrix, PROJ_MATRIX_OFFSET, PROJ_MATRIX_NEAR, PROJ_MATRIX_FAR);
mTextureDisplay.onDrawFrame(arFrame);
StringBuilder sb = new StringBuilder();
updateMessageData(arFrame, sb);
mTextDisplay.onDrawFrame(sb);
// The size of ViewMatrix is 4 x 4.
float[] viewMatrix = new float[16];
arCamera.getViewMatrix(viewMatrix, 0);
for (ARPlane plane : mArSession.getAllTrackables(ARPlane.class)) { // Obtain all trackable planes from ARSession.
if (plane.getType() != ARPlane.PlaneType.UNKNOWN_FACING
&& plane.getTrackingState() == ARTrackable.TrackingState.TRACKING) {
hideLoadingMessage();
break;
}
}
drawTarget(mArSession.getAllTrackables(ARTarget.class), arCamera, viewMatrix, projectionMatrix);
mLabelDisplay.onDrawFrame(mArSession.getAllTrackables(ARPlane.class), arCamera.getDisplayOrientedPose(),
projectionMatrix);
handleGestureEvent(arFrame, arCamera, projectionMatrix, viewMatrix);
ARLightEstimate lightEstimate = arFrame.getLightEstimate();
ARPointCloud arPointCloud = arFrame.acquirePointCloud();
getEnvironmentTexture(lightEstimate);
drawAllObjects(projectionMatrix, viewMatrix, getPixelIntensity(lightEstimate));
mPointCloud.onDrawFrame(arPointCloud, viewMatrix, projectionMatrix);
ARHitResult hitResult = hitTest4Result(arFrame, arCamera, event.getEventSecond());
if (hitResult != null) {
mSelectedObj.setAnchor(hitResult.createAnchor()); // Create an anchor at the hit position to enable AR Engine to continuously track the position.
}
4. Draw the required virtual object based on the anchor position.
Code:
mEnvTextureBtn.setOnCheckedChangeListener((compoundButton, b) -> {
mEnvTextureBtn.setEnabled(false);
handler.sendEmptyMessageDelayed(MSG_ENV_TEXTURE_BUTTON_CLICK_ENABLE,
BUTTON_REPEAT_CLICK_INTERVAL_TIME);
mEnvTextureModeOpen = !mEnvTextureModeOpen;
if (mEnvTextureModeOpen) {
mEnvTextureLayout.setVisibility(View.VISIBLE);
} else {
mEnvTextureLayout.setVisibility(View.GONE);
}
int lightingMode = refreshLightMode(mEnvTextureModeOpen, ARConfigBase.LIGHT_MODE_ENVIRONMENT_TEXTURE);
refreshConfig(lightingMode);
});
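The step 4 code above wires up the environment texture switch; the drawing itself usually reads the anchor's pose every frame and turns it into a model matrix for the renderer. Below is a minimal sketch under that assumption: the VirtualObject type (the class behind mSelectedObj) and its getAnchor()/draw() methods are hypothetical placeholders for your own renderer, while ARAnchor, ARPose, and ARTrackable come from AR Engine.
Code:
// A minimal sketch: render the selected furniture model at its anchor position.
// VirtualObject, getAnchor(), and draw() are placeholders for your own OpenGL renderer.
private void drawAnchoredObject(VirtualObject obj, float[] viewMatrix, float[] projectionMatrix) {
ARAnchor anchor = obj.getAnchor();
if (anchor == null || anchor.getTrackingState() != ARTrackable.TrackingState.TRACKING) {
return; // Skip drawing while the anchor is not being tracked.
}
// Convert the anchor pose into a 4 x 4 model matrix.
float[] modelMatrix = new float[16];
anchor.getPose().toMatrix(modelMatrix, 0);
// The renderer multiplies projection * view * model to place the furniture in the scene.
obj.draw(modelMatrix, viewMatrix, projectionMatrix);
}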
Reference​>> About AR Engine
>> AR Engine Development Guide
>> Open-source repository at GitHub and Gitee
>> HUAWEI Developers
>> Development Documentation

Related

Develop an ID Photo DIY Applet with HMS MLKit SDK in 30 Min

For more information like this, visit the HUAWEI Developer Forum.
Original link: https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0201257812100840239&fid=0101187876626530001
This is application-level development, so we won't go into the image segmentation algorithm itself. Instead, we use HUAWEI ML Kit, which provides the image segmentation capability, and show how to quickly develop an ID photo DIY applet with this SDK.
Background
I don't know if you have had such an experience: all of a sudden, your school or company needs a one-inch or two-inch head photo, for example for a passport or student card, with specific requirements for the background color. However, many people don't have time to go to a photo studio, or the photos they already have don't meet the background color requirements. I had a similar experience. The school asked for a passport photo, the campus photo studio was closed, so I took a photo with my phone in a hurry and used a bedspread as the background. As a result, I was scolded by the teacher.
Many years later, ML Kit's machine learning services include an image segmentation function. Using this SDK to build a small ID photo DIY program would have perfectly solved that embarrassment.
Here is the demo for the result.
How effective is it? Pretty great, and all it takes is a small program that you can build quickly!
Core Tip: This SDK is free, and all Android models are covered!
ID Photo Development in Practice
1. Preparation
1.1 Add the Huawei Maven Repository in the Project-Level Gradle File
Open the project-level build.gradle file in Android Studio.
Add the following Maven repository addresses:
Code:
buildscript {
repositories {
maven { url 'http://developer.huawei.com/repo/' }
}
}
allprojects {
repositories {
maven { url 'http://developer.huawei.com/repo/' }
}
}
1.2 Add SDK Dependency in Application Level build.gradle
Introduce the image segmentation SDK and the body segmentation model package:
Code:
dependencies {
implementation 'com.huawei.hms:ml-computer-vision:1.0.2.300'
implementation 'com.huawei.hms:ml-computer-vision-image-segmentation-body-model:1.0.2.301'
}
1.3 Add the Model in the AndroidManifest.xml File
To enable the application to automatically update the latest machine learning model to the user's device after the user installs your application from AppGallery, add the following statement to the AndroidManifest.xml file of the application:
Code:
<manifest ...>
<application ...>
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value="imgseg" />
</application>
</manifest>
1.4 Apply for Camera and Storage Permission in Android manifest.xml File
Code:
<!-- Apply for the storage permission. -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<!-- Apply for the camera permission, as stated in the section title. -->
<uses-permission android:name="android.permission.CAMERA" />
2. Two Key Steps of Code Development
2.1 Dynamic Permission Application
Code:
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
if (!allPermissionsGranted()) {
getRuntimePermissions();
}
}

@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
@NonNull int[] grantResults) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults);
if (requestCode != PERMISSION_REQUESTS) {
return;
}
boolean isNeedShowDiag = false;
for (int i = 0; i < permissions.length; i++) {
if (permissions[i].equals(Manifest.permission.READ_EXTERNAL_STORAGE) && grantResults[i] != PackageManager.PERMISSION_GRANTED) {
isNeedShowDiag = true;
}
}
if (isNeedShowDiag && !ActivityCompat.shouldShowRequestPermissionRationale(this, Manifest.permission.READ_EXTERNAL_STORAGE)) {
AlertDialog dialog = new AlertDialog.Builder(this)
.setMessage(getString(R.string.camera_permission_rationale))
.setPositiveButton(getString(R.string.settings), new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
// Open the corresponding settings page based on the package name.
Intent intent = new Intent(Settings.ACTION_APPLICATION_DETAILS_SETTINGS);
intent.setData(Uri.parse("package:" + getPackageName()));
startActivityForResult(intent, 200);
}
})
.setNegativeButton(getString(R.string.cancel), new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
finish();
}
}).create();
dialog.show();
}
}
2.2 Creating an Image Segmentation Detector
The image segmentation detector can be created through the image segmentation detection configurator MLImageSegmentationSetting.
Code:
MLImageSegmentationSetting setting = new MLImageSegmentationSetting.Factory()
.setAnalyzerType(MLImageSegmentationSetting.BODY_SEG)
.setExact(true)
.create();
this.analyzer = MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer(setting);
2.3 Create an MLFrame Object Through android.graphics.Bitmap for the Analyzer to Detect Images
Create an MLFrame object from the bitmap that you want to segment.
Code:
MLFrame mlFrame = new MLFrame.Creator().setBitmap(this.originBitmap).create();
2.4 Call the asyncAnalyseFrame Method for Image Segmentation
Code:
// Create a task to asynchronously process the frame returned by the image segmentation detector.
Task<MLImageSegmentation> task = this.analyzer.asyncAnalyseFrame(mlFrame);
task.addOnSuccessListener(new OnSuccessListener<MLImageSegmentation>() {
@Override public void onSuccess(MLImageSegmentation mlImageSegmentationResults) {
// Transacting logic for segment success.
if (mlImageSegmentationResults != null) {
StillCutPhotoActivity.this.foreground = mlImageSegmentationResults.getForeground();
StillCutPhotoActivity.this.preview.setImageBitmap(StillCutPhotoActivity.this.foreground);
StillCutPhotoActivity.this.processedImage = ((BitmapDrawable) ((ImageView) StillCutPhotoActivity.this.preview).getDrawable()).getBitmap();
StillCutPhotoActivity.this.changeBackground();
} else {
StillCutPhotoActivity.this.displayFailure();
}
}
}).addOnFailureListener(new OnFailureListener() {
@Override public void onFailure(Exception e) {
// Transacting logic for segment failure.
StillCutPhotoActivity.this.displayFailure();
return;
}
});
2.5 Change the Picture Background
Code:
this.backgroundBitmap = BitmapUtils.loadFromPath(StillCutPhotoActivity.this, id, targetedSize.first, targetedSize.second);
BitmapDrawable drawable = new BitmapDrawable(backgroundBitmap);
this.preview.setDrawingCacheEnabled(true);
this.preview.setBackground(drawable);
this.preview.setImageBitmap(this.foreground);
this.processedImage = Bitmap.createBitmap(this.preview.getDrawingCache());
this.preview.setDrawingCacheEnabled(false);
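Because the app declares the storage permission, you will typically also want to save the composed image after changing the background. The following is a minimal sketch using the standard Android MediaStore API; the saveProcessedImage() method name and the moment you call it are assumptions, not part of the original demo.
Code:
// A minimal sketch: save the composed ID photo to the gallery via MediaStore.
private void saveProcessedImage(Bitmap bitmap) {
ContentValues values = new ContentValues();
values.put(MediaStore.Images.Media.DISPLAY_NAME, "id_photo_" + System.currentTimeMillis() + ".jpg");
values.put(MediaStore.Images.Media.MIME_TYPE, "image/jpeg");
Uri uri = getContentResolver().insert(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, values);
if (uri == null) {
return;
}
try (OutputStream out = getContentResolver().openOutputStream(uri)) {
bitmap.compress(Bitmap.CompressFormat.JPEG, 100, out);
} catch (IOException e) {
// Handle the write failure as required.
}
}
Calling saveProcessedImage(this.processedImage) after changeBackground() finishes would persist the result.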
Conclusion
In this way, a small ID photo DIY program is complete. Let's see the demo.
If you like hands-on work, you can also add features such as changing outfits. The source code has been uploaded to GitHub, where you can also improve this function.
https://github.com/HMS-MLKit/HUAWEI-HMS-MLKit-Sample
See the GitHub source code above (the project directory is id-photo-diy).
Based on the image segmentation capability, you can build not only the ID photo DIY program, but also the following related functions:
Cut out portraits from everyday photos, create interesting pictures by changing the background, or blur the background to get more beautiful and artistic photos.
Identify the sky, plants, food, cats and dogs, flowers, water, sand, buildings, mountains, and other elements in an image, and apply targeted beautification to them, such as making the sky bluer and the water clearer.
Identify the objects in the video stream, edit the special effects of the video stream, and change the background.
Feel free to brainstorm other functions!
For a more detailed development guide, refer to the official website of the HUAWEI Developer Alliance:
https://developer.huawei.com/consumer/en/doc/development/HMS-Guides/ml-introduction-4
Previous link:
NO. 1:One article to understand Huawei HMS ML Kit text recognition, bank card recognition, general card identification
NO.2: Integrating MLkit and publishing ur app on Huawei AppGallery
NO.3.: Comparison Between Zxing and Huawei HMS Scan Kit
NO.4: How to use Huawei HMS MLKit service to quickly develop a photo translation app

Easily Capture Body Motion with HUAWEI ML Kit’s Skeleton Detection

Are you the kind of person who tends to go stiff and awkward when there’s a camera on you, and ends up looking unnatural in photos? If so, posture snapshots can help. All you need to do is select a posture template, and the camera will automatically take snapshots when it detects your body is in that position. This means photographs are only taken when you’re at your most natural. In this post, I'm going to show you how to integrate HUAWEI ML Kit's skeleton detection function into your apps. This function locates 14 skeleton points, and easily captures images of specific postures.
Skeleton Detection Function Development
1. Preparations
Before you get started, you need to make the necessary preparations. Also, ensure that the Maven repository address for the HMS Core SDK has been configured in your project, and the skeleton detection SDK has been integrated.
1.1 Configure the Maven Repository Address to the Project-Level build.gradle File
Code:
buildscript {
repositories {
google()
jcenter()
maven { url 'https://developer.huawei.com/repo/' }
}
dependencies {
classpath "com.android.tools.build:gradle:3.3.2"
}
}
Code:
allprojects {
repositories {
google()
jcenter()
maven { url 'https://developer.huawei.com/repo/' }
}
}
1.2 Add SDK Dependencies to the App-Level build.gradle File
Code:
dependencies {
implementation 'com.huawei.hms:ml-computer-vision-skeleton-model:2.0.1.300'
implementation 'com.huawei.hms:ml-computer-vision-skeleton:2.0.1.300'
implementation 'com.huawei.hms:ml-computer-vision-base:2.0.1.300'
}
2 Code Development
2.1 Static Image Detection
2.1.1 Create a Skeleton Analyzer
Code:
MLSkeletonAnalyzer analyzer = MLSkeletonAnalyzerFactory.getInstance().getSkeletonAnalyzer();
2.1.2 Create an MLFrame Using a Bitmap
The image resolution should be not less than 320 x 320 pixels and not greater than 1920 x 1920 pixels.
Code:
// Create an MLFrame using the bitmap.
MLFrame frame = MLFrame.fromBitmap(bitmap);
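Because of the 320 x 320 to 1920 x 1920 resolution constraint mentioned above, it can help to scale the bitmap before wrapping it in an MLFrame. The helper below is a hedged sketch using only standard Android APIs; the method name is an assumption.
Code:
// A minimal sketch: scale a bitmap so its longer side stays within 1920 px
// and its shorter side reaches at least 320 px before creating the MLFrame.
private Bitmap fitForSkeletonDetection(Bitmap src) {
int maxSide = Math.max(src.getWidth(), src.getHeight());
int minSide = Math.min(src.getWidth(), src.getHeight());
float scale = 1.0f;
if (maxSide > 1920) {
scale = 1920f / maxSide;
} else if (minSide < 320) {
scale = 320f / minSide;
}
if (scale == 1.0f) {
return src;
}
return Bitmap.createScaledBitmap(src, Math.round(src.getWidth() * scale), Math.round(src.getHeight() * scale), true);
}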
2.1.3 Call the asyncAnalyseFrame Method to Perform Skeleton Detection
Code:
Task<List<MLSkeleton>> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<List<MLSkeleton>>() {
public void onSuccess(List<MLSkeleton> skeletons) {
// Process the detection result.
}
}).addOnFailureListener(new OnFailureListener() {
public void onFailure(Exception e) {
// Detection failure.
}
});
2.1.4 Stop the Analyzer and Release Resources When the Detection Ends
Code:
try {
if (analyzer != null) {
analyzer.stop();
}
} catch (IOException e) {
// Exception handling.
}
2.2 Dynamic Video Detection
2.2.1 Create a Skeleton Analyzer
Code:
MLSkeletonAnalyzer analyzer = MLSkeletonAnalyzerFactory.getInstance().getSkeletonAnalyzer();
2.2.2 Create the SkeletonAnalyzerTransactor Class to Process the Detection Result
This class implements the MLAnalyzer.MLTransactor<T> API. You can use the transactResult method to obtain the detection result and implement specific services.
Code:
public class SkeletonAnalyzerTransactor implements MLAnalyzer.MLTransactor<MLSkeleton> {
@Override
public void transactResult(MLAnalyzer.Result<MLSkeleton> results) {
SparseArray<MLSkeleton> items = results.getAnalyseList();
// You can process the detection result as required. For example, calculate the similarity in this method to perform an operation, such as taking a photo when a specific posture has been detected.
// Only the detection result is processed. Other detection APIs provided by ML Kit cannot be called.
// Convert the result encapsulated using SparseArray to an ArrayList for similarity calculation.
List<MLSkeleton> resultsList = new ArrayList<>();
for (int i = 0; i < items.size(); i++) {
resultsList.add(items.valueAt(i));
}
// Calculate the similarity between the detection result and template.
// templateList is a list of skeleton templates. Templates can be generated by detecting static images. The skeleton detection service supports single-person and multi-person template matching.
float result = analyzer.caluteSimilarity(resultsList, templateList);
}
@Override
public void destroy() {
// Callback method used to release resources when the detection ends.
}
}
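The templateList passed to caluteSimilarity() can be produced in advance by running the static image detection from section 2.1 on a template photo and keeping the returned skeletons. A hedged sketch follows; the buildTemplate() method and templateList field are assumptions.
Code:
// A minimal sketch: detect a posture template from a static image and keep it for similarity matching.
private List<MLSkeleton> templateList = new ArrayList<>();

private void buildTemplate(Bitmap templateBitmap) {
MLFrame frame = MLFrame.fromBitmap(templateBitmap);
analyzer.asyncAnalyseFrame(frame).addOnSuccessListener(new OnSuccessListener<List<MLSkeleton>>() {
@Override
public void onSuccess(List<MLSkeleton> skeletons) {
// Keep the detected skeletons as the template used by caluteSimilarity().
templateList = skeletons;
}
});
}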
2.2.3 Set the Detection Result Processor to Bind the Analyzer
Code:
analyzer.setTransactor(new SkeletonAnalyzerTransactor());
2.2.4 Create the LensEngine Class
This class is provided by the HMS Core ML SDK. It captures dynamic video streams from the camera, and sends them to the analyzer. The camera display size should be not less than 320 x 320 pixels and not greater than 1920 x 1920 pixels.
Code:
// Create LensEngine.
LensEngine lensEngine = new LensEngine.Creator(getApplicationContext(), analyzer)
.setLensType(LensEngine.BACK_LENS)
.applyDisplayDimension(1280, 720)
.applyFps(20.0f)
.enableAutomaticFocus(true)
.create();
2.2.5 Open the Camera
You can obtain and detect the video streams, and then stop the analyzer and release resources when the detection ends. The code below shows how to stop and release; a sketch for starting the camera follows it.
Code:
if (analyzer != null) {
try {
analyzer.stop();
} catch (IOException e) {
// Exception handling.
}
}
if (lensEngine != null) {
lensEngine.release();
}
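The snippet above covers stopping and releasing only. Starting the camera works the same way as in the other ML Kit services: pass a SurfaceView holder to LensEngine.run(). A minimal sketch follows; the R.id.surface_view ID is an assumption from your layout.
Code:
// A minimal sketch: start the camera and feed frames to the skeleton analyzer.
SurfaceView surfaceView = findViewById(R.id.surface_view); // Hypothetical view ID.
try {
lensEngine.run(surfaceView.getHolder());
} catch (IOException e) {
// Exception handling: release the engine if the camera fails to start.
lensEngine.release();
}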
Let's take a look at the dynamic video.
You can do all sorts of things with HUAWEI ML Kit's skeleton detection function:
Create virtual images to simulate live action in motion sensing games.
Provide posture guidance to enhance workouts and rehabilitation training.
Detect unusual behavior in video surveillance footage.
Well explained. Will it support non-Huawei devices if I install the HMS Core APK?
Good Article Thanks :victory:
Nice explanation. Can you please provide a scenario where I can use this service. Thank you

Create a New Paradigm for Photo Gallery Management by ML Kit

Overview
"Hey, I just took some pictures of that gorgeous view. Take a look."
"Yes, let me see."
... (a few minutes later)
"Where are they?"
"Wait a minute. There are too many pictures."
Have you experienced this type of frustration before? Finding one specific image in a gallery packed with thousands of images can be a daunting task. Wouldn't it be nice if there was a way to search for images by category, rather than having to browse through your entire album to find what you want?
Our thoughts exactly! That's why we created HUAWEI ML Kit's scene detection service, which empowers your app to build a smart album, bolstered by intelligent image classification, the result of detecting and labeling elements within images. With this service, you'll be able to locate any image in little time, and with zero hassle.
Features
ML Kit's scene detection service is able to classify and annotate images with food, flowers, plants, cats, dogs, kitchens, mountains, and washers, among a multitude of other items, as well as provide for an enhanced user experience based on the detected information.
The service contains the following features:
Multi-scenario detection
Detects 102 scenarios, with more scenarios continually added.
High detection accuracy
Detects a wide range of objects with a high degree of accuracy.
Fast detection response
Responds in milliseconds, and continually optimizes performance.
Simple and efficient integration
Facilitates simple and cost-effective integration, with APIs and SDK packages.
Applicable Scenarios
In addition to creating smart albums, retrieving, and classifying images, the scene detection service can also automatically select corresponding filters and camera parameters to help users take better images, by detecting where the users are located.
Development Practice
1. Preparations
1.1 Configure app information in AppGallery Connect.
Before you start developing an app, configure the app information in AppGallery Connect. For details, please refer to Development Guide.
1.2 Configure the Maven repository address for the HMS Core SDK, and integrate the SDK for the service.
(1) Open the build.gradle file in the root directory of your Android Studio project.
(2) Add the AppGallery Connect plug-in and the Maven repository.
Go to allprojects > repositories and configure the Maven repository address for the HMS Core SDK.
Go to buildscript > repositories and configure the Maven repository address for the HMS Core SDK.
If the agconnect-services.json file has been added to the app, go to buildscript > dependencies and add the AppGallery Connect plug-in configuration.
Code:
buildscript {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
dependencies {
...
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
}
}
allprojects {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
}
2. Code Development
Static Image Detection
2.1 Create a scene detection analyzer instance.
Code:
// Method 1: Use default parameter settings.
MLSceneDetectionAnalyzer analyzer = MLSceneDetectionAnalyzerFactory.getInstance().getSceneDetectionAnalyzer();
// Method 2: Create a scene detection analyzer instance based on the customized configuration.
MLSceneDetectionAnalyzerSetting setting = new MLSceneDetectionAnalyzerSetting.Factory()
// Set confidence for scene detection.
.setConfidence(confidence)
.create();
MLSceneDetectionAnalyzer analyzer = MLSceneDetectionAnalyzerFactory.getInstance().getSceneDetectionAnalyzer(setting);
2.2 Create an MLFrame object by using the android.graphics.Bitmap. JPG, JPEG, PNG, and BMP images are all supported.
Code:
MLFrame frame = new MLFrame.Creator().setBitmap(bitmap).create();
2.3 Implement scene detection.
Code:
// Method 1: Detect in synchronous mode.
SparseArray<MLSceneDetection> results = analyzer.analyseFrame(frame);
// Method 2: Detect in asynchronous mode.
Task<List<MLSceneDetection>> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<List<MLSceneDetection>>() {
public void onSuccess(List<MLSceneDetection> result) {
// Processing logic for scene detection success.
}})
.addOnFailureListener(new OnFailureListener() {
public void onFailure(Exception e) {
// Processing logic for scene detection failure.
if (e instanceof MLException) {
MLException mlException = (MLException)e;
// Obtain the error code. You can process the error code and customize respective messages displayed to users.
int errorCode = mlException.getErrCode();
// Obtain the error information. You can quickly locate the fault based on the error code.
String errorMessage = mlException.getMessage();
} else {
// Other exceptions.
}
}
});
2.4 Stop the analyzer and release the detection resources when the detection ends.
Code:
if (analyzer != null) {
analyzer.stop();
}
Camera Stream Detection
You can process camera streams, convert them into an MLFrame object, and detect scenarios using the static image detection method.
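If you manage the camera yourself instead of using LensEngine, one way to turn an NV21 preview frame into an MLFrame is to convert it to a Bitmap with the standard YuvImage API and then reuse the same Creator call as in the static image case. This is a hedged sketch under that assumption; the helper name is hypothetical.
Code:
// A minimal sketch: convert an NV21 camera preview frame into an MLFrame via a Bitmap.
private MLFrame frameFromNv21(byte[] nv21, int width, int height) {
YuvImage yuvImage = new YuvImage(nv21, ImageFormat.NV21, width, height, null);
ByteArrayOutputStream out = new ByteArrayOutputStream();
yuvImage.compressToJpeg(new Rect(0, 0, width, height), 90, out);
byte[] jpegBytes = out.toByteArray();
Bitmap bitmap = BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.length);
return new MLFrame.Creator().setBitmap(bitmap).create();
}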
If the synchronous detection API is called, you can also use the LensEngine class built into the SDK to detect scenarios in camera streams. The following is the sample code:
3.1 Create a scene detection analyzer, which can only be created on the device.
Code:
MLSceneDetectionAnalyzer analyzer = MLSceneDetectionAnalyzerFactory.getInstance().getSceneDetectionAnalyzer();
3.2 Create the SceneDetectionAnalyzerTransactor class for processing detection results. This class implements the MLAnalyzer.MLTransactor<T> API and uses the transactResult method in this API to obtain the detection results and implement specific services.
Code:
public class SceneDetectionAnalyzerTransactor implements MLAnalyzer.MLTransactor<MLSceneDetection> {
@Override
public void transactResult(MLAnalyzer.Result<MLSceneDetection> results) {
SparseArray<MLSceneDetection> items = results.getAnalyseList();
// Determine detection result processing as required. Note that only the detection results are processed.
// Other detection-related APIs provided by ML Kit cannot be called.
}
@Override
public void destroy() {
// Callback method used to release resources when the detection ends.
}
}
3.3 Set the detection result processor to bind the analyzer to the result processor.
Code:
analyzer.setTransactor(new SceneDetectionAnalyzerTransactor());
// Create an instance of the LensEngine class provided by the HMS Core ML SDK to capture dynamic camera streams and pass the streams to the analyzer.
Context context = this.getApplicationContext();
LensEngine lensEngine = new LensEngine.Creator(context, this.analyzer)
.setLensType(LensEngine.BACK_LENS)
.applyDisplayDimension(1440, 1080)
.applyFps(30.0f)
.enableAutomaticFocus(true)
.create();
3.4 Call the run method to start the camera and read camera streams for detection.
Code:
// Implement other logic of the SurfaceView control by yourself.
SurfaceView mSurfaceView = findViewById(R.id.surface_view);
try {
lensEngine.run(mSurfaceView.getHolder());
} catch (IOException e) {
// Exception handling logic.
}
3.5 Stop the analyzer and release the detection resources when the detection ends.
Code:
if (analyzer != null) {
analyzer.stop();
}
if (lensEngine != null) {
lensEngine.release();
}
You'll find that the sky, plants, and mountains in all of your images will be identified in an instant. Pretty exciting stuff, wouldn't you say? Feel free to try it out yourself!
GitHub Source Code
Reference
Official website of Huawei Developers
Development Guide
HMS Core official community on Reddit
Demo and sample code
Discussions on Stack Overflow

Build an Emoji Making App Effortlessly

Emojis are a must-have tool in today's online communications as they help add color to text-based chatting and allow users to better express the emotions behind their words. Since the number of preset emojis is always limited, many apps now allow users to create their own custom emojis to keep things fresh and exciting.
For example, in a social media app, users who do not want to show their faces when making video calls can use an animated character to protect their privacy, with their facial expressions applied to the character; in a live streaming or e-commerce app, virtual streamers with realistic facial expressions are much more likely to attract watchers; in a video or photo shooting app, users can control the facial expressions of an animated character when taking a selfie, and then share the selfie via social media; and in an educational app for kids, a cute animated character with detailed facial expressions will make online classes much more fun and engaging for students.
I myself am developing such a messaging app. When chatting with friends and wanting to express themselves in ways other than words, users of my app can take a photo to create an emoji of themselves, or of an animated character they have selected. The app will then identify users' facial expressions, and apply their facial expressions to the emoji. In this way, users are able to create an endless amount of unique emojis. During the development of my app, I used the capabilities provided by HMS Core AR Engine to track users' facial expressions and convert the facial expressions into parameters, which greatly reduced the development workload. Now I will show you how I managed to do this.
Implementation​AR Engine provides apps with the ability to track and recognize facial expressions in real time, which can then be converted into facial expression parameters and used to accurately control the facial expressions of virtual characters.
Currently, AR Engine provides 64 facial expressions, including eyelid, eyebrow, eyeball, mouth, and tongue movements. It supports 21 eye-related movements, including eyeball movement and opening and closing the eyes; 28 mouth movements, including opening the mouth, puckering, pulling, or licking the lips, and moving the tongue; as well as 5 eyebrow movements, including raising or lowering the eyebrows.
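In code, these expression parameters are exposed as the blend shapes of the tracked face. The sketch below follows the ARFace/ARFaceBlendShapes APIs described in the AR Engine face tracking documentation; treat the exact method names as assumptions to verify against the SDK version you integrate.
Code:
// A hedged sketch: read the facial expression parameters of the tracked face each frame.
// Verify getFaceBlendShapes() and getBlendShapeData() against your AR Engine SDK version.
for (ARFace face : mArSession.getAllTrackables(ARFace.class)) {
if (face.getTrackingState() != ARTrackable.TrackingState.TRACKING) {
continue;
}
ARFaceBlendShapes blendShapes = face.getFaceBlendShapes();
// One weight per supported expression (eyelid, eyebrow, eyeball, mouth, and tongue movements).
float[] expressionWeights = blendShapes.getBlendShapeData();
// Map these weights onto the animated character's rig to drive its expressions.
}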
Demo​Facial expression based emoji
Development Procedure​Requirements on the Development Environment​JDK: 1.8.211 or later
Android Studio: 3.0 or later
minSdkVersion: 26 or later
targetSdkVersion: 29 (recommended)
compileSdkVersion: 29 (recommended)
Gradle version: 6.1.1 or later (recommended)
Make sure that you have downloaded the AR Engine APK from AppGallery and installed it on the device.
Test device: see Software and Hardware Requirements of AR Engine Features
If you need to use multiple HMS Core kits, use the latest versions required for these kits.
Preparations​1. Before getting started, you will need to register as a Huawei developer and complete identity verification on HUAWEI Developers. You can click here to find out the detailed registration and identity verification procedure.
2. Before development, integrate the AR Engine SDK via the Maven repository into your development environment.
3. The procedure for configuring the Maven repository address in Android Studio varies for Gradle plugin earlier than 7.0, Gradle plugin 7.0, and Gradle plugin 7.1 or later. You need to configure it according to the specific Gradle plugin version.
4. Take Gradle plugin 7.0 as an example:
Open the project-level build.gradle file in your Android Studio project and configure the Maven repository address.
Go to buildscript > repositories and configure the Maven repository address for the SDK.
Code:
buildscript {
repositories {
google()
jcenter()
maven {url "https://developer.huawei.com/repo/" }
}
}
Open the project-level settings.gradle file and configure the Maven repository address for the HMS Core SDK.
Code:
dependencyResolutionManagement {
repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS)
repositories {
google()
jcenter()
maven {url "https://developer.huawei.com/repo/" }
}
}
5. Add the following build dependency in the dependencies block.
Code:
dependencies {
implementation 'com.huawei.hms:arenginesdk:{version}'
}
App Development​1. Check whether AR Engine has been installed on the current device. If yes, your app can run properly. If not, you need to prompt the user to install it, for example, by redirecting the user to AppGallery. The sample code is as follows:
Code:
boolean isInstallArEngineApk =AREnginesApk.isAREngineApkReady(this);
if (!isInstallArEngineApk) {
// ConnectAppMarketActivity.class is the activity for redirecting users to AppGallery.
startActivity(new Intent(this, com.huawei.arengine.demos.common.ConnectAppMarketActivity.class));
isRemindInstall = true;
}
2. Create an AR scene. AR Engine supports five scenes, including motion tracking (ARWorldTrackingConfig), face tracking (ARFaceTrackingConfig), hand recognition (ARHandTrackingConfig), human body tracking (ARBodyTrackingConfig), and image recognition (ARImageTrackingConfig).
The following takes creating a face tracking scene by calling ARFaceTrackingConfig as an example.
Code:
// Create an ARSession object.
mArSession = new ARSession(this);
// Select a specific Config to initialize the ARSession object based on the application scenario.
ARFaceTrackingConfig config = new ARFaceTrackingConfig(mArSession);
Set scene parameters using the config.setXXX method.
Code:
// Set the camera opening mode, which can be external or internal. The external mode can only be used in ARFace. Therefore, you are advised to use the internal mode.
mArConfig.setImageInputMode(ARConfigBase.ImageInputMode.EXTERNAL_INPUT_ALL);
3. Set the AR scene parameters for face tracking and start face tracking.
Code:
mArSession.configure(mArConfig);
mArSession.resume();
4. Initialize the FaceGeometryDisplay class to obtain the facial geometric data and render the data on the screen.
Code:
public class FaceGeometryDisplay {
// Initialize the OpenGL ES rendering related to face geometry, including creating the shader program.
void init(Context context) {...
}
}
5. Initialize the onDrawFrame method in the FaceGeometryDisplay class, and call face.getFaceGeometry() to obtain the face mesh.
Code:
public void onDrawFrame(ARCamera camera, ARFace face) {
ARFaceGeometry faceGeometry = face.getFaceGeometry();
updateFaceGeometryData(faceGeometry);
updateModelViewProjectionData(camera, face);
drawFaceGeometry();
faceGeometry.release();
}
6. Initialize updateFaceGeometryData() in the FaceGeometryDisplay class.
Pass the face mesh data for configuration and set facial expression parameters using OpenGL ES.
Code:
private void updateFaceGeometryData (ARFaceGeometry faceGeometry) {
FloatBuffer faceVertices = faceGeometry.getVertices();
FloatBuffer textureCoordinates = faceGeometry.getTextureCoordinates();
// Obtain an array consisting of face mesh texture coordinates, which is used together with the vertex data returned by getVertices() during rendering.
}
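To give an idea of what the rest of updateFaceGeometryData() might do, here is a hedged sketch that uploads the two buffers to OpenGL ES; mVertexBufferId and mTextureCoordBufferId are assumed to be buffer handles created in init(), not names from the original demo.
Code:
// A hedged sketch: upload the face mesh vertex and texture coordinate buffers to OpenGL ES.
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVertexBufferId); // Hypothetical buffer handle created in init().
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, faceVertices.limit() * 4, faceVertices, GLES20.GL_DYNAMIC_DRAW); // 4 bytes per float.
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mTextureCoordBufferId); // Hypothetical buffer handle created in init().
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, textureCoordinates.limit() * 4, textureCoordinates, GLES20.GL_DYNAMIC_DRAW);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);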
7. Initialize the FaceRenderManager class to manage facial data rendering.
Code:
public class FaceRenderManager implements GLSurfaceView.Renderer {
public FaceRenderManager(Context context, Activity activity) {
mContext = context;
mActivity = activity;
}
// Set ARSession to obtain the latest data.
public void setArSession(ARSession arSession) {
if (arSession == null) {
LogUtil.error(TAG, "Set session error, arSession is null!");
return;
}
mArSession = arSession;
}
// Set ARConfigBase to obtain the configuration mode.
public void setArConfigBase(ARConfigBase arConfig) {
if (arConfig == null) {
LogUtil.error(TAG, "setArFaceTrackingConfig error, arConfig is null.");
return;
}
mArConfigBase = arConfig;
}
// Set the camera opening mode.
public void setOpenCameraOutsideFlag(boolean isOpenCameraOutsideFlag) {
isOpenCameraOutside = isOpenCameraOutsideFlag;
}
...
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
mFaceGeometryDisplay.init(mContext);
}
}
8. Implement the face tracking effect by calling methods like setArSession and setArConfigBase of FaceRenderManager in FaceActivity.
Code:
public class FaceActivity extends BaseActivity {
@Override
protected void onCreate(Bundle savedInstanceState) {
mFaceRenderManager = new FaceRenderManager(this, this);
mFaceRenderManager.setDisplayRotationManage(mDisplayRotationManager);
mFaceRenderManager.setTextView(mTextView);
glSurfaceView.setRenderer(mFaceRenderManager);
glSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_CONTINUOUSLY);
}
}
Conclusion​Emojis allow users to express their moods and excitement in a way words can't. Instead of providing users with a selection of the same old boring preset emojis that have been used a million times, you can now make your app more fun by allowing users to create emojis themselves! Users can easily create an emoji with their own smiles, simply by facing the camera, selecting an animated character they love, and smiling. With such an ability to customize emojis, users will be able to express their feelings in a more personalized and interesting manner. If you have any interest in developing such an app, AR Engine is a great choice for you. With accurate facial tracking capabilities, it is able to identify users' facial expressions in real time, convert the facial expressions into parameters, and then apply them to virtual characters. Integrating the capability can help you considerably streamline your app development process, leaving you with more time to focus on how to provide more interesting features to users and improve your app's user experience.
Reference​AR Engine Sample Code
Face Tracking Capability

Posture Recognition: Natural Interaction Brought to Life

Augmented reality (AR) provides immersive interactions by blending real and virtual worlds, making human-machine interactions more interesting and convenient than ever. A common application of AR involves placing a virtual object in the real environment, where the user is free to control or interact with the virtual object. However, there is so much more AR can do beyond that.
To make interactions easier and more immersive, many mobile app developers now allow users to control their devices without having to touch the screen, by identifying the body motions, hand gestures, and facial expressions of users in real time, and using the identified information to trigger different events in the app. For example, in an AR somatosensory game, players can trigger an action in the game by striking a pose, which spares them from having to frequently tap keys on the control console. Likewise, when shooting an image or short video, the user can apply special effects to the image or video by striking specific poses, without even having to touch the screen. In a trainer-guided health and fitness app, the system powered by AR can identify the user's real-time postures to determine whether they are doing the exercise correctly, and guide them to exercise in the correct way. All of these would be impossible without AR.
How then can an app accurately identify postures of users, to power these real time interactions?
If you are also considering developing an AR app that needs to identify user motions in real time to trigger a specific event, such as to control the interaction interface on a device or to recognize and control game operations, integrating an SDK that provides the posture recognition capability is a no brainer. Integrating this SDK will greatly streamline the development process, and allow you to focus on improving the app design and craft the best possible user experience.
HMS Core AR Engine does much of the heavy lifting for you. Its posture recognition capability accurately identifies different body postures of users in real time. After integrating this SDK, your app will be able to use both the front and rear cameras of the device to recognize six different postures from a single person in real time, and output and display the recognition results in the app.
The SDK provides basic core features that motion sensing apps will need, and enriches your AR apps with remote control and collaborative capabilities.
Here I will show you how to integrate AR Engine to implement these amazing features.
How to Develop​Requirements on the development environment:
JDK: 1.8.211 or later
Android Studio: 3.0 or later
minSdkVersion: 26 or later
targetSdkVersion: 29 (recommended)
compileSdkVersion: 29 (recommended)
Gradle version: 6.1.1 or later (recommended)
Make sure that you have downloaded the AR Engine APK from AppGallery and installed it on the device.
If you need to use multiple HMS Core kits, use the latest versions required for these kits.
Preparations​
1. Before getting started with the development, you will need to first register as a Huawei developer and complete identity verification on the HUAWEI Developers website. You can click here to find out the detailed registration and identity verification procedure.
2. Before getting started with the development, integrate the AR Engine SDK via the Maven repository into your development environment.
3. The procedure for configuring the Maven repository address in Android Studio varies for Gradle plugin earlier than 7.0, Gradle plugin 7.0, and Gradle plugin 7.1 or later. You need to configure it according to the specific Gradle plugin version.
4. Take Gradle plugin 7.0 as an example:
Open the project-level build.gradle file in your Android Studio project and configure the Maven repository address.
Go to buildscript > repositories and configure the Maven repository address for the SDK.
Code:
buildscript {
repositories {
google()
jcenter()
maven {url "https://developer.huawei.com/repo/" }
}
}
Open the project-level settings.gradle file and configure the Maven repository address for the HMS Core SDK.
Code:
dependencyResolutionManagement {
repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS)
repositories {
google()
jcenter()
maven {url "https://developer.huawei.com/repo/" }
}
}
5. Add the following build dependency in the dependencies block.
Code:
dependencies {
implementation 'com.huawei.hms:arenginesdk:{version}'
}
App Development​1. Check whether AR Engine has been installed on the current device. If so, your app will be able to run properly. If not, you need to prompt the user to install AR Engine, for example, by redirecting the user to AppGallery. The sample code is as follows:
Code:
boolean isInstallArEngineApk =AREnginesApk.isAREngineApkReady(this);
if (!isInstallArEngineApk) {
// ConnectAppMarketActivity.class is the activity for redirecting users to AppGallery.
startActivity(new Intent(this, com.huawei.arengine.demos.common.ConnectAppMarketActivity.class));
isRemindInstall = true;
}
2. Initialize an AR scene. AR Engine supports up to five scenes, including motion tracking (ARWorldTrackingConfig), face tracking (ARFaceTrackingConfig), hand recognition (ARHandTrackingConfig), human body tracking (ARBodyTrackingConfig), and image recognition (ARImageTrackingConfig).
3. Call the ARBodyTrackingConfig API to initialize the human body tracking scene.
Code:
mArSession = new ARSession(context);
ARBodyTrackingConfig config = new ARBodyTrackingConfig(mArSession);
config.setEnableItem(ARConfigBase.ENABLE_DEPTH | ARConfigBase.ENABLE_MASK);
// Configure the session information.
mArSession.configure(config);
4. Initialize the BodyRelatedDisplay API to render data related to the main AR type.
Code:
public interface BodyRelatedDisplay {
void init();
void onDrawFrame(Collection<ARBody> bodies, float[] projectionMatrix);
}
5. Initialize the BodyRenderManager class, which is used to render the body data obtained by AR Engine.
Code:
public class BodyRenderManager implements GLSurfaceView.Renderer {
// Implement the onDrawFrame() method.
public void onDrawFrame(GL10 gl) {
ARFrame frame = mSession.update();
ARCamera camera = frame.getCamera();
// Obtain the projection matrix of the AR camera.
float[] projectionMatrix = new float[16];
camera.getProjectionMatrix(projectionMatrix, PROJ_MATRIX_OFFSET, PROJ_MATRIX_NEAR, PROJ_MATRIX_FAR);
// Obtain all trackable objects of the specified type. Pass ARBody.class to return the human body tracking result.
Collection<ARBody> bodies = mSession.getAllTrackables(ARBody.class);
}
}
6. Initialize BodySkeletonDisplay to obtain skeleton data and pass the data to the OpenGL ES, which will render the data and display it on the device screen.
Code:
public class BodySkeletonDisplay implements BodyRelatedDisplay {
// Methods used in this class are as follows:
// Initialization method.
public void init() {
}
// Use OpenGL to update and draw the node data.
public void onDrawFrame(Collection<ARBody> bodies, float[] projectionMatrix) {
for (ARBody body : bodies) {
if (body.getTrackingState() == ARTrackable.TrackingState.TRACKING) {
float coordinate = 1.0f;
if (body.getCoordinateSystemType() == ARCoordinateSystemType.COORDINATE_SYSTEM_TYPE_3D_CAMERA) {
coordinate = DRAW_COORDINATE;
}
findValidSkeletonPoints(body);
updateBodySkeleton();
drawBodySkeleton(coordinate, projectionMatrix);
}
}
}
// Search for valid skeleton points.
private void findValidSkeletonPoints(ARBody arBody) {
int index = 0;
int[] isExists;
int validPointNum = 0;
float[] points;
float[] skeletonPoints;
if (arBody.getCoordinateSystemType() == ARCoordinateSystemType.COORDINATE_SYSTEM_TYPE_3D_CAMERA) {
isExists = arBody.getSkeletonPointIsExist3D();
points = new float[isExists.length * 3];
skeletonPoints = arBody.getSkeletonPoint3D();
} else {
isExists = arBody.getSkeletonPointIsExist2D();
points = new float[isExists.length * 3];
skeletonPoints = arBody.getSkeletonPoint2D();
}
for (int i = 0; i < isExists.length; i++) {
if (isExists[i] != 0) {
points[index++] = skeletonPoints[3 * i];
points[index++] = skeletonPoints[3 * i + 1];
points[index++] = skeletonPoints[3 * i + 2];
validPointNum++;
}
}
mSkeletonPoints = FloatBuffer.wrap(points);
mPointsNum = validPointNum;
}
}
7. Obtain the skeleton point connection data and pass it to OpenGL ES, which will then render the data and display it on the device screen.
Code:
public class BodySkeletonLineDisplay implements BodyRelatedDisplay {
// Render the lines between body bones.
public void onDrawFrame(Collection<ARBody> bodies, float[] projectionMatrix) {
for (ARBody body : bodies) {
if (body.getTrackingState() == ARTrackable.TrackingState.TRACKING) {
float coordinate = 1.0f;
if (body.getCoordinateSystemType() == ARCoordinateSystemType.COORDINATE_SYSTEM_TYPE_3D_CAMERA) {
coordinate = COORDINATE_SYSTEM_TYPE_3D_FLAG;
}
updateBodySkeletonLineData(body);
drawSkeletonLine(coordinate, projectionMatrix);
}
}
}
}
Conclusion​By blending real and virtual worlds, AR gives users the tools they need to overlay creative effects in real environments, and interact with these imaginary virtual elements. AR makes it easy to build whimsical and immersive interactions that enhance user experience. From virtual try-on, gameplay, photo and video shooting, to product launch, training and learning, and home decoration, everything is made easier and more interesting with AR.
If you are considering developing an AR app that interacts with users when they strike specific poses, like jumping, showing their palm, and raising their hands, or even more complicated motions, you will need to equip your app to accurately identify these motions in real time. The AR Engine SDK is a capability that makes this possible. This SDK equips your app to track user motions with a high degree of accuracy, and then interact with the motions, easing the process for developing AR-powered apps.
References​AR Engine Development Guide
Sample Code
Software and Hardware Requirements of AR Engine Features
