What makes a beautiful game loaded with top-notch graphics, a virtual reality meeting room, or a realistic representation of moving objects in space?
I am sure this has made you wonder!
All of this is possible with 3D image rendering.
3D image rendering is the process of converting a 3D model (itself created with computer software) into a 2D view on a computer or mobile screen, while preserving the look and feel of the original 3D model.
There are many tools and software packages that do this conversion on a desktop system, but very few support the same process on an Android mobile device.
Huawei Scene Kit is a set of APIs that displays 3D models on a mobile screen, rendering them through physically based rendering (PBR) pipelines to achieve realistic effects.
Features
Uses physically based rendering for 3D scenes to deliver immersive, realistic effects.
Offers APIs for different scenarios that make 3D scene construction, and therefore development, much easier.
Delivers a high-performance experience at lower power consumption, thanks to its 3D graphics rendering framework and algorithms.
How Does Huawei Scene Kit Work?
The SDK of Scene Kit, after being integrated into your app, will send a 3D materials loading request to the Scene Kit APK in HMS Core (APK). Then the Scene Kit APK will verify, parse, and render the materials.
Scene Kit also provides a Full-SDK, which you can integrate into your app to access 3D graphics rendering capabilities even if your app runs on phones without HMS Core.
Scene Kit uses the Entity Component System (ECS) to reduce coupling and implement multi-threaded parallel rendering.
Real-time PBR pipelines are adopted to make rendered images look the way they would in the real world.
The general-purpose GPU Turbo is supported to significantly reduce power consumption.
Huawei Scene Kit supports three different rendering views:
SceneView
ARView
FaceView
Development Overview
Prerequisite
Must have a Huawei Developer Account
Must have Android Studio 3.0 or later
Must have a Huawei phone with HMS Core 4.0.2.300 or later
EMUI 3.0 or later
Software Requirements
Java SDK 1.7 or later
Android 5.0 or later
Supported Devices
Supported 3D Formats
Materials to be rendered: glTF, glb
glTF textures: png, jpeg
Skybox materials: dds (cubemap)
Lighting maps: dds (cubemap)
Note: These materials allow for a resolution of up to 4K.
Preparation
Create an app or project in Huawei AppGallery Connect.
Provide the SHA key and app package name of the project in the App Information section and enable the required API.
Create an Android project.
Note: The Scene Kit SDK can be called directly by devices without connecting to AppGallery Connect, so downloading and integrating agconnect-services.json is not mandatory.
Integration
Add the following to the project-level build.gradle file, under buildscript/repositories and allprojects/repositories.
Code:
maven { url 'https://developer.huawei.com/repo/' }
Add the following to the app-level build.gradle file, under dependencies.
To use the SDK of Scene Kit, add the following dependencies:
Code:
dependencies {
implementation 'com.huawei.scenekit:sdk:5.0.2.302'
}
To use the Full-SDK, add the following dependencies:
Code:
dependencies {
implementation 'com.huawei.scenekit:full-sdk:5.0.2.302'
}
Adding permissions
Code:
<uses-permission android:name="android.permission.CAMERA" />
Development Process
This article focuses on demonstrating the capabilities of Huawei Scene Kit's SceneView APIs.
Here is an example that explains how we can use the SceneView APIs with a simple 3D model (downloaded from the free website https://sketchfab.com/) to showcase a space view of the Amazon jungle.
Download a supported 3D model and add it to the app's assets folder (the sample below loads it from assets/SceneView/scene.gltf).
Main Activity
This is the launcher activity of the application. It has a button that navigates to and displays the space view; the layout is shown below, and the corresponding activity class is sketched right after it.
Code:
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:background="@drawable/dbg"
android:orientation="vertical">
<Button
android:layout_width="match_parent"
android:layout_height="match_parent"
android:background="@drawable/btn"
android:onClick="onBtnSceneViewDemoClicked"
android:text="@string/btn_text_scene_kit_demo"
android:textColor="@color/upsdk_white"
android:textSize="20sp"
android:textStyle="bold" />
</LinearLayout>
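The layout above wires the button to an onBtnSceneViewDemoClicked handler, so the launcher activity only needs to inflate this layout and start the space view when the button is tapped. Below is a minimal sketch of that activity; the layout name activity_main and the class name SceneViewActivity (the activity that hosts the space view, built in the next section) are assumptions, so adjust them to match your project.
Code:
import android.content.Intent;
import android.os.Bundle;
import android.view.View;
import androidx.appcompat.app.AppCompatActivity;

public class MainActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Inflate the layout that contains the button shown above.
        setContentView(R.layout.activity_main);
    }

    // Called via android:onClick="onBtnSceneViewDemoClicked" in the layout.
    public void onBtnSceneViewDemoClicked(View view) {
        // SceneViewActivity is a hypothetical activity that hosts SceneSampleView (see the next section).
        startActivity(new Intent(this, SceneViewActivity.class));
    }
}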
Space View Activity
The space view is rendered by a custom SceneView subclass, a Java class that holds the logic to load and render the 3D model and display it on a surface view. A minimal activity that hosts this view is sketched after the class.
Code:
import android.content.Context;
import android.graphics.Canvas;
import android.util.AttributeSet;
import android.view.MotionEvent;
import android.view.SurfaceHolder;
import com.huawei.hms.scene.sdk.SceneView;
public class SceneSampleView extends SceneView {
/* Constructor used when the view is created programmatically (new mode). */
public SceneSampleView(Context context) {
super(context);
}
/* Constructor used when the view is declared in an XML layout. */
public SceneSampleView(Context context, AttributeSet attributeSet) {
super(context, attributeSet);
}
@Override
public void surfaceCreated(SurfaceHolder holder) {
super.surfaceCreated(holder);
// Loads the model of a scene by reading files from assets.
loadScene("SceneView/scene.gltf");
}
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
super.surfaceChanged(holder, format, width, height);
}
@Override
public void surfaceDestroyed(SurfaceHolder holder) {
super.surfaceDestroyed(holder);
}
@Override
public boolean onTouchEvent(MotionEvent motionEvent) {
return super.onTouchEvent(motionEvent);
}
@Override
public void onDraw(Canvas canvas) {
super.onDraw(canvas);
}
}
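Since SceneSampleView is a regular Android view, the hosting activity can simply set it as the content view. The following is a minimal sketch of such an activity; the class name SceneViewActivity is an assumption made to match the button handler above, not something from the original article.
Code:
import android.os.Bundle;
import androidx.appcompat.app.AppCompatActivity;

public class SceneViewActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Use the custom SceneView subclass directly as the content view;
        // it loads SceneView/scene.gltf from assets in surfaceCreated().
        setContentView(new SceneSampleView(this));
    }
}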
Results
Conclusion
We built a simple application that demonstrates the SceneView APIs and showcases how easy it is to display 3D models in an application.
For more information, you can visit https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0201382255415980474&fid=0101187876626530001
Related
Hello everyone, in this article we will create a demo application using the core features of Panorama Kit to help you better understand the implementation of this kit. This app will simply allow users to quickly display interactive viewing of panoramic images in simulated 3D space.
What is Panorama Kit and Why Should We Use It?
By integrating the HUAWEI Panorama Kit, your app can quickly display interactive viewing of 360-degree spherical or cylindrical panoramic images in simulated 3D space, delivering immersive 360-degree showcases and interactions to users.
Features
360-degree spherical panorama, partially 360-degree spherical panorama, and cylindrical panorama
Parsing and display of panorama images in JPG, JPEG, and PNG formats
Interactive viewing of panoramic videos
Panorama view adjustment by swiping the screen or rotating the mobile phone to any degree
Interactive viewing of 360-degree spherical panoramas shot by mainstream panoramic cameras
Flexible use of panorama services, such as displaying panoramas in a certain area of your app, instead of in a window outside the app, made possible with in-app panorama display APIs
Integration Preparations
The Panorama SDK can be called directly by devices without connecting to AppGallery Connect, so you don't need agconnect-services.json. You just need to add the Maven repository address in the project-level build.gradle file.
Software Requirements
Android Studio 3.0 or later
Java SDK 1.7 or later
HMS Core (APK) 4.0.0.300 or later
HMS Core SDK 4.0.0.300 or later
Android 5.0 or later
Supported Locations
To see all supported locations, please refer to HUAWEI Panorama Kit Supported Locations
Configuration
Add Maven repository address in the project-level build.gradle file
Code:
buildscript {
repositories {
...
maven { url 'https://developer.huawei.com/repo/' }
}
...
}
allprojects {
repositories {
...
maven { url 'https://developer.huawei.com/repo/' }
}
}
...
And add build dependencies in the app-level build.gradle file.
Code:
implementation 'com.huawei.hms:panorama:4.0.4.301'
Keep in mind that the SDK version used in this article may be not the latest one. If you want to use the latest SDK version, please refer to Version Change History.
Finally, add read and write permissions in the AndroidManifest.xml.
Code:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
Remember, in Android 6.0 or later, you need to dynamically request the permissions.
Implementation
There are three different ways to call the panorama view:
Load a panorama image without a specified panorama type.
Load a panorama image with a specified panorama type.
Start an activity, which shows how to call panorama image display using the PanoramaLocalInterface.
Now we’ll go through each of these ways to implement Panorama Kit.
Load a panorama image without a specified panorama type
If you don’t specify a panorama type, users will be able to move both horizontally and vertically. But if suitable, it’s best that you specify a panorama type to give your users a better viewing experience.
Code:
private fun loadImageWithoutType() {
Panorama.getInstance().loadImageInfoWithPermission(
this,
Uri.parse("android.resource://$packageName/${R.raw.pano1}")
)
.setResultCallback(ResultCallbackImpl(this@MainActivity))
}
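The ResultCallbackImpl class referenced above is not shown in this article. As an illustration only (the demo project itself is written in Kotlin), here is a minimal Java sketch of such a callback; it assumes the standard Panorama Kit result API, i.e. ResultCallback and PanoramaInterface.ImageInfoResult with getImageDisplayIntent().
Code:
import android.app.Activity;
import android.content.Intent;
import android.widget.Toast;
import com.huawei.hms.panorama.PanoramaInterface;
import com.huawei.hms.support.api.client.ResultCallback;

public class ResultCallbackImpl implements ResultCallback<PanoramaInterface.ImageInfoResult> {
    private final Activity activity;

    public ResultCallbackImpl(Activity activity) {
        this.activity = activity;
    }

    @Override
    public void onResult(PanoramaInterface.ImageInfoResult result) {
        if (result == null || !result.getStatus().isSuccess()) {
            // Loading failed; notify the user.
            Toast.makeText(activity, "Failed to load panorama", Toast.LENGTH_SHORT).show();
            return;
        }
        // Start the panorama display activity provided by Panorama Kit.
        Intent intent = result.getImageDisplayIntent();
        if (intent != null) {
            activity.startActivity(intent);
        }
    }
}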
Load a panorama image with a specified panorama type
Here we specify the panorama type of IMAGE_TYPE_RING to improve the viewing experience of this particular image.
Code:
private fun loadImageWithType() {
Panorama.getInstance().loadImageInfoWithPermission(
this,
Uri.parse("android.resource://$packageName/${R.raw.pano1}"),
PanoramaInterface.IMAGE_TYPE_RING
).setResultCallback(ResultCallbackImpl(this@MainActivity))
}
Load a panorama image inside a custom activity
Finally, if you want to show the panorama view in a custom activity of yours, you can do so by sending the necessary data detailed in the code below and calling the local interface API of Panorama Kit in that specific activity.
Code:
private fun panoramaWithLocalInterface() {
val intent = Intent(this@MainActivity, LocalInterfaceActivity::class.java)
intent.apply {
data = Uri.parse("android.resource://$packageName/${R.raw.pano1}");
putExtra("PanoramaType", PanoramaInterface.IMAGE_TYPE_SPHERICAL)
}
startActivity(intent)
}
Initialize the local interface, set the image URI and type, obtain the view, and add it to your layout. Then listen for the changeInputButton click event to change the control mode.
Code:
private fun callLocalApi(imageUri: Uri?, imageType: Int) {
mLocalInterface = Panorama.getInstance().getLocalInstance(this)
if (mLocalInterface.init() == 0 && mLocalInterface.setImage(imageUri, imageType) == 0) {
val view: View = mLocalInterface.view
relativeLayout.addView(view)
view.setOnTouchListener(this@LocalInterfaceActivity)
changeInputButton.apply {
bringToFront()
setOnClickListener(this@LocalInterfaceActivity)
}
} else {
Log.e(TAG, "local api error")
}
}
Code:
override fun onClick(v: View?) {
if (v?.id == R.id.changeInputButton) {
if (mIsGyroEnabled) {
mIsGyroEnabled = false
mLocalInterface.setControlMode(PanoramaInterface.CONTROL_TYPE_TOUCH)
} else {
mIsGyroEnabled = true
mLocalInterface.setControlMode(PanoramaInterface.CONTROL_TYPE_POSE)
}
}
}
Send the view touch control event to the HMS Core Panorama SDK. (This step assumes that the touch mode is enabled.)
Code:
@SuppressLint("ClickableViewAccessibility")
override fun onTouch(v: View?, event: MotionEvent?): Boolean {
mLocalInterface.let {
mLocalInterface.updateTouchEvent(event)
}
return true
}
Keep in mind: After loading a panoramic image, you can call the setImage API to replace it with another one, if necessary.
Code:
mLocalInterface.setImage(resourceUri, imageType)
And that’s it! These were the 3 ways of implementing Panorama Kit into your app. You can find the full source code of this demo application in the link below.
https://github.com/tolunayozturk/hms-panoramakitdemo
As always, thank you for reading this article and using HMS. Please let me know if you have any questions!
How much time does it take to integrate Panorama Kit into an Android project, and can you give me one use case?
Introduction
In the previous post, we looked at how to use HUAWEI ML Kit's skeleton detection capability to detect points such as the head, neck, shoulders, knees and ankles. But as well as skeleton detection, ML Kit also provides a hand keypoint detection capability, which can locate 21 hand keypoints, such as fingertips, joints, and wrists.
Application Scenarios
Hand keypoint detection is useful in a huge range of situations. For example, short video apps can generate some cute and funny special effects based on hand keypoints, to add more fun to short videos.
Or, if smart home devices are integrated with hand keypoint detection, users could control them from a remote distance using customized gestures, so they could do things like activate a robot vacuum cleaner while they’re out.
Hand Keypoint Detection Development
Now, we’re going to see how to quickly integrate ML Kit's hand keypoint detection feature. Let’s take video stream detection as an example.
1. Preparations
You can find detailed information about the preparations you need to make on the HUAWEI Developers-Development Process.
Here, we'll just look at the most important procedures.
1.1 Configure the Maven Repository Address in the Project-Level build.gradle File
Code:
buildscript {
repositories {
...
maven {url 'https://developer.huawei.com/repo/'}
}
}
dependencies {
...
classpath 'com.huawei.agconnect:agcp:1.3.1.300'
}
allprojects {
repositories {
...
maven {url 'https://developer.huawei.com/repo/'}
}
}
1.2 Add SDK Dependencies to the App-Level build.gradle File
Code:
dependencies{
// Import the base SDK.
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint:2.0.2.300'
// Import the hand keypoint detection model package.
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint-model:2.0.2.300'
}
1.3 Add Configurations to the File Header
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
1.4 Add these Statements to the AndroidManifest.xml File so the Machine Learning Model can Automatically Update
Code:
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value= "handkeypoint"/>
1.5 Apply for Camera Permission and Local File Reading Permission
Code:
<!--Camera permission-->
<uses-permission android:name="android.permission.CAMERA" />
<!--Read permission-->
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
2. Code Development
2.1 Create a Hand Keypoint Analyzer
Code:
MLHandKeypointAnalyzerSetting setting = new MLHandKeypointAnalyzerSetting.Factory()
// MLHandKeypointAnalyzerSetting.TYPE_ALL indicates that all results are returned.
// MLHandKeypointAnalyzerSetting.TYPE_KEYPOINT_ONLY indicates that only hand keypoint information is returned.
// MLHandKeypointAnalyzerSetting.TYPE_RECT_ONLY indicates that only palm information is returned.
.setSceneType(MLHandKeypointAnalyzerSetting.TYPE_ALL)
// Set the maximum number of hand regions that can be detected within an image. A maximum of 10 hand regions can be detected by default.
.setMaxHandResults(1)
.create();
MLHandKeypointAnalyzer analyzer = MLHandKeypointAnalyzerFactory.getInstance().getHandKeypointAnalyzer(setting);
2.2 Create the HandKeypointTransactor Class for Processing Detection Results
This class implements the MLAnalyzer.MLTransactor<T> API and uses the transactResult method in this class to obtain the detection results and implement specific services. In addition to coordinate information for each hand keypoint, the detection results include a confidence value for the palm and each of the keypoints. Palm and hand keypoints which are incorrectly detected can be filtered out based on the confidence values. You can set a threshold based on misrecognition tolerance.
Code:
public class HandKeypointTransactor implements MLAnalyzer.MLTransactor<List<MLHandKeypoints>> {
@Override
public void transactResult(MLAnalyzer.Result<List<MLHandKeypoints>> results) {
SparseArray<List<MLHandKeypoints>> analyseList = results.getAnalyseList();
// Determine detection result processing as required. Note that only the detection results are processed.
// Other detection-related APIs provided by ML Kit cannot be called.
}
@Override
public void destroy() {
// Callback method used to release resources when the detection ends.
}
}
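To give a concrete idea of the confidence filtering mentioned above, the sketch below shows a helper that could live inside HandKeypointTransactor. It assumes the getHandKeypoints() and getScore() getters of MLHandKeypoints/MLHandKeypoint, uses an example threshold of 0.6, and requires the java.util.ArrayList and java.util.List imports.
Code:
// Example confidence threshold; tune it to your misrecognition tolerance.
private static final float MIN_CONFIDENCE = 0.6f;

// Keep only keypoints whose confidence exceeds the threshold.
private List<MLHandKeypoint> filterKeypoints(List<MLHandKeypoints> hands) {
    List<MLHandKeypoint> reliablePoints = new ArrayList<>();
    for (MLHandKeypoints hand : hands) {
        for (MLHandKeypoint keypoint : hand.getHandKeypoints()) {
            if (keypoint.getScore() >= MIN_CONFIDENCE) {
                reliablePoints.add(keypoint);
            }
        }
    }
    return reliablePoints;
}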
2.3 Set the Detection Result Processor to Bind the Analyzer to the Result Processor
Code:
analyzer.setTransactor(new HandKeypointTransactor());
2.4 Create an Instance of the LensEngine Class
The LensEngine class is provided by the HMS Core ML SDK to capture dynamic camera streams and pass these streams to the analyzer. The camera display size should be set to a value between 320 x 320 px and 1920 x 1920 px.
Code:
LensEngine lensEngine = new LensEngine.Creator(getApplicationContext(), analyzer)
.setLensType(LensEngine.BACK_LENS)
.applyDisplayDimension(1280, 720)
.applyFps(20.0f)
.enableAutomaticFocus(true)
.create();
2.5 Call the run Method to Start the Camera and Read Camera Streams for Detection
Code:
// Implement other logic of the SurfaceView control by yourself.
SurfaceView mSurfaceView = findViewById(R.id.surface_view);
try {
lensEngine.run(mSurfaceView.getHolder());
} catch (IOException e) {
// Exception handling logic.
}
2.6 Stop the Analyzer to Release Detection Resources Once the Detection is Complete
Code:
if (analyzer != null) {
analyzer.stop();
}
if (lensEngine != null) {
lensEngine.release();
}
Demo Effect
And that's it! We can now see hand keypoints appear when making different gestures. Remember that you can expand this capability if you need to.
1. Overview
Camera functions on today's phones are so powerful that they now involve advanced image processing algorithms, as well as the camera hardware itself. But even so, users often need to continually adjust the image parameters, a process that can be extremely frustrating, even when it results in an ideal shot. For most of us who are not professional photographers, images often end up falling woefully short of expectations. Given these universal challenges, there's incredible demand for apps that can enhance image effects automatically, by accounting for the user's surroundings. HUAWEI ML Kit's scene detection service helps bring such apps to life, by detecting 102 visual features, including beaches, sky scenes, food, night scenes, plants, and common buildings. The service can automatically adjust the parameters corresponding to the image matrix, helping build proactive, intelligent, and efficient apps. Let's take a look at how it does this.
2. Effects
For a city nightscape, the service will accurately detect the night scene, and enhance the brightness in bright areas, as well as the darkness in dark areas, rendering a highly-pleasing contrast.
For a sky image, the service will brighten the sky with an enhanced matrix.
The effects are similar for photos of flowers and plants.
The effects tend to vary, depending on the specific image. Users are free to apply their favorite effects, or mix and match them with filters.
Now, let's take a look at how the service can be integrated.
3. Development Process
3.1 Create a scene detection analyzer instance.
Code:
MLSceneDetectionAnalyzer analyzer = MLSceneDetectionAnalyzerFactory.getInstance().getSceneDetectionAnalyzer();
3.2 Create an MLFrame object by using android.graphics.Bitmap. JPG, JPEG, PNG, and BMP images are currently supported.
Code:
MLFrame frame = new MLFrame.Creator().setBitmap(bitmap).create();
3.3 Scene detection
Code:
Task<List<MLSceneDetection>> task = this.analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<List<MLSceneDetection>>() {
@Override
public void onSuccess(List<MLSceneDetection> sceneInfos) {
// Processing logic for scene detection success.
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
// Processing logic for scene detection failure.
if (e instanceof MLException) {
MLException exception = (MLException) e;
// Obtain the result code.
int errCode = exception.getErrCode();
// Obtain the error information.
String message = exception.getMessage();
} else {
// Other errors.
}
}
});
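Inside the onSuccess callback, each MLSceneDetection object exposes the detected scene and a confidence value. A minimal sketch of how the list might be processed is shown below; it assumes the getResult() and getConfidence() getters and uses android.util.Log.
Code:
for (MLSceneDetection sceneInfo : sceneInfos) {
    // getResult() returns the scene label, getConfidence() its confidence score.
    String scene = sceneInfo.getResult();
    float confidence = sceneInfo.getConfidence();
    Log.i("SceneDetection", "Scene: " + scene + ", confidence: " + confidence);
}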
3.4 Stop the analyzer to release detection resources when the detection ends.
Code:
if (analyzer != null) {
analyzer.stop();
}
Configure the Maven repository address.
Code:
buildscript {
repositories {
maven { url 'https://developer.huawei.com/repo/' }
}
}
allprojects {
repositories {
maven { url 'https://developer.huawei.com/repo/' }
}
}
Import the SDK.
Code:
dependencies {
// Scene detection SDK.
implementation 'com.huawei.hms:ml-computer-vision-scenedetection:2.0.3.300'
// Scene detection model.
implementation 'com.huawei.hms:ml-computer-vision-scenedetection-model:2.0.3.300'
}
Manifest files
Code:
<manifest
...
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value="1" />
...
</manifest>
Permissions
Code:
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.autofocus" />
Submit a dynamic permission request.
Code:
if (!(ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED)) {
requestCameraPermission();
}
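The requestCameraPermission() method referenced above is not shown in the article. A minimal sketch using the standard Android API (android.Manifest and androidx.core.app.ActivityCompat) could look like this; the request code is arbitrary.
Code:
private static final int CAMERA_PERMISSION_CODE = 1;

private void requestCameraPermission() {
    // Request the camera permission at runtime (required on Android 6.0 and later).
    ActivityCompat.requestPermissions(
            this,
            new String[]{Manifest.permission.CAMERA},
            CAMERA_PERMISSION_CODE);
}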
4. Learn More
ML Kit's scene detection service also comes equipped with intelligent features, such as smart album management and image searching, which allows users to find and sort images in a refreshingly natural and intuitive way.
Reference
Official website of Huawei Developers
Development Guide
HMS Core official community on Reddit
Demo and sample code
Discussions on Stack Overflow
Is there any restriction on using the kit?
Very nice
Emojis are a must-have tool in today's online communications as they help add color to text-based chatting and allow users to better express the emotions behind their words. Since the number of preset emojis is always limited, many apps now allow users to create their own custom emojis to keep things fresh and exciting.
For example, in a social media app, users who do not want to show their faces when making video calls can use an animated character to protect their privacy, with their facial expressions applied to the character; in a live streaming or e-commerce app, virtual streamers with realistic facial expressions are much more likely to attract watchers; in a video or photo shooting app, users can control the facial expressions of an animated character when taking a selfie, and then share the selfie via social media; and in an educational app for kids, a cute animated character with detailed facial expressions will make online classes much more fun and engaging for students.
I myself am developing such a messaging app. When chatting with friends and wanting to express themselves in ways other than words, users of my app can take a photo to create an emoji of themselves, or of an animated character they have selected. The app will then identify users' facial expressions, and apply their facial expressions to the emoji. In this way, users are able to create an endless amount of unique emojis. During the development of my app, I used the capabilities provided by HMS Core AR Engine to track users' facial expressions and convert the facial expressions into parameters, which greatly reduced the development workload. Now I will show you how I managed to do this.
Implementation
AR Engine provides apps with the ability to track and recognize facial expressions in real time, which can then be converted into facial expression parameters and used to accurately control the facial expressions of virtual characters.
Currently, AR Engine provides 64 facial expressions, including eyelid, eyebrow, eyeball, mouth, and tongue movements. It supports 21 eye-related movements, including eyeball movement and opening and closing the eyes; 28 mouth movements, including opening the mouth, puckering, pulling, or licking the lips, and moving the tongue; as well as 5 eyebrow movements, including raising or lowering the eyebrows.
Demo
Facial expression based emoji
Development Procedure
Requirements on the Development Environment
JDK: 1.8.211 or later
Android Studio: 3.0 or later
minSdkVersion: 26 or later
targetSdkVersion: 29 (recommended)
compileSdkVersion: 29 (recommended)
Gradle version: 6.1.1 or later (recommended)
Make sure that you have downloaded the AR Engine APK from AppGallery and installed it on the device.
Test device: see Software and Hardware Requirements of AR Engine Features
If you need to use multiple HMS Core kits, use the latest versions required for these kits.
Preparations
1. Before getting started, you will need to register as a Huawei developer and complete identity verification on HUAWEI Developers. You can click here to find out the detailed registration and identity verification procedure.
2. Before development, integrate the AR Engine SDK via the Maven repository into your development environment.
3. The procedure for configuring the Maven repository address in Android Studio varies for Gradle plugin earlier than 7.0, Gradle plugin 7.0, and Gradle plugin 7.1 or later. You need to configure it according to the specific Gradle plugin version.
4. Take Gradle plugin 7.0 as an example:
Open the project-level build.gradle file in your Android Studio project and configure the Maven repository address.
Go to buildscript > repositories and configure the Maven repository address for the SDK.
Code:
buildscript {
repositories {
google()
jcenter()
maven {url "https://developer.huawei.com/repo/" }
}
}
Open the project-level settings.gradle file and configure the Maven repository address for the HMS Core SDK.
Code:
dependencyResolutionManagement {
repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS)
repositories {
google()
jcenter()
maven { url "https://developer.huawei.com/repo/" }
}
}
5. Add the following build dependency in the dependencies block.
Code:
dependencies {
implementation 'com.huawei.hms:arenginesdk:{version}'
}
App Development
1. Check whether AR Engine has been installed on the current device. If yes, your app can run properly. If not, you need to prompt the user to install it, for example, by redirecting the user to AppGallery. The sample code is as follows:
Code:
boolean isInstallArEngineApk = AREnginesApk.isAREngineApkReady(this);
if (!isInstallArEngineApk) {
// ConnectAppMarketActivity.class is the activity for redirecting users to AppGallery.
startActivity(new Intent(this, com.huawei.arengine.demos.common.ConnectAppMarketActivity.class));
isRemindInstall = true;
}
2. Create an AR scene. AR Engine supports five scenes, including motion tracking (ARWorldTrackingConfig), face tracking (ARFaceTrackingConfig), hand recognition (ARHandTrackingConfig), human body tracking (ARBodyTrackingConfig), and image recognition (ARImageTrackingConfig).
The following takes creating a face tracking scene by calling ARFaceTrackingConfig as an example.
Code:
// Create an ARSession object.
mArSession = new ARSession(this);
// Select a specific Config to initialize the ARSession object based on the application scenario.
ARFaceTrackingConfig config = new ARFaceTrackingConfig(mArSession);
Set scene parameters using the config.setXXX method.
Code:
// Set the camera opening mode, which can be external or internal. The external mode can only be used in ARFace. Therefore, you are advised to use the internal mode.
mArConfig.setImageInputMode(ARConfigBase.ImageInputMode.EXTERNAL_INPUT_ALL);
3. Set the AR scene parameters for face tracking and start face tracking.
Code:
mArSession.configure(mArConfig);
mArSession.resume();
4. Initialize the FaceGeometryDisplay class to obtain the facial geometric data and render the data on the screen.
Code:
public class FaceGeometryDisplay {
// Initialize the OpenGL ES rendering related to face geometry, including creating the shader program.
void init(Context context) {...
}
}
5. Implement the onDrawFrame method in the FaceGeometryDisplay class, and call face.getFaceGeometry() to obtain the face mesh.
Code:
public void onDrawFrame(ARCamera camera, ARFace face) {
ARFaceGeometry faceGeometry = face.getFaceGeometry();
updateFaceGeometryData(faceGeometry);
updateModelViewProjectionData(camera, face);
drawFaceGeometry();
faceGeometry.release();
}
6. Implement updateFaceGeometryData() in the FaceGeometryDisplay class.
Pass the face mesh data for configuration and set facial expression parameters using OpenGL ES.
Code:
private void updateFaceGeometryData (ARFaceGeometry faceGeometry) {
FloatBuffer faceVertices = faceGeometry.getVertices();
FloatBuffer textureCoordinates =faceGeometry.getTextureCoordinates();
// Obtain an array consisting of face mesh texture coordinates, which is used together with the vertex data returned by getVertices() during rendering.
}
7. Initialize the FaceRenderManager class to manage facial data rendering.
Code:
public class FaceRenderManager implements GLSurfaceView.Renderer {
public FaceRenderManager(Context context, Activity activity) {
mContext = context;
mActivity = activity;
}
// Set ARSession to obtain the latest data.
public void setArSession(ARSession arSession) {
if (arSession == null) {
LogUtil.error(TAG, "Set session error, arSession is null!");
return;
}
mArSession = arSession;
}
// Set ARConfigBase to obtain the configuration mode.
public void setArConfigBase(ARConfigBase arConfig) {
if (arConfig == null) {
LogUtil.error(TAG, "setArFaceTrackingConfig error, arConfig is null.");
return;
}
mArConfigBase = arConfig;
}
// Set the camera opening mode.
public void setOpenCameraOutsideFlag(boolean isOpenCameraOutsideFlag) {
isOpenCameraOutside = isOpenCameraOutsideFlag;
}
...
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
mFaceGeometryDisplay.init(mContext);
}
}
8. Implement the face tracking effect by calling methods like setArSession and setArConfigBase of FaceRenderManager in FaceActivity.
Code:
public class FaceActivity extends BaseActivity {
@Override
protected void onCreate(Bundle savedInstanceState) {
// Calls to super.onCreate() and setContentView() are omitted here for brevity.
mFaceRenderManager = new FaceRenderManager(this, this);
mFaceRenderManager.setDisplayRotationManage(mDisplayRotationManager);
mFaceRenderManager.setTextView(mTextView);
glSurfaceView.setRenderer(mFaceRenderManager);
glSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_CONTINUOUSLY);
}
}
Conclusion
Emojis allow users to express their moods and excitement in a way words can't. Instead of providing users with a selection of the same old boring preset emojis that have been used a million times, you can now make your app more fun by allowing users to create emojis themselves! Users can easily create an emoji with their own smiles, simply by facing the camera, selecting an animated character they love, and smiling. With such an ability to customize emojis, users will be able to express their feelings in a more personalized and interesting manner. If you have any interest in developing such an app, AR Engine is a great choice for you. With accurate facial tracking capabilities, it is able to identify users' facial expressions in real time, convert the facial expressions into parameters, and then apply them to virtual characters. Integrating the capability can help you considerably streamline your app development process, leaving you with more time to focus on how to provide more interesting features to users and improve your app's user experience.
Reference
AR Engine Sample Code
Face Tracking Capability
Influencers have become increasingly important, as more and more consumers choose to purchase items online – whether on Amazon, Taobao, or one of the many other prominent e-commerce platforms. Brands and merchants have spent a lot of money finding influencers to promote their products through live streams and consumer interactions, and many purchases are made on the recommendation of a trusted influencer.
However, employing a public-facing influencer can be costly and risky. Many brands and merchants have opted instead to host live streams with their own virtual characters. This gives them more freedom to showcase their products, and widens the pool of potential on camera talent. For consumers, virtual characters can add fun and whimsy to the shopping experience.
E-commerce platforms have begun to accommodate the preference for anonymous livestreaming, by offering a range of important capabilities, such as those that allow for automatic identification, skeleton point-based motion tracking in real time (as shown in the gif), facial expression and gesture identification, copying of traits to virtual characters, a range of virtual character models for users to choose from, and natural real-world interactions.
Building these capabilities comes with its share of challenges. For example, after finally building a model that is able to translate a person's every gesture, expression, and movement into real-time parameters and apply them to the virtual character, you may find that the virtual character can't be blocked by real bodies during the livestream, which gives it a fake, ghost-like appearance. This is a problem I encountered when I developed my own e-commerce app, and it occurred because I did not occlude the bodies that appeared behind and in front of the virtual character. Fortunately, I was able to find an SDK that helped me solve this problem — HMS Core AR Engine.
This toolkit provides a range of capabilities that make it easy to incorporate AR-powered features into apps. From hit testing and movement tracking, to environment mesh, and image tracking, it's got just about everything you need. The human body occlusion capability was exactly what I needed at the time.
Now I'll show you how I integrated this toolkit into my app, and how helpful it's been for me.
First I registered for an account on the HUAWEI Developers website, downloaded the AR Engine SDK, and followed the step-by-step development guide to integrate the SDK. The integration process was quite simple and did not take too long. Once the integration was successful, I ran the demo on a test phone, and was amazed to see how well it worked. During livestreams my app was able to recognize and track the areas where I was located within the image, with an accuracy of up to 90%, and provided depth-related information about the area. Better yet, it was able to identify and track the profile of up to two people, and output the occlusion information and skeleton points corresponding to the body profiles in real time. With this capability, I was able to implement a lot of engaging features, for example, changing backgrounds, hiding virtual characters behind real people, and even a feature that allows the audience to interact with the virtual character through special effects. All of these features have made my app more immersive and interactive, which makes it more attractive to potential shoppers.
Demo
As shown in the gif below, the person blocks the virtual panda when walking in front of it.
How to Develop
Preparations
Registering as a developer
Before getting started, you will need to register as a Huawei developer and complete identity verification on HUAWEI Developers. You can click here to find out the detailed registration and identity verification procedure.
Creating an app
Create a project and create an app under the project. Pay attention to the following parameter settings:
Platform: Select Android.
Device: Select Mobile phone.
App category: Select App or Game.
Integrating the AR Engine SDK
Before development, integrate the AR Engine SDK via the Maven repository into your development environment.
Configuring the Maven repository address for the AR Engine SDK
The procedure for configuring the Maven repository address in Android Studio is different for Gradle plugin earlier than 7.0, Gradle plugin 7.0, and Gradle plugin 7.1 or later. You need to configure it according to the specific Gradle plugin version.
Adding build dependencies
Open the build.gradle file in the app directory of your project.
Add a build dependency in the dependencies block.
Code:
dependencies {
implementation 'com.huawei.hms:arenginesdk:{version}'
}
Open the modified build.gradle file again. You will find a Sync Now link in the upper right corner of the page. Click Sync Now and wait until synchronization is complete.
Developing Your App
Checking the Availability
Check whether AR Engine has been installed on the current device. If so, the app can run properly. If not, the app prompts the user to install AR Engine, for example, by redirecting the user to AppGallery. The code is as follows:
Code:
boolean isInstallArEngineApk = AREnginesApk.isAREngineApkReady(this);
if (!isInstallArEngineApk) {
// ConnectAppMarketActivity.class is the activity for redirecting users to AppGallery.
startActivity(new Intent(this, com.huawei.arengine.demos.common.ConnectAppMarketActivity.class));
isRemindInstall = true;
}
Create a BodyActivity class to display body skeletons and output human body features, so that AR Engine can recognize the human body.
Code:
public class BodyActivity extends BaseActivity {
private BodyRendererManager mBodyRendererManager;
protected void onCreate() {
// Initialize surfaceView.
mSurfaceView = findViewById();
// Context for keeping the OpenGL ES running.
mSurfaceView.setPreserveEGLContextOnPause(true);
// Set the OpenGL ES version.
mSurfaceView.setEGLContextClientVersion(2);
// Set the EGL configuration chooser, including for the number of bits of the color buffer and the number of depth bits.
mSurfaceView.setEGLConfigChooser(……);
mBodyRendererManager = new BodyRendererManager(this);
mSurfaceView.setRenderer(mBodyRendererManager);
mSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_CONTINUOUSLY);
}
protected void onResume() {
// Initialize ARSession to manage the entire running status of AR Engine.
if (mArSession == null) {
mArSession = new ARSession(this.getApplicationContext());
mArConfigBase = new ARBodyTrackingConfig(mArSession);
mArConfigBase.setEnableItem(ARConfigBase.ENABLE_DEPTH | ARConfigBase.ENABLE_MASK);
mArConfigBase.setFocusMode(ARConfigBase.FocusMode.AUTO_FOCUS);
mArSession.configure(mArConfigBase);
}
// Pass the required parameters to setBodyMask.
mBodyRendererManager.setBodyMask(((mArConfigBase.getEnableItem() & ARConfigBase.ENABLE_MASK) != 0) && mIsBodyMaskEnable);
sessionResume(mBodyRendererManager);
}
}
Create a BodyRendererManager class to render the body data obtained by AR Engine.
Code:
public class BodyRendererManager extends BaseRendererManager {
public void drawFrame() {
try {
// Obtain the set of all traceable objects of the specified type.
Collection<ARBody> bodies = mSession.getAllTrackables(ARBody.class);
for (ARBody body : bodies) {
if (body.getTrackingState() != ARTrackable.TrackingState.TRACKING) {
continue;
}
mBody = body;
hasBodyTracking = true;
}
// Update the body recognition information displayed on the screen.
StringBuilder sb = new StringBuilder();
updateMessageData(sb, mBody);
Size textureSize = mSession.getCameraConfig().getTextureDimensions();
if (mIsWithMaskData && hasBodyTracking && mBackgroundDisplay instanceof BodyMaskDisplay) {
((BodyMaskDisplay) mBackgroundDisplay).onDrawFrame(mArFrame, mBody.getMaskConfidence(),
textureSize.getWidth(), textureSize.getHeight());
}
// Display the updated body information on the screen.
mTextDisplay.onDrawFrame(sb.toString());
for (BodyRelatedDisplay bodyRelatedDisplay : mBodyRelatedDisplays) {
bodyRelatedDisplay.onDrawFrame(bodies, mProjectionMatrix);
}
} catch (ArDemoRuntimeException e) {
LogUtil.error(TAG, "Exception on the ArDemoRuntimeException!");
} catch (ARFatalException | IllegalArgumentException | ARDeadlineExceededException |
ARUnavailableServiceApkTooOldException t) {
Log(…);
}
}
// Update gesture-related data for display.
private void updateMessageData(StringBuilder sb, ARBody body) {
if (body == null) {
return;
}
float fpsResult = doFpsCalculate();
sb.append("FPS=").append(fpsResult).append(System.lineSeparator());
int bodyAction = body.getBodyAction();
sb.append("bodyAction=").append(bodyAction).append(System.lineSeparator());
}
}
Customize the camera preview class, which is used to draw the human body based on a certain confidence level.
Code:
public class BodyMaskDisplay implements BaseBackGroundDisplay {}
Obtain skeleton data and pass it to OpenGL ES, which renders the data and displays it on the screen.
Code:
public class BodySkeletonDisplay implements BodyRelatedDisplay {}
Obtain skeleton point connection data and pass it to OpenGL ES, which renders the data and displays it on the screen.
Code:
public class BodySkeletonLineDisplay implements BodyRelatedDisplay {}
Conclusion
True-to-life AR live-streaming is now an essential feature in e-commerce apps, but developing this capability from scratch can be costly and time-consuming. AR Engine SDK is the best and most convenient SDK I've encountered, and it's done wonders for my app, by recognizing individuals within images with accuracy as high as 90%, and providing the detailed information required to support immersive, real-world interactions. Try it out on your own app to add powerful and interactive features that will have your users clamoring to shop more!
References
AR Engine Development Guide
Sample Code
API Reference