AR placement apps have enhanced daily life in myriad ways, from AR furniture placement for home furnishing and interior decorating, and AR fitting in retail, to AR-based 3D models in education, which give students an in-depth look at the internal workings of objects.
By integrating HUAWEI Scene Kit, you're only eight steps away from launching an AR placement app of your own. ARView, a set of AR-oriented scenario-based APIs in Scene Kit, combines the plane detection capability of AR Engine with the graphics rendering capability of Scene Kit to load and render 3D materials in common AR scenes.
ARView Functions
With ARView, you'll be able to:
1. Load and render 3D materials in AR scenes.
2. Set whether to display the lattice plane (consisting of white lattice points) to help select a plane in a real-world view.
3. Tap an object placed on the lattice plane to select it. Once selected, the object will turn red. You can then move, resize, or rotate it as needed.
Development Procedure
Before using ARView, you'll need to integrate the Scene Kit SDK into your Android Studio project. For details, please refer to Integrating the HMS Core SDK.
ARView inherits from GLSurfaceView, and overrides lifecycle-related methods. It can facilitate the creation of an ARView-based app in just eight steps:
1. Create an ARViewActivity that inherits from Activity. Add a Button to load materials.
Code:
public class ARViewActivity extends Activity {
private ARView mARView;
private Button mButton;
private boolean isLoadResource = false;
}
2. Add an ARView to the layout.
Code:
<com.huawei.hms.scene.sdk.ARView
android:id="@+id/ar_view"
android:layout_width="match_parent"
android:layout_height="match_parent">
</com.huawei.hms.scene.sdk.ARView>
Note: To achieve the desired experience offered by ARView, your app should not support screen orientation changes or split-screen mode; thus, add the following configuration in the AndroidManifest.xml file:
Code:
android:screenOrientation="portrait"
android:resizeableActivity="false"
3. Override the onCreate method of ARViewActivity and obtain the ARView.
Code:
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_ar_view);
mARView = findViewById(R.id.ar_view);
mButton = findViewById(R.id.button);
}
4. Add a Switch button in the onCreate method to set whether or not to display the lattice plane.
Code:
Switch mSwitch = findViewById(R.id.show_plane_view);
mSwitch.setChecked(true);
mSwitch.setOnCheckedChangeListener(new CompoundButton.OnCheckedChangeListener() {
@Override
public void onCheckedChanged(CompoundButton buttonView, boolean isChecked) {
mARView.enablePlaneDisplay(isChecked);
}
});
Note: Add the Switch button in the layout before using it.
Code:
<Switch
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:id="@+id/show_plane_view"
android:layout_alignParentTop="true"
android:layout_marginTop="15dp"
android:layout_alignParentEnd="true"
android:layout_marginEnd="15dp"
android:layout_gravity="end"
android:text="@string/show_plane"
android:theme="@style/AppTheme"
tools:ignore="RelativeOverlap" />
5. Add a button callback method. Tapping the button once will load a material, and tapping it again will clear the material.
Code:
public void onBtnClearResourceClicked(View view) {
if (!isLoadResource) {
mARView.loadAsset("ARView/scene.gltf");
isLoadResource = true;
mButton.setText(R.string.btn_text_clear_resource);
} else {
mARView.clearResource();
mARView.loadAsset("");
isLoadResource = false;
mButton.setText(R.string.btn_text_load);
}
}
Note: The onBtnClearResourceClicked method must be registered in the onClick attribute of the button in the layout, so that tapping the button loads or clears a material.
6. Override the onPause method of ARViewActivity and call the onPause method of ARView.
Code:
@Override
protected void onPause() {
super.onPause();
mARView.onPause();
}
7. Override the onResume method of ARViewActivity and call the onResume method of ARView.
Code:
@Override
protected void onResume() {
super.onResume();
mARView.onResume();
}
8. Override the onDestroy method for ARViewActivity and call the destroy method of ARView.
Code:
@Override
protected void onDestroy() {
super.onDestroy();
mARView.destroy();
}
9. (Optional) After the material is loaded, use setInitialPose to set its initial status (scale and rotation).
Code:
float[] scale = new float[] { 0.1f, 0.1f, 0.1f };
float[] rotation = new float[] { 0.707f, 0.0f, -0.707f, 0.0f };
mARView.setInitialPose(scale, rotation);
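The rotation array passed to setInitialPose is a quaternion. Assuming a (w, x, y, z) component order (check the Scene Kit reference for the exact convention), the sample values above correspond to a 90-degree rotation about the y-axis. As a rough sketch, such a quaternion can be derived from an axis and an angle like this:

```java
public class QuaternionDemo {
    // Builds a rotation quaternion (w, x, y, z) from a unit axis and an angle in degrees:
    // w = cos(angle/2), (x, y, z) = axis * sin(angle/2).
    static float[] fromAxisAngle(float ax, float ay, float az, float degrees) {
        double half = Math.toRadians(degrees) / 2.0;
        float s = (float) Math.sin(half);
        return new float[] { (float) Math.cos(half), ax * s, ay * s, az * s };
    }

    public static void main(String[] args) {
        // -90 degrees about the y-axis gives approximately {0.707, 0, -0.707, 0},
        // matching the sample rotation array above.
        float[] q = fromAxisAngle(0f, 1f, 0f, -90f);
        System.out.printf("%.3f %.3f %.3f %.3f%n", q[0], q[1], q[2], q[3]);
    }
}
```

This makes it easy to compute rotation values for other placements instead of hard-coding 0.707.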
Effects
You can develop a basic AR placement app simply by calling ARView from Scene Kit, as detailed in the steps above. If you are interested in this implementation method, you can view the Scene Kit demo on GitHub.
The ARView capability can be used to do far more than just develop AR placement apps; it can also help you implement a range of engaging functions, such as AR games, virtual exhibitions, and AR navigation features.
Visit GitHub to download demos and sample code.
For more information like this, visit the HUAWEI Developer Forum.
Original link: https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0201257812100840239&fid=0101187876626530001
This is application-level development, so we won't go through the image segmentation algorithm itself. Instead, we use HUAWEI ML Kit, which provides the image segmentation capability, to develop the app. You will learn how to quickly develop an ID photo DIY applet using this SDK.
Background
You may have had a similar experience: all of a sudden, your school or company needs a one-inch or two-inch head photo of you for a passport or student card, with strict requirements on the background color. Many people don't have time to visit a photo studio, or the photos they already have don't meet the background requirements. I had such an experience: the school asked for a passport photo while the school photo studio was closed, so I took photos with my phone in a hurry, using the bedspread as the background. As a result, I was scolded by the teacher.
Years later, ML Kit's machine learning features include image segmentation. Using this SDK to develop a small ID photo DIY applet would have perfectly resolved that embarrassment.
Here is the demo for the result.
So how effective is it? Let's write a small program to find out.
Core tip: this SDK is free, and all Android models are covered!
Developing the ID Photo App
1. Preparation
1.1 Add the Huawei Maven Repository in the Project-Level Gradle File
Open the project-level build.gradle file in Android Studio.
Add the following Maven address:
Code:
buildscript {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}
allprojects {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}
1.2 Add SDK Dependency in Application Level build.gradle
Introduce the base SDK and the body segmentation model package:
Code:
dependencies {
    implementation 'com.huawei.hms:ml-computer-vision:1.0.2.300'
    implementation 'com.huawei.hms:ml-computer-vision-image-segmentation-body-model:1.0.2.301'
}
1.3 Add the Model in the AndroidManifest.xml File
To enable the application to automatically update the latest machine learning model to the user's device after the user installs your app from HUAWEI AppGallery, add the following statement to the AndroidManifest.xml file:
Code:
<manifest ...>
    <application ...>
        <meta-data
            android:name="com.huawei.hms.ml.DEPENDENCY"
            android:value="imgseg" />
    </application>
</manifest>
1.4 Apply for Camera and Storage Permissions in the AndroidManifest.xml File
Code:
<!-- Storage permission -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<!-- Camera permission -->
<uses-permission android:name="android.permission.CAMERA" />
2. Two Key Steps of Code Development
2.1 Dynamic Permission Application
Code:
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    if (!allPermissionsGranted()) {
        getRuntimePermissions();
    }
}

@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
        @NonNull int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    if (requestCode != PERMISSION_REQUESTS) {
        return;
    }
    boolean isNeedShowDiag = false;
    for (int i = 0; i < permissions.length; i++) {
        if (permissions[i].equals(Manifest.permission.READ_EXTERNAL_STORAGE)
                && grantResults[i] != PackageManager.PERMISSION_GRANTED) {
            isNeedShowDiag = true;
        }
    }
    if (isNeedShowDiag && !ActivityCompat.shouldShowRequestPermissionRationale(this, Manifest.permission.CALL_PHONE)) {
        AlertDialog dialog = new AlertDialog.Builder(this)
            .setMessage(getString(R.string.camera_permission_rationale))
            .setPositiveButton(getString(R.string.settings), new DialogInterface.OnClickListener() {
                @Override
                public void onClick(DialogInterface dialog, int which) {
                    // Open the app's settings page based on the package name,
                    // so the user can grant the permission manually.
                    Intent intent = new Intent(Settings.ACTION_APPLICATION_DETAILS_SETTINGS);
                    intent.setData(Uri.parse("package:" + getPackageName()));
                    startActivityForResult(intent, 200);
                }
            })
            .setNegativeButton(getString(R.string.cancel), new DialogInterface.OnClickListener() {
                @Override
                public void onClick(DialogInterface dialog, int which) {
                    finish();
                }
            }).create();
        dialog.show();
    }
}
2.2 Creating an Image Segmentation Detector
The image segmentation analyzer can be created through the image segmentation configurator MLImageSegmentationSetting.
Code:
MLImageSegmentationSetting setting = new MLImageSegmentationSetting.Factory()
.setAnalyzerType(MLImageSegmentationSetting.BODY_SEG)
.setExact(true)
.create();
this.analyzer = MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer(setting);
2.3 Create an MLFrame Object Through android.graphics.Bitmap for the Analyzer to Detect Pictures
Create an MLFrame object from the Bitmap to be analyzed; the analyzer takes this frame as its input.
Code:
MLFrame mlFrame = new MLFrame.Creator().setBitmap(this.originBitmap).create();
2.4 Call the asyncAnalyseFrame Method for Image Segmentation
Code:
// Create a task to asynchronously process the result returned by the image segmentation analyzer.
Task<MLImageSegmentation> task = this.analyzer.asyncAnalyseFrame(mlFrame);
task.addOnSuccessListener(new OnSuccessListener<MLImageSegmentation>() {
@Override public void onSuccess(MLImageSegmentation mlImageSegmentationResults) {
// Transacting logic for segment success.
if (mlImageSegmentationResults != null) {
StillCutPhotoActivity.this.foreground = mlImageSegmentationResults.getForeground();
StillCutPhotoActivity.this.preview.setImageBitmap(StillCutPhotoActivity.this.foreground);
StillCutPhotoActivity.this.processedImage = ((BitmapDrawable) ((ImageView) StillCutPhotoActivity.this.preview).getDrawable()).getBitmap();
StillCutPhotoActivity.this.changeBackground();
} else {
StillCutPhotoActivity.this.displayFailure();
}
}
}).addOnFailureListener(new OnFailureListener() {
@Override public void onFailure(Exception e) {
// Transacting logic for segment failure.
StillCutPhotoActivity.this.displayFailure();
return;
}
});
2.5 Change the Picture Background
Code:
this.backgroundBitmap = BitmapUtils.loadFromPath(StillCutPhotoActivity.this, id, targetedSize.first, targetedSize.second);
BitmapDrawable drawable = new BitmapDrawable(backgroundBitmap);
this.preview.setDrawingCacheEnabled(true);
this.preview.setBackground(drawable);
this.preview.setImageBitmap(this.foreground);
this.processedImage = Bitmap.createBitmap(this.preview.getDrawingCache());
this.preview.setDrawingCacheEnabled(false);
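The drawing-cache code above captures whatever the ImageView composites on screen. Conceptually, the same result is per-pixel "source over" alpha blending of the segmented foreground onto the new background. A minimal, platform-independent sketch with plain ARGB pixel values (the blendOver helper is illustrative, not part of the ML Kit API):

```java
public class AlphaBlend {
    // Blends a foreground ARGB pixel over a background ARGB pixel
    // using standard "source over" compositing; the output is opaque.
    static int blendOver(int fg, int bg) {
        int fa = fg >>> 24;   // foreground alpha, 0..255
        int inv = 255 - fa;
        int r = ((fg >> 16 & 0xFF) * fa + (bg >> 16 & 0xFF) * inv) / 255;
        int g = ((fg >> 8 & 0xFF) * fa + (bg >> 8 & 0xFF) * inv) / 255;
        int b = ((fg & 0xFF) * fa + (bg & 0xFF) * inv) / 255;
        return 0xFF000000 | (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        int fg = 0x80FF0000; // half-transparent red, e.g. a soft segmentation edge
        int bg = 0xFF0000FF; // opaque blue replacement background
        System.out.println(Integer.toHexString(blendOver(fg, bg))); // prints ff80007f
    }
}
```

On Android you would run this loop over Bitmap.getPixels arrays (or simply draw both bitmaps onto a Canvas), but this blend is the whole trick behind swapping the ID photo background.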
Conclusion
And with that, the ID photo DIY applet is done. Let's see the demo.
If you are good with your hands, you can also add features such as changing outfits. The source code has been uploaded to GitHub, and you are welcome to improve it there.
Source code on GitHub: https://github.com/HMS-MLKit/HUAWEI-HMS-MLKit-Sample (the project directory is id-photo-diy).
Based on the image segmentation capability, you can not only build an ID photo DIY applet, but also implement the following related functions:
Cut out portraits of people in daily life and make fun photos by changing the background, or blur the background for more beautiful and artistic photos.
Identify the sky, plants, food, cats and dogs, flowers, water, sand, buildings, mountains, and other elements in an image, and apply targeted beautification, such as making the sky bluer and the water clearer.
Identify objects in a video stream, apply special effects, and change the background.
For other functions, feel free to brainstorm!
For a more detailed development guide, please refer to the official website of the HUAWEI Developer Alliance:
https://developer.huawei.com/consumer/en/doc/development/HMS-Guides/ml-introduction-4
Previous links:
NO.1: One article to understand Huawei HMS ML Kit text recognition, bank card recognition, general card identification
NO.2: Integrating ML Kit and publishing your app on HUAWEI AppGallery
NO.3: Comparison Between Zxing and Huawei HMS Scan Kit
NO.4: How to use Huawei HMS ML Kit service to quickly develop a photo translation app
Detecting and tracking the user's face movements can be a very powerful feature in your application. With it, you can take input from the user's face movements and use it to trigger intents in the app.
Requirements to use ML Kit:
Android Studio 3.X or later version
JDK 1.8 or later
To be able to use HMS ML Kit, you need to integrate HMS Core into your project and add the HMS ML Kit SDK. Add the Write External Storage and Camera permissions afterwards.
You can click this link to integrate HMS Core to your project.
After integrating HMS Core, add the HMS ML Kit dependencies in the build.gradle file in the app directory.
Code:
implementation 'com.huawei.hms:ml-computer-vision-face:2.0.5.300'
implementation 'com.huawei.hms:ml-computer-vision-face-3d-model:2.0.5.300'
Add permissions to the Manifest File:
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
Development Process:
In this article, we are going to use HMS ML Kit to track the user's face movements. To do this, we will use the 3D face detection features of ML Kit, with one activity and a helper class.
XML Structure of Activity:
XML:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<SurfaceView
android:id="@+id/surfaceViewCamera"
android:layout_width="match_parent"
android:layout_height="match_parent" />
</androidx.constraintlayout.widget.ConstraintLayout>
We are going to use a SurfaceView to show the camera preview on the phone screen, and add a callback to surfaceHolderCamera to track when the surface is created, changed, and destroyed.
Implementing HMS ML Kit Default View:
Java:
private lateinit var mAnalyzer: ML3DFaceAnalyzer
private lateinit var mLensEngine: LensEngine
private lateinit var mFaceAnalyzerTransactor: FaceAnalyzerTransactor
private var surfaceHolderCamera: SurfaceHolder? = null
private val requiredPermissions = arrayOf(Manifest.permission.CAMERA, Manifest.permission.WRITE_EXTERNAL_STORAGE)
class MainActivity : AppCompatActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
if (hasPermissions(requiredPermissions))
init()
else
ActivityCompat.requestPermissions(this, requiredPermissions, 0)
}
private fun hasPermissions(permissions: Array<String>) = permissions.all {
ContextCompat.checkSelfPermission(this, it) == PackageManager.PERMISSION_GRANTED
}
override fun onRequestPermissionsResult(requestCode: Int, permissions: Array<out String>, grantResults: IntArray) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults)
if (requestCode == 0 && grantResults.isNotEmpty() && hasPermissions(requiredPermissions))
init()
}
private fun init() {
mAnalyzer = createAnalyzer()
mFaceAnalyzerTransactor = FaceAnalyzerTransactor()
mAnalyzer.setTransactor(mFaceAnalyzerTransactor)
prepareViews()
}
private fun prepareViews() {
surfaceHolderCamera = surfaceViewCamera.holder
surfaceHolderCamera?.addCallback(surfaceHolderCallback)
}
ML3DFaceAnalyzer → A face analyzer, which is used to detect 3D faces.
Lens Engine → A class with the camera initialization, frame obtaining, and logic control functions encapsulated.
FaceAnalyzerTransactor → A class for processing recognition results. This class implements MLAnalyzer.MLTransactor (A general API that needs to be implemented by a detection result processor to process detection results) and uses the transactResult method in this class to obtain the recognition results and implement specific services.
SurfaceHolder → By adding a callback to the surface view, we can detect surface changes: created, changed, and destroyed. We will use this callback to handle the different situations in the application.
In this code block, we first request the permissions. If the user has granted them, the init function creates an ML3DFaceAnalyzer in order to detect 3D faces. We handle surface changes with the surfaceHolderCallback functions. To process the results, we create a FaceAnalyzerTransactor object and set it as the analyzer's transactor. Once everything is set, we start analyzing the user's face via the prepareViews function.
Java:
private fun createAnalyzer(): ML3DFaceAnalyzer {
val settings = ML3DFaceAnalyzerSetting.Factory()
.setTracingAllowed(true)
.setPerformanceType(ML3DFaceAnalyzerSetting.TYPE_PRECISION)
.create()
return MLAnalyzerFactory.getInstance().get3DFaceAnalyzer(settings)
}
In this code block, we will adjust the settings for our 3DFaceAnalyzer object.
setTracingAllowed → Indicates whether to enable face tracking. Since we want to trace the face, we set this value to true.
setPerformanceType → A speed/precision preference mode that tells the analyzer whether to prioritize precision or speed when performing face detection. We chose TYPE_PRECISION in this example.
By MLAnalyzerFactory.getInstance().get3DFaceAnalyzer(settings) method we will create a 3DFaceAnalyzer object.
Handling Surface Changes
Java:
private val surfaceHolderCallback = object : SurfaceHolder.Callback {
override fun surfaceChanged(p0: SurfaceHolder, p1: Int, p2: Int, p3: Int) {
// p1 is the pixel format; p2 and p3 are the new width and height.
mLensEngine = createLensEngine(p2, p3)
mLensEngine.run(p0)
}
override fun surfaceDestroyed(p0: SurfaceHolder) {
mLensEngine.release()
}
override fun surfaceCreated(p0: SurfaceHolder) {
}
}
private fun createLensEngine(width: Int, height: Int): LensEngine {
val lensEngineCreator = LensEngine.Creator(this, mAnalyzer)
.applyFps(20F)
.setLensType(LensEngine.FRONT_LENS)
.enableAutomaticFocus(true)
return if (resources.configuration.orientation == Configuration.ORIENTATION_PORTRAIT) {
lensEngineCreator.let {
it.applyDisplayDimension(height, width)
it.create()
}
} else {
lensEngineCreator.let {
it.applyDisplayDimension(width, height)
it.create()
}
}
}
When the surface changes, we create a LensEngine. We obtain the new width and height of the surface in the surfaceChanged callback and use them when creating the LensEngine, along with the context and the ML3DFaceAnalyzer that we have already created.
applyFps → Sets the preview frame rate (FPS) of a camera. The preview frame rate of a camera depends on the firmware capability of the camera.
setLensType → Sets the camera type, BACK_LENS for rear camera and FRONT_LENS for front camera. In this example, we will use front camera.
enableAutomaticFocus → Enables or disables the automatic focus function for a camera.
The surfaceChanged method is also executed whenever the mobile screen orientation changes. We handle this situation in the createLensEngine function: the LensEngine is created with the width and height values arranged according to the orientation.
run → Starts LensEngine. During startup, the LensEngine selects an appropriate camera based on the user’s requirements on the frame rate and preview image size, initializes parameters of the selected camera, and starts an analyzer thread to analyze and process frame data.
release → Releases resources occupied by LensEngine. We use this function whenever the surfaceDestroyed method is called.
Analysing Results
Java:
class FaceAnalyzerTransactor: MLTransactor<ML3DFace> {
override fun transactResult(results: MLAnalyzer.Result<ML3DFace>?) {
val items: SparseArray<ML3DFace> = results!!.analyseList
Log.i("FaceOrientation:X ",items.get(0).get3DFaceEulerX().toString())
Log.i("FaceOrientation:Y",items.get(0).get3DFaceEulerY().toString())
Log.i("FaceOrientation:Z",items.get(0).get3DFaceEulerZ().toString())
}
override fun destroy() {
}
}
We created a FaceAnalyzerTransactor object in init method. This object is used to process the results.
Because FaceAnalyzerTransactor implements the MLTransactor interface, we have to override the destroy and transactResult methods. The transactResult method gives us the recognition results from the MLAnalyzer as a result list. Since there will be only one face in this example, we take the first ML3DFace object from the result list.
ML3DFace represents a detected 3D face. Its features include the width, height, rotation degree, 3D coordinate points, and projection matrix.
get3DFaceEulerX() → 3D face pitch angle. A positive value indicates that the face is looking up, and a negative value indicates that the face is looking down.
get3DFaceEulerY() → 3D face yaw angle. A positive value indicates that the face turns to the right side of the image, and a negative value indicates that the face turns to the left side of the image.
get3DFaceEulerZ() → 3D face roll angle. A positive value indicates a clockwise rotation. A negative value indicates a counter-clockwise rotation.
All of the above methods return values ranging from -1 to +1 according to the face movement. In this example, I will demonstrate the get3DFaceEulerY() method.
Not making any head movements
In this example, outputs were between 0.04 and 0.1 when I did not move my head. If the values are between -0.1 and 0.1, the user is probably keeping his/her head still or making only small movements.
Changing head direction to the right
When I moved my head to the right, outputs were generally between 0.5 and 1. So if the values are greater than 0.5, the user has likely turned his head to the right.
Changing head direction to the left
When I moved my head to the left, outputs were generally between -0.5 and -1. So if the values are less than -0.5, the user has likely turned his head to the left.
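The thresholds observed above can be wrapped into a simple classifier. The helper below is a hypothetical sketch (not part of the ML Kit API) that maps a yaw value from get3DFaceEulerY() to a head direction, using the ±0.1 and ±0.5 cut-offs from these experiments:

```java
public class HeadPose {
    enum Direction { LEFT, SLIGHT_LEFT, CENTER, SLIGHT_RIGHT, RIGHT }

    // Classifies the yaw angle reported by get3DFaceEulerY(), which ranges from -1 to +1.
    static Direction classifyYaw(float yaw) {
        if (yaw >= 0.5f) return Direction.RIGHT;        // clearly turned right
        if (yaw <= -0.5f) return Direction.LEFT;        // clearly turned left
        if (yaw > 0.1f) return Direction.SLIGHT_RIGHT;  // small movement right
        if (yaw < -0.1f) return Direction.SLIGHT_LEFT;  // small movement left
        return Direction.CENTER;                        // head essentially still
    }

    public static void main(String[] args) {
        System.out.println(classifyYaw(0.07f)); // CENTER
        System.out.println(classifyYaw(0.8f));  // RIGHT
        System.out.println(classifyYaw(-0.9f)); // LEFT
    }
}
```

You could call such a classifier from transactResult to turn raw angles into app input events, for example swiping through a gallery by turning your head.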
As you can see, HMS ML Kit can detect the user's face and track face movements successfully. This project and its code can be accessed from the GitHub link in the references area.
You can also implement other ML Kit features like text-related services, language/voice-related services, image-related services, face/body-related services, and natural language processing services.
References:
ML Kit
ML Kit Documentation
Face Detection With ML Kit
Github
Background
The ubiquity of the Internet and smart devices has made e-commerce the preferred choice for countless consumers. However, many longtime users have grown wary of the stagnant shopping model, and thus enhancing user experience is critical to stimulating further growth in e-commerce and attracting a broader user base. HMS Core offers intelligent graphics processing capabilities to identify a user's facial and physical features, which, combined with a new display paradigm, enable users to try on products virtually through their mobile phones, for a groundbreaking digital shopping experience.
Scenarios
AR Engine and Scene Kit allow users to virtually try on products found on shopping apps and shopping list sharing apps, which in turn will lead to greater customer satisfaction and fewer returns and replacements.
Effects
A user opens a shopping app, then taps a product's picture to view the product's 3D model, which they can rotate, enlarge, and shrink for interactive viewing.
Getting Started
Configuring the Maven Repository Address for the HMS Core SDK
Open the project-level build.gradle file in your Android Studio project. Go to buildscript > repositories and allprojects > repositories to configure the Maven repository address for the HMS Core SDK.
Code:
buildscript {
repositories{
...
maven {url 'http://developer.huawei.com/repo/'}
}
}
allprojects {
repositories {
...
maven { url 'http://developer.huawei.com/repo/'}
}
}
Adding Build Dependencies for the HMS Core SDK
Open the app-level build.gradle file of your project. Add build dependencies in the dependencies block and use the Full-SDK of Scene Kit and AR Engine SDK.
Code:
dependencies {
....
implementation 'com.huawei.scenekit:full-sdk:5.0.2.302'
implementation 'com.huawei.hms:arenginesdk:2.13.0.4'
}
For details about the preceding steps, please refer to the development guide for Scene Kit on HUAWEI Developers.
Adding Permissions in the AndroidManifest.xml File
Open the AndroidManifest.xml file in the main directory and add the camera permission above the <application> line.
Code:
<!--Camera permission-->
<uses-permission android:name="android.permission.CAMERA" />
Development Procedure
Configuring MainActivity
Add two buttons to the layout configuration file of MainActivity. Set the background of the onBtnShowProduct button to the preview image of the product, and add the text Try it on! to the onBtnTryProductOn button to guide the user to the feature.
Code:
<Button
android:layout_width="260dp"
android:layout_height="160dp"
android:background="@drawable/sunglasses"
android:onClick="onBtnShowProduct" />
<Button
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Try it on!"
android:textAllCaps="false"
android:textSize="24sp"
android:onClick="onBtnTryProductOn" />
If the user taps the onBtnShowProduct button, the 3D model of the product will be loaded. After tapping the onBtnTryProductOn button, the user will enter the AR fitting screen.
Configuring the 3D Model Display for a Product
1. Create a SceneSampleView inherited from SceneView.
Code:
public class SceneSampleView extends SceneView {
public SceneSampleView(Context context) {
super(context);
}
public SceneSampleView(Context context, AttributeSet attributeSet) {
super(context, attributeSet);
}
}
Override the surfaceCreated method to create and initialize SceneView. Then call loadScene to load the materials, which should be in the glTF or GLB format, to have them rendered and displayed. Call loadSkyBox to load skybox materials, loadSpecularEnvTexture to load specular maps, and loadDiffuseEnvTexture to load diffuse maps. These files should be in the DDS (cubemap) format.
All loaded materials are stored in the src > main > assets > SceneView folder.
Code:
@Override
public void surfaceCreated(SurfaceHolder holder) {
super.surfaceCreated(holder);
// Load the materials to be rendered.
loadScene("SceneView/sunglasses.glb");
// Call loadSkyBox to load skybox texture materials.
loadSkyBox("SceneView/skyboxTexture.dds");
// Call loadSpecularEnvTexture to load specular texture materials.
loadSpecularEnvTexture("SceneView/specularEnvTexture.dds");
// Call loadDiffuseEnvTexture to load diffuse texture materials.
loadDiffuseEnvTexture("SceneView/diffuseEnvTexture.dds");
}
2. Create a SceneViewActivity inherited from Activity.
Call setContentView using the onCreate method, and then pass SceneSampleView that you have created using the XML tag in the layout file to setContentView.
Code:
public class SceneViewActivity extends Activity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_sample);
}
}
Create SceneSampleView in the layout file as follows:
Code:
<com.huawei.scene.demo.sceneview.SceneSampleView
android:layout_width="match_parent"
android:layout_height="match_parent"/>
3. Create an onBtnShowProduct in MainActivity.
When the user taps the onBtnShowProduct button, SceneViewActivity is called to load, render, and finally display the 3D model of the product.
Code:
public void onBtnShowProduct(View view) {
startActivity(new Intent(this, SceneViewActivity.class));
}
Configuring AR Fitting for a Product
Product virtual try-on is easily accessible, thanks to the facial recognition, graphics rendering, and AR display capabilities offered by HMS Core.
1. Create a FaceViewActivity inherited from Activity, and create the corresponding layout file.
Create face_view in the layout file to display the try-on effect.
Code:
<com.huawei.hms.scene.sdk.FaceView
    android:id="@+id/face_view"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:sdk_type="AR_ENGINE"></com.huawei.hms.scene.sdk.FaceView>
Create a switch. When the user taps it, they can compare how they look with and without the virtual glasses.
Code:
<Switch
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:id="@+id/switch_view"
    android:layout_alignParentTop="true"
    android:layout_marginTop="15dp"
    android:layout_alignParentEnd="true"
    android:layout_marginEnd="15dp"
    android:text="Try it on"
    android:theme="@style/AppTheme"
    tools:ignore="RelativeOverlap" />
2. Override the onCreate method in FaceViewActivity to obtain FaceView.
Code:
public class FaceViewActivity extends Activity {
    private FaceView mFaceView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_face_view);
        mFaceView = findViewById(R.id.face_view);
    }
}
3. Create a listener method for the switch. When the switch is enabled, the loadAsset method is called to load the 3D model of the product. Set the position for facial recognition in LandmarkType.
Code:
mSwitch.setOnCheckedChangeListener(new CompoundButton.OnCheckedChangeListener() {
    @Override
    public void onCheckedChanged(CompoundButton buttonView, boolean isChecked) {
        mFaceView.clearResource();
        if (isChecked) {
            // Load materials.
            int index = mFaceView.loadAsset("FaceView/sunglasses.glb", LandmarkType.TIP_OF_NOSE);
        }
    }
});
Use setInitialPose to adjust the size and position of the model. Create the position, rotation, and scale arrays and pass values to them.
Code:
final float[] position = { 0.0f, 0.0f, -0.15f };
final float[] rotation = { 0.0f, 0.0f, 0.0f, 0.0f };
final float[] scale = { 2.0f, 2.0f, 0.3f };
Put the following code below the loadAsset line:
Code:
mFaceView.setInitialPose(index, position, scale, rotation);
4. Create an onBtnTryProductOn method in MainActivity. When the user taps the corresponding button, FaceViewActivity is started, enabling the user to view the try-on effect.
Code:
public void onBtnTryProductOn(View view) {
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
            != PackageManager.PERMISSION_GRANTED) {
        ActivityCompat.requestPermissions(
                this, new String[]{ Manifest.permission.CAMERA }, FACE_VIEW_REQUEST_CODE);
    } else {
        startActivity(new Intent(this, FaceViewActivity.class));
    }
}
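The permission branch above omits the result callback. The grant check itself is plain logic, sketched below as a standalone helper; the class name, the request code value, and the locally defined PERMISSION_GRANTED constant (mirroring PackageManager.PERMISSION_GRANTED) are illustrative assumptions. In the Activity, onRequestPermissionsResult would call this helper and start FaceViewActivity when it returns true.

```java
public class PermissionResult {
    // Hypothetical values for illustration: the request code used in
    // onBtnTryProductOn, and PackageManager.PERMISSION_GRANTED (0).
    static final int FACE_VIEW_REQUEST_CODE = 1;
    static final int PERMISSION_GRANTED = 0;

    // True if this callback is for our request and the first entry
    // (the camera permission) was granted.
    static boolean cameraGranted(int requestCode, int[] grantResults) {
        return requestCode == FACE_VIEW_REQUEST_CODE
                && grantResults.length > 0
                && grantResults[0] == PERMISSION_GRANTED;
    }

    public static void main(String[] args) {
        System.out.println(cameraGranted(1, new int[]{0}));  // granted
        System.out.println(cameraGranted(1, new int[]{-1})); // denied
    }
}
```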
References
For more details, you can go to:
AR Engine official website; Scene Kit official website
Reddit to join our developer discussion
GitHub to download sample code
Stack Overflow to solve any integration problems
Does it support Quick App?
Hello sir,
from where i can get the asset folder for above given code(virtual try-on glasses)
Mamta23 said:
Hello sir,
from where i can get the asset folder for above given code(virtual try-on glasses)
Hi,
You can get the demo here: https://github.com/HMS-Core/hms-AREngine-demo
Hope this helps!
Now that spring has arrived, it's time to get out and stretch your legs! As programmers, many of us are used to being seated for hours and hours at a time, which can lead to back pain and aches. We're all aware that building a workout plan and keeping track of health indicators round the clock can have enormous benefits for body, mind, and soul.
Fortunately, AR Engine makes that remarkably easy. It comes with face tracking capabilities, and will soon support body tracking as well. Thanks to its core AR algorithms, AR Engine can monitor heart rate, respiratory rate, facial health status, and heart rate waveform signals in real time during your workouts. You can also use it to build an app that tracks real-time workout status, performs real-time health checks for patients, or monitors the health indicators of vulnerable users, such as the elderly or people with disabilities. With AR Engine, you can make your health or fitness app more engaging and visually immersive than you might have believed possible.
Advantages and Device Model Restrictions
1. Monitors core health indicators like heart rate, respiratory rate, facial health status, and heart rate waveform signals in real time.
2. Enables devices to better understand their users. Thanks to technologies like Simultaneous Localization and Mapping (SLAM) and 3D reconstruction, AR Engine renders images to build 3D human faces on mobile phones, resulting in seamless virtual-physical cohesion.
3. Supports all of the device models listed in Software and Hardware Requirements of AR Engine Features.
Demo Introduction
A simple demo is available to give you a grasp of how to integrate AR Engine and use its human body and face tracking capabilities.
ENABLE_HEALTH_DEVICE: indicates whether to enable health check.
HealthParameter: health check parameter, including heart rate, respiratory rate, age and gender probability based on facial features, and heart rate waveform signals.
FaceDetectMode: face detection mode, including heart rate checking, respiratory rate checking, real-time health checking, and all three of the above.
Effect
(The original post shows a demo image of the effect here.)
The following details how you can run the demo using the source code.
Key Steps
1. Add the Huawei Maven repository to the project-level build.gradle file.
Code:
buildscript {
    repositories {
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        ...
        // Add the AppGallery Connect plugin configuration.
        classpath 'com.huawei.agconnect:agcp:1.4.2.300'
    }
}

allprojects {
    repositories {
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
2. Add dependencies on the SDK to the app-level build.gradle file.
Code:
implementation 'com.huawei.hms:arenginesdk:3.7.0.3'
3. Declare system permissions in the AndroidManifest.xml file.
Code:
<uses-permission android:name="android.permission.CAMERA" />
4. Check whether AR Engine has been installed on the current device. If yes, the app can run properly. If not, the app automatically redirects the user to AppGallery to install AR Engine.
Java:
boolean isInstallArEngineApk = AREnginesApk.isAREngineApkReady(this);
if (!isInstallArEngineApk && isRemindInstall) {
    // The user was already redirected once and AR Engine is still missing: exit.
    Toast.makeText(this, "Please agree to install.", Toast.LENGTH_LONG).show();
    finish();
}
if (!isInstallArEngineApk) {
    // First pass: redirect the user to AppGallery to install AR Engine.
    startActivity(new Intent(this, ConnectAppMarketActivity.class));
    isRemindInstall = true;
}
return AREnginesApk.isAREngineApkReady(this);
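The two if blocks above encode a small state machine: proceed when the APK is ready, redirect to AppGallery on the first failed check, and exit on the second. The decision logic, separated from the Android calls, can be sketched as follows (the class and enum names are illustrative):

```java
public class ArEngineCheck {
    enum Action { PROCEED, PROMPT_INSTALL, EXIT }

    // apkReady: result of AREnginesApk.isAREngineApkReady(context).
    // remindedAlready: whether the user was already sent to AppGallery once.
    static Action decide(boolean apkReady, boolean remindedAlready) {
        if (apkReady) {
            return Action.PROCEED;
        }
        // Not installed: prompt once, then give up if still missing.
        return remindedAlready ? Action.EXIT : Action.PROMPT_INSTALL;
    }

    public static void main(String[] args) {
        System.out.println(decide(true, false));
        System.out.println(decide(false, false));
        System.out.println(decide(false, true));
    }
}
```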
Key Code
1. Create an ARSession object, then create an ARFaceTrackingConfig object based on it. Set the face detection mode, configure the AR parameters for motion tracking, and enable motion tracking.
Java:
mArSession = new ARSession(this);
mArFaceTrackingConfig = new ARFaceTrackingConfig(mArSession);
mArFaceTrackingConfig.setEnableItem(ARConfigBase.ENABLE_HEALTH_DEVICE);
mArFaceTrackingConfig
.setFaceDetectMode(ARConfigBase.FaceDetectMode.HEALTH_ENABLE_DEFAULT.getEnumValue());
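The snippet stops after building the config; to actually enable tracking, the config still has to be applied to the session. In the AR Engine demo this is done roughly as follows (a sketch; the exception handling around configure is omitted here):

```java
// Apply the face tracking configuration and start the session.
mArSession.configure(mArFaceTrackingConfig);
mArSession.resume();
```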
2. Register a FaceHealthServiceListener on the session to receive the health check status and progress. Override handleProcessProgressEvent() to obtain the health check progress.
Java:
mArSession.addServiceListener(new FaceHealthServiceListener() {
    @Override
    public void handleEvent(EventObject eventObject) {
        if (!(eventObject instanceof FaceHealthCheckStateEvent)) {
            return;
        }
        final FaceHealthCheckState faceHealthCheckState =
                ((FaceHealthCheckStateEvent) eventObject).getFaceHealthCheckState();
        runOnUiThread(new Runnable() {
            @Override
            public void run() {
                mHealthCheckStatusTextView.setText(faceHealthCheckState.toString());
            }
        });
    }

    @Override
    public void handleProcessProgressEvent(final int progress) {
        mHealthRenderManager.setHealthCheckProgress(progress);
        runOnUiThread(new Runnable() {
            @Override
            public void run() {
                setProgressTips(progress);
            }
        });
    }
});

private void setProgressTips(int progress) {
    String progressTips = "processing";
    if (progress >= MAX_PROGRESS) {
        progressTips = "finish";
    }
    mProgressTips.setText(progressTips);
    mHealthProgressBar.setProgress(progress);
}
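The threshold behavior in setProgressTips can be factored into a plain helper, which makes it easy to unit test. A minimal sketch, assuming MAX_PROGRESS is 100 (the demo's actual constant may differ):

```java
public class ProgressTips {
    static final int MAX_PROGRESS = 100; // assumed value; check the demo's constant

    // Text shown next to the health check progress bar.
    static String tipFor(int progress) {
        return progress >= MAX_PROGRESS ? "finish" : "processing";
    }

    public static void main(String[] args) {
        System.out.println(tipFor(50));
        System.out.println(tipFor(100));
    }
}
```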
Update data in real time and display the health check result.
Java:
mActivity.runOnUiThread(new Runnable() {
    @Override
    public void run() {
        mHealthParamTable.removeAllViews();
        TableRow heatRateTableRow = initTableRow(ARFace.HealthParameter.PARAMETER_HEART_RATE.toString(),
                healthParams.getOrDefault(ARFace.HealthParameter.PARAMETER_HEART_RATE, 0.0f).toString());
        mHealthParamTable.addView(heatRateTableRow);
        TableRow breathRateTableRow = initTableRow(ARFace.HealthParameter.PARAMETER_BREATH_RATE.toString(),
                healthParams.getOrDefault(ARFace.HealthParameter.PARAMETER_BREATH_RATE, 0.0f).toString());
        mHealthParamTable.addView(breathRateTableRow);
    }
});
References
>> AR Engine official website
>> AR Engine Development Guide
>> Reddit to join developer discussions
>> GitHub to download the sample code
>> Stack Overflow to solve integration problems
Background
Videos are memories, so why not spend a little more time making them look better? Many mobile apps on the market offer only basic editing functions, such as applying filters and adding stickers. That is not enough for users who want to create dynamic videos in which a moving person stays in focus. Traditionally, this requires adding keyframes and manually adjusting the video image, which can scare off many amateur video editors.
I am one of those people and I've been looking for an easier way of implementing this kind of feature. Fortunately for me, I stumbled across the track person capability from HMS Core Video Editor Kit, which automatically generates a video that centers on a moving person, as the images below show.
(The original post shows two images comparing the video before and after using the capability.)
Thanks to the capability, I can now confidently create a video with the person tracking effect.
Let's see how the function is developed.
Development Process
Preparations
Configure the app information in AppGallery Connect.
Project Configuration
1. Set the authentication information for the app via an access token or API key.
Use the setAccessToken method to set an access token during app initialization. This needs setting only once.
Code:
MediaApplication.getInstance().setAccessToken("your access token");
Or, use setApiKey to set an API key during app initialization. The API key needs to be set only once.
Code:
MediaApplication.getInstance().setApiKey("your ApiKey");
2. Set a unique License ID.
Code:
MediaApplication.getInstance().setLicenseId("License ID");
3. Initialize the runtime environment for HuaweiVideoEditor.
When creating a video editing project, first create a HuaweiVideoEditor object and initialize its runtime environment. Release this object when exiting a video editing project.
(1) Create a HuaweiVideoEditor object.
Code:
HuaweiVideoEditor editor = HuaweiVideoEditor.create(getApplicationContext());
(2) Specify the preview area position.
This area renders the video images; the SDK implements it by creating a SurfaceView. The position of the preview area must be specified before the area is created.
Code:
<LinearLayout
    android:id="@+id/video_content_layout"
    android:layout_width="0dp"
    android:layout_height="0dp"
    android:background="@color/video_edit_main_bg_color"
    android:gravity="center"
    android:orientation="vertical" />
// Specify the preview area position.
LinearLayout mSdkPreviewContainer = view.findViewById(R.id.video_content_layout);
// Configure the preview area layout.
editor.setDisplay(mSdkPreviewContainer);
(3) Initialize the runtime environment. A LicenseException is thrown if license verification fails.
Creating the HuaweiVideoEditor object does not occupy any system resources; the runtime environment must be initialized explicitly, at which point the necessary threads and timers are created in the SDK.
Code:
try {
    editor.initEnvironment();
} catch (LicenseException error) {
    SmartLog.e(TAG, "initEnvironment failed: " + error.getErrorMsg());
    finish();
    return;
}
4. Add a video or an image.
Create a video lane. Add a video or an image to the lane using the file path.
Code:
// Obtain the HVETimeLine object.
HVETimeLine timeline = editor.getTimeLine();
// Create a video lane.
HVEVideoLane videoLane = timeline.appendVideoLane();
// Add a video to the end of the lane.
HVEVideoAsset videoAsset = videoLane.appendVideoAsset("test.mp4");
// Add an image to the end of the video lane.
HVEImageAsset imageAsset = videoLane.appendImageAsset("test.jpg");
Function Building
Code:
// Initialize the capability engine.
visibleAsset.initHumanTrackingEngine(new HVEAIInitialCallback() {
    @Override
    public void onProgress(int progress) {
        // Initialization progress.
    }

    @Override
    public void onSuccess() {
        // The initialization is successful.
    }

    @Override
    public void onError(int errorCode, String errorMessage) {
        // The initialization failed.
    }
});

// Track a person using the coordinates. Coordinates of two vertices that define the rectangle containing the person are returned.
List<Float> rects = visibleAsset.selectHumanTrackingPerson(bitmap, position2D);

// Enable the effect of person tracking.
visibleAsset.addHumanTrackingEffect(new HVEAIProcessCallback() {
    @Override
    public void onProgress(int progress) {
        // Handling progress.
    }

    @Override
    public void onSuccess() {
        // Handling successful.
    }

    @Override
    public void onError(int errorCode, String errorMessage) {
        // Handling failed.
    }
});

// Interrupt the effect.
visibleAsset.interruptHumanTracking();

// Remove the effect.
visibleAsset.removeHumanTrackingEffect();
References
The Importance of Visual Effects
Track Person