AR Engine and Scene Kit Deliver Virtual Try-On of Glasses - Huawei Developers

Background
The ubiquity of the Internet and smart devices has made e-commerce the preferred choice for countless consumers. However, many longtime users have grown weary of the stagnant shopping model, so enhancing the user experience is critical to stimulating further growth in e-commerce and attracting a broader user base. HMS Core offers intelligent graphics processing capabilities that identify a user's facial and physical features. Combined with a new display paradigm, these capabilities let users try on products virtually through their mobile phones, for a groundbreaking digital shopping experience.
Scenarios
AR Engine and Scene Kit allow users to virtually try on products found on shopping apps and shopping list sharing apps, which in turn will lead to greater customer satisfaction and fewer returns and replacements.
Effects
A user opens a shopping app and taps a product's picture to view its 3D model, which they can rotate, enlarge, and shrink for interactive viewing.
Getting Started
Configuring the Maven Repository Address for the HMS Core SDK
Open the project-level build.gradle file in your Android Studio project. Go to buildscript > repositories and allprojects > repositories to configure the Maven repository address for the HMS Core SDK.
Code:
buildscript {
    repositories {
        ...
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
allprojects {
    repositories {
        ...
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
Adding Build Dependencies for the HMS Core SDK
Open the app-level build.gradle file of your project. Add the following build dependencies in the dependencies block to use the Scene Kit Full-SDK and the AR Engine SDK.
Code:
dependencies {
....
implementation 'com.huawei.scenekit:full-sdk:5.0.2.302'
implementation 'com.huawei.hms:arenginesdk:2.13.0.4'
}
For details about the preceding steps, please refer to the development guide for Scene Kit on HUAWEI Developers.
Adding Permissions in the AndroidManifest.xml File
Open the AndroidManifest.xml file in the main directory and add the camera permission above the <application> element.
Code:
<!--Camera permission-->
<uses-permission android:name="android.permission.CAMERA" />
Development Procedure
Configuring MainActivity
Add two buttons to the layout configuration file of MainActivity. Set the background of the onBtnShowProduct button to the preview image of the product and add the text Try it on! to the onBtnTryProductOn button to guide the user to the feature.
Code:
<Button
android:layout_width="260dp"
android:layout_height="160dp"
android:background="@drawable/sunglasses"
android:onClick="onBtnShowProduct" />
<Button
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Try it on!"
android:textAllCaps="false"
android:textSize="24sp"
android:onClick="onBtnTryProductOn" />
If the user taps the onBtnShowProduct button, the 3D model of the product will be loaded. After tapping the onBtnTryProductOn button, the user will enter the AR fitting screen.
Configuring the 3D Model Display for a Product
1. Create a SceneSampleView inherited from SceneView.
Code:
public class SceneSampleView extends SceneView {
public SceneSampleView(Context context) {
super(context);
}
public SceneSampleView(Context context, AttributeSet attributeSet) {
super(context, attributeSet);
}
}
Override the surfaceCreated method to create and initialize SceneView. Then call loadScene to load the materials, which should be in the glTF or GLB format, to have them rendered and displayed. Call loadSkyBox to load skybox materials, loadSpecularEnvTexture to load specular maps, and loadDiffuseEnvTexture to load diffuse maps. These files should be in the DDS (cubemap) format.
All loaded materials are stored in the src > main > assets > SceneView folder.
Code:
@Override
public void surfaceCreated(SurfaceHolder holder) {
super.surfaceCreated(holder);
// Load the materials to be rendered.
loadScene("SceneView/sunglasses.glb");
// Call loadSkyBox to load skybox texture materials.
loadSkyBox("SceneView/skyboxTexture.dds");
// Call loadSpecularEnvTexture to load specular texture materials.
loadSpecularEnvTexture("SceneView/specularEnvTexture.dds");
// Call loadDiffuseEnvTexture to load diffuse texture materials.
loadDiffuseEnvTexture("SceneView/diffuseEnvTexture.dds");
}
2. Create a SceneViewActivity inherited from Activity.
In the onCreate method, call setContentView and pass in the layout file that declares the SceneSampleView you created via an XML tag.
Code:
public class SceneViewActivity extends Activity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_sample);
}
}
Create SceneSampleView in the layout file as follows:
Code:
<com.huawei.scene.demo.sceneview.SceneSampleView
android:layout_width="match_parent"
android:layout_height="match_parent"/>
3. Create an onBtnShowProduct method in MainActivity.
When the user taps the onBtnShowProduct button, SceneViewActivity is called to load, render, and finally display the 3D model of the product.
Code:
public void onBtnShowProduct(View view) {
startActivity(new Intent(this, SceneViewActivity.class));
}
Configuring AR Fitting for a Product
Product virtual try-on is easily accessible, thanks to the facial recognition, graphics rendering, and AR display capabilities offered by HMS Core.
1. Create a FaceViewActivity inherited from Activity, and create the corresponding layout file.
Create face_view in the layout file to display the try-on effect.
Code:
<com.huawei.hms.scene.sdk.FaceView
android:id="@+id/face_view"
android:layout_width="match_parent"
android:layout_height="match_parent"
app:sdk_type="AR_ENGINE"></com.huawei.hms.scene.sdk.FaceView>
Create a switch. When the user taps it, they can check the difference between the appearances with and without the virtual glasses.
Code:
<Switch
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:id="@+id/switch_view"
android:layout_alignParentTop="true"
android:layout_marginTop="15dp"
android:layout_alignParentEnd="true"
android:layout_marginEnd ="15dp"
android:text="Try it on"
android:theme="@style/AppTheme"
tools:ignore="RelativeOverlap" />
2. Override the onCreate method in FaceViewActivity to obtain FaceView.
Code:
public class FaceViewActivity extends Activity {
private FaceView mFaceView;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_face_view);
mFaceView = findViewById(R.id.face_view);
}
}
3. Create a listener method for the switch. When the switch is turned on, the loadAsset method is called to load the 3D model of the product, and LandmarkType specifies the facial position that the model is anchored to.
Code:
mSwitch.setOnCheckedChangeListener(new CompoundButton.OnCheckedChangeListener() {
@Override
public void onCheckedChanged(CompoundButton buttonView, boolean isChecked) {
mFaceView.clearResource();
if (isChecked) {
// Load materials.
int index = mFaceView.loadAsset("FaceView/sunglasses.glb", LandmarkType.TIP_OF_NOSE);
}
}
});
Use setInitialPose to adjust the size and position of the model. Create the position, rotation, and scale arrays and pass values to them.
Code:
final float[] position = { 0.0f, 0.0f, -0.15f };
final float[] rotation = { 0.0f, 0.0f, 0.0f, 0.0f };
final float[] scale = { 2.0f, 2.0f, 0.3f };
Put the following code below the loadAsset line:
Code:
mFaceView.setInitialPose(index, position, scale, rotation);
4. Create an onBtnTryProductOn method in MainActivity. When the user taps the onBtnTryProductOn button, FaceViewActivity is started so that the user can view the try-on effect. If the camera permission has not been granted yet, it is requested first.
Code:
public void onBtnTryProductOn(View view) {
if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
!= PackageManager.PERMISSION_GRANTED) {
ActivityCompat.requestPermissions(
this, new String[]{ Manifest.permission.CAMERA }, FACE_VIEW_REQUEST_CODE);
} else {
startActivity(new Intent(this, FaceViewActivity.class));
}
}
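If the tap only triggered the permission request, you may also want to open the try-on screen once the user grants the permission. Here is a minimal sketch of handling the result in MainActivity; this handling is an assumption added for illustration, not part of the original sample.
Code:
@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
        @NonNull int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    // Open the AR try-on screen only after the camera permission has been granted.
    if (requestCode == FACE_VIEW_REQUEST_CODE
            && grantResults.length > 0
            && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
        startActivity(new Intent(this, FaceViewActivity.class));
    }
}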
References
For more details, you can go to:
AR Engine official website; Scene Kit official website
Reddit to join our developer discussion
GitHub to download sample codes
Stack Overflow to solve any integration problems

Does it support Quick App?

Hello sir,
from where i can get the asset folder for above given code(virtual try-on glasses)

Mamta23 said:
Hello sir,
from where i can get the asset folder for above given code(virtual try-on glasses)
Hi,
You can get the demo here: https://github.com/HMS-Core/hms-AREngine-demo
Hope this helps!

Related

Develop an ID Photo DIY Applet with HMS MLKit SDK in 30 Min

For more information like this, visit the HUAWEI Developer Forum.
Original link: https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0201257812100840239&fid=0101187876626530001
This is application-level development; we won't go through the image segmentation algorithm itself. HUAWEI ML Kit provides the image segmentation capability, and this article shows how to use the SDK to quickly develop an ID photo DIY applet.
Background
You may have had a similar experience: all of a sudden, your school or company needs a one-inch or two-inch head photo of you, for example for a passport or student card application, and there are strict requirements for the photo's background color. Many people don't have time to visit a photo studio, or the photos they already have don't meet the background requirements. I had such an experience myself. The school asked for a passport photo while the campus photo studio was closed, so I hurriedly took one with my phone against a bedspread as the background, and was scolded by the teacher for it.
Years later, ML Kit offers an image segmentation capability. Using this SDK to develop a small ID photo DIY applet neatly solves that old embarrassment.
Here is the demo for the result.
How effective is it? Quite impressive, and all it takes is a small applet to achieve it.
Core tip: this SDK is free and covers all Android devices.
ID Photo Development in Practice
1. Preparation
1.1 Add the Huawei Maven Repository in the Project-Level build.gradle
Open the project-level build.gradle file in Android Studio.
Add the following Maven repository addresses:
Code:
buildscript {
    repositories {
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
allprojects {
    repositories {
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
1.2 Add SDK Dependency in Application Level build.gradle
Import the base SDK and the image segmentation model package:
Code:
dependencies {
    implementation 'com.huawei.hms:ml-computer-vision:1.0.2.300'
    implementation 'com.huawei.hms:ml-computer-vision-image-segmentation-body-model:1.0.2.301'
}
1.3 Add the Model in the AndroidManifest.xml File
To enable the application to automatically update the latest machine learning model to the user's device after the user installs your application from HUAWEI AppGallery, add the following statement to the AndroidManifest.xml file:
Code:
<manifest ...>
    <application ...>
        <meta-data
            android:name="com.huawei.hms.ml.DEPENDENCY"
            android:value="imgseg" />
    </application>
</manifest>
1.4 Apply for Camera and Storage Permissions in the AndroidManifest.xml File
Code:
<!--Camera permission-->
<uses-permission android:name="android.permission.CAMERA" />
<!--Storage permission-->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
2. Two Key Steps of Code Development
2.1 Dynamic Authority Application
Code:
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    if (!allPermissionsGranted()) {
        getRuntimePermissions();
    }
}

@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
        @NonNull int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    if (requestCode != PERMISSION_REQUESTS) {
        return;
    }
    boolean isNeedShowDiag = false;
    for (int i = 0; i < permissions.length; i++) {
        if (permissions[i].equals(Manifest.permission.READ_EXTERNAL_STORAGE)
                && grantResults[i] != PackageManager.PERMISSION_GRANTED) {
            isNeedShowDiag = true;
        }
    }
    if (isNeedShowDiag && !ActivityCompat.shouldShowRequestPermissionRationale(this, Manifest.permission.CALL_PHONE)) {
        AlertDialog dialog = new AlertDialog.Builder(this)
            .setMessage(getString(R.string.camera_permission_rationale))
            .setPositiveButton(getString(R.string.settings), new DialogInterface.OnClickListener() {
                @Override
                public void onClick(DialogInterface dialog, int which) {
                    Intent intent = new Intent(Settings.ACTION_APPLICATION_DETAILS_SETTINGS);
                    // Open the app's details page based on the package name.
                    intent.setData(Uri.parse("package:" + getPackageName()));
                    startActivityForResult(intent, 200);
                }
            })
            .setNegativeButton(getString(R.string.cancel), new DialogInterface.OnClickListener() {
                @Override
                public void onClick(DialogInterface dialog, int which) {
                    finish();
                }
            }).create();
        dialog.show();
    }
}
2.2 Creating an Image Segmentation Detector
The image segmentation analyzer can be created through the image segmentation setting configurator, MLImageSegmentationSetting.
Code:
MLImageSegmentationSetting setting = new MLImageSegmentationSetting.Factory()
.setAnalyzerType(MLImageSegmentationSetting.BODY_SEG)
.setExact(true)
.create();
this.analyzer = MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer(setting);
2.3 Create an MLFrame Object by Using android.graphics.Bitmap for the Analyzer to Detect Images
Create an MLFrame object from an android.graphics.Bitmap; the analyzer uses it as the input image for detection.
Code:
MLFrame mlFrame = new MLFrame.Creator().setBitmap(this.originBitmap).create();
2.4 Call the asyncAnalyseFrame Method to Perform Image Segmentation
Code:
// Create a task to asynchronously process the result returned by the image segmentation analyzer.
Task<MLImageSegmentation> task = this.analyzer.asyncAnalyseFrame(mlFrame);
task.addOnSuccessListener(new OnSuccessListener<MLImageSegmentation>() {
@Override public void onSuccess(MLImageSegmentation mlImageSegmentationResults) {
// Transacting logic for segment success.
if (mlImageSegmentationResults != null) {
StillCutPhotoActivity.this.foreground = mlImageSegmentationResults.getForeground();
StillCutPhotoActivity.this.preview.setImageBitmap(StillCutPhotoActivity.this.foreground);
StillCutPhotoActivity.this.processedImage = ((BitmapDrawable) ((ImageView) StillCutPhotoActivity.this.preview).getDrawable()).getBitmap();
StillCutPhotoActivity.this.changeBackground();
} else {
StillCutPhotoActivity.this.displayFailure();
}
}
}).addOnFailureListener(new OnFailureListener() {
@Override public void onFailure(Exception e) {
// Transacting logic for segment failure.
StillCutPhotoActivity.this.displayFailure();
return;
}
});
2.5 Change the Picture Background
Code:
this.backgroundBitmap = BitmapUtils.loadFromPath(StillCutPhotoActivity.this, id, targetedSize.first, targetedSize.second);
BitmapDrawable drawable = new BitmapDrawable(backgroundBitmap);
this.preview.setDrawingCacheEnabled(true);
this.preview.setBackground(drawable);
this.preview.setImageBitmap(this.foreground);
this.processedImage = Bitmap.createBitmap(this.preview.getDrawingCache());
this.preview.setDrawingCacheEnabled(false);
Conclusion
With that, a small ID photo DIY applet is complete. Let's see the demo.
If you like hands-on work, you can also add features such as changing outfits. The source code has been uploaded to GitHub, so feel free to improve the function there.
https://github.com/HMS-MLKit/HUAWEI-HMS-MLKit-Sample
(The project directory is id-photo-diy.)
Image segmentation can be used for far more than an ID photo DIY applet; it also enables related functions such as the following:
Cutting out portraits from everyday photos to create fun images with new backgrounds, or blurring the background for more artistic shots.
Identifying elements in an image such as sky, plants, food, cats and dogs, flowers, water, sand, buildings, and mountains, and applying targeted beautification, for example making the sky bluer and the water clearer.
Recognizing objects in a video stream to add special effects or change the background.
Feel free to brainstorm other functions!
For a more detailed development guide, please refer to the official website of the HUAWEI Developer Alliance:
https://developer.huawei.com/consumer/en/doc/development/HMS-Guides/ml-introduction-4
Previous link:
NO. 1:One article to understand Huawei HMS ML Kit text recognition, bank card recognition, general card identification
NO.2: Integrating MLkit and publishing ur app on Huawei AppGallery
NO.3.: Comparison Between Zxing and Huawei HMS Scan Kit
NO.4: How to use Huawei HMS MLKit service to quickly develop a photo translation app

Implement Eye-Enlarging and Face-Shaping Functions with ML Kit's Detection Capability

Introduction
Sometimes, we can't help taking photos to keep our unforgettable moments in life. But most of us are not professional photographers or models, so our photographs can end up falling short of our expectations. So, how can we produce more impressive snaps? If you have an image processing app on your phone, it can automatically detect faces in a photo, and you can then adjust the image until you're happy with it. So, after looking around online, I found HUAWEI ML Kit's face detection capability. By integrating this capability, you can add beautification functions to your apps. Have a try!
Application Scenarios
ML Kit's face detection capability detects up to 855 facial keypoints and returns the coordinates for the face's contour, eyebrows, eyes, nose, mouth, and ears, as well as the angle of the face. Once you've integrated this capability, you can quickly create beauty apps and enable users to add fun facial effects and features to their images.
Face detection also detects whether the subject's eyes are open, whether they're wearing glasses or a hat, whether they have a beard, and even their gender and age. This is useful if you want to add a parental control function to your app which prevents children from getting too close to their phone, or staring at the screen for too long.
In addition, face detection can detect up to seven facial expressions, including smiling, neutral, angry, disgusted, frightened, sad, and surprised faces. This is great if you want to create apps such as smile-cameras.
You can integrate any of these capabilities as needed. At the same time, face detection supports image and video stream detection, cross-frame face tracking, and multi-face detection. It really is powerful! Now, let's see how to integrate this capability.
Face Detection Development
1. Preparations
You can find detailed information about the preparations you need to make on the HUAWEI Developers-Development Process. Here, we'll just look at the most important procedures.
1.1 Configure the Maven Repository Address in the Project-Level build.gradle File
Code:
buildscript {
repositories {
...
maven {url 'https://developer.huawei.com/repo/'}
}
}
dependencies {
...
classpath 'com.huawei.agconnect:agcp:1.3.1.300'
}
allprojects {
repositories {
...
maven {url 'https://developer.huawei.com/repo/'}
}
}
1.2 Add Configurations to the File Header
After integrating the SDK, add the following configuration to the file header:
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
1.3 Configure SDK Dependencies in the App-Level build.gradle File
Code:
dependencies{
// Import the base SDK.
implementation 'com.huawei.hms:ml-computer-vision-face:2.0.1.300'
// Import the contour and keypoint detection model package.
implementation 'com.huawei.hms:ml-computer-vision-face-shape-point-model:2.0.1.300'
// Import the facial expression detection model package.
implementation 'com.huawei.hms:ml-computer-vision-face-emotion-model:2.0.1.300'
// Import the facial feature detection model package.
implementation 'com.huawei.hms:ml-computer-vision-face-feature-model:2.0.1.300'
}
1.4 Add these Statements to the AndroidManifest.xml File so the Machine Learning Model can Update Automatically
Code:
<manifest
...
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value= "face"/>
...
</manifest>
1.5 Apply for Camera Permission
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" />
2. Code Development
2.1 Create a Face Analyzer by Using the Default Parameter Configurations
Code:
analyzer = MLAnalyzerFactory.getInstance().getFaceAnalyzer();
2.2 Create an MLFrame Object by Using the android.graphics.Bitmap for the Analyzer to Detect Images
Code:
MLFrame frame = MLFrame.fromBitmap(bitmap);
2.3 Call the asyncAnalyseFrame Method to Perform Face Detection
Code:
Task<List<MLFace>> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<List<MLFace>>() {
@Override
public void onSuccess(List<MLFace> faces) {
// Detection success. Obtain the face keypoints.
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
// Detection failure.
}
});
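As an illustration of what can be read from each detected face in the onSuccess callback, here is a minimal sketch. The accessor names (getFaceShapeList and getRotationAngleY) are assumptions based on the face detection SDK, so verify them against the ML Kit API reference before use.
Code:
// Log basic information about each detected face.
private void handleFaces(List<MLFace> faces) {
    for (MLFace face : faces) {
        // Number of contours returned for this face (face outline, eyebrows, eyes, nose, mouth, ears).
        int contourCount = face.getFaceShapeList().size();
        // Head rotation around the vertical axis, in degrees.
        float yaw = face.getRotationAngleY();
        Log.d(TAG, "Contours: " + contourCount + ", yaw: " + yaw);
    }
}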
2.4 Use the Progress Bar to Process the Face in the Image
Call the magnifyEye and smallFaceMesh methods to implement the eye-enlarging algorithm and face-shaping algorithm.
Code:
private SeekBar.OnSeekBarChangeListener onSeekBarChangeListener = new SeekBar.OnSeekBarChangeListener() {
    @Override
    public void onProgressChanged(SeekBar seekBar, int progress, boolean fromUser) {
        switch (seekBar.getId()) {
            case R.id.seekbareye: // When the progress bar of the eye enlarging changes, ...
                break;
            case R.id.seekbarface: // When the progress bar of the face shaping changes, ...
                break;
        }
    }
    @Override
    public void onStartTrackingTouch(SeekBar seekBar) {}
    @Override
    public void onStopTrackingTouch(SeekBar seekBar) {}
};
2.5 Release the Analyzer After the Detection is Complete
Code:
try {
if (analyzer != null) {
analyzer.stop();
}
} catch (IOException e) {
Log.e(TAG, "e=" + e.getMessage());
}
Demo
Now, let's see what it can do. Pretty cool, right?

Scene Kit Puts AR Placement Apps at Your Fingertips

AR placement apps have enhanced daily life in a myriad of different ways, from AR furniture placement for home furnishing and interior decorating, AR fitting in retail, to AR-based 3D models in education which gives students an opportunity to take an in-depth look at the internal workings of objects.
By integrating HUAWEI Scene Kit, you're only eight steps away from launching an AR placement app of your own. ARView, a set of AR-oriented, scenario-based APIs in Scene Kit, uses the plane detection capability of AR Engine, along with the graphics rendering capability of Scene Kit, to provide 3D material loading and rendering for common AR scenes.
ARView Functions
With ARView, you'll be able to:
1. Load and render 3D materials in AR scenes.
2. Set whether to display the lattice plane (consisting of white lattice points) to help select a plane in a real-world view.
3. Tap an object placed on the lattice plane to select it. Once selected, the object will turn red. You can then move, resize, or rotate it as needed.
Development Procedure
Before using ARView, you'll need to integrate the Scene Kit SDK into your Android Studio project. For details, please refer to Integrating the HMS Core SDK.
ARView inherits from GLSurfaceView, and overrides lifecycle-related methods. It can facilitate the creation of an ARView-based app in just eight steps:
1. Create an ARViewActivity that inherits from Activity. Add a Button to load materials.
Code:
public class ARViewActivity extends Activity {
private ARView mARView;
private Button mButton;
private boolean isLoadResource = false;
}
2. Add an ARView to the layout.
Code:
<com.huawei.hms.scene.sdk.ARView
android:id="@+id/ar_view"
android:layout_width="match_parent"
android:layout_height="match_parent">
</com.huawei.hms.scene.sdk.ARView>
Note: To achieve the desired experience offered by ARView, your app should not support screen orientation changes or split-screen mode; thus, add the following configuration in the AndroidManifest.xml file:
Code:
android:screenOrientation="portrait"
android:resizeableActivity="false"
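These two attributes belong on the activity element in AndroidManifest.xml. A minimal sketch, assuming the activity is named ARViewActivity:
Code:
<activity
    android:name=".ARViewActivity"
    android:screenOrientation="portrait"
    android:resizeableActivity="false" />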
3. Override the onCreate method of ARViewActivity and obtain the ARView.
Code:
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_ar_view);
mARView = findViewById(R.id.ar_view);
mButton = findViewById(R.id.button);
}
4. Add a Switch button in the onCreate method to set whether or not to display the lattice plane.
Code:
Switch mSwitch = findViewById(R.id.show_plane_view);
mSwitch.setChecked(true);
mSwitch.setOnCheckedChangeListener(new CompoundButton.OnCheckedChangeListener() {
@Override
public void onCheckedChanged(CompoundButton buttonView, boolean isChecked) {
mARView.enablePlaneDisplay(isChecked);
}
});
Note: Add the Switch button in the layout before using it.
Code:
<Switch
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:id="@+id/show_plane_view"
android:layout_alignParentTop="true"
android:layout_marginTop="15dp"
android:layout_alignParentEnd="true"
android:layout_marginEnd ="15dp"
android:layout_gravity="end"
android:text="@string/show_plane"
android:theme="@style/AppTheme"
tools:ignore="RelativeOverlap" />
5. Add a button callback method. Tapping the button once will load a material, and tapping it again will clear the material.
Code:
public void onBtnClearResourceClicked(View view) {
if (!isLoadResource) {
mARView.loadAsset("ARView/scene.gltf");
isLoadResource = true;
mButton.setText(R.string.btn_text_clear_resource);
} else {
mARView.clearResource();
mARView.loadAsset("");
isLoadResource = false;
mButton.setText(R.string.btn_text_load);
}
}
Note: The onBtnClearResourceClicked method must be registered in the button's android:onClick attribute in the layout, so that tapping the button loads or clears a material, for example as shown below.
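A minimal layout entry for that button could look as follows; the id and text string match the ones used in the code above.
Code:
<Button
    android:id="@+id/button"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="@string/btn_text_load"
    android:onClick="onBtnClearResourceClicked" />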
6. Override the onPause method of ARViewActivity and call the onPause method of ARView.
Code:
@Override
protected void onPause() {
super.onPause();
mARView.onPause();
}
7. Override the onResume method of ARViewActivity and call the onResume method of ARView.
Code:
@Override
protected void onResume() {
super.onResume();
mARView.onResume();
}
8. Override the onDestroy method for ARViewActivity and call the destroy method of ARView.
Code:
@Override
protected void onDestroy() {
super.onDestroy();
mARView.destroy();
}
9. (Optional) After the material is loaded, use setInitialPose to set its initial status (scale and rotation).
Code:
float[] scale = new float[] { 0.1f, 0.1f, 0.1f };
float[] rotation = new float[] { 0.707f, 0.0f, -0.707f, 0.0f };
mARView.setInitialPose(scale, rotation);
Effects
You can develop a basic AR placement app, simply by calling ARView from Scene Kit, as detailed in the eight steps above. If you are interested in this implementation method, you can view the Scene Kit demo on GitHub.
The ARView capability can be used to do far more than just develop AR placement apps; it can also help you implement a range of engaging functions, such as AR games, virtual exhibitions, and AR navigation features.
GitHub to download demos and sample codes

ML Kit's Scene Detection Service Brings Enhanced Effects to Images with Newfound Ease

1. Overview
Camera functions on today's phones are so powerful that they involve advanced image processing algorithms as well as the camera hardware itself. Even so, users often need to continually adjust the image parameters, a process that can be extremely frustrating even when it results in an ideal shot. For most of us, who are not professional photographers, images often end up falling woefully short of expectations. Given these universal challenges, there's incredible demand for apps that can enhance image effects automatically, by accounting for the user's surroundings. HUAWEI ML Kit's scene detection service helps bring such apps to life, by detecting 102 visual features, including beaches, sky scenes, food, night scenes, plants, and common buildings. The service can automatically adjust the parameters corresponding to the image matrix, helping build proactive, intelligent, and efficient apps. Let's take a look at how it does this.
2. Effects
For a city nightscape, the service will accurately detect the night scene, and enhance the brightness in bright areas, as well as the darkness in dark areas, rendering a highly-pleasing contrast.
For a sky image, the service will brighten the sky with an enhanced matrix.
The effects are similar for photos of flowers and plants.
The effects tend to vary, depending on the specific image. Users are free to apply their favorite effects, or mix and match them with filters.
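As an illustration of how the detection result might drive a simple enhancement, here is a minimal sketch that brightens the preview for certain scenes. It assumes MLSceneDetection exposes getResult() and getConfidence(); the scene label strings and brightness factors are placeholder values chosen for this example, not values defined by the SDK.
Code:
// Apply a simple brightness boost to an ImageView based on the top detected scene.
private void applySceneEnhancement(List<MLSceneDetection> sceneInfos, ImageView preview) {
    if (sceneInfos == null || sceneInfos.isEmpty()) {
        return;
    }
    MLSceneDetection topScene = sceneInfos.get(0);
    float scale = 1.0f;
    if (topScene.getConfidence() > 0.5f) {
        if ("Night".equalsIgnoreCase(topScene.getResult())) {
            scale = 1.2f; // Lift dark areas slightly for night scenes.
        } else if ("Sky".equalsIgnoreCase(topScene.getResult())) {
            scale = 1.1f; // Brighten sky scenes a little.
        }
    }
    ColorMatrix matrix = new ColorMatrix();
    matrix.setScale(scale, scale, scale, 1.0f);
    preview.setColorFilter(new ColorMatrixColorFilter(matrix));
}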
Now, let's take a look at how the service can be integrated.
3. Development Process
3.1 Create a scene detection analyzer instance.
Code:
MLSceneDetectionAnalyzer analyzer = MLSceneDetectionAnalyzerFactory.getInstance().getSceneDetectionAnalyzer();
3.2 Create an MLFrame object by using android.graphics.Bitmap. JPG, JPEG, PNG, and BMP images are currently supported.
Code:
MLFrame frame = new MLFrame.Creator().setBitmap(bitmap).create();
3.3 Scene detection
Code:
Task<List<MLSceneDetection>> task = this.analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<List<MLSceneDetection>>() {
@Override
public void onSuccess(List<MLSceneDetection> sceneInfos) {
// Processing logic for scene detection success.
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
// Processing logic for scene detection failure.
if (e instanceof MLException) {
MLException exception = (MLException) e;
// Obtain the result code.
int errCode = exception.getErrCode();
// Obtain the error information.
String message = exception.getMessage();
} else {
// Other errors.
}
}
});
3.4 Stop the analyzer to release detection resources when the detection ends.
Code:
if (analyzer != null) {
analyzer.stop();
}
Configure the Maven repository address.
Code:
buildscript {
repositories {
maven { url 'https://developer.huawei.com/repo/' }
}
}
allprojects {
repositories {
maven { url 'https://developer.huawei.com/repo/' }
}
}
Import the SDK.
Code:
dependencies {
// Scene detection SDK.
implementation 'com.huawei.hms:ml-computer-vision-scenedetection:2.0.3.300'
// Scene detection model.
implementation 'com.huawei.hms:ml-computer-vision-scenedetection-model:2.0.3.300'
}
Manifest files
Code:
<manifest
...
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value="1" />
...
</manifest>
Permissions
Code:
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.autofocus" />
Submit a dynamic permission application.
Code:
if (!(ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED)) {
requestCameraPermission();
}
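The requestCameraPermission() helper is not shown above; here is a minimal sketch of such a method. The request code constant is arbitrary and chosen for this example.
Code:
private static final int CAMERA_PERMISSION_REQUEST_CODE = 1;

private void requestCameraPermission() {
    // Ask the user to grant the camera permission at runtime.
    ActivityCompat.requestPermissions(this,
            new String[]{Manifest.permission.CAMERA},
            CAMERA_PERMISSION_REQUEST_CODE);
}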
4. Learn More
ML Kit's scene detection service also comes equipped with intelligent features, such as smart album management and image searching, which allows users to find and sort images in a refreshingly natural and intuitive way.
Reference
Official website of Huawei Developers
Development Guide
HMS Core official community on Reddit
Demo and sample code
Discussions on Stack Overflow
Is there any restriction using the Kit ?
Very nice

Precise and Immersive AR for Interior Design

Augmented reality (AR) technologies are increasingly widespread, notably in the field of interior design, as they allow users to visualize real spaces and apply furnishing to them, with remarkable ease. HMS Core AR Engine is a must-have for developers creating AR-based interior design apps, since it's easy to use, covers all the basics, and considerably streamlines the development process. It is an engine for AR apps that bridge the virtual and real worlds, for a brand new visually interactive user experience. AR Engine's motion tracking capability allows your app to output the real-time 3D coordinates of interior spaces, convert these coordinates between real and virtual worlds, and use this information to determine the correct position of furniture. With AR Engine integrated, your app will be able to provide users with AR-based interior design features that are easy to use.
As a key component of AR Engine, the motion tracking capability bridges real and virtual worlds, by facilitating the construction of a virtual framework, tracking how the position and pose of user devices change in relation to their surroundings, and outputting the 3D coordinates of the surroundings.
About This Feature
The motion tracking capability provides a geometric link between real and virtual worlds, by tracking the changes of the device's position and pose in relation to its surroundings, and determining the conversion of coordinate systems between the real and virtual worlds. This allows virtual furnishings to be rendered from the perspective of the device user, and overlaid on images captured by the camera.
For example, in an AR-based car exhibition, virtual cars can be placed precisely in the target position, creating a virtual space that's seamlessly in sync with the real world.
The basic condition for implementing real-virtual interaction is tracking the motion of the device in real time, and updating the status of virtual objects in real time based on the motion tracking results. This means that the precision and quality of motion tracking directly affect the AR effects available on your app. Any delay or error can cause a virtual object to jitter or drift, which undermines the sense of reality and immersion offered to users by AR.
Advantages
Simultaneous localization and mapping (SLAM) 3.0 released in AR Engine 3.0 enhances the motion tracking performance in the following ways:
With the 6DoF motion tracking mode, users are able to observe virtual objects in an immersive manner from different distances, directions, and angles.
Stability of virtual objects is ensured, thanks to monocular absolute trajectory error (ATE) as low as 1.6 cm.
The plane detection takes no longer than one second, facilitating plane recognition and expansion.
Integration Procedure
Logging In to HUAWEI Developers and Creating an App
This step is quite self-explanatory, so it is not covered in detail here.
Integrating the AR Engine SDK
1. Open the project-level build.gradle file in Android Studio, and add the Maven repository (Gradle versions earlier than 7.0 are used as an example).
Go to buildscript > repositories and configure the Maven repository address for the SDK.
Go to allprojects > repositories and configure the Maven repository address for the SDK.
Code:
buildscript {
repositories {
google()
jcenter()
// Configure the Maven repository address for the HMS Core SDK.
maven {url "https://developer.huawei.com/repo/" }
}
}
allprojects {
repositories {
google()
jcenter()
// Configure the Maven repository address for the HMS Core SDK.
maven {url "https://developer.huawei.com/repo/" }
}
}
2. Open the app-level build.gradle file in your project and add the AR Engine SDK dependency.
Code:
dependencies {
implementation 'com.huawei.hms:arenginesdk:3.1.0.1'
}
Code Development
1. Check whether AR Engine has been installed on the current device. If yes, your app can run properly. If not, your app should automatically redirect the user to AppGallery to install AR Engine.
Code:
private boolean arEngineAbilityCheck() {
boolean isInstallArEngineApk = AREnginesApk.isAREngineApkReady(this);
if (!isInstallArEngineApk && isRemindInstall) {
Toast.makeText(this, "Please agree to install.", Toast.LENGTH_LONG).show();
finish();
}
LogUtil.debug(TAG, "Is Install AR Engine Apk: " + isInstallArEngineApk);
if (!isInstallArEngineApk) {
startActivity(new Intent(this, com.huawei.arengine.demos.common.ConnectAppMarketActivity.class));
isRemindInstall = true;
}
return AREnginesApk.isAREngineApkReady(this);
}
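As a usage sketch, this check can be run before the AR session is created or resumed, for example in onResume; the session-handling code itself is omitted here.
Code:
@Override
protected void onResume() {
    super.onResume();
    if (!arEngineAbilityCheck()) {
        // AR Engine is not installed yet; wait until the user installs it from AppGallery.
        return;
    }
    // Create or resume the ARSession here.
}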
2. Check permissions before running.
Configure the camera permission in the AndroidManifest.xml file.
Code:
<uses-permission android:name="android.permission.CAMERA" />
private static final int REQUEST_CODE_ASK_PERMISSIONS = 1;
private static final int MAX_ARRAYS = 10;
private static final String[] PERMISSIONS_ARRAYS = new String[]{Manifest.permission.CAMERA};
List<String> permissionsList = new ArrayList<>(MAX_ARRAYS);
boolean isHasPermission = true;
for (String permission : PERMISSIONS_ARRAYS) {
if (ContextCompat.checkSelfPermission(activity, permission) != PackageManager.PERMISSION_GRANTED) {
isHasPermission = false;
break;
}
}
if (!isHasPermission) {
for (String permission : PERMISSIONS_ARRAYS) {
if (ContextCompat.checkSelfPermission(activity, permission) != PackageManager.PERMISSION_GRANTED) {
permissionsList.add(permission);
}
}
ActivityCompat.requestPermissions(activity,
permissionsList.toArray(new String[permissionsList.size()]), REQUEST_CODE_ASK_PERMISSIONS);
}
3. Create an ARSession object for motion tracking and configure it with ARWorldTrackingConfig.
Code:
private ARSession mArSession;
private ARWorldTrackingConfig mConfig;
// After mArSession and mConfig have been created (see the AR Engine sample), set scene parameters by calling mConfig.setXXX.
mConfig.setCameraLensFacing(ARConfigBase.CameraLensFacing.FRONT);
mConfig.setPowerMode(ARConfigBase.PowerMode.ULTRA_POWER_SAVING);
mArSession.configure(mConfig);
mArSession.resume();
mSession.setCameraTextureName(mTextureDisplay.getExternalTextureId());
ARFrame arFrame = mSession.update(); // Obtain a frame of data from ARSession.
// Set the environment texture probe and mode after the camera is initialized.
setEnvTextureData();
ARCamera arCamera = arFrame.getCamera(); // Obtain ARCamera from ARFrame. ARCamera can then be used for obtaining the camera's projection matrix to render the window.
// The size of the projection matrix is 4 x 4.
float[] projectionMatrix = new float[16];
arCamera.getProjectionMatrix(projectionMatrix, PROJ_MATRIX_OFFSET, PROJ_MATRIX_NEAR, PROJ_MATRIX_FAR);
mTextureDisplay.onDrawFrame(arFrame);
StringBuilder sb = new StringBuilder();
updateMessageData(arFrame, sb);
mTextDisplay.onDrawFrame(sb);
// The size of ViewMatrix is 4 x 4.
float[] viewMatrix = new float[16];
arCamera.getViewMatrix(viewMatrix, 0);
for (ARPlane plane : mSession.getAllTrackables(ARPlane.class)) { // Obtain all trackable planes from ARSession.
if (plane.getType() != ARPlane.PlaneType.UNKNOWN_FACING
&& plane.getTrackingState() == ARTrackable.TrackingState.TRACKING) {
hideLoadingMessage();
break;
}
}
drawTarget(mSession.getAllTrackables(ARTarget.class), arCamera, viewMatrix, projectionMatrix);
mLabelDisplay.onDrawFrame(mSession.getAllTrackables(ARPlane.class), arCamera.getDisplayOrientedPose(),
projectionMatrix);
handleGestureEvent(arFrame, arCamera, projectionMatrix, viewMatrix);
ARLightEstimate lightEstimate = arFrame.getLightEstimate();
ARPointCloud arPointCloud = arFrame.acquirePointCloud();
getEnvironmentTexture(lightEstimate);
drawAllObjects(projectionMatrix, viewMatrix, getPixelIntensity(lightEstimate));
mPointCloud.onDrawFrame(arPointCloud, viewMatrix, projectionMatrix);
ARHitResult hitResult = hitTest4Result(arFrame, arCamera, event.getEventSecond());
if (hitResult != null) {
mSelectedObj.setAnchor(hitResult.createAnchor()); // Create an anchor at the hit position to enable AR Engine to continuously track the position.
}
4. Draw the required virtual object based on the anchor position.
Code:
mEnvTextureBtn.setOnCheckedChangeListener((compoundButton, b) -> {
mEnvTextureBtn.setEnabled(false);
handler.sendEmptyMessageDelayed(MSG_ENV_TEXTURE_BUTTON_CLICK_ENABLE,
BUTTON_REPEAT_CLICK_INTERVAL_TIME);
mEnvTextureModeOpen = !mEnvTextureModeOpen;
if (mEnvTextureModeOpen) {
mEnvTextureLayout.setVisibility(View.VISIBLE);
} else {
mEnvTextureLayout.setVisibility(View.GONE);
}
int lightingMode = refreshLightMode(mEnvTextureModeOpen, ARConfigBase.LIGHT_MODE_ENVIRONMENT_TEXTURE);
refreshConfig(lightingMode);
});
Reference
>> About AR Engine
>> AR Engine Development Guide
>> Open-source repository at GitHub and Gitee
>> HUAWEI Developers
>> Development Documentation
