For more information like this, you can visit the HUAWEI Developer Forum.
Original link: https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0201332917232620018&fid=0101187876626530001
CameraX
CameraX is a Jetpack support library, built to make camera app development easier. It provides a consistent and easy-to-use API surface that works across most Android devices, with backward compatibility to Android 5.0 (API level 21).
While it leverages the capabilities of camera2, it uses a simpler, use-case-based approach that is lifecycle-aware. It also resolves device compatibility issues for you so that you don’t have to include device-specific code in your codebase. These features reduce the amount of code you need to write when adding camera capabilities to your app.
Use Cases
CameraX introduces use cases, which allow you to focus on the task you need to get done instead of spending time managing device-specific nuances. There are several basic use cases:
Preview: get an image on the display.
Image analysis: access a buffer seamlessly for use in your algorithms, such as passing it into ML Kit.
Image capture: save high-quality images.
CameraX has an optional add-on, called Extensions, which allows you to access the same features and capabilities as those in the native camera app that ships with the device, with just two lines of code.
The first set of capabilities includes Portrait, HDR, Night, and Beauty. These capabilities are available on supported devices only.
Implementing Preview
When adding a preview to your app, use PreviewView, which is a View that can be cropped, scaled, and rotated for proper display.
The image preview streams to a surface inside the PreviewView when the camera becomes active.
Implementing a preview for CameraX using PreviewView involves the following steps, which are covered in later sections:
Optionally configure a CameraXConfig.Provider.
Add a PreviewView to your layout.
Request a CameraProvider.
On View creation, check for the CameraProvider.
Select a camera and bind the lifecycle and use cases.
Using PreviewView has some limitations. When using PreviewView, you can’t do any of the following things:
Create a SurfaceTexture to set on TextureView and PreviewSurfaceProvider.
Retrieve the SurfaceTexture from TextureView and set it on PreviewSurfaceProvider.
Get the Surface from SurfaceView and set it on PreviewSurfaceProvider.
If any of these happen, then the Preview will stop streaming frames to the PreviewView.
In your app-level build.gradle file, add the following:
Code:
// CameraX core library using the camera2 implementation
def camerax_version = "1.0.0-beta03"
def camerax_extensions = "1.0.0-alpha10"
implementation "androidx.camera:camera-core:${camerax_version}"
implementation "androidx.camera:camera-camera2:${camerax_version}"
// If you want to additionally use the CameraX Lifecycle library
implementation "androidx.camera:camera-lifecycle:${camerax_version}"
// If you want to additionally use the CameraX View class
implementation "androidx.camera:camera-view:${camerax_extensions}"
// If you want to additionally use the CameraX Extensions library
implementation "androidx.camera:camera-extensions:${camerax_extensions}"
In your layout .xml file, using the PreviewView is highly recommended:
Code:
<androidx.camera.view.PreviewView
    android:id="@+id/camera"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:contentDescription="@string/preview_area"
    android:importantForAccessibility="no"/>
Let's start the backend coding for our PreviewView in an Activity or a Fragment:
Code:
private val REQUIRED_PERMISSIONS = arrayOf(Manifest.permission.CAMERA)
private lateinit var cameraSelector: CameraSelector
private lateinit var previewView: PreviewView
private lateinit var cameraProviderFeature: ListenableFuture<ProcessCameraProvider>
private lateinit var cameraControl: CameraControl
private lateinit var cameraInfo: CameraInfo
private lateinit var imageCapture: ImageCapture
private lateinit var imageAnalysis: ImageAnalysis
private lateinit var torchView: ImageView
private val executor = Executors.newSingleThreadExecutor()
takePicture() method:
Code:
fun takePicture() {
    val file = createFile(
        outputDirectory,
        FILENAME,
        PHOTO_EXTENSION
    )
    val outputFileOptions = ImageCapture.OutputFileOptions.Builder(file).build()
    imageCapture.takePicture(
        outputFileOptions,
        executor,
        object : ImageCapture.OnImageSavedCallback {
            override fun onImageSaved(outputFileResults: ImageCapture.OutputFileResults) {
                val msg = "Photo capture succeeded: ${file.absolutePath}"
                previewView.post {
                    Toast.makeText(
                        context.applicationContext,
                        msg,
                        Toast.LENGTH_SHORT
                    ).show()
                    // You can create a task to save your image to any database you like
                    getImageTask(file)
                }
            }

            override fun onError(exception: ImageCaptureException) {
                val msg = "Photo capture failed: ${exception.message}"
                showLogError(mTAG, msg)
            }
        })
}
As mentioned, you can get a Uri from the file and use it anywhere you like:
Code:
fun getImageTask(file: File) {
val uri = Uri.fromFile(file)
}
This part is an example of starting the front camera; with minor changes you can switch between the front and back cameras:
Code:
fun startCameraFront() {
    showLogDebug(mTAG, "startCameraFront")
    CameraX.unbindAll()
    torchView.visibility = View.INVISIBLE
    imagePreviewView = Preview.Builder().apply {
        setTargetAspectRatio(AspectRatio.RATIO_4_3)
        setTargetRotation(previewView.display.rotation)
        setDefaultResolution(Size(1920, 1080))
        setMaxResolution(Size(3024, 4032))
    }.build()
    imageAnalysis = ImageAnalysis.Builder().apply {
        setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    }.build()
    imageAnalysis.setAnalyzer(executor, LuminosityAnalyzer())
    imageCapture = ImageCapture.Builder().apply {
        setCaptureMode(ImageCapture.CAPTURE_MODE_MAXIMIZE_QUALITY)
    }.build()
    cameraSelector =
        CameraSelector.Builder().requireLensFacing(CameraSelector.LENS_FACING_FRONT).build()
    cameraProviderFeature.addListener(Runnable {
        val cameraProvider = cameraProviderFeature.get()
        val camera = cameraProvider.bindToLifecycle(
            this,
            cameraSelector,
            imagePreviewView,
            imageAnalysis,
            imageCapture
        )
        previewView.preferredImplementationMode =
            PreviewView.ImplementationMode.TEXTURE_VIEW
        imagePreviewView.setSurfaceProvider(previewView.createSurfaceProvider(camera.cameraInfo))
    }, ContextCompat.getMainExecutor(context.applicationContext))
}
The LuminosityAnalyzer below computes the average frame luminosity once per second; it is a simple example of the image analysis use case and a good starting point for your own analyzers:
Code:
private class LuminosityAnalyzer : ImageAnalysis.Analyzer {
    private var lastAnalyzedTimestamp = 0L

    /**
     * Helper extension function used to extract a byte array from an
     * image plane buffer
     */
    private fun ByteBuffer.toByteArray(): ByteArray {
        rewind() // Rewind the buffer to zero
        val data = ByteArray(remaining())
        get(data) // Copy the buffer into a byte array
        return data // Return the byte array
    }

    override fun analyze(image: ImageProxy) {
        val currentTimestamp = System.currentTimeMillis()
        // Calculate the average luma no more often than every second
        if (currentTimestamp - lastAnalyzedTimestamp >=
            TimeUnit.SECONDS.toMillis(1)
        ) {
            val buffer = image.planes[0].buffer
            val data = buffer.toByteArray()
            val pixels = data.map { it.toInt() and 0xFF }
            val luma = pixels.average()
            showLogDebug(mTAG, "Average luminosity: $luma")
            lastAnalyzedTimestamp = currentTimestamp
        }
        image.close()
    }
}
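The analyzer's core arithmetic can be checked outside Android. Below is a framework-free sketch (the class name LumaMath is mine, not part of CameraX) of the unsigned-byte averaging the analyzer performs on the Y plane:

```java
public class LumaMath {
    // Average luminance of a Y (luma) plane, treating each byte as unsigned,
    // exactly as `it.toInt() and 0xFF` does in the Kotlin analyzer.
    public static double averageLuma(byte[] yPlane) {
        if (yPlane.length == 0) {
            return 0.0;
        }
        long sum = 0;
        for (byte b : yPlane) {
            sum += b & 0xFF; // unsigned conversion
        }
        return (double) sum / yPlane.length;
    }
}
```

Because NV21 stores the Y plane first, averaging `planes[0]` gives a cheap brightness estimate without decoding the full frame.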
Now, before saving our image to our folder, let's define our constants:
Code:
companion object {
    private const val REQUEST_CODE_PERMISSIONS = 10
    private const val mTAG = "ExampleTag"
    private const val FILENAME = "yyyy-MM-dd-HH-mm-ss-SSS"
    private const val PHOTO_EXTENSION = ".jpg"
    private var recPath = Environment.getExternalStorageDirectory().path + "/Pictures/YourNewFolderName"

    fun getOutputDirectory(context: Context): File {
        val appContext = context.applicationContext
        val mediaDir = context.externalMediaDirs.firstOrNull()?.let {
            File(recPath).apply { mkdirs() }
        }
        return if (mediaDir != null && mediaDir.exists()) mediaDir else appContext.filesDir
    }

    fun createFile(baseFolder: File, format: String, extension: String) =
        File(
            baseFolder, SimpleDateFormat(format, Locale.ROOT)
                .format(System.currentTimeMillis()) + extension
        )
}
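The naming logic inside createFile() can be sketched as a plain function (FileNaming and photoFileName are hypothetical names for illustration): it formats a timestamp with the FILENAME pattern and appends PHOTO_EXTENSION.

```java
import java.text.SimpleDateFormat;
import java.util.Locale;

public class FileNaming {
    // Mirrors createFile(): timestamp pattern + extension -> unique file name.
    public static String photoFileName(String pattern, String extension, long epochMillis) {
        return new SimpleDateFormat(pattern, Locale.ROOT).format(epochMillis) + extension;
    }
}
```

The millisecond component ("SSS") in the pattern is what keeps rapid successive captures from overwriting each other.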
Simple torch control:
Code:
fun toggleTorch() {
    when (cameraInfo.torchState.value) {
        TorchState.ON -> {
            cameraControl.enableTorch(false)
        }
        else -> {
            cameraControl.enableTorch(true)
        }
    }
}

private fun setTorchStateObserver() {
    cameraInfo.torchState.observe(this, androidx.lifecycle.Observer { state ->
        if (state == TorchState.ON) {
            torchView.setImageResource(R.drawable.ic_flash_on)
        } else {
            torchView.setImageResource(R.drawable.ic_flash_off)
        }
    })
}
Remember, torchView can be any View type you want it to be:
Code:
torchView.setOnClickListener {
toggleTorch()
setTorchStateObserver()
}
Now, in onCreateView() for Fragments or in onCreate() for Activities, you can check the camera permission and start the previewView:
Code:
if (REQUIRED_PERMISSIONS.all {
        ContextCompat.checkSelfPermission(context, it) == PackageManager.PERMISSION_GRANTED
    }) {
    previewView.post { startCameraFront() }
} else {
    requestPermissions(
        REQUIRED_PERMISSIONS,
        REQUEST_CODE_PERMISSIONS
    )
}
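When the permission request comes back, onRequestPermissionsResult() delivers an int array that must be checked before starting the camera. A minimal, framework-free sketch of that check (PermissionCheck is a hypothetical helper; it inlines PackageManager.PERMISSION_GRANTED, which is 0 on Android):

```java
public class PermissionCheck {
    // PackageManager.PERMISSION_GRANTED is 0 on Android; inlined here so the
    // sketch stays framework-free.
    public static final int PERMISSION_GRANTED = 0;

    public static boolean allGranted(int[] grantResults) {
        if (grantResults.length == 0) {
            return false; // an empty array means the request was cancelled
        }
        for (int r : grantResults) {
            if (r != PERMISSION_GRANTED) {
                return false;
            }
        }
        return true;
    }
}
```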
Camera Kit
HUAWEI Camera Kit encapsulates the Google Camera2 API to support multiple enhanced camera capabilities.
Unlike other camera APIs, Camera Kit focuses on bringing the full capacity of your phone's camera to your apps. Think of it this way: many social media apps have their own camera features, yet the output of their cameras is often worse than the quality your phone's native camera actually provides. For example, your camera may support 50x zoom, a super night mode, or a wide aperture mode, but the full extent of your phone's camera typically goes unused, no matter the price or feature set of the phone, when you take a shot through a third-party camera API.
HUAWEI Camera Kit provides a set of advanced programming APIs for you to integrate powerful image processing capabilities of Huawei phone cameras into your apps. Camera features such as wide aperture, Portrait mode, HDR, background blur, and Super Night mode can help your users shoot stunning images and vivid videos anytime and anywhere.
Features
Unlike the rest of the open-source camera APIs, Camera Kit accesses the device's original camera features and is able to unleash them in your apps.
Front Camera HDR: In a backlit or low-light environment, front camera High Dynamic Range (HDR) improves the details in both the well-lit and poorly-lit areas of photos to present more life-like qualities.
Super Night Mode: This mode lets you take photos with sufficient brightness by using a long exposure at night. It also helps you take properly exposed photos in other dark environments.
Wide Aperture: This mode blurs the background and highlights the subject in a photo. You are advised to be within 2 meters of the subject when taking a photo and to disable the flash in this mode.
Recording: This mode helps you record HD videos with effects such as different colors, filters, and AI film. Effects: Video HDR, Video background blurring
Portrait: Portraits and close-ups
Photo Mode: This mode supports general capabilities that include, but are not limited to: Rear camera: flash, color modes, face/smile detection, filter, and Master AI. Front camera: face/smile detection, filter, Sensor HDR, and mirror reflection.
Super Slow-Mo Recording: This mode allows you to record super slow-motion videos with a frame rate of over 960 FPS in manual or automatic (motion detection) mode.
Slow-Mo Recording: This mode allows you to record slow-motion videos with a frame rate lower than 960 FPS.
Pro Mode (Video): The Pro mode is designed to open the professional photography and recording capabilities of the Huawei camera to apps to meet diversified shooting requirements.
Pro Mode (Photo): This mode allows you to adjust the following camera parameters to obtain the same shooting capabilities as those of Huawei camera: Metering mode, ISO, exposure compensation, exposure duration, focus mode, and automatic white balance.
Integration Process
Registration and Sign-in
Before you get started, you must register as a HUAWEI developer and complete identity verification on the HUAWEI Developer website. For details, please refer to Register a HUAWEI ID.
Signing the HUAWEI Developer SDK Service Cooperation Agreement
When you download the SDK from SDK Download, the system prompts you to sign in and sign the HUAWEI Media Service Usage Agreement…
Environment Preparations
Android Studio v3.0.1 or later is recommended.
Huawei phones equipped with Kirin 980 or later and running EMUI 10.0 or later are required.
Code Part (Portrait Mode)
Now let us do an example for Portrait Mode. In our manifest, let's set up some permissions:
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
A view for the camera isn't provided by Camera Kit, so we have to write our own view first:
Code:
public class OurTextureView extends TextureView {
    private int mRatioWidth = 0;
    private int mRatioHeight = 0;

    public OurTextureView(Context context) {
        this(context, null);
    }

    public OurTextureView(Context context, AttributeSet attrs) {
        this(context, attrs, 0);
    }

    public OurTextureView(Context context, AttributeSet attrs, int defStyle) {
        super(context, attrs, defStyle);
    }

    public void setAspectRatio(int width, int height) {
        if ((width < 0) || (height < 0)) {
            throw new IllegalArgumentException("Size cannot be negative.");
        }
        mRatioWidth = width;
        mRatioHeight = height;
        requestLayout();
    }

    @Override
    protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
        super.onMeasure(widthMeasureSpec, heightMeasureSpec);
        int width = MeasureSpec.getSize(widthMeasureSpec);
        int height = MeasureSpec.getSize(heightMeasureSpec);
        if ((0 == mRatioWidth) || (0 == mRatioHeight)) {
            setMeasuredDimension(width, height);
        } else {
            if (width < height * mRatioWidth / mRatioHeight) {
                setMeasuredDimension(width, width * mRatioHeight / mRatioWidth);
            } else {
                setMeasuredDimension(height * mRatioWidth / mRatioHeight, height);
            }
        }
    }
}
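The onMeasure() arithmetic above is pure integer math, so it can be extracted and verified on its own: the view is shrunk along whichever axis would overflow the requested aspect ratio. A sketch of just that math (AspectRatioMath is a hypothetical helper, not part of Camera Kit):

```java
public class AspectRatioMath {
    // Returns {measuredWidth, measuredHeight} following the onMeasure() logic:
    // if no ratio is set, pass the size through; otherwise fit the ratio
    // inside the available width x height using integer arithmetic.
    public static int[] measure(int width, int height, int ratioWidth, int ratioHeight) {
        if (ratioWidth == 0 || ratioHeight == 0) {
            return new int[]{width, height};
        }
        if (width < height * ratioWidth / ratioHeight) {
            return new int[]{width, width * ratioHeight / ratioWidth};
        }
        return new int[]{height * ratioWidth / ratioHeight, height};
    }
}
```

For example, a 3:4 ratio inside a 1080x1920 portrait view keeps the full width and measures 1080x1440.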
.xml part:
Code:
<com.huawei.camerakit.portrait.OurTextureView
android:id="@+id/texture"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentStart="true"
android:layout_alignParentTop="true" />
Let's look at our variables:
Code:
private Mode mMode;
private @Mode.Type int mCurrentModeType = Mode.Type.PORTRAIT_MODE;
private CameraKit mCameraKit;
Our permissions:
Code:
@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
        @NonNull int[] grantResults) {
    Log.d(TAG, "onRequestPermissionsResult: ");
    if (!PermissionHelper.hasPermission(this)) {
        Toast.makeText(this, "This application needs camera permission.", Toast.LENGTH_LONG).show();
        finish();
    }
}
First, in our code let us check if the Camera Kit is supported by our device:
Code:
private boolean initCameraKit() {
    mCameraKit = CameraKit.getInstance(getApplicationContext());
    if (mCameraKit == null) {
        Log.e(TAG, "initCameraKit: this device does not support Camera Kit, or Camera Kit is not installed!");
        return false;
    }
    return true;
}
The captureImage() method to capture an image:
Code:
private void captureImage() {
    Log.i(TAG, "captureImage begin");
    if (mMode != null) {
        mMode.setImageRotation(90);
        // Default jpeg file path
        mFile = new File(getExternalFilesDir(null), System.currentTimeMillis() + "pic.jpg");
        // Take picture
        mMode.takePicture();
    }
    Log.i(TAG, "captureImage end");
}
Callback method for our actionState:
Code:
private final ActionStateCallback actionStateCallback = new ActionStateCallback() {
@Override
public void onPreview(Mode mode, int state, PreviewResult result) {
}
@Override
public void onTakePicture(Mode mode, int state, TakePictureResult result) {
switch (state) {
case TakePictureResult.State.CAPTURE_STARTED:
Log.d(TAG, "onState: STATE_CAPTURE_STARTED");
break;
case TakePictureResult.State.CAPTURE_COMPLETED:
Log.d(TAG, "onState: STATE_CAPTURE_COMPLETED");
showToast("take picture success! file=" + mFile);
break;
default:
break;
}
}
};
Now let us compare CameraX with Camera Kit.
CameraX
Limited to already built-in functions
No video capture
ML support is limited to image analysis (such as the luminosity example)
Easy to use, lightweight, easy to implement
Any device running API level 21 or higher can use it
Has averagely acceptable outputs
Gives you the mirrored image
Implementation requires only app-level build.gradle integration
Has limited image adjusting while capturing
https://developer.android.com/training/camerax
Camera Kit
Lets you use the full capacity of the phone's original camera
Video capture exists, with multiple modes
ML exists on both rear and front cameras (face/smile detection, filter, and Master AI)
Harder to implement; implementation takes time
Requires a flagship Huawei device to operate
Has incredible-quality outputs
The mirrored image can be adjusted easily
SDK must be downloaded and handled by the developer
References:
https://developer.huawei.com/consumer/en/CameraKit
For more information like this, you can visit the HUAWEI Developer Forum.
Original link: https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0201333611965550036&fid=0101187876626530001
Introduction
Nowadays, you’ll see cute and funny face stickers everywhere. They’re not only used in camera apps, but also in social media and entertainment apps. In this post, I’m going to show you how to create a 2D sticker using HUAWEI ML Kit. We’ll share the development process for 3D stickers soon, so keep an eye out!
Scenarios
Apps that are used to take and edit photos, such as beauty cameras and social media apps (TikTok, Weibo, and WeChat, etc.), often offer a range of stickers which can be used to customize images. With these stickers, users can create content which is more engaging and shareable.
Preparations
Add the Huawei Maven Repository to the Project-Level build.gradle File
Open the build.gradle file in the root directory of your Android Studio project.
Add the Maven repository address.
Code:
buildscript {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}
allprojects {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}
Add SDK Dependencies to the App-Level build.gradle File
Code:
// Face detection SDK.
implementation 'com.huawei.hms:ml-computer-vision-face:2.0.1.300'
// Face detection model.
implementation 'com.huawei.hms:ml-computer-vision-face-shape-point-model:2.0.1.300'
Apply for Camera, Network Access, and Storage Permissions in the AndroidManifest.xml File
Code:
<!--Camera permission-->
<uses-feature android:name="android.hardware.camera" />
<uses-permission android:name="android.permission.CAMERA" />
<!--Write permission-->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<!--Read permission-->
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
Code Development
Set the Face Analyzer
Code:
MLFaceAnalyzerSetting detectorOptions;
detectorOptions = new MLFaceAnalyzerSetting.Factory()
.setFeatureType(MLFaceAnalyzerSetting.TYPE_UNSUPPORT_FEATURES)
.setShapeType(MLFaceAnalyzerSetting.TYPE_SHAPES)
.allowTracing(MLFaceAnalyzerSetting.MODE_TRACING_FAST)
.create();
detector = MLAnalyzerFactory.getInstance().getFaceAnalyzer(detectorOptions);
Obtain Face Contour Points and Pass Them to the FacePointEngine
Use the camera callback to obtain camera frame data, then call the face analyzer to obtain face contour points, and pass the points to the FacePointEngine. This will allow the sticker filter to use them later.
Code:
@Override
public void onPreviewFrame(final byte[] imgData, final Camera camera) {
    int width = mPreviewWidth;
    int height = mPreviewHeight;
    long startTime = System.currentTimeMillis();
    // Set the shooting directions of the front and rear cameras to be the same.
    if (isFrontCamera()) {
        mOrientation = 0;
    } else {
        mOrientation = 2;
    }
    MLFrame.Property property =
            new MLFrame.Property.Creator()
                    .setFormatType(ImageFormat.NV21)
                    .setWidth(width)
                    .setHeight(height)
                    .setQuadrant(mOrientation)
                    .create();
    ByteBuffer data = ByteBuffer.wrap(imgData);
    // Call the face analyzer API.
    SparseArray<MLFace> faces =
            detector.analyseFrame(MLFrame.fromByteBuffer(data, property));
    // Determine whether face information is obtained.
    if (faces.size() > 0) {
        MLFace mLFace = faces.get(0);
        EGLFace EGLFace = FacePointEngine.getInstance().getOneFace(0);
        EGLFace.pitch = mLFace.getRotationAngleX();
        EGLFace.yaw = mLFace.getRotationAngleY();
        EGLFace.roll = mLFace.getRotationAngleZ() - 90;
        if (isFrontCamera()) {
            EGLFace.roll = -EGLFace.roll;
        }
        if (EGLFace.vertexPoints == null) {
            EGLFace.vertexPoints = new PointF[131];
        }
        int index = 0;
        // Obtain the coordinates of the user's face contour points and convert them
        // to floating-point numbers in the normalized coordinate system of OpenGL.
        for (MLFaceShape contour : mLFace.getFaceShapeList()) {
            if (contour == null) {
                continue;
            }
            List<MLPosition> points = contour.getPoints();
            for (int i = 0; i < points.size(); i++) {
                MLPosition point = points.get(i);
                float x = (point.getY() / height) * 2 - 1;
                float y = (point.getX() / width) * 2 - 1;
                if (isFrontCamera()) {
                    x = -x;
                }
                PointF Point = new PointF(x, y);
                EGLFace.vertexPoints[index] = Point;
                index++;
            }
        }
        // Insert a face object.
        FacePointEngine.getInstance().putOneFace(0, EGLFace);
        // Set the number of faces.
        FacePointEngine.getInstance().setFaceSize(faces != null ? faces.size() : 0);
    } else {
        FacePointEngine.getInstance().clearAll();
    }
    long endTime = System.currentTimeMillis();
    Log.d("TAG", "Face detect time: " + String.valueOf(endTime - startTime));
}
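The pixel-to-OpenGL conversion inside the loop above is worth isolating: note the axis swap (the camera frame is rotated relative to the view, so the point's Y feeds the normalized X) and the mirroring applied for the front camera. A hypothetical helper sketching just that math:

```java
public class FacePointMath {
    // Converts a pixel-space contour point (px, py) in a width x height frame
    // to OpenGL normalized device coordinates in [-1, 1], reproducing the
    // axis swap and optional front-camera mirroring from onPreviewFrame().
    public static float[] toNormalized(float px, float py, int width, int height,
                                       boolean frontCamera) {
        float x = (py / height) * 2f - 1f; // axis swap: point Y -> normalized X
        float y = (px / width) * 2f - 1f;  // axis swap: point X -> normalized Y
        if (frontCamera) {
            x = -x; // mirror horizontally for the front camera
        }
        return new float[]{x, y};
    }
}
```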
You can see the face contour points which the ML Kit API returns in the image below.
Sticker JSON Data Definition
Code:
public class FaceStickerJson {
public int[] centerIndexList; // Center coordinate index list. If the list contains multiple indexes, these indexes are used to calculate the central point.
public float offsetX; // X-axis offset relative to the center coordinate of the sticker, in pixels.
public float offsetY; // Y-axis offset relative to the center coordinate of the sticker, in pixels.
public float baseScale; // Base scale factor of the sticker.
public int startIndex; // Face start index, which is used for computing the face width.
public int endIndex; // Face end index, which is used for computing the face width.
public int width; // Sticker width.
public int height; // Sticker height.
public int frames; // Number of sticker frames.
public int action; // Action. 0 indicates default display. This is used for processing the sticker action.
public String stickerName; // Sticker name, which is used for marking the folder or PNG file path.
public int duration; // Sticker frame display interval.
public boolean stickerLooping; // Indicates whether to perform rendering in loops for the sticker.
public int maxCount; // Maximum number of rendering times.
...
}
Make a Cat Sticker
Create a JSON file of the cat sticker, and find the center point between the eyebrows (84) and the point on the tip of the nose (85) through the face index. Paste the cat’s ears and nose, and then place the JSON file and the image in the assets directory.
Code:
{
    "stickerList": [{
        "type": "sticker",
        "centerIndexList": [84],
        "offsetX": 0.0,
        "offsetY": 0.0,
        "baseScale": 1.3024,
        "startIndex": 11,
        "endIndex": 28,
        "width": 495,
        "height": 120,
        "frames": 2,
        "action": 0,
        "stickerName": "nose",
        "duration": 100,
        "stickerLooping": 1,
        "maxcount": 5
    }, {
        "type": "sticker",
        "centerIndexList": [83],
        "offsetX": 0.0,
        "offsetY": -1.1834,
        "baseScale": 1.3453,
        "startIndex": 11,
        "endIndex": 28,
        "width": 454,
        "height": 150,
        "frames": 2,
        "action": 0,
        "stickerName": "ear",
        "duration": 100,
        "stickerLooping": 1,
        "maxcount": 5
    }]
}
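As the JSON field comments state, when centerIndexList holds several indexes, they are used to compute the sticker's central anchor point; averaging the indexed contour points is the natural reading of that rule. A sketch under that assumption (StickerAnchor is a hypothetical name, not part of the demo):

```java
public class StickerAnchor {
    // Mean of the contour points selected by centerIndexList; with a single
    // index (as in the cat sticker above) this is just that point.
    // points[i] = {x, y} for contour point i.
    public static float[] centerPoint(float[][] points, int[] centerIndexList) {
        float sx = 0f, sy = 0f;
        for (int i : centerIndexList) {
            sx += points[i][0];
            sy += points[i][1];
        }
        int n = centerIndexList.length;
        return new float[]{sx / n, sy / n};
    }
}
```

The offsetX/offsetY values from the JSON would then be applied relative to this anchor.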
Render the Sticker to a Texture
We use the GLSurfaceView to render the sticker to a texture, which is easier than using the TextureView. Instantiate the sticker filter in onSurfaceCreated, pass the sticker path, and start the camera.
Code:
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    GLES30.glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    mTextures = new int[1];
    mTextures[0] = OpenGLUtils.createOESTexture();
    mSurfaceTexture = new SurfaceTexture(mTextures[0]);
    mSurfaceTexture.setOnFrameAvailableListener(this);
    // Pass the samplerExternalOES into the texture.
    cameraFilter = new CameraFilter(this.context);
    // Set the face sticker path in the assets directory.
    String folderPath = "cat";
    stickerFilter = new FaceStickerFilter(this.context, folderPath);
    // Create a screen filter object.
    screenFilter = new BaseFilter(this.context);
    facePointsFilter = new FacePointsFilter(this.context);
    mEGLCamera.openCamera();
}
Initialize the Sticker Filter in onSurfaceChanged
Code:
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
    Log.d(TAG, "onSurfaceChanged. width: " + width + ", height: " + height);
    int previewWidth = mEGLCamera.getPreviewWidth();
    int previewHeight = mEGLCamera.getPreviewHeight();
    if (width > height) {
        setAspectRatio(previewWidth, previewHeight);
    } else {
        setAspectRatio(previewHeight, previewWidth);
    }
    // Set the image size, create a FrameBuffer, and set the display size.
    cameraFilter.onInputSizeChanged(previewWidth, previewHeight);
    cameraFilter.initFrameBuffer(previewWidth, previewHeight);
    cameraFilter.onDisplaySizeChanged(width, height);
    stickerFilter.onInputSizeChanged(previewHeight, previewWidth);
    stickerFilter.initFrameBuffer(previewHeight, previewWidth);
    stickerFilter.onDisplaySizeChanged(width, height);
    screenFilter.onInputSizeChanged(previewWidth, previewHeight);
    screenFilter.initFrameBuffer(previewWidth, previewHeight);
    screenFilter.onDisplaySizeChanged(width, height);
    facePointsFilter.onInputSizeChanged(previewHeight, previewWidth);
    facePointsFilter.onDisplaySizeChanged(width, height);
    mEGLCamera.startPreview(mSurfaceTexture);
}
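The orientation handling at the top of onSurfaceChanged() reduces to a small decision: in landscape, pass the preview size through; in portrait, swap it. A hypothetical helper capturing that rule:

```java
public class PreviewSizing {
    // Chooses the (ratioWidth, ratioHeight) pair for setAspectRatio() the way
    // onSurfaceChanged() does: camera preview sizes are reported in landscape,
    // so a portrait surface needs the dimensions swapped.
    public static int[] aspectFor(int viewWidth, int viewHeight,
                                  int previewWidth, int previewHeight) {
        if (viewWidth > viewHeight) {
            return new int[]{previewWidth, previewHeight}; // landscape: pass through
        }
        return new int[]{previewHeight, previewWidth}; // portrait: swap
    }
}
```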
Draw the Sticker on the Screen Using onDrawFrame
Code:
@Override
public void onDrawFrame(GL10 gl) {
    int textureId;
    // Clear the screen and depth buffer.
    GLES30.glClear(GLES30.GL_COLOR_BUFFER_BIT | GLES30.GL_DEPTH_BUFFER_BIT);
    // Update a texture image.
    mSurfaceTexture.updateTexImage();
    // Obtain the SurfaceTexture transform matrix.
    mSurfaceTexture.getTransformMatrix(mMatrix);
    // Set the camera display transform matrix.
    cameraFilter.setTextureTransformMatrix(mMatrix);
    // Draw the camera texture.
    textureId = cameraFilter.drawFrameBuffer(mTextures[0], mVertexBuffer, mTextureBuffer);
    // Draw the sticker texture.
    textureId = stickerFilter.drawFrameBuffer(textureId, mVertexBuffer, mTextureBuffer);
    // Draw on the screen.
    screenFilter.drawFrame(textureId, mDisplayVertexBuffer, mDisplayTextureBuffer);
    if (drawFacePoints) {
        facePointsFilter.drawFrame(textureId, mDisplayVertexBuffer, mDisplayTextureBuffer);
    }
}
And that’s it, your face sticker is good to go.
Let’s see it in action!
We have open sourced the demo code on GitHub; you can download the demo and have a try:
https://github.com/HMS-Core/hms-ml-demo/tree/master/Face2D-Sticker
For more details, you can go to our official website:
https://developer.huawei.com/consumer/en/hms
Our open-source repositories on GitHub, where you can find the code you need:
https://github.com/HMS-Core
Stack Overflow, to solve any integration problems:
https://stackoverflow.com/questions/tagged/huawei-mobile-services?tab=Newest
In this article I will talk about HUAWEI Scene Kit. HUAWEI Scene Kit is a lightweight rendering engine that features high performance and low consumption. It provides advanced descriptive APIs for us to edit, operate, and render 3D materials. Scene Kit adopts physically based rendering (PBR) pipelines to achieve realistic rendering effects. With this Kit, we only need to call some APIs to easily load and display complicated 3D objects on Android phones.
It was previously released with just the SceneView feature, but in Scene Kit SDK version 5.0.2.300, the new FaceView and ARView features were announced. With these, Scene Kit has made integrating plane detection and face tracking much easier.
At this stage, the following question may come to your mind “since there are ML Kit and AR Engine, why are we going to use Scene Kit?” Let’s give the answer to this question with an example.
Differences Between Scene Kit and AR Engine or ML Kit:
For example, say we have a shopping application, and assume it has a feature in the glasses-purchasing section that lets the user try on glasses using AR, to see how they look in real life. Here, we do not need to track facial gestures using the facial expression tracking feature provided by AR Engine. All we have to do is render a 3D object over the user's eyes, and face tracking is enough for that. If we used AR Engine, we would have to deal with graphics libraries like OpenGL; by using the Scene Kit FaceView, we can add this feature to our application without dealing with any graphics library, because it is a basic feature and Scene Kit provides it for us.
So what distinguishes AR Engine or ML Kit from Scene Kit is AR Engine and ML Kit provide more detailed controls. However, Scene Kit only provides the basic features (I’ll talk about these features later). For this reason, its integration is much simpler.
Let’s examine what these features provide us.
SceneView:
With SceneView, we are able to load and render 3D materials in common scenes.
It allows us to:
Load and render 3D materials.
Load the cubemap texture of a skybox to make the scene look larger and more impressive than it actually is.
Load lighting maps to mimic real-world lighting conditions through PBR pipelines.
Swipe on the screen to view rendered materials from different angles.
ARView:
ARView uses the plane detection capability of AR Engine, together with the graphics rendering capability of Scene Kit, to provide us with the capability of loading and rendering 3D materials in common AR scenes.
With ARView, we can:
Load and render 3D materials in AR scenes.
Set whether to display the lattice plane (consisting of white lattice points) to help select a plane in a real-world view.
Tap an object placed onto the lattice plane to select it. Once selected, the object will change to red. Then we can move, resize, or rotate it.
FaceView:
FaceView can use the face detection capability provided by ML Kit or AR Engine to dynamically detect faces. Along with the graphics rendering capability of Scene Kit, FaceView provides us with superb AR scene rendering dedicated for faces.
With FaceView we can:
Dynamically detect faces and apply 3D materials to the detected faces.
As I mentioned above, ARView uses the plane detection capability of AR Engine, and FaceView uses the face detection capability provided by either ML Kit or AR Engine. When using the FaceView feature, we can choose which SDK to use by specifying it in the layout.
Here, we should consider the devices to be supported when choosing the SDK. You can see the supported devices in the table below; for more detailed information, you can visit this page. (In addition to the table on that page, Scene Kit's SceneView feature also supports P40 Lite devices.)
Also, I think it is useful to mention some important working principles of Scene Kit:
Scene Kit
Provides a Full-SDK, which we can integrate into our app to access 3D graphics rendering capabilities even on phones without HMS Core.
Uses the Entity Component System (ECS) to reduce coupling and implement multi-threaded parallel rendering.
Adopts real-time PBR pipelines to make rendered images look like real-world objects.
Supports the general-purpose GPU Turbo to significantly reduce power consumption.
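To make the ECS idea above concrete, here is a minimal, generic sketch in plain Java. It is unrelated to Scene Kit's actual internals, and all names in it are made up for illustration: entities are plain indices, components are data-only classes, and a system iterates over one component type. Because different systems touch disjoint component arrays, they can run in parallel with little coupling.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal, hypothetical ECS sketch (not Scene Kit code): an entity is just
// an index; components hold data; a system updates components each frame.
public class EcsSketch {
    static class Position { float x, y; Position(float x, float y) { this.x = x; this.y = y; } }
    static class Velocity { float dx, dy; Velocity(float dx, float dy) { this.dx = dx; this.dy = dy; } }

    static class MovementSystem {
        // Advances every entity's position by its velocity over dt seconds.
        static void update(List<Position> positions, List<Velocity> velocities, float dt) {
            for (int entity = 0; entity < positions.size(); entity++) {
                positions.get(entity).x += velocities.get(entity).dx * dt;
                positions.get(entity).y += velocities.get(entity).dy * dt;
            }
        }
    }

    public static void main(String[] args) {
        List<Position> positions = new ArrayList<>();
        List<Velocity> velocities = new ArrayList<>();
        positions.add(new Position(0f, 0f));   // entity 0
        velocities.add(new Velocity(1f, 2f));
        MovementSystem.update(positions, velocities, 0.5f);
        System.out.println(positions.get(0).x + ", " + positions.get(0).y);
    }
}
```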
Demo App
Let’s learn in more detail by integrating these 3 features of the Scene Kit with a demo application that we will develop in this section.
To configure the Maven repository address for the HMS Core SDK, add the code below to the project-level build.gradle file.
Go to
〉 project level build.gradle > buildscript > repositories
〉 project level build.gradle > allprojects > repositories
Code:
maven { url 'https://developer.huawei.com/repo/' }
After that, go to
module level build.gradle > dependencies
and add the build dependency for the Full-SDK of Scene Kit in the dependencies block.
Code:
implementation 'com.huawei.scenekit:full-sdk:5.0.2.302'
Note: When adding build dependencies, replace the version here ("full-sdk:5.0.2.302") with the latest Full-SDK version. You can find all the SDK and Full-SDK version numbers in the Version Change History.
Then click Sync Now.
After the build completes successfully, add the following line to the AndroidManifest.xml file for the camera permission.
Code:
<uses-permission android:name="android.permission.CAMERA" />
Now our project is ready for development, and we can use all the functionalities of Scene Kit.
Let's say this demo app is a shopping app, and I want to use Scene Kit features in it. We'll use Scene Kit's ARView feature in the "office" section of our application to test how a plant and an aquarium look on our desk.
And in the sunglasses section, we’ll use the FaceView feature to test how sunglasses look on our face.
Finally, we will use the SceneView feature in the shoes section of our application to test how a shoe looks.
We will need materials to test these features, so let's get them first. I will use 3D models that you can download from the links below; you can use the same or different materials if you want.
Capability: ARView, Used Models: Plant , Aquarium
Capability: FaceView, Used Model: Sunglasses
Capability: SceneView, Used Model: Shoe
Note: I used 3D models in .glb format as assets in the ARView and FaceView features. However, the links above contain 3D models in .gltf format, so I converted the .gltf files to .glb. You can obtain a .glb model by uploading all the files of a downloaded model (the textures, scene.bin, and scene.gltf) to an online converter website; any online conversion website will do.
All materials must be stored in the assets directory, so we place them under app > src > main > assets in our project. After placing them, our file structure will be as follows.
After adding the materials, we will start by adding the ARView feature first. Since we assume that there are office supplies in the activity where we will use the ARView feature, let’s create an activity named OfficeActivity and first develop its layout.
Note: Activities must extend the Activity class. Update any activities that extend AppCompatActivity to extend Activity instead.
Example: It should be "OfficeActivity extends Activity".
ARView
To use the ARView feature of Scene Kit, we add the following ARView code to the layout (the activity_office.xml file).
Code:
<com.huawei.hms.scene.sdk.ARView
android:id="@+id/ar_view"
android:layout_width="match_parent"
android:layout_height="match_parent">
</com.huawei.hms.scene.sdk.ARView>
Overview of the activity_office.xml file:
Code:
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:gravity="bottom"
tools:context=".OfficeActivity">
<com.huawei.hms.scene.sdk.ARView
android:id="@+id/ar_view"
android:layout_width="match_parent"
android:layout_height="match_parent"/>
<LinearLayout
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentBottom="true"
android:layout_centerInParent="true"
android:layout_centerHorizontal="true"
android:layout_centerVertical="true"
android:gravity="bottom"
android:layout_marginBottom="30dp"
android:orientation="horizontal">
<Button
android:id="@+id/button_flower"
android:layout_width="110dp"
android:layout_height="wrap_content"
android:onClick="onButtonFlowerToggleClicked"
android:text="Load Flower"/>
<Button
android:id="@+id/button_aquarium"
android:layout_width="110dp"
android:layout_height="wrap_content"
android:onClick="onButtonAquariumToggleClicked"
android:text="Load Aquarium"/>
</LinearLayout>
</RelativeLayout>
We specified two buttons: one for loading the aquarium and the other for loading the plant. Now let's do the initializations in OfficeActivity and activate the ARView feature in our application. First, let's override the onCreate() function to obtain the ARView and the buttons that will trigger the object-loading code.
Code:
private ARView mARView;
private Button mButtonFlower;
private boolean isLoadFlowerResource = false;
private boolean isLoadAquariumResource = false;
private Button mButtonAquarium;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_office);
mARView = findViewById(R.id.ar_view);
mButtonFlower = findViewById(R.id.button_flower);
mButtonAquarium = findViewById(R.id.button_aquarium);
Toast.makeText(this, "Please move the mobile phone slowly to find the plane", Toast.LENGTH_LONG).show();
}
Then we add the methods that will be triggered when the buttons are clicked. Here we check the loading status of the object and load or clear it accordingly.
For plant button:
Code:
public void onButtonFlowerToggleClicked(View view) {
mARView.enablePlaneDisplay(true);
if (!isLoadFlowerResource) {
// Load 3D model.
mARView.loadAsset("ARView/flower.glb");
float[] scale = new float[] { 0.15f, 0.15f, 0.15f };
float[] rotation = new float[] { 0.707f, 0.0f, -0.500f, 0.0f };
// (Optional) Set the initial status.
mARView.setInitialPose(scale, rotation);
isLoadFlowerResource = true;
mButtonFlower.setText("Clear Flower");
} else {
// Clear the resources loaded in the ARView.
mARView.clearResource();
mARView.loadAsset("");
isLoadFlowerResource = false;
mButtonFlower.setText("Load Flower");
}
}
For the aquarium button:
Code:
public void onButtonAquariumToggleClicked(View view) {
mARView.enablePlaneDisplay(true);
if (!isLoadAquariumResource) {
// Load 3D model.
mARView.loadAsset("ARView/aquarium.glb");
float[] scale = new float[] { 0.015f, 0.015f, 0.015f };
float[] rotation = new float[] { 0.0f, 0.0f, 0.0f, 0.0f };
// (Optional) Set the initial status.
mARView.setInitialPose(scale, rotation);
isLoadAquariumResource = true;
mButtonAquarium.setText("Clear Aquarium");
} else {
// Clear the resources loaded in the ARView.
mARView.clearResource();
mARView.loadAsset("");
isLoadAquariumResource = false;
mButtonAquarium.setText("Load Aquarium");
}
}
Now let's go through what this code does, line by line. First, we call the ARView.enablePlaneDisplay() function with true, so that when a plane is detected in the real world, the program displays a lattice plane there.
Code:
mARView.enablePlaneDisplay(true);
Then we check whether the object has already been loaded. If it has not, we specify the path to the 3D model we selected with the mARView.loadAsset() function and load it. (assets > ARView > flower.glb)
Code:
mARView.loadAsset("ARView/flower.glb");
Then we create and initialize the scale and rotation arrays for the starting pose. For now, we are entering hardcoded values; in future versions we could set the starting pose from user input, for example by holding the screen.
Note: The Scene Kit ARView feature already allows us to move, resize, and rotate the object we have created on the screen. To do this, we select the object and move a finger on the screen to change its position, size, or direction.
Here we can adjust the direction or size of the object by adjusting the rotation and scale values (these values are used as the parameters of the setInitialPose() function).
Note: These values may need to change according to the model used; to find appropriate values, you should experiment yourself. For the details of these values, see the documentation of the ARView setInitialPose() function.
Code:
float[] scale = new float[] { 0.15f, 0.15f, 0.15f };
float[] rotation = new float[] { 0.707f, 0.0f, -0.500f, 0.0f };
Then we set the scale and rotation values we created as the starting position.
Code:
mARView.setInitialPose(scale, rotation);
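If you want to derive rotation values for your own model instead of guessing, a quaternion built from an axis and an angle is a common starting point. The sketch below is plain Java and assumes setInitialPose() accepts quaternion-style components; verify the exact component order against the official documentation. PoseHelper is a hypothetical helper name, not part of Scene Kit.

```java
// Hypothetical helper (not part of Scene Kit): converts an axis-angle
// rotation to a quaternion-style float[4] {x, y, z, w}. The axis must be
// normalized; the angle is given in degrees.
public class PoseHelper {
    public static float[] axisAngleToQuaternion(float ax, float ay, float az, float angleDeg) {
        double half = Math.toRadians(angleDeg) / 2.0;
        float s = (float) Math.sin(half);
        // {x, y, z, w} order assumed; check the setInitialPose() docs.
        return new float[] { ax * s, ay * s, az * s, (float) Math.cos(half) };
    }

    public static void main(String[] args) {
        // A 90-degree rotation around the Y axis.
        float[] rotation = axisAngleToQuaternion(0f, 1f, 0f, 90f);
        System.out.println(java.util.Arrays.toString(rotation));
    }
}
```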
After this, we set the boolean flag to indicate that the object has been created, and we update the text of the button.
Code:
isLoadFlowerResource = true;
mButtonFlower.setText("Clear Flower");
If the object is already loaded, we clear the resource and load an empty asset so that the object is removed from the screen.
Code:
mARView.clearResource();
mARView.loadAsset("");
Then we reset the boolean flag and finish by updating the button text.
Code:
isLoadFlowerResource = false;
mButtonFlower.setText("Load Flower");
Finally, we should not forget to override the following lifecycle methods, as in the code below, to keep the ARView synchronized with the activity.
Code:
@Override
protected void onPause() {
super.onPause();
mARView.onPause();
}
@Override
protected void onResume() {
super.onResume();
mARView.onResume();
}
@Override
protected void onDestroy() {
super.onDestroy();
mARView.destroy();
}
The overview of OfficeActivity.java should be as follows.
Code:
import android.app.Activity;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;
import android.widget.Toast;
import com.huawei.hms.scene.sdk.ARView;
public class OfficeActivity extends Activity {
private ARView mARView;
private Button mButtonFlower;
private boolean isLoadFlowerResource = false;
private boolean isLoadAquariumResource = false;
private Button mButtonAquarium;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_office);
mARView = findViewById(R.id.ar_view);
mButtonFlower = findViewById(R.id.button_flower);
mButtonAquarium = findViewById(R.id.button_aquarium);
Toast.makeText(this, "Please move the mobile phone slowly to find the plane", Toast.LENGTH_LONG).show();
}
/**
* Synchronously call the onPause() method of the ARView.
*/
@Override
protected void onPause() {
super.onPause();
mARView.onPause();
}
/**
* Synchronously call the onResume() method of the ARView.
*/
@Override
protected void onResume() {
super.onResume();
mARView.onResume();
}
/**
* If quick rebuilding is allowed for the current activity, destroy() of ARView must be invoked synchronously.
*/
@Override
protected void onDestroy() {
super.onDestroy();
mARView.destroy();
}
public void onButtonFlowerToggleClicked(View view) {
mARView.enablePlaneDisplay(true);
if (!isLoadFlowerResource) {
// Load 3D model.
mARView.loadAsset("ARView/flower.glb");
float[] scale = new float[] { 0.15f, 0.15f, 0.15f };
float[] rotation = new float[] { 0.707f, 0.0f, -0.500f, 0.0f };
// (Optional) Set the initial status.
mARView.setInitialPose(scale, rotation);
isLoadFlowerResource = true;
mButtonFlower.setText("Clear Flower");
} else {
// Clear the resources loaded in the ARView.
mARView.clearResource();
mARView.loadAsset("");
isLoadFlowerResource = false;
mButtonFlower.setText("Load Flower");
}
}
public void onButtonAquariumToggleClicked(View view) {
mARView.enablePlaneDisplay(true);
if (!isLoadAquariumResource) {
// Load 3D model.
mARView.loadAsset("ARView/aquarium.glb");
float[] scale = new float[] { 0.015f, 0.015f, 0.015f };
float[] rotation = new float[] { 0.0f, 0.0f, 0.0f, 0.0f };
// (Optional) Set the initial status.
mARView.setInitialPose(scale, rotation);
isLoadAquariumResource = true;
mButtonAquarium.setText("Clear Aquarium");
} else {
// Clear the resources loaded in the ARView.
mARView.clearResource();
mARView.loadAsset("");
isLoadAquariumResource = false;
mButtonAquarium.setText("Load Aquarium");
}
}
}
In this way, we have added the ARView feature of Scene Kit to our application and can now use it. Let's test the ARView part on a device that supports the Scene Kit ARView feature.
Let’s place plants and aquariums on our table as below and see how it looks.
For ARView to recognize the ground, you first need to move the camera slowly until the plane points you see in the photo appear on the screen. After the plane points appear on the ground, we click the Load Flower button to indicate that we will add a plant, and then tap the point on the screen where we want to place it. Doing the same with the aquarium button adds an aquarium.
I placed an aquarium and plants on my table. You can test how it looks by placing plants or aquariums on your table or anywhere. You can see how it looks in the photo below.
Note: “Clear Flower” and “Clear Aquarium” buttons will remove the objects we have placed on the screen.
After creating the objects, we can select an object to move it or change its size or direction, as you can see in the picture below. Under normal conditions, the selected object turns red. (The color of some models doesn't change; for example, the aquarium model doesn't turn red when selected.)
If we want to change the size of an object after selecting it, we can pinch with two fingers to zoom in and out; in the picture above you can see that I changed the plants' sizes. We can also move the selected object by dragging it, and change its direction by moving two fingers in a circular motion.
For more information, you can visit https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0201380649772650451&fid=0101187876626530001
Detecting and tracking the user's face movements can be a very powerful feature to develop in your application. With it, you can take the user's face movements as input to execute certain intents in the app.
Requirements to use ML Kit:
Android Studio 3.X or later version
JDK 1.8 or later
To be able to use HMS ML Kit, you need to integrate HMS Core into your project and also add the HMS ML Kit SDK. Add the Write External Storage and Camera permissions afterwards.
You can click this link to integrate HMS Core to your project.
After integrating HMS Core, add the HMS ML Kit dependencies to the build.gradle file in the app directory.
Code:
implementation 'com.huawei.hms:ml-computer-vision-face:2.0.5.300'
implementation 'com.huawei.hms:ml-computer-vision-face-3d-model:2.0.5.300'
Add permissions to the Manifest File:
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
Development Process:
In this article, we are going to use HMS ML Kit to track the user's face movements. For this, we are going to use the 3D face features of ML Kit, with one activity and a helper class.
XML Structure of Activity:
XML:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<SurfaceView
android:id="@+id/surfaceViewCamera"
android:layout_width="match_parent"
android:layout_height="match_parent" />
</androidx.constraintlayout.widget.ConstraintLayout>
We are going to use a SurfaceView to show the camera preview on the phone screen, and we will add a callback to surfaceHolderCamera to track when the surface is created, changed, and destroyed.
Implementing HMS ML Kit Default View:
Kotlin:
class MainActivity : AppCompatActivity() {
private lateinit var mAnalyzer: ML3DFaceAnalyzer
private lateinit var mLensEngine: LensEngine
private lateinit var mFaceAnalyzerTransactor: FaceAnalyzerTransactor
private var surfaceHolderCamera: SurfaceHolder? = null
private val requiredPermissions = arrayOf(Manifest.permission.CAMERA, Manifest.permission.WRITE_EXTERNAL_STORAGE)
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
if (hasPermissions(requiredPermissions))
init()
else
ActivityCompat.requestPermissions(this, requiredPermissions, 0)
}
private fun hasPermissions(permissions: Array<String>) = permissions.all {
ContextCompat.checkSelfPermission(this, it) == PackageManager.PERMISSION_GRANTED
}
override fun onRequestPermissionsResult(requestCode: Int, permissions: Array<out String>, grantResults: IntArray) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults)
if (requestCode == 0 && grantResults.isNotEmpty() && hasPermissions(requiredPermissions))
init()
}
private fun init() {
mAnalyzer = createAnalyzer()
mFaceAnalyzerTransactor = FaceAnalyzerTransactor()
mAnalyzer.setTransactor(mFaceAnalyzerTransactor)
prepareViews()
}
private fun prepareViews() {
surfaceHolderCamera = surfaceViewCamera.holder
surfaceHolderCamera?.addCallback(surfaceHolderCallback)
}
ML3DFaceAnalyzer → A face analyzer, which is used to detect 3D faces.
Lens Engine → A class with the camera initialization, frame obtaining, and logic control functions encapsulated.
FaceAnalyzerTransactor → A class for processing recognition results. This class implements MLAnalyzer.MLTransactor (A general API that needs to be implemented by a detection result processor to process detection results) and uses the transactResult method in this class to obtain the recognition results and implement specific services.
SurfaceHolder → By adding a callback to the surface view, we can detect surface changes: surface created, changed, and destroyed. We will use this callback to handle the different situations in the application.
In this code block, we basically request the permissions. If the user has granted them, the init function creates an ML3DFaceAnalyzer to detect 3D faces and a FaceAnalyzerTransactor to process the results, and sets the latter as the analyzer's transactor. We handle surface changes with the surfaceHolderCallback functions. After everything is set, we start analyzing the user's face via the prepareViews function.
Kotlin:
private fun createAnalyzer(): ML3DFaceAnalyzer {
val settings = ML3DFaceAnalyzerSetting.Factory()
.setTracingAllowed(true)
.setPerformanceType(ML3DFaceAnalyzerSetting.TYPE_PRECISION)
.create()
return MLAnalyzerFactory.getInstance().get3DFaceAnalyzer(settings)
}
In this code block, we adjust the settings for our ML3DFaceAnalyzer object.
setTracingAllowed → Indicates whether to enable face tracking. Since we want to track the face, we set this value to true.
setPerformanceType → A speed/precision preference mode: the analyzer can prioritize either precision or speed when performing face detection. We chose TYPE_PRECISION in this example.
With the MLAnalyzerFactory.getInstance().get3DFaceAnalyzer(settings) method, we create the ML3DFaceAnalyzer object.
Handling Surface Changes
Kotlin:
private val surfaceHolderCallback = object : SurfaceHolder.Callback {
override fun surfaceChanged(p0: SurfaceHolder, p1: Int, p2: Int, p3: Int) {
mLensEngine = createLensEngine(p2, p3)
mLensEngine.run(p0)
}
override fun surfaceDestroyed(p0: SurfaceHolder) {
mLensEngine.release()
}
override fun surfaceCreated(p0: SurfaceHolder) {
}
}
private fun createLensEngine(width: Int, height: Int): LensEngine {
val lensEngineCreator = LensEngine.Creator(this, mAnalyzer)
.applyFps(20F)
.setLensType(LensEngine.FRONT_LENS)
.enableAutomaticFocus(true)
return if (resources.configuration.orientation == Configuration.ORIENTATION_PORTRAIT) {
lensEngineCreator.let {
it.applyDisplayDimension(height, width)
it.create()
}
} else {
lensEngineCreator.let {
it.applyDisplayDimension(width, height)
it.create()
}
}
}
When the surface changes, we create a LensEngine. In the surfaceChanged function, the width and height of the surface arrive as the p2 and p3 parameters, and we pass them to createLensEngine. We create the LensEngine with the context and the ML3DFaceAnalyzer that we have already created.
applyFps → Sets the preview frame rate (FPS) of a camera. The preview frame rate of a camera depends on the firmware capability of the camera.
setLensType → Sets the camera type, BACK_LENS for rear camera and FRONT_LENS for front camera. In this example, we will use front camera.
enableAutomaticFocus → Enables or disables the automatic focus function for a camera.
The surfaceChanged method is also executed whenever the screen orientation changes. We handle this situation in the createLensEngine function: the LensEngine is created with the phone's width and height values after checking the orientation.
run → Starts LensEngine. During startup, the LensEngine selects an appropriate camera based on the user’s requirements on the frame rate and preview image size, initializes parameters of the selected camera, and starts an analyzer thread to analyze and process frame data.
release → Releases resources occupied by LensEngine. We use this function whenever the surfaceDestroyed method is called.
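The orientation handling above boils down to swapping the width and height in portrait mode before applying the display dimension. As a plain-Java illustration (DimensionUtil is a made-up name, not part of the ML Kit SDK):

```java
// Hypothetical helper illustrating the dimension swap used when creating
// the LensEngine: in portrait orientation the preview dimensions are
// applied as (height, width), otherwise as (width, height).
public class DimensionUtil {
    public static int[] displayDimension(int width, int height, boolean isPortrait) {
        return isPortrait ? new int[] { height, width } : new int[] { width, height };
    }

    public static void main(String[] args) {
        // Surface reported as 1080 x 1920 while the phone is in portrait mode.
        int[] dims = displayDimension(1080, 1920, true);
        System.out.println(dims[0] + " x " + dims[1]);
    }
}
```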
Analysing Results
Kotlin:
class FaceAnalyzerTransactor: MLTransactor<ML3DFace> {
override fun transactResult(results: MLAnalyzer.Result<ML3DFace>?) {
val items: SparseArray<ML3DFace> = results!!.analyseList
Log.i("FaceOrientation:X ",items.get(0).get3DFaceEulerX().toString())
Log.i("FaceOrientation:Y",items.get(0).get3DFaceEulerY().toString())
Log.i("FaceOrientation:Z",items.get(0).get3DFaceEulerZ().toString())
}
override fun destroy() {
}
}
We created a FaceAnalyzerTransactor object in init method. This object is used to process the results.
Since it implements the MLTransactor interface, we have to override the destroy and transactResult methods. Through the transactResult method, we obtain the recognition results from the MLAnalyzer as a result list. Since there will be only one face in this example, we get the first ML3DFace object of the result list.
ML3DFace represents a detected 3D face. Its features include the width, height, rotation degree, 3D coordinate points, and projection matrix.
get3DFaceEulerX() → 3D face pitch angle. A positive value indicates that the face is looking up, and a negative value indicates that the face is looking down.
get3DFaceEulerY() → 3D face yaw angle. A positive value indicates that the face turns to the right side of the image, and a negative value indicates that the face turns to the left side of the image.
get3DFaceEulerZ() → 3D face roll angle. A positive value indicates a clockwise rotation. A negative value indicates a counter-clockwise rotation.
The results of all the above methods range from -1 to +1 according to the face movement. In this example, I will demonstrate the get3DFaceEulerY() method.
Not making any head movements
In this example, the outputs were between 0.04 and 0.1 when I did not move my head. If the values are between -0.1 and 0.1, the user is probably not moving their head, or is making only small movements.
Changing head direction to right
When I moved my head to the right, the outputs were generally between 0.5 and 1. So it can be said that if the values are bigger than 0.5, the user has moved their head to the right.
Changing head direction to left
When I moved my head to the left, the outputs were generally between -0.5 and -1. So it can be said that if the values are less than -0.5, the user has moved their head to the left.
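The ranges above can be wrapped in a small helper that maps the get3DFaceEulerY() output to a coarse head direction. This is a plain-Java sketch using the thresholds I observed; the exact values may differ per device and user, and HeadDirection is a hypothetical class, not part of ML Kit.

```java
// Hypothetical classifier (not part of ML Kit): maps a get3DFaceEulerY()
// value to a coarse head direction using the observed thresholds:
// |y| <= 0.1 is treated as neutral, |y| >= 0.5 as a clear turn.
public class HeadDirection {
    public static String classify(float eulerY) {
        if (eulerY >= 0.5f) return "RIGHT";
        if (eulerY <= -0.5f) return "LEFT";
        if (Math.abs(eulerY) <= 0.1f) return "CENTER";
        return "SLIGHT_TURN";
    }

    public static void main(String[] args) {
        System.out.println(classify(0.07f)); // within the neutral band
        System.out.println(classify(0.8f));  // clearly turned right
        System.out.println(classify(-0.6f)); // clearly turned left
    }
}
```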
As you can see, HMS ML Kit can detect the user's face and track face movements successfully. This project and its code are accessible from the GitHub link in the references area.
You can also implement other ML Kit features like text-related services, language/voice-related services, image-related services, face/body-related services, and natural language processing services.
References:
ML Kit
ML Kit Documentation
Face Detection With ML Kit
Github
Displaying products with 3D models is something too good to ignore for an e-commerce app. Using such features, an app can leave users with a fresh first impression of its products!
The 3D model plays an important role in boosting user conversion. It allows users to carefully view a product from every angle, before they make a purchase. Together with the AR technology, which gives users an insight into how the product will look in reality, the 3D model brings a fresher online shopping experience that can rival offline shopping.
Despite these advantages, 3D models have yet to be widely adopted. The underlying reason is that applying current 3D modeling technology is expensive:
Technical requirements: Learning how to build a 3D model is time-consuming.
Time: It takes at least several hours to build a low polygon model for a simple object, and even longer for a high polygon one.
Spending: The average cost of building a simple model can be more than one hundred dollars, and even higher for building a complex one.
Luckily, 3D object reconstruction, a capability in 3D Modeling Kit newly launched in HMS Core, makes 3D model building straightforward. This capability automatically generates a 3D model with a texture for an object, via images shot from different angles with a common RGB-Cam. It gives an app the ability to build and preview 3D models. For instance, when an e-commerce app has integrated 3D object reconstruction, it can generate and display 3D models of shoes. Users can then freely zoom in and out on the models for a more immersive shopping experience.
Actual Effect
{
"lightbox_close": "Close",
"lightbox_next": "Next",
"lightbox_previous": "Previous",
"lightbox_error": "The requested content cannot be loaded. Please try again later.",
"lightbox_start_slideshow": "Start slideshow",
"lightbox_stop_slideshow": "Stop slideshow",
"lightbox_full_screen": "Full screen",
"lightbox_thumbnails": "Thumbnails",
"lightbox_download": "Download",
"lightbox_share": "Share",
"lightbox_zoom": "Zoom",
"lightbox_new_window": "New window",
"lightbox_toggle_sidebar": "Toggle sidebar"
}
Technical Solutions
3D object reconstruction is implemented on both the device and cloud. RGB images of an object are collected on the device and then uploaded to the cloud. Key technologies involved in the on-cloud modeling process include object detection and segmentation, feature detection and matching, sparse/dense point cloud computing, and texture reconstruction. Finally, the cloud outputs an OBJ file (a commonly used 3D model file format) of the generated 3D model with 40,000 to 200,000 patches.
Preparations
1. Configuring a Dependency on the 3D Modeling SDK
Open the app-level build.gradle file and add a dependency on the 3D Modeling SDK in the dependencies block.
Code:
// Build a dependency on the 3D Modeling SDK.
implementation 'com.huawei.hms:modeling3d-object-reconstruct:1.0.0.300'
2. Configuring AndroidManifest.xml
Open the AndroidManifest.xml file in the main folder. Add the following information before <application> to apply for the storage read and write permissions and camera permission.
Code:
<!-- Permission to read data from and write data into storage. -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<!-- Permission to use the camera. -->
<uses-permission android:name="android.permission.CAMERA" />
Development Procedure
1. Configuring the Storage Permission Application
In the onCreate() method of MainActivity, check whether the storage read and write permissions have been granted; if not, apply for them by using requestPermissions.
Code:
if (EasyPermissions.hasPermissions(MainActivity.this, PERMISSIONS)) {
Log.i(TAG, "Permissions OK");
} else {
EasyPermissions.requestPermissions(MainActivity.this, "To use this app, you need to enable the permission.",
RC_CAMERA_AND_EXTERNAL_STORAGE, PERMISSIONS);
}
Check the application result. If the permissions are not granted, prompt the user to grant them.
Code:
@Override
public void onPermissionsGranted(int requestCode, @NonNull List<String> perms) {
Log.i(TAG, "permissions = " + perms);
if (requestCode == RC_CAMERA_AND_EXTERNAL_STORAGE && PERMISSIONS.length == perms.size()) {
initView();
initListener();
}
}
@Override
public void onPermissionsDenied(int requestCode, @NonNull List<String> perms) {
if (EasyPermissions.somePermissionPermanentlyDenied(this, perms)) {
new AppSettingsDialog.Builder(this)
.setRequestCode(RC_CAMERA_AND_EXTERNAL_STORAGE)
.setRationale("To use this app, you need to enable the permission.")
.setTitle("Insufficient permissions")
.build()
.show();
}
}
2. Creating a 3D Object Reconstruction Configurator
Code:
// Set the PICTURE mode.
Modeling3dReconstructSetting setting = new Modeling3dReconstructSetting.Factory()
.setReconstructMode(Modeling3dReconstructConstants.ReconstructMode.PICTURE)
.create();
3. Creating a 3D Object Reconstruction Engine and Initializing the Task
Call getInstance() of Modeling3dReconstructEngine and pass the current context to create an instance of the 3D object reconstruction engine.
Code:
// Create an engine.
modeling3dReconstructEngine = Modeling3dReconstructEngine.getInstance(mContext);
Use the engine to initialize the task.
Code:
// Initialize the 3D object reconstruction task.
modeling3dReconstructInitResult = modeling3dReconstructEngine.initTask(setting);
// Obtain the task ID.
String taskId = modeling3dReconstructInitResult.getTaskId();
4. Creating a Listener Callback to Process the Image Upload Result
Create a listener callback that allows you to configure the operations triggered upon upload success and failure.
Code:
// Create an upload listener callback.
private final Modeling3dReconstructUploadListener uploadListener = new Modeling3dReconstructUploadListener() {
@Override
public void onUploadProgress(String taskId, double progress, Object ext) {
// Upload progress.
}
@Override
public void onResult(String taskId, Modeling3dReconstructUploadResult result, Object ext) {
if (result.isComplete()) {
isUpload = true;
ScanActivity.this.runOnUiThread(new Runnable() {
@Override
public void run() {
progressCustomDialog.dismiss();
Toast.makeText(ScanActivity.this, getString(R.string.upload_text_success), Toast.LENGTH_SHORT).show();
}
});
TaskInfoAppDbUtils.updateTaskIdAndStatusByPath(new Constants(ScanActivity.this).getCaptureImageFile() + manager.getSurfaceViewCallback().getCreateTime(), taskId, 1);
}
}
@Override
public void onError(String taskId, int errorCode, String message) {
isUpload = false;
runOnUiThread(new Runnable() {
@Override
public void run() {
progressCustomDialog.dismiss();
Toast.makeText(ScanActivity.this, "Upload failed." + message, Toast.LENGTH_SHORT).show();
LogUtil.e("taskid" + taskId + "errorCode: " + errorCode + " errorMessage: " + message);
}
});
}
};
5. Passing the Upload Listener Callback to the Engine to Upload Images
Pass the upload listener callback to the engine, then call uploadFile() with the task ID obtained in step 3 and the path of the images to be uploaded. The images are then uploaded to the cloud server.
Code:
// Pass the listener callback to the engine.
modeling3dReconstructEngine.setReconstructUploadListener(uploadListener);
// Start uploading.
modeling3dReconstructEngine.uploadFile(taskId, filePath);
6. Querying the Task Status
Call getInstance of Modeling3dReconstructTaskUtils to create a task processing instance. Pass the current context.
Code:
// Create a task processing instance.
modeling3dReconstructTaskUtils = Modeling3dReconstructTaskUtils.getInstance(Modeling3dDemo.getApp());
Call queryTask of the task processing instance to query the status of the 3D object reconstruction task.
Code:
// Query the task status: 0 (images to be uploaded), 1 (image upload completed), 2 (model being generated), 3 (model generation completed), 4 (model generation failed).
Modeling3dReconstructQueryResult queryResult = modeling3dReconstructTaskUtils.queryTask(task.getTaskId());
7. Creating a Listener Callback to Process the Model File Download Result
Create a listener callback that allows you to configure the operations triggered upon download success and failure.
Code:
// Create a download listener callback.
private Modeling3dReconstructDownloadListener modeling3dReconstructDownloadListener = new Modeling3dReconstructDownloadListener() {
@Override
public void onDownloadProgress(String taskId, double progress, Object ext) {
((Activity) mContext).runOnUiThread(new Runnable() {
@Override
public void run() {
dialog.show();
}
});
}
@Override
public void onResult(String taskId, Modeling3dReconstructDownloadResult result, Object ext) {
((Activity) mContext).runOnUiThread(new Runnable() {
@Override
public void run() {
Toast.makeText(getContext(), "Download complete", Toast.LENGTH_SHORT).show();
TaskInfoAppDbUtils.updateDownloadByTaskId(taskId, 1);
dialog.dismiss();
}
});
}
@Override
public void onError(String taskId, int errorCode, String message) {
LogUtil.e(taskId + " <---> " + errorCode + message);
((Activity) mContext).runOnUiThread(new Runnable() {
@Override
public void run() {
Toast.makeText(getContext(), "Download failed." + message, Toast.LENGTH_SHORT).show();
dialog.dismiss();
}
});
}
};
8. Passing the Download Listener Callback to the Engine to Download the File of the Generated Model
Pass the download listener callback to the engine. Call downloadModel, pass the task ID obtained in step 3 and the path for saving the model file to download it.
Code:
// Pass the download listener callback to the engine.
modeling3dReconstructEngine.setReconstructDownloadListener(modeling3dReconstructDownloadListener);
// Download the model file.
modeling3dReconstructEngine.downloadModel(appDb.getTaskId(), appDb.getFileSavePath());
More Information
The object should have rich texture, be medium-sized, and a rigid body. The object should not be reflective, transparent, or semi-transparent. The object types include goods (like plush toys, bags, and shoes), furniture (like sofas), and cultural relics (such as bronzes, stone artifacts, and wooden artifacts).
The object dimension should be within the range from 15 x 15 x 15 cm to 150 x 150 x 150 cm. (A larger dimension requires a longer time for modeling.)
3D object reconstruction does not support modeling for the human body and face.
Ensure the following requirements are met during image collection: Put a single object on a stable plane of a pure color. The environment should be neither too dark nor dazzling. Keep all images in focus, free from blur caused by motion or shaking. Ensure images are taken from various angles, including from below, head-on, and from above (it is advised that you upload more than 50 images for an object). Move the camera as slowly as possible, and do not suddenly change the angle during shooting. Lastly, ensure the object-to-image ratio is as big as possible and that no part of the object is missing.
These are all about the sample code of 3D object reconstruction. Try to integrate it into your app and build your own 3D models!
References
For more details, you can go to:
3D Modeling Kit official website
3D Modeling Kit Development Documentation page, to find the documents you need
Reddit to join our developer discussion
GitHub to download 3D Modeling Kit sample codes
Stack Overflow to solve any integration problems
Quick question: How do 3D models help e-commerce apps?
The most obvious answer is that they make the shopping experience more immersive, and they bring a whole host of other benefits besides.
To begin with, a 3D model is a more impressive way of showcasing a product to potential customers: it displays richer detail, letting customers rotate the product and view it from every angle, which helps them make more informed purchasing decisions. On top of that, customers can virtually try on 3D products, recreating the experience of shopping in a physical store. All these factors contribute to boosting user conversion.
As great as it is, the 3D model has not been widely adopted among those who want it. A major reason is that the cost of building a 3D model with existing advanced 3D modeling technology is very high, due to:
Technical requirements: Building a 3D model requires someone with expertise, which can take time to master.
Time: It takes at least several hours to build a low-polygon model for a simple object, not to mention a high-polygon one.
Spending: The average cost of building just a simple model can reach hundreds of dollars.
Fortunately for us, the 3D object reconstruction capability found in HMS Core 3D Modeling Kit makes 3D model creation easy-peasy. This capability automatically generates a texturized 3D model for an object, via images shot from multiple angles with a standard RGB camera on a phone. And what's more, the generated model can be previewed. Let's check out a shoe model created using the 3D object reconstruction capability.
Shoe Model Images
Technical Solutions
3D object reconstruction requires both the device and cloud. Images of an object are captured on a device, covering multiple angles of the object. And then the images are uploaded to the cloud for model creation. The on-cloud modeling process and key technologies include object detection and segmentation, feature detection and matching, sparse/dense point cloud computing, and texture reconstruction. Once the model is created, the cloud outputs an OBJ file (a commonly used 3D model file format) of the generated 3D model with 40,000 to 200,000 patches.
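The OBJ output mentioned above is plain text. A minimal hand-written fragment (for illustration only, not actual output of the kit) shows the record types involved:

```
# Vertex positions (x, y, z)
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
# Texture coordinates (u, v), used to map the reconstructed texture
vt 0.0 0.0
vt 1.0 0.0
vt 0.0 1.0
# One triangular face, referencing vertex/texture-coordinate indices
f 1/1 2/2 3/3
```

A real model from the kit consists of tens of thousands of such faces (40,000 to 200,000 patches), plus the accompanying texture image.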
Now the boring part is out of the way. Let's move on to the exciting part: how to integrate the 3D object reconstruction capability.
Integrating the 3D Object Reconstruction Capability
Preparations
1. Configure the build dependency for the 3D Modeling SDK.
Add the build dependency for the 3D Modeling SDK in the dependencies block in the app-level build.gradle file.
Code:
// Build dependency for the 3D Modeling SDK.
implementation 'com.huawei.hms:modeling3d-object-reconstruct:1.0.0.300'
2. Configure AndroidManifest.xml.
Open the AndroidManifest.xml file in the main folder. Add the following information before <application> to apply for the storage read and write permissions and camera permission as needed:
Code:
<!-- Write into and read from external storage. -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<!-- Use the camera. -->
<uses-permission android:name="android.permission.CAMERA" />
Function Development
1. Configure the storage permission application.
In the onCreate() method of MainActivity, check whether the storage read and write permissions have been granted; if not, apply for them by using requestPermissions.
Code:
if (EasyPermissions.hasPermissions(MainActivity.this, PERMISSIONS)) {
Log.i(TAG, "Permissions OK");
} else {
EasyPermissions.requestPermissions(MainActivity.this, "To use this app, you need to enable the permission.",
RC_CAMERA_AND_EXTERNAL_STORAGE, PERMISSIONS);
}
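The snippet above assumes a PERMISSIONS array and a request code defined elsewhere in MainActivity. A minimal sketch of those declarations might look like the following; it uses plain permission-name strings so the example stands alone, although in a real app you would reference the android.Manifest.permission constants. The class and value names are ours, not part of the EasyPermissions library.

```java
public class PermissionConstants {
    // Permission-name strings matching the <uses-permission> entries in
    // AndroidManifest.xml: storage read/write plus camera.
    public static final String[] PERMISSIONS = {
            "android.permission.WRITE_EXTERNAL_STORAGE",
            "android.permission.READ_EXTERNAL_STORAGE",
            "android.permission.CAMERA"
    };
    // Arbitrary request code used to match the permission callback
    // to this particular request.
    public static final int RC_CAMERA_AND_EXTERNAL_STORAGE = 1;
}
```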
Check the application result. If the permissions are granted, initialize the UI; if the permissions are not granted, prompt the user to grant them.
Code:
@Override
public void onPermissionsGranted(int requestCode, @NonNull List<String> perms) {
Log.i(TAG, "permissions = " + perms);
if (requestCode == RC_CAMERA_AND_EXTERNAL_STORAGE && PERMISSIONS.length == perms.size()) {
initView();
initListener();
}
}
@Override
public void onPermissionsDenied(int requestCode, @NonNull List<String> perms) {
if (EasyPermissions.somePermissionPermanentlyDenied(this, perms)) {
new AppSettingsDialog.Builder(this)
.setRequestCode(RC_CAMERA_AND_EXTERNAL_STORAGE)
.setRationale("To use this app, you need to enable the permission.")
.setTitle("Insufficient permissions")
.build()
.show();
}
}
2. Create a 3D object reconstruction configurator.
Code:
// PICTURE mode.
Modeling3dReconstructSetting setting = new Modeling3dReconstructSetting.Factory()
.setReconstructMode(Modeling3dReconstructConstants.ReconstructMode.PICTURE)
.create();
3. Create a 3D object reconstruction engine and initialize the task.
Call getInstance() of Modeling3dReconstructEngine and pass the current context to create an instance of the 3D object reconstruction engine.
Code:
// Initialize the engine.
modeling3dReconstructEngine = Modeling3dReconstructEngine.getInstance(mContext);
Use the engine to initialize the task.
Code:
// Create a 3D object reconstruction task.
modeling3dReconstructInitResult = modeling3dReconstructEngine.initTask(setting);
// Obtain the task ID.
String taskId = modeling3dReconstructInitResult.getTaskId();
4. Create a listener callback to process the image upload result.
Create a listener callback in which you can configure the operations triggered upon upload success and failure.
Code:
// Create a listener callback for the image upload task.
private final Modeling3dReconstructUploadListener uploadListener = new Modeling3dReconstructUploadListener() {
@Override
public void onUploadProgress(String taskId, double progress, Object ext) {
// Upload progress
}
@Override
public void onResult(String taskId, Modeling3dReconstructUploadResult result, Object ext) {
if (result.isComplete()) {
isUpload = true;
ScanActivity.this.runOnUiThread(new Runnable() {
@Override
public void run() {
progressCustomDialog.dismiss();
Toast.makeText(ScanActivity.this, getString(R.string.upload_text_success), Toast.LENGTH_SHORT).show();
}
});
TaskInfoAppDbUtils.updateTaskIdAndStatusByPath(new Constants(ScanActivity.this).getCaptureImageFile() + manager.getSurfaceViewCallback().getCreateTime(), taskId, 1);
}
}
@Override
public void onError(String taskId, int errorCode, String message) {
isUpload = false;
runOnUiThread(new Runnable() {
@Override
public void run() {
progressCustomDialog.dismiss();
Toast.makeText(ScanActivity.this, "Upload failed." + message, Toast.LENGTH_SHORT).show();
LogUtil.e("taskid" + taskId + "errorCode: " + errorCode + " errorMessage: " + message);
}
});
}
};
5. Set the image upload listener for the 3D object reconstruction engine and upload the captured images.
Pass the upload callback to the engine. Call uploadFile(), pass the task ID obtained in step 3 and the path of the images to be uploaded, and upload the images to the cloud server.
Code:
// Set the upload listener.
modeling3dReconstructEngine.setReconstructUploadListener(uploadListener);
// Upload captured images.
modeling3dReconstructEngine.uploadFile(taskId, filePath);
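uploadFile() expects the path of the folder holding the captured images, so it can be worth verifying that the folder actually contains images before starting the upload. A simple pre-check might look like the following; the helper class is our own, not part of the SDK, and here it filters file names rather than touching the filesystem so the example stays self-contained.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

public class ImageFolderCheck {
    // Filters a list of file names down to the common image formats,
    // so the app can confirm there is something to upload before
    // calling uploadFile() on the capture folder.
    public static List<String> filterImages(List<String> fileNames) {
        List<String> images = new ArrayList<>();
        for (String name : fileNames) {
            String lower = name.toLowerCase(Locale.ROOT);
            if (lower.endsWith(".jpg") || lower.endsWith(".jpeg")
                    || lower.endsWith(".png")) {
                images.add(name);
            }
        }
        return images;
    }
}
```

In the app itself you would feed this from File.listFiles() on the capture directory and warn the user if the resulting list is empty.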
6. Query the task status.
Call getInstance() of Modeling3dReconstructTaskUtils to create a task processing instance, passing the current context.
Code:
// Initialize the task processing class.
modeling3dReconstructTaskUtils = Modeling3dReconstructTaskUtils.getInstance(Modeling3dDemo.getApp());
Call queryTask to query the status of the 3D object reconstruction task.
Code:
// Query the task status: 0 (images to be uploaded), 1 (image upload completed), 2 (model being generated), 3 (model generation completed), 4 (model generation failed).
Modeling3dReconstructQueryResult queryResult = modeling3dReconstructTaskUtils.queryTask(task.getTaskId());
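For logging or UI display, the numeric status code can be mapped to a readable label with a small helper. The class and label strings below are our own naming, not part of the SDK; the code would be fed from the query result's status accessor.

```java
public class ReconstructStatus {
    // Maps the numeric task status returned by the query
    // to a human-readable label.
    public static String describe(int status) {
        switch (status) {
            case 0:  return "Images to be uploaded";
            case 1:  return "Image upload completed";
            case 2:  return "Model being generated";
            case 3:  return "Model generation completed";
            case 4:  return "Model generation failed";
            default: return "Unknown status: " + status;
        }
    }
}
```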
7. Create a listener callback to process the model file download result.
Create a listener callback in which you can configure the operations triggered upon download success and failure.
Code:
// Create a download callback listener
private Modeling3dReconstructDownloadListener modeling3dReconstructDownloadListener = new Modeling3dReconstructDownloadListener() {
@Override
public void onDownloadProgress(String taskId, double progress, Object ext) {
((Activity) mContext).runOnUiThread(new Runnable() {
@Override
public void run() {
dialog.show();
}
});
}
@Override
public void onResult(String taskId, Modeling3dReconstructDownloadResult result, Object ext) {
((Activity) mContext).runOnUiThread(new Runnable() {
@Override
public void run() {
Toast.makeText(getContext(), "Download complete", Toast.LENGTH_SHORT).show();
TaskInfoAppDbUtils.updateDownloadByTaskId(taskId, 1);
dialog.dismiss();
}
});
}
@Override
public void onError(String taskId, int errorCode, String message) {
LogUtil.e(taskId + " <---> " + errorCode + message);
((Activity) mContext).runOnUiThread(new Runnable() {
@Override
public void run() {
Toast.makeText(getContext(), "Download failed." + message, Toast.LENGTH_SHORT).show();
dialog.dismiss();
}
});
}
};
8. Pass the download listener callback to the engine to download the generated model file.
Pass the download listener callback to the engine. Call downloadModel, passing the task ID obtained in step 3 and the path for saving the model file, to download it.
Code:
// Set the listener for the model file download task.
modeling3dReconstructEngine.setReconstructDownloadListener(modeling3dReconstructDownloadListener);
// Download the model file.
modeling3dReconstructEngine.downloadModel(appDb.getTaskId(), appDb.getFileSavePath());
Notes
1. To deliver an ideal modeling result, 3D object reconstruction has some requirements on the object to be modeled. For example, the object should have rich textures and a fixed shape. The object is expected to be non-reflective and medium-sized. Transparency or semi-transparency is not recommended. An object that meets these requirements may fall into one of the following types: goods (including plush toys, bags, and shoes), furniture (like sofas), and cultural relics (like bronzes, stone artifacts, and wooden artifacts).
2. The object dimensions should be within the range of 15 x 15 x 15 cm to 150 x 150 x 150 cm. (Larger dimensions require a longer modeling time.)
3. Modeling for the human body or face is not yet supported by the capability.
4. Suggestions for image capture: Put a single object on a stable plane of a pure color. The environment should be well lit and plain. Keep all images in focus, free from blur caused by motion or shaking, and take pictures of the object from various angles, including from below, head-on, and from above. Uploading more than 50 images for an object is recommended. Move the camera as slowly as possible, and do not suddenly alter the angle when taking pictures. The object-to-image ratio should be as big as possible, and no part of the object should be missing.
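The size requirement in note 2 can be sketched as a simple validation check. This helper is our own, not part of the SDK, and assumes the app can estimate the object's dimensions in centimeters:

```java
public class ObjectSizeCheck {
    // Checks whether all three object dimensions (in cm) fall within the
    // 15 x 15 x 15 cm to 150 x 150 x 150 cm range recommended for
    // 3D object reconstruction.
    public static boolean isSupportedSize(double widthCm, double heightCm,
                                          double depthCm) {
        return inRange(widthCm) && inRange(heightCm) && inRange(depthCm);
    }

    private static boolean inRange(double cm) {
        return cm >= 15.0 && cm <= 150.0;
    }
}
```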
With all these requirements in mind, as well as the development procedure above, we are now ready to create a 3D model like the shoe model shown earlier. We look forward to seeing the models you create with this capability in the comments section below.
Reference
Home page of 3D Modeling Kit
Service introduction to 3D Modeling Kit
Detailed information about 3D object reconstruction