How to Integrate Face Stickers into Your Apps with HUAWEI ML Kit - Huawei Developers

For more information like this, you can visit the HUAWEI Developer Forum.
Original link: https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0201333611965550036&fid=0101187876626530001
Introduction
Nowadays, you’ll see cute and funny face stickers everywhere. They’re not only used in camera apps, but also in social media and entertainment apps. In this post, I’m going to show you how to create a 2D sticker using HUAWEI ML Kit. We’ll share the development process for 3D stickers soon, so keep an eye out!
Scenarios
Apps for taking and editing photos, such as beauty cameras and social media apps (TikTok, Weibo, WeChat, etc.), often offer a range of stickers which can be used to customize images. With these stickers, users can create content which is more engaging and shareable.
Preparations
Add the Huawei Maven Repository to the Project-Level build.gradle File
Open the build.gradle file in the root directory of your Android Studio project.
Add the Maven repository address.
Code:
buildscript {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}
allprojects {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}
Add SDK Dependencies to the App-Level build.gradle File
Code:
// Face detection SDK.
implementation 'com.huawei.hms:ml-computer-vision-face:2.0.1.300'
// Face detection model.
implementation 'com.huawei.hms:ml-computer-vision-face-shape-point-model:2.0.1.300'
Apply for Camera, Network Access, and Storage Permissions in the AndroidManifest.xml File
Code:
<!--Camera permission-->
<uses-feature android:name="android.hardware.camera" />
<uses-permission android:name="android.permission.CAMERA" />
<!--Network access permission-->
<uses-permission android:name="android.permission.INTERNET" />
<!--Write permission-->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<!--Read permission-->
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
Code Development
Set the Face Analyzer
Code:
MLFaceAnalyzerSetting detectorOptions;
detectorOptions = new MLFaceAnalyzerSetting.Factory()
.setFeatureType(MLFaceAnalyzerSetting.TYPE_UNSUPPORT_FEATURES)
.setShapeType(MLFaceAnalyzerSetting.TYPE_SHAPES)
.allowTracing(MLFaceAnalyzerSetting.MODE_TRACING_FAST)
.create();
detector = MLAnalyzerFactory.getInstance().getFaceAnalyzer(detectorOptions);
Obtain Face Contour Points and Pass Them to the FacePointEngine
Use the camera callback to obtain camera frame data, then call the face analyzer to obtain face contour points, and pass the points to the FacePointEngine. This will allow the sticker filter to use them later.
Code:
@Override
public void onPreviewFrame(final byte[] imgData, final Camera camera) {
    int width = mPreviewWidth;
    int height = mPreviewHeight;
    long startTime = System.currentTimeMillis();
    // Set the shooting directions of the front and rear cameras to be the same.
    if (isFrontCamera()) {
        mOrientation = 0;
    } else {
        mOrientation = 2;
    }
    MLFrame.Property property =
            new MLFrame.Property.Creator()
                    .setFormatType(ImageFormat.NV21)
                    .setWidth(width)
                    .setHeight(height)
                    .setQuadrant(mOrientation)
                    .create();
    ByteBuffer data = ByteBuffer.wrap(imgData);
    // Call the face analyzer API.
    SparseArray<MLFace> faces =
            detector.analyseFrame(MLFrame.fromByteBuffer(data, property));
    // Determine whether face information is obtained.
    if (faces.size() > 0) {
        MLFace mLFace = faces.get(0);
        EGLFace EGLFace = FacePointEngine.getInstance().getOneFace(0);
        EGLFace.pitch = mLFace.getRotationAngleX();
        EGLFace.yaw = mLFace.getRotationAngleY();
        EGLFace.roll = mLFace.getRotationAngleZ() - 90;
        if (isFrontCamera()) {
            EGLFace.roll = -EGLFace.roll;
        }
        if (EGLFace.vertexPoints == null) {
            EGLFace.vertexPoints = new PointF[131];
        }
        int index = 0;
        // Obtain the coordinates of the user's face contour points and convert them to
        // floating-point numbers in the normalized OpenGL coordinate system.
        for (MLFaceShape contour : mLFace.getFaceShapeList()) {
            if (contour == null) {
                continue;
            }
            List<MLPosition> points = contour.getPoints();
            for (int i = 0; i < points.size(); i++) {
                MLPosition point = points.get(i);
                float x = (point.getY() / height) * 2 - 1;
                float y = (point.getX() / width) * 2 - 1;
                if (isFrontCamera()) {
                    x = -x;
                }
                PointF pointF = new PointF(x, y);
                EGLFace.vertexPoints[index] = pointF;
                index++;
            }
        }
        // Insert a face object.
        FacePointEngine.getInstance().putOneFace(0, EGLFace);
        // Set the number of faces.
        FacePointEngine.getInstance().setFaceSize(faces != null ? faces.size() : 0);
    } else {
        FacePointEngine.getInstance().clearAll();
    }
    long endTime = System.currentTimeMillis();
    Log.d("TAG", "Face detect time: " + String.valueOf(endTime - startTime));
}
You can see the face contour points which the ML Kit API returns in the image below.
Sticker JSON Data Definition
Code:
public class FaceStickerJson {
public int[] centerIndexList; // Center coordinate index list. If the list contains multiple indexes, these indexes are used to calculate the central point.
public float offsetX; // X-axis offset relative to the center coordinate of the sticker, in pixels.
public float offsetY; // Y-axis offset relative to the center coordinate of the sticker, in pixels.
public float baseScale; // Base scale factor of the sticker.
public int startIndex; // Face start index, which is used for computing the face width.
public int endIndex; // Face end index, which is used for computing the face width.
public int width; // Sticker width.
public int height; // Sticker height.
public int frames; // Number of sticker frames.
public int action; // Action. 0 indicates default display. This is used for processing the sticker action.
public String stickerName; // Sticker name, which is used for marking the folder or PNG file path.
public int duration; // Sticker frame display interval.
public boolean stickerLooping; // Indicates whether to perform rendering in loops for the sticker.
public int maxCount; // Maximum number of rendering times.
...
}
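To use this definition at runtime, the sticker filter has to read the JSON file from the assets directory and deserialize it. Below is a minimal Kotlin sketch of that step, assuming Gson is on the classpath; the wrapper class, the helper name, and the asset path are illustrative only and not part of the demo's actual code.
Code:
// Hypothetical wrapper matching the "stickerList" array in the JSON shown in the next section.
data class FaceStickerList(val stickerList: List<FaceStickerJson>)

// Reads e.g. assets/cat/cat.json and parses it into sticker descriptions.
fun loadStickerData(context: Context, jsonPathInAssets: String): List<FaceStickerJson> {
    val json = context.assets.open(jsonPathInAssets)
        .bufferedReader()
        .use { it.readText() }
    return Gson().fromJson(json, FaceStickerList::class.java).stickerList
}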
Make a Cat Sticker
Create a JSON file for the cat sticker. Using the face index, find the center point between the eyebrows (84) and the point on the tip of the nose (85), which are used to anchor the cat's ears and nose. Then place the JSON file and the images in the assets directory.
Code:
{
"stickerList": [{
"type": "sticker",
"centerIndexList": [84],
"offsetX": 0.0,
"offsetY": 0.0,
"baseScale": 1.3024,
"startIndex": 11,
"endIndex": 28,
"width": 495,
"height": 120,
"frames": 2,
"action": 0,
"stickerName": "nose",
"duration": 100,
"stickerLooping": 1,
"maxcount": 5
}, {
"type": "sticker",
"centerIndexList": [83],
"offsetX": 0.0,
"offsetY": -1.1834,
"baseScale": 1.3453,
"startIndex": 11,
"endIndex": 28,
"width": 454,
"height": 150,
"frames": 2,
"action": 0,
"stickerName": "ear",
"duration": 100,
"stickerLooping": 1,
"maxcount": 5
}]
}
Render the Sticker to a Texture
We use GLSurfaceView to render the sticker to a texture, which is easier than using TextureView. Instantiate the sticker filter in onSurfaceCreated, pass the sticker path to it, and open the camera.
Code:
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
GLES30.glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
mTextures = new int[1];
mTextures[0] = OpenGLUtils.createOESTexture();
mSurfaceTexture = new SurfaceTexture(mTextures[0]);
mSurfaceTexture.setOnFrameAvailableListener(this);
// Pass the samplerExternalOES into the texture.
cameraFilter = new CameraFilter(this.context);
// Set the face sticker path in the assets directory.
String folderPath ="cat";
stickerFilter = new FaceStickerFilter(this.context,folderPath);
// Create a screen filter object.
screenFilter = new BaseFilter(this.context);
facePointsFilter = new FacePointsFilter(this.context);
mEGLCamera.openCamera();
}
Initialize the Sticker Filter in onSurfaceChanged
Code:
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
Log.d(TAG, "onSurfaceChanged. width: " + width + ", height: " + height);
int previewWidth = mEGLCamera.getPreviewWidth();
int previewHeight = mEGLCamera.getPreviewHeight();
if (width > height) {
setAspectRatio(previewWidth, previewHeight);
} else {
setAspectRatio(previewHeight, previewWidth);
}
// Set the image size, create a FrameBuffer, and set the display size.
cameraFilter.onInputSizeChanged(previewWidth, previewHeight);
cameraFilter.initFrameBuffer(previewWidth, previewHeight);
cameraFilter.onDisplaySizeChanged(width, height);
stickerFilter.onInputSizeChanged(previewHeight, previewWidth);
stickerFilter.initFrameBuffer(previewHeight, previewWidth);
stickerFilter.onDisplaySizeChanged(width, height);
screenFilter.onInputSizeChanged(previewWidth, previewHeight);
screenFilter.initFrameBuffer(previewWidth, previewHeight);
screenFilter.onDisplaySizeChanged(width, height);
facePointsFilter.onInputSizeChanged(previewHeight, previewWidth);
facePointsFilter.onDisplaySizeChanged(width, height);
mEGLCamera.startPreview(mSurfaceTexture);
}
Draw the Sticker on the Screen Using onDrawFrame
Code:
@Override
public void onDrawFrame(GL10 gl) {
int textureId;
// Clear the screen and depth buffer.
GLES30.glClear(GLES30.GL_COLOR_BUFFER_BIT | GLES30.GL_DEPTH_BUFFER_BIT);
// Update a texture image.
mSurfaceTexture.updateTexImage();
// Obtain the SurfaceTexture transform matrix.
mSurfaceTexture.getTransformMatrix(mMatrix);
// Set the camera display transform matrix.
cameraFilter.setTextureTransformMatrix(mMatrix);
// Draw the camera texture.
textureId = cameraFilter.drawFrameBuffer(mTextures[0],mVertexBuffer,mTextureBuffer);
// Draw the sticker texture.
textureId = stickerFilter.drawFrameBuffer(textureId,mVertexBuffer,mTextureBuffer);
// Draw on the screen.
screenFilter.drawFrame(textureId , mDisplayVertexBuffer, mDisplayTextureBuffer);
if(drawFacePoints){
facePointsFilter.drawFrame(textureId, mDisplayVertexBuffer, mDisplayTextureBuffer);
}
}
And that’s it, your face sticker is good to go.
Let’s see it in action!
We have open-sourced the demo code on GitHub. You can download the demo and try it out:
https://github.com/HMS-Core/hms-ml-demo/tree/master/Face2D-Sticker
For more details, you can visit our official website:
https://developer.huawei.com/consumer/en/hms
Our development documentation page, where you can find the documents you need:
https://github.com/HMS-Core
Stack Overflow, where you can get help with any integration problems:
https://stackoverflow.com/questions/tagged/huawei-mobile-services?tab=Newest

Related

All About Maps - Episode 2: Moving Map Camera to Bounded Regions

For more articles like this, you can visit the HUAWEI Developer Forum and Medium.
Previously on All About Maps: Episode 1:
The principles of clean architecture
The importance of eliminating map provider dependencies with abstraction
Drawing polylines and markers on Mapbox Maps, Google Maps (GMS), and Huawei Maps (HMS)
Episode 2: Bounded Regions
Welcome to the second episode of AllAboutMaps. To understand this blog post better, I suggest reading Episode 1 first; otherwise, it will be difficult to follow the context.
In this episode we will talk about bounded regions:
The GPX parser datasource will parse the file to get the list of attraction points (waypoints in this case).
The datasource module will emit the bounded region information every 3 seconds.
A rectangular bounded region is calculated from the center of each attraction point with a given radius, using a utility method (no dependency on any map provider!).
We will move the map camera to the bounded region each time a new bounded region is emitted.
ChangeLog since Episode 1
As we all know, software development is a continuous process. It helps a lot to have reviewers who can comment on your code, point out issues, or come up with suggestions. Since this project is a one-person task, it is not always easy to spot the flaws in the code during implementation. The software gets better and hopefully evolves in a good way as we add new features. Once again, I would like to add the disclaimer that my suggestions here are not silver bullets. There are always better approaches. I am more than happy to hear your suggestions in the comments!
You can see the full code change between episode 1 and 2 here:
https://github.com/ulusoyca/AllAboutMaps/compare/episode_1-parse-gpx...episode_2-bounded-region
Here are the main changes I would like to mention:
1- Added MapLifecycleHandlerFragment.kt base class
In episode 1, I had one feature: show the polyline and markers on the map. The base class of all three fragments (RouteInfoMapboxFragment, RouteInfoGoogleFragment, and RouteInfoHuaweiFragment) called these lifecycle methods. When I added another feature (showing bounded regions), I realized that the new base class for this feature implemented the same lifecycle methods again. This is against the DRY rule (Don't Repeat Yourself)! Here is the base class I introduced, so that each feature's base class can extend it:
Code:
/**
* The base fragment handles map lifecycle. To use it, the mapview classes should implement
* [AllAboutMapView] interface.
*/
abstract class MapLifecycleHandlerFragment : DaggerFragment() {
protected lateinit var mapView: AllAboutMapView
override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
mapView.onMapViewCreate(savedInstanceState)
}
override fun onResume() {
super.onResume()
mapView.onMapViewResume()
}
override fun onPause() {
super.onPause()
mapView.onMapViewPause()
}
override fun onStart() {
super.onStart()
mapView.onMapViewStart()
}
override fun onStop() {
super.onStop()
mapView.onMapViewStop()
}
override fun onDestroyView() {
super.onDestroyView()
mapView.onMapViewDestroy()
}
override fun onSaveInstanceState(outState: Bundle) {
super.onSaveInstanceState(outState)
mapView.onMapViewSaveInstanceState(outState)
}
}
Let's see the big picture now:
2- Refactored the abstraction for styles, marker options, and line options.
In the first episode, we encapsulated a dark map style inside each custom MapView. When I intended to use the outdoor map style for the second episode, I realized that my first approach was a mistake: a specific style should not be encapsulated inside a MapView, and each feature should be able to select a different style. I moved the responsibility of loading the style from the MapViews to the fragments. Once the style is loaded, the style object is passed to the MapView.
Code:
override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
mapView = binding.mapView
super.onViewCreated(view, savedInstanceState)
binding.mapView.getMapAsync { mapboxMap ->
binding.mapView.onMapReady(mapboxMap)
mapboxMap.setStyle(Style.OUTDOORS) {
binding.mapView.onStyleLoaded(it)
onMapStyleLoaded()
}
}
}
I also realized the need for MarkerOptions and LineOptions entities in our domain module:
Code:
data class MarkerOptions(
var latLng: LatLng,
var text: String? = null,
@DrawableRes var iconResId: Int,
var iconMapStyleId: String,
@ColorRes var iconColor: Int,
@ColorRes var textColor: Int
)
Code:
data class LineOptions(
var latLngs: List<LatLng>,
@DimenRes var lineWidth: Int,
@ColorRes var lineColor: Int
)
The entities above have properties based on the needs of my project. I only care about the color, text, location, and icon properties of the marker. For the polyline, I will customize the width, color, and text properties. If your project needs to customize the marker offset, opacity, line join type, or other properties, feel free to add them.
These entities are mapped to corresponding map provider classes:
LineOptions:
Code:
private fun LineOptions.toGoogleLineOptions(context: Context) = PolylineOptions()
.color(ContextCompat.getColor(context, lineColor))
.width(resources.getDimension(lineWidth))
.addAll(latLngs.map { it.toGoogleLatLng() })
Code:
private fun LineOptions.toHuaweiLineOptions(context: Context) = PolylineOptions()
.color(ContextCompat.getColor(context, lineColor))
.width(resources.getDimension(lineWidth))
.addAll(latLngs.map { it.toHuaweiLatLng() })
Code:
private fun LineOptions.toMapboxLineOptions(context: Context): MapboxLineOptions {
val color = ColorUtils.colorToRgbaString(ContextCompat.getColor(context, lineColor))
return MapboxLineOptions()
.withLineColor(color)
.withLineWidth(resources.getDimension(lineWidth))
.withLatLngs(latLngs.map { it.toMapboxLatLng() })
}
MarkerOptions
Code:
private fun DomainMarkerOptions.toGoogleMarkerOptions(): GoogleMarkerOptions {
var markerOptions = GoogleMarkerOptions()
.icon(BitmapDescriptorFactory.fromResource(iconResId))
.position(latLng.toGoogleLatLng())
markerOptions = text?.let { markerOptions.title(it) } ?: markerOptions
return markerOptions
}
Code:
private fun DomainMarkerOptions.toHuaweiMarkerOptions(context: Context): HuaweiMarkerOptions {
BitmapDescriptorFactory.setContext(context)
var markerOptions = HuaweiMarkerOptions()
.icon(BitmapDescriptorFactory.fromResource(iconResId))
.position(latLng.toHuaweiLatLng())
markerOptions = text?.let { markerOptions.title(it) } ?: markerOptions
return markerOptions
}
Code:
private fun DomainMarkerOptions.toMapboxSymbolOptions(context: Context, style: Style): SymbolOptions {
val drawable = ContextCompat.getDrawable(context, iconResId)
val bitmap = BitmapUtils.getBitmapFromDrawable(drawable)!!
style.addImage(iconMapStyleId, bitmap)
val iconColor = ColorUtils.colorToRgbaString(ContextCompat.getColor(context, iconColor))
val textColor = ColorUtils.colorToRgbaString(ContextCompat.getColor(context, textColor))
var symbolOptions = SymbolOptions()
.withIconImage(iconMapStyleId)
.withLatLng(latLng.toMapboxLatLng())
.withIconColor(iconColor)
.withTextColor(textColor)
symbolOptions = text?.let { symbolOptions.withTextField(it) } ?: symbolOptions
return symbolOptions
}
There are minor technical details for handling the differences between the map provider APIs, but they are outside the scope of this blog post.
Earlier, our methods for drawing a polyline and a marker looked like this:
Code:
fun drawPolyline(latLngs: List<LatLng>, @ColorRes mapLineColor: Int)
fun drawMarker(latLng: LatLng, icon: Bitmap, name: String?)
After this refactor they look like this:
Code:
fun drawPolyline(lineOptions: LineOptions)
fun drawMarker(markerOptions: MarkerOptions)
It is a code smell when the number of arguments in a method grows every time you add a new feature. That's why we created data holders to pass around.
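For illustration, a call site after the refactor might look like the Kotlin sketch below. The resource IDs and the way the waypoints expose their coordinates (it.latLng) are assumptions for this example, not code from the repository.
Code:
// The fragment builds the provider-independent options; the MapView maps them internally.
mapView.drawPolyline(
    LineOptions(
        latLngs = routeInfo.wayPoints.map { it.latLng },
        lineWidth = R.dimen.polyline_width,
        lineColor = R.color.colorPrimary
    )
)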
3- A secondary constructor method for LatLng
While working on this feature, I realized that a secondary constructor that builds the LatLng entity from double values would also be useful when mapping the entities to the different map providers. I mentioned the reason why I use inline classes for Latitude and Longitude in the first episode.
Code:
inline class Latitude(val value: Float)
inline class Longitude(val value: Float)
data class LatLng(
val latitude: Latitude,
val longitude: Longitude
) {
constructor(latitude: Double, longitude: Double) : this(
Latitude(latitude.toFloat()),
Longitude(longitude.toFloat())
)
val latDoubleValue: Double
get() = latitude.value.toDouble()
val lngDoubleValue: Double
get() = longitude.value.toDouble()
}
Bounded Region
A bounded region is used to describe a particular area (in many cases rectangular) on a map. We usually need two coordinate pairs to describe a region: the southwest and northeast corners. It is well described in this Stack Overflow answer (https://stackoverflow.com/a/31029389).
As expected, Mapbox, GMS, and HMS maps all provide LatLngBounds classes. However, they require a pair of coordinates to construct the bounds, and in our case we only have one location for each attraction point. We want to show a region with a given radius around that center on the map, so we need to do a little extra work to calculate the coordinate pair. But first, let's add the LatLngBounds entity to our domain module (a sketch of the radius-based utility follows it):
Code:
data class LatLngBounds(
val southwestCorner: LatLng,
val northeastCorner: LatLng
)
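Here is a minimal Kotlin sketch of the provider-independent utility mentioned earlier. It expands a single center point into a rectangular LatLngBounds using a radius in meters and a simple equirectangular approximation (one degree of latitude is roughly 111,320 m, and longitude degrees shrink with the cosine of the latitude). The function name and the approximation are mine; the actual repository may compute the corners differently.
Code:
import kotlin.math.cos

fun boundsFromCenter(center: LatLng, radiusInMeters: Double): LatLngBounds {
    val latDelta = radiusInMeters / 111_320.0
    val lngDelta = radiusInMeters / (111_320.0 * cos(Math.toRadians(center.latDoubleValue)))
    return LatLngBounds(
        southwestCorner = LatLng(center.latDoubleValue - latDelta, center.lngDoubleValue - lngDelta),
        northeastCorner = LatLng(center.latDoubleValue + latDelta, center.lngDoubleValue + lngDelta)
    )
}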
Implementation
First, let's see the big (literally!) picture:
Thanks to our clean architecture, it is very easy to add a new feature with a new use case. Let's start with the domain module as always:
Code:
/**
* Emits the list of waypoints with a given update interval
*/
class StartWaypointPlaybackUseCase
@Inject constructor(
private val routeInfoRepository: RouteInfoRepository
) {
suspend operator fun invoke(
points: List<Point>,
updateInterval: Long
): Flow<Point> {
return routeInfoRepository.startWaypointPlayback(points, updateInterval)
}
}
The user interacts with the app to start the playback of waypoints. I call this playback because playback is "the reproduction of previously recorded sounds or moving images." We have a list of points to be played back over a given time: we will periodically move the map camera from one bounded region to another. The waypoints are emitted from the datasource at a given update interval. The domain module doesn't know the implementation details; it simply forwards the request to our datasource module.
Let's see our datasource module. We added a new method in RouteInfoDataRepository:
Code:
override suspend fun startWaypointPlayback(
points: List<Point>,
updateInterval: Long
): Flow<Point> = flow {
val routeInfo = gpxFileDatasource.parseGpxFile()
routeInfo.wayPoints.forEachIndexed { index, waypoint ->
if (index != 0) {
delay(updateInterval)
}
emit(waypoint)
}
}.flowOn(Dispatchers.Default)
Thanks to Kotlin coroutines, it is very simple to emit the points with a delay. If you are interested in learning more about the Flow API, Roman Elizarov's talks are the best place to start.
Long story short: our app module invokes the use case from the domain module, the domain module forwards the request to the datasource module, and the corresponding repository class inside the datasource module gets the data from the GPX datasource and orchestrates the data flow.
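To make that flow concrete, here is a hypothetical ViewModel-side sketch in Kotlin (assuming AndroidX ViewModel KTX and that each emitted Point exposes its coordinates as latLng). It collects the waypoints emitted by the use case, expands each one into a bounded region with the utility sketched earlier, and publishes it for the fragment to move the map camera. The class and property names are illustrative, not from the repository.
Code:
class WaypointPlaybackViewModel @Inject constructor(
    private val startWaypointPlaybackUseCase: StartWaypointPlaybackUseCase
) : ViewModel() {

    private val _cameraBounds = MutableLiveData<LatLngBounds>()
    val cameraBounds: LiveData<LatLngBounds> = _cameraBounds

    fun startPlayback(points: List<Point>) {
        viewModelScope.launch {
            startWaypointPlaybackUseCase(points, updateInterval = 3000L)
                .collect { waypoint ->
                    // Expand the emitted waypoint into a bounded region for the map camera.
                    _cameraBounds.value = boundsFromCenter(waypoint.latLng, radiusInMeters = 100.0)
                }
        }
    }
}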
For the full content, you can visit the HUAWEI Developer Forum.

CameraX — Camera Kit comparison

For more information like this, you can visit the HUAWEI Developer Forum.
Original link: https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0201332917232620018&fid=0101187876626530001
CameraX
CameraX is a Jetpack support library built to make camera app development easier. It provides a consistent and easy-to-use API surface that works across most Android devices, with backward compatibility to Android 5.0 (API level 21).
While it leverages the capabilities of camera2, it uses a simpler, use case-based approach that is lifecycle-aware. It also resolves device compatibility issues for you, so that you don't have to include device-specific code in your codebase. These features reduce the amount of code you need to write when adding camera capabilities to your app.
Use Cases
CameraX introduces use cases, which allow you to focus on the task you need to get done instead of spending time managing device-specific nuances. There are several basic use cases:
Preview: get an image on the display
Image analysis: access a buffer seamlessly for use in your algorithms, such as to pass into MLKit
Image capture: save high-quality images
CameraX has an optional add-on called Extensions, which allows you to access the same features and capabilities as the native camera app that ships with the device, with just two lines of code.
The first set of capabilities available includes Portrait, HDR, Night, and Beauty. These capabilities are available on supported devices.
Implementing Preview
When adding a preview to your app, use PreviewView, which is a View that can be cropped, scaled, and rotated for proper display.
The image preview streams to a surface inside the PreviewView when the camera becomes active.
Implementing a preview for CameraX using PreviewView involves the following steps, which are covered in later sections (a condensed sketch follows this list):
Optionally configure a CameraXConfig.Provider.
Add a PreviewView to your layout.
Request a CameraProvider.
On View creation, check for the CameraProvider.
Select a camera and bind the lifecycle and use cases.
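As a condensed Kotlin sketch of steps 3 to 5, the snippet below requests a ProcessCameraProvider and binds a Preview use case, using the same beta03-era surface-provider API as the fuller startCameraFront() example later in this post. Here, context, lifecycleOwner, and previewView are assumed to come from your Activity/Fragment and layout.
Code:
val cameraProviderFuture = ProcessCameraProvider.getInstance(context)
cameraProviderFuture.addListener(Runnable {
    val cameraProvider = cameraProviderFuture.get()
    val preview = Preview.Builder().build()
    val selector = CameraSelector.Builder()
        .requireLensFacing(CameraSelector.LENS_FACING_BACK)
        .build()
    cameraProvider.unbindAll()
    val camera = cameraProvider.bindToLifecycle(lifecycleOwner, selector, preview)
    preview.setSurfaceProvider(previewView.createSurfaceProvider(camera.cameraInfo))
}, ContextCompat.getMainExecutor(context))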
Using PreviewView has some limitations. When using PreviewView, you can’t do any of the following things:
Create a SurfaceTexture to set on TextureView and PreviewSurfaceProvider.
Retrieve the SurfaceTexture from TextureView and set it on PreviewSurfaceProvider.
Get the Surface from SurfaceView and set it on PreviewSurfaceProvider.
If any of these happen, then the Preview will stop streaming frames to the PreviewView.
In your app-level build.gradle file, add the following:
Code:
// CameraX core library using the camera2 implementation
def camerax_version = "1.0.0-beta03"
def camerax_extensions = "1.0.0-alpha10"
implementation "androidx.camera:camera-core:${camerax_version}"
implementation "androidx.camera:camera-camera2:${camerax_version}"
// If you want to additionally use the CameraX Lifecycle library
implementation "androidx.camera:camera-lifecycle:${camerax_version}"
// If you want to additionally use the CameraX View class
implementation "androidx.camera:camera-view:${camerax_extensions}"
// If you want to additionally use the CameraX Extensions library
implementation "androidx.camera:camera-extensions:${camerax_extensions}"
In your layout .xml file, using the PreviewView is highly recommended:
Code:
<androidx.camera.view.PreviewView
    android:id="@+id/camera"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:contentDescription="@string/preview_area"
    android:importantForAccessibility="no"/>
Let's start writing the code for our previewView in our Activity or Fragment:
Code:
private val REQUIRED_PERMISSIONS = arrayOf(Manifest.permission.CAMERA)
private lateinit var cameraSelector: CameraSelector
private lateinit var previewView: PreviewView
private lateinit var cameraProviderFeature: ListenableFuture<ProcessCameraProvider>
private lateinit var cameraControl: CameraControl
private lateinit var cameraInfo: CameraInfo
private lateinit var imageCapture: ImageCapture
private lateinit var imageAnalysis: ImageAnalysis
private lateinit var torchView: ImageView
private val executor = Executors.newSingleThreadExecutor()
takePicture() method:
Code:
fun takePicture() {
val file = createFile(
outputDirectory,
FILENAME,
PHOTO_EXTENSION
)
val outputFileOptions = ImageCapture.OutputFileOptions.Builder(file).build()
imageCapture.takePicture(
outputFileOptions,
executor,
object : ImageCapture.OnImageSavedCallback {
override fun onImageSaved(outputFileResults: ImageCapture.OutputFileResults) {
val msg = "Photo capture succeeded: ${file.absolutePath}"
previewView.post {
Toast.makeText(
context.applicationContext,
msg,
Toast.LENGTH_SHORT
).show()
//You can create a task to save your image to any database you like
getImageTask(file)
}
}
override fun onError(exception: ImageCaptureException) {
val msg = "Photo capture failed: ${exception.message}"
showLogError(mTAG, msg)
}
})
}
As mentioned, you can get a URI from the file and use it anywhere you like:
Code:
fun getImageTask(file: File) {
val uri = Uri.fromFile(file)
}
This part is an example of starting the front camera; with minor changes, you can switch between the front and back cameras:
Code:
fun startCameraFront() {
showLogDebug(mTAG, "startCameraFront")
CameraX.unbindAll()
torchView.visibility = View.INVISIBLE
imagePreviewView = Preview.Builder().apply {
setTargetAspectRatio(AspectRatio.RATIO_4_3)
setTargetRotation(previewView.display.rotation)
setDefaultResolution(Size(1920, 1080))
setMaxResolution(Size(3024, 4032))
}.build()
imageAnalysis = ImageAnalysis.Builder().apply {
setImageQueueDepth(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
}.build()
imageAnalysis.setAnalyzer(executor, LuminosityAnalyzer())
imageCapture = ImageCapture.Builder().apply {
setCaptureMode(ImageCapture.CAPTURE_MODE_MAXIMIZE_QUALITY)
}.build()
cameraSelector =
CameraSelector.Builder().requireLensFacing(CameraSelector.LENS_FACING_FRONT).build()
cameraProviderFeature.addListener(Runnable {
val cameraProvider = cameraProviderFeature.get()
val camera = cameraProvider.bindToLifecycle(
this,
cameraSelector,
imagePreviewView,
imageAnalysis,
imageCapture
)
previewView.preferredImplementationMode =
PreviewView.ImplementationMode.TEXTURE_VIEW
imagePreviewView.setSurfaceProvider(previewView.createSurfaceProvider(camera.cameraInfo))
}, ContextCompat.getMainExecutor(context.applicationContext))
}
The LuminosityAnalyzer below is a simple example of the image analysis use case (it computes the average luma of each frame), and it is a good starting point for your own analyzers:
Code:
private class LuminosityAnalyzer : ImageAnalysis.Analyzer {
private var lastAnalyzedTimestamp = 0L
/**
* Helper extension function used to extract a byte array from an
* image plane buffer
*/
private fun ByteBuffer.toByteArray(): ByteArray {
rewind() // Rewind the buffer to zero
val data = ByteArray(remaining())
get(data) // Copy the buffer into a byte array
return data // Return the byte array
}
override fun analyze(image: ImageProxy) {
val currentTimestamp = System.currentTimeMillis()
// Calculate the average luma no more often than every second
if (currentTimestamp - lastAnalyzedTimestamp >=
TimeUnit.SECONDS.toMillis(1)
) {
val buffer = image.planes[0].buffer
val data = buffer.toByteArray()
val pixels = data.map { it.toInt() and 0xFF }
val luma = pixels.average()
showLogDebug(mTAG, "Average luminosity: $luma")
lastAnalyzedTimestamp = currentTimestamp
}
image.close()
}
}
Now, before saving our image to our folder, let's define our constants:
Code:
companion object {
private const val REQUEST_CODE_PERMISSIONS = 10
private const val mTAG = "ExampleTag"
private const val FILENAME = "yyyy-MM-dd-HH-mm-ss-SSS"
private const val PHOTO_EXTENSION = ".jpg"
private var recPath = Environment.getExternalStorageDirectory().path + "/Pictures/YourNewFolderName"
fun getOutputDirectory(context: Context): File {
val appContext = context.applicationContext
val mediaDir = context.externalMediaDirs.firstOrNull()?.let {
File(
recPath
).apply { mkdirs() }
}
return if (mediaDir != null && mediaDir.exists()) mediaDir else appContext.filesDir
}
fun createFile(baseFolder: File, format: String, extension: String) =
File(
baseFolder, SimpleDateFormat(format, Locale.ROOT)
.format(System.currentTimeMillis()) + extension
)
}
Simple torch control:
Code:
fun toggleTorch() {
when (cameraInfo.torchState.value) {
TorchState.ON -> {
cameraControl.enableTorch(false)
}
else -> {
cameraControl.enableTorch(true)
}
}
}
private fun setTorchStateObserver() {
cameraInfo.torchState.observe(this, androidx.lifecycle.Observer { state ->
if (state == TorchState.ON) {
torchView.setImageResource(R.drawable.ic_flash_on)
} else {
torchView.setImageResource(R.drawable.ic_flash_off)
}
})
}
Remember, torchView can be any View type you want it to be:
Code:
torchView.setOnClickListener {
toggleTorch()
setTorchStateObserver()
}
Now, in onCreateView() for Fragments or in onCreate() for Activities, you can initialize previewView and start using it:
Code:
// Start the camera only if the camera permission has already been granted
// (use "this" instead of requireContext() inside an Activity).
if (REQUIRED_PERMISSIONS.all {
        ContextCompat.checkSelfPermission(requireContext(), it) == PackageManager.PERMISSION_GRANTED
    }) {
    previewView.post { startCameraFront() }
} else {
    requestPermissions(
        REQUIRED_PERMISSIONS,
        REQUEST_CODE_PERMISSIONS
    )
}
Camera Kit
HUAWEI Camera Kit encapsulates the Google Camera2 API to support multiple enhanced camera capabilities.
Unlike other camera APIs, Camera Kit focuses on bringing the full capabilities of your camera to your apps. Think of it this way: many social media apps have their own camera features, yet their output is almost always worse than what your phone's native camera provides. Your camera may support 50x zoom, a super night mode, or a wide aperture mode, but no matter the price or features of your phone, the full extent of its camera becomes unavailable the moment you shoot from a third-party camera API.
HUAWEI Camera Kit provides a set of advanced programming APIs for you to integrate powerful image processing capabilities of Huawei phone cameras into your apps. Camera features such as wide aperture, Portrait mode, HDR, background blur, and Super Night mode can help your users shoot stunning images and vivid videos anytime and anywhere.
Features
Unlike the rest of the open-source APIs, Camera Kit accesses the device's original camera features and is able to unleash them in your apps.
Front Camera HDR: In a backlit or low-light environment, front camera High Dynamic Range (HDR) improves the details in both the well-lit and poorly-lit areas of photos to present more life-like qualities.
Super Night Mode: This mode is used for you to take photos with sufficient brightness by using a long exposure at night. It also helps you to take photos that are properly exposed in other dark environments.
Wide Aperture: This mode blurs the background and highlights the subject in a photo. You are advised to be within 2 meters of the subject when taking a photo and to disable the flash in this mode.
Recording: This mode helps you record HD videos with effects such as different colors, filters, and AI film. Effects: Video HDR, Video background blurring
Portrait: Portraits and close-ups
Photo Mode: This mode supports the general capabilities that include but are not limited to Rear camera: Flash, color modes, face/smile detection, filter, and master AI. Front camera: Face/Smile detection, filter, SensorHdr, and mirror reflection.
Super Slow-Mo Recording: This mode allows you to record super slow-motion videos with a frame rate of over 960 FPS in manual or automatic (motion detection) mode.
Slow-mo Recording: This mode allows you to record slow-motion videos with a frame rate lower than 960 FPS.
Pro Mode (Video): The Pro mode is designed to open the professional photography and recording capabilities of the Huawei camera to apps to meet diversified shooting requirements.
Pro Mode (Photo): This mode allows you to adjust the following camera parameters to obtain the same shooting capabilities as those of Huawei camera: Metering mode, ISO, exposure compensation, exposure duration, focus mode, and automatic white balance.
Integration Process
Registration and Sign-in
Before you get started, you must register as a HUAWEI developer and complete identity verification on the HUAWEI Developer website. For details, please refer to Register a HUAWEI ID.
Signing the HUAWEI Developer SDK Service Cooperation Agreement
When you download the SDK from SDK Download, the system prompts you to sign in and sign the HUAWEI Media Service Usage Agreement…
Environment Preparations
Android Studio v3.0.1 or later is recommended.
Huawei phones equipped with Kirin 980 or later and running EMUI 10.0 or later are required.
Code Part (Portrait Mode)
Now let's do an example for Portrait mode. In our manifest, let's set up some permissions:
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_INTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_INTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
Camera Kit does not provide a view for the camera, so we have to write our own view first:
Code:
public class OurTextureView extends TextureView {
private int mRatioWidth = 0;
private int mRatioHeight = 0;
public OurTextureView(Context context) {
this(context, null);
}
public OurTextureView(Context context, AttributeSet attrs) {
this(context, attrs, 0);
}
public OurTextureView(Context context, AttributeSet attrs, int defStyle) {
super(context, attrs, defStyle);
}
public void setAspectRatio(int width, int height) {
if ((width < 0) || (height < 0)) {
throw new IllegalArgumentException("Size cannot be negative.");
}
mRatioWidth = width;
mRatioHeight = height;
requestLayout();
}
@Override
protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
super.onMeasure(widthMeasureSpec, heightMeasureSpec);
int width = MeasureSpec.getSize(widthMeasureSpec);
int height = MeasureSpec.getSize(heightMeasureSpec);
if ((0 == mRatioWidth) || (0 == mRatioHeight)) {
setMeasuredDimension(width, height);
} else {
if (width < height * mRatioWidth / mRatioHeight) {
setMeasuredDimension(width, width * mRatioHeight / mRatioWidth);
} else {
setMeasuredDimension(height * mRatioWidth / mRatioHeight, height);
}
}
}
}
.xml part:
Code:
<com.huawei.camerakit.portrait.OurTextureView
android:id="@+id/texture"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentStart="true"
android:layout_alignParentTop="true" />
Let's look at our variables:
Code:
private Mode mMode;
private @Mode.Type int mCurrentModeType = Mode.Type.PORTRAIT_MODE;
private CameraKit mCameraKit;
Our permissions:
Code:
@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
@NonNull int[] grantResults) {
Log.d(TAG, "onRequestPermissionsResult: ");
if (!PermissionHelper.hasPermission(this)) {
Toast.makeText(this, "This application needs camera permission.", Toast.LENGTH_LONG).show();
finish();
}
}
First, let's check whether Camera Kit is supported by our device:
Code:
private boolean initCameraKit() {
mCameraKit = CameraKit.getInstance(getApplicationContext());
if (mCameraKit == null) {
Log.e(TAG, "initCamerakit: this devices not support camerakit or not installed!");
return false;
}
return true;
}
The captureImage() method captures an image:
Code:
private void captureImage() {
Log.i(TAG, "captureImage begin");
if (mMode != null) {
mMode.setImageRotation(90);
// Default jpeg file path
mFile = new File(getExternalFilesDir(null), System.currentTimeMillis() + "pic.jpg");
// Take picture
mMode.takePicture();
}
Log.i(TAG, "captureImage end");
}
Callback method for our actionState:
Code:
private final ActionStateCallback actionStateCallback = new ActionStateCallback() {
@Override
public void onPreview(Mode mode, int state, PreviewResult result) {
}
@Override
public void onTakePicture(Mode mode, int state, TakePictureResult result) {
switch (state) {
case TakePictureResult.State.CAPTURE_STARTED:
Log.d(TAG, "onState: STATE_CAPTURE_STARTED");
break;
case TakePictureResult.State.CAPTURE_COMPLETED:
Log.d(TAG, "onState: STATE_CAPTURE_COMPLETED");
showToast("take picture success! file=" + mFile);
break;
default:
break;
}
}
};
Now let us compare CameraX with Camera Kit
CameraX
Limited to already built-in functions
No video capture (at the time of writing)
ML is limited to what you implement in image analysis (such as the luminosity analyzer)
Easy to use, lightweight, easy to implement
Any device running API level 21 or higher can use it
Has averagely acceptable outputs
Gives you the mirrored image
Implementation requires only app-level build.gradle integration
Has limited image adjustment options while capturing
https://developer.android.com/training/camerax
Camera Kit
Lets you use the full capacity of the phone's original camera
Video capture exists, with multiple modes
ML exists on both the rear and front cameras (face/smile detection, filter, and master AI)
Harder to implement; implementation takes time
Requires a flagship Huawei device (Kirin 980 or later) to operate
Has incredible quality outputs
The mirrored image can be adjusted easily
The SDK must be downloaded and handled by the developer
References:
https://developer.huawei.com/consumer/en/CameraKit

How to Build a 3D Product Model Within Just 5 Minutes

Displaying products as 3D models is something too good for an e-commerce app to ignore. With such eye-catching visuals, an app can give users a fresh first impression of its products!
The 3D model plays an important role in boosting user conversion. It allows users to carefully view a product from every angle, before they make a purchase. Together with the AR technology, which gives users an insight into how the product will look in reality, the 3D model brings a fresher online shopping experience that can rival offline shopping.
Despite its advantages, the 3D model has yet to be widely adopted. The underlying reason for this is that applying current 3D modeling technology is expensive:
Technical requirements: Learning how to build a 3D model is time-consuming.
Time: It takes at least several hours to build a low polygon model for a simple object, and even longer for a high polygon one.
Spending: The average cost of building a simple model can be more than one hundred dollars, and even higher for building a complex one.
Luckily, 3D object reconstruction, a capability in 3D Modeling Kit newly launched in HMS Core, makes 3D model building straightforward. This capability automatically generates a 3D model with a texture for an object, via images shot from different angles with a common RGB-Cam. It gives an app the ability to build and preview 3D models. For instance, when an e-commerce app has integrated 3D object reconstruction, it can generate and display 3D models of shoes. Users can then freely zoom in and out on the models for a more immersive shopping experience.
Actual Effect​
Technical Solutions​
3D object reconstruction is implemented on both the device and cloud. RGB images of an object are collected on the device and then uploaded to the cloud. Key technologies involved in the on-cloud modeling process include object detection and segmentation, feature detection and matching, sparse/dense point cloud computing, and texture reconstruction. Finally, the cloud outputs an OBJ file (a commonly used 3D model file format) of the generated 3D model with 40,000 to 200,000 patches.
Preparations
1. Configuring a Dependency on the 3D Modeling SDK
Open the app-level build.gradle file and add a dependency on the 3D Modeling SDK in the dependencies block.
Code:
// Build a dependency on the 3D Modeling SDK.
implementation 'com.huawei.hms:modeling3d-object-reconstruct:1.0.0.300'
2. Configuring AndroidManifest.xml
Open the AndroidManifest.xml file in the main folder. Add the following information before <application> to apply for the storage read and write permissions and camera permission.
Code:
<!-- Permission to read data from and write data into storage. -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<!-- Permission to use the camera. -->
<uses-permission android:name="android.permission.CAMERA" />
Development Procedure
1. Configuring the Storage Permission Application
In the onCreate() method of MainActivity, check whether the storage read and write permissions have been granted; if not, apply for them by using requestPermissions.
Code:
if (EasyPermissions.hasPermissions(MainActivity.this, PERMISSIONS)) {
Log.i(TAG, "Permissions OK");
} else {
EasyPermissions.requestPermissions(MainActivity.this, "To use this app, you need to enable the permission.",
RC_CAMERA_AND_EXTERNAL_STORAGE, PERMISSIONS);
}
Check the application result. If the permissions are not granted, prompt the user to grant them.
Code:
@Override
public void onPermissionsGranted(int requestCode, @NonNull List<String> perms) {
Log.i(TAG, "permissions = " + perms);
if (requestCode == RC_CAMERA_AND_EXTERNAL_STORAGE && PERMISSIONS.length == perms.size()) {
initView();
initListener();
}
}
@Override
public void onPermissionsDenied(int requestCode, @NonNull List<String> perms) {
if (EasyPermissions.somePermissionPermanentlyDenied(this, perms)) {
new AppSettingsDialog.Builder(this)
.setRequestCode(RC_CAMERA_AND_EXTERNAL_STORAGE)
.setRationale("To use this app, you need to enable the permission.")
.setTitle("Insufficient permissions")
.build()
.show();
}
}
2. Creating a 3D Object Reconstruction Configurator
Code:
// Set the PICTURE mode.
Modeling3dReconstructSetting setting = new Modeling3dReconstructSetting.Factory()
.setReconstructMode(Modeling3dReconstructConstants.ReconstructMode.PICTURE)
.create();
3. Creating a 3D Object Reconstruction Engine and Initializing the Task
Call getInstance() of Modeling3dReconstructEngine and pass the current context to create an instance of the 3D object reconstruction engine.
Code:
// Create an engine.
modeling3dReconstructEngine = Modeling3dReconstructEngine.getInstance(mContext);
Use the engine to initialize the task.
Code:
// Initialize the 3D object reconstruction task.
modeling3dReconstructInitResult = modeling3dReconstructEngine.initTask(setting);
// Obtain the task ID.
String taskId = modeling3dReconstructInitResult.getTaskId();
4. Creating a Listener Callback to Process the Image Upload Result
Create a listener callback that allows you to configure the operations triggered upon upload success and failure.
Code:
// Create an upload listener callback.
private final Modeling3dReconstructUploadListener uploadListener = new Modeling3dReconstructUploadListener() {
@Override
public void onUploadProgress(String taskId, double progress, Object ext) {
// Upload progress.
}
@Override
public void onResult(String taskId, Modeling3dReconstructUploadResult result, Object ext) {
if (result.isComplete()) {
isUpload = true;
ScanActivity.this.runOnUiThread(new Runnable() {
@Override
public void run() {
progressCustomDialog.dismiss();
Toast.makeText(ScanActivity.this, getString(R.string.upload_text_success), Toast.LENGTH_SHORT).show();
}
});
TaskInfoAppDbUtils.updateTaskIdAndStatusByPath(new Constants(ScanActivity.this).getCaptureImageFile() + manager.getSurfaceViewCallback().getCreateTime(), taskId, 1);
}
}
@Override
public void onError(String taskId, int errorCode, String message) {
isUpload = false;
runOnUiThread(new Runnable() {
@Override
public void run() {
progressCustomDialog.dismiss();
Toast.makeText(ScanActivity.this, "Upload failed." + message, Toast.LENGTH_SHORT).show();
LogUtil.e("taskid" + taskId + "errorCode: " + errorCode + " errorMessage: " + message);
}
});
}
};
5. Passing the Upload Listener Callback to the Engine to Upload Images
Pass the upload listener callback to the engine. Then call uploadFile(), passing the task ID obtained in step 3 and the path of the images to be uploaded, to upload the images to the cloud server.
Code:
// Pass the listener callback to the engine.
modeling3dReconstructEngine.setReconstructUploadListener(uploadListener);
// Start uploading.
modeling3dReconstructEngine.uploadFile(taskId, filePath);
6. Querying the Task Status
Call getInstance of Modeling3dReconstructTaskUtils to create a task processing instance. Pass the current context.
Code:
// Create a task processing instance.
modeling3dReconstructTaskUtils = Modeling3dReconstructTaskUtils.getInstance(Modeling3dDemo.getApp());
Call queryTask of the task processing instance to query the status of the 3D object reconstruction task.
Code:
// Query the task status, which can be: 0 (images to be uploaded), 1 (image upload completed), 2 (model being generated), 3 (model generation completed), or 4 (model generation failed).
Modeling3dReconstructQueryResult queryResult = modeling3dReconstructTaskUtils.queryTask(task.getTaskId());
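After the query returns, the app typically branches on the returned status. The Kotlin sketch below assumes the query result exposes the status through getStatus() and that the integer values match the states listed in the comment above; downloadModel(), showRetryDialog(), and schedulePollAgain() are hypothetical helpers of your own app.
Code:
when (queryResult.status) {
    3 -> downloadModel(task.taskId)   // Model generation completed: safe to download.
    4 -> showRetryDialog()            // Model generation failed.
    else -> schedulePollAgain()       // Still uploading or generating: query again later.
}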
7. Creating a Listener Callback to Process the Model File Download Result
Create a listener callback that allows you to configure the operations triggered upon download success and failure.
Code:
// Create a download listener callback.
private Modeling3dReconstructDownloadListener modeling3dReconstructDownloadListener = new Modeling3dReconstructDownloadListener() {
@Override
public void onDownloadProgress(String taskId, double progress, Object ext) {
((Activity) mContext).runOnUiThread(new Runnable() {
@Override
public void run() {
dialog.show();
}
});
}
@Override
public void onResult(String taskId, Modeling3dReconstructDownloadResult result, Object ext) {
((Activity) mContext).runOnUiThread(new Runnable() {
@Override
public void run() {
Toast.makeText(getContext(), "Download complete", Toast.LENGTH_SHORT).show();
TaskInfoAppDbUtils.updateDownloadByTaskId(taskId, 1);
dialog.dismiss();
}
});
}
@Override
public void onError(String taskId, int errorCode, String message) {
LogUtil.e(taskId + " <---> " + errorCode + message);
((Activity) mContext).runOnUiThread(new Runnable() {
@Override
public void run() {
Toast.makeText(getContext(), "Download failed." + message, Toast.LENGTH_SHORT).show();
dialog.dismiss();
}
});
}
};
8. Passing the Download Listener Callback to the Engine to Download the File of the Generated Model
Pass the download listener callback to the engine. Call downloadModel, pass the task ID obtained in step 3 and the path for saving the model file to download it.
Code:
// Pass the download listener callback to the engine.
modeling3dReconstructEngine.setReconstructDownloadListener(modeling3dReconstructDownloadListener);
// Download the model file.
modeling3dReconstructEngine.downloadModel(appDb.getTaskId(), appDb.getFileSavePath());
More Information​
The object should have rich texture, be medium-sized, and be a rigid body. It should not be reflective, transparent, or semi-transparent. Supported object types include goods (such as plush toys, bags, and shoes), furniture (such as sofas), and cultural relics (such as bronzes, stone artifacts, and wooden artifacts).
The object dimensions should be within the range of 15 x 15 x 15 cm to 150 x 150 x 150 cm. (Larger dimensions require a longer modeling time.)
3D object reconstruction does not support modeling for the human body and face.
Ensure the following requirements are met during image collection: place a single object on a stable plane of pure color, in an environment that is neither dark nor dazzling. Keep all images in focus, free from blur caused by motion or shaking, and take them from various angles, including the bottom, flat, and top (it is advised that you upload more than 50 images per object). Move the camera as slowly as possible, and do not change the angle during shooting. Lastly, ensure the object-to-image ratio is as large as possible and that all parts of the object are present.
That's all for the sample code of 3D object reconstruction. Try integrating it into your app and build your own 3D models!
References
For more details, you can go to:
3D Modeling Kit official website
3D Modeling Kit Development Documentation page, to find the documents you need
Reddit to join our developer discussion
GitHub to download 3D Modeling Kit sample codes
Stack Overflow to solve any integration problems

I Decorated My House Using AR: Here's How I Did It

Background​
Around half a year ago I decided to start decorating my new house. Before getting started, I did lots of research on a variety of topics relating to interior decoration, such as how to choose a consistent color scheme, which measurements to make and how to make them, and how to choose the right furniture. However, my preparations made me realize that no matter how well prepared you are, you're always going to run into unexpected challenges. Before rushing to the furniture store, I listed all the pieces of furniture that I wanted to place in my living room, including a sofa, tea table, potted plants, dining table, and carpet, and determined the expected dimensions, colors, and styles of each. However, when I finally got to the furniture store, the dizzying variety of choices had me confused, and I found it very difficult to imagine how the different pieces of furniture would actually look in my actual living room. At that moment a thought came to my mind: wouldn't it be great if there was an app that allowed users to upload images of their home and then freely select different furniture products to see how they'd look in it? Such an app would save users wishing to decorate their home lots of time and unnecessary trouble, and reduce the risk of them being dissatisfied with the final decoration result.
That's when the idea of developing an app myself came to mind. My initial idea was to design an app that people could use to quickly satisfy their home decoration needs by letting them see what furniture would look like in their homes. The basic way the app works is that users first upload one or more images of a room they want to decorate, and then set a reference parameter, such as the distance between the floor and the ceiling. Armed with this information, the app automatically calculates the parameters of other areas in the room. Then, users can add images of furniture they like to a virtual shopping cart; when uploading such images, users need to specify the dimensions of the furniture. On the editing screen, users can drag and drop furniture from the shopping cart onto the image of the room to preview the effect. But then a problem arises: images of furniture dragged and dropped into the room look pasted on and do not blend naturally with their surroundings.
By a stroke of luck, I happened to discover HMS Core AR Engine when looking for a solution for the aforementioned problem. This development kit provides the ability to integrate virtual objects realistically into the real world, which is exactly what my app needs. With its plane detection capability, my app will be able to detect the real planes in a home and allow users to place virtual furniture based on these planes; and with its hit test capability, users can interact with virtual furniture to change their position and orientation in a natural manner.
Next, I'd like to briefly introduce the two capabilities this development kit offers.
AR Engine tracks the illumination, planes, images, objects, surfaces, and other environmental information, allowing apps to integrate virtual objects into the physical world so that they look and behave as they would if they were real. Its plane detection capability identifies feature points in groups on horizontal and vertical planes, as well as the boundaries of the planes, ensuring that your app can place virtual objects on them.
In addition, the kit continuously tracks the location and orientation of devices relative to their surrounding environment, and establishes a unified geometric space between the virtual world and the physical world. The kit uses its hit test capability to map a point of interest that users tap on the screen to a point of interest in the real environment, from where a ray will be emitted pointing to the location of the device camera, and return the intersecting point between the ray and the plane. In this way, users can interact with any virtual object on their device screen.
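To illustrate, here is a minimal Kotlin sketch of that hit test flow. It relies on the ARFrame.hitTest, ARHitResult, and ARPlane APIs used in the AR Engine sample code, and placeVirtualObject() is a hypothetical helper that attaches your renderable to the returned anchor, so treat this as a sketch rather than the kit's exact API surface.
Code:
fun onSurfaceTapped(arFrame: ARFrame, event: MotionEvent) {
    for (hitResult in arFrame.hitTest(event)) {
        val trackable = hitResult.trackable
        // Only place furniture on a detected plane that actually contains the hit point.
        if (trackable is ARPlane && trackable.isPoseInPolygon(hitResult.hitPose)) {
            val anchor = hitResult.createAnchor()
            placeVirtualObject(anchor)   // For example, the virtual suitcase from the demo.
            break
        }
    }
}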
Functions and Features​
Plane detection: Both horizontal and vertical planes are supported.
Accuracy: The margin of error is around 2.5 cm when the target plane is 1 m away from the camera.
Texture recognition delay: < 1s
Supports polygon fitting and plane merging.
Demo​
Hit test
As shown in the demo, the app is able to identify the floor plane, so that the virtual suitcase can move over it as if it were real.
Developing Plane Detection​
1. Create a WorldActivity object. This example demonstrates how to use the world AR scenario of AR Engine.
Code:
public class WorldActivity extends BaseActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Initialize DisplayRotationManager.
        mDisplayRotationManager = new DisplayRotationManager(this);
        // Initialize WorldRenderManager.
        mWorldRenderManager = new WorldRenderManager(this, this);
    }

    // Create a gesture processor.
    private void initGestureDetector() {
        mGestureDetector = new GestureDetector(this, new GestureDetector.SimpleOnGestureListener() {
            // Override onSingleTapUp() and other gesture callbacks as needed.
        });
        mSurfaceView.setOnTouchListener(new View.OnTouchListener() {
            @Override
            public boolean onTouch(View v, MotionEvent event) {
                return mGestureDetector.onTouchEvent(event);
            }
        });
    }

    // Create ARWorldTrackingConfig in the onResume lifecycle.
    @Override
    protected void onResume() {
        super.onResume();
        mArSession = new ARSession(this.getApplicationContext());
        mConfig = new ARWorldTrackingConfig(mArSession);
        …
    }

    // Refresh the session configuration.
    private void refreshConfig(int lightingMode) {
        // Set the focus mode.
        mConfig.setFocusMode(ARConfigBase.FocusMode.AUTO_FOCUS);
        mArSession.configure(mConfig);
    }
}
2. Initialize the WorldRenderManager class, which manages rendering related to world scenarios, including label rendering and virtual object rendering.
Code:
public class WorldRenderManager implements GLSurfaceView.Renderer {
    // Draw each frame.
    @Override
    public void onDrawFrame(GL10 unused) {
        // Set the OpenGL texture ID for storing the camera preview stream data.
        mSession.setCameraTextureName(mTextureDisplay.getExternalTextureId());
        // Update the calculation result of AR Engine. You are advised to call this API when your app needs to obtain the latest data.
        ARFrame arFrame = mSession.update();
        // Obtain the camera specifications of the current frame.
        ARCamera arCamera = arFrame.getCamera();
        // Obtain the projection matrix, which is used for the transformation from the camera coordinate system to the clip coordinate system.
        arCamera.getProjectionMatrix(projectionMatrix, PROJ_MATRIX_OFFSET, PROJ_MATRIX_NEAR, PROJ_MATRIX_FAR);
        // Obtain all planes tracked in the current session.
        mSession.getAllTrackables(ARPlane.class);
        ...
    }
}
3. Initialize the VirtualObject class, which provides properties of the virtual object and the necessary methods for rendering the virtual object.
Code:
public class VirtualObject {
}
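The official sample fills this class in further; the following is just a rough, hypothetical sketch of the kind of state such a class typically holds: the anchor returned by the hit test and a model matrix that the renderer reads every frame. All field and method names here are illustrative.
Code:
public class VirtualObject {
    private static final float DEFAULT_SCALE = 0.15f;

    // Anchor returned by the hit test; it ties the object to a real-world plane.
    private ARAnchor mArAnchor;

    // Model matrix combining the anchor pose with a scale factor.
    private final float[] mModelMatrix = new float[16];

    public VirtualObject(ARAnchor arAnchor) {
        mArAnchor = arAnchor;
    }

    public ARAnchor getArAnchor() {
        return mArAnchor;
    }

    // Refresh the model matrix from the anchor's latest pose before drawing.
    public float[] getModelMatrix() {
        mArAnchor.getPose().toMatrix(mModelMatrix, 0);
        android.opengl.Matrix.scaleM(mModelMatrix, 0, DEFAULT_SCALE, DEFAULT_SCALE, DEFAULT_SCALE);
        return mModelMatrix;
    }

    // Release the anchor when the object is removed from the scene.
    public void detach() {
        if (mArAnchor != null) {
            mArAnchor.detach();
            mArAnchor = null;
        }
    }
}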
4. Initialize the ObjectDisplay class to draw virtual objects based on specified parameters.
Code:
public class ObjectDisplay {
}
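Only the declaration is shown above. As a hypothetical sketch, a renderer of this kind usually exposes an init method that compiles shaders and loads the object's mesh and texture, plus a per-frame draw method that combines the virtual object's model matrix with the camera's view and projection matrices. The shader and mesh-loading details are omitted here.
Code:
public class ObjectDisplay {
    private final float[] mModelViewMatrix = new float[16];
    private final float[] mMvpMatrix = new float[16];

    // Compile shaders and load the object's mesh and texture (details omitted).
    public void init(Context context) {
        // ...
    }

    // Draw one virtual object using the view and projection matrices computed in onDrawFrame().
    public void onDrawFrame(float[] viewMatrix, float[] projectionMatrix, VirtualObject obj) {
        android.opengl.Matrix.multiplyMM(mModelViewMatrix, 0, viewMatrix, 0, obj.getModelMatrix(), 0);
        android.opengl.Matrix.multiplyMM(mMvpMatrix, 0, projectionMatrix, 0, mModelViewMatrix, 0);
        // Bind the shader program, set mMvpMatrix as a uniform, and issue the draw call here.
    }
}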
Developing Hit Test​
1. Initialize the WorldRenderManager class, which manages rendering related to world scenarios, including label rendering and virtual object rendering.
Code:
public class WorldRenderManager implements GLSurfaceView.Renderer {
    // Pass the context.
    public WorldRenderManager(Activity activity, Context context) {
        mActivity = activity;
        mContext = context;
        …
    }

    // Set ARSession, which updates and obtains the latest data in onDrawFrame.
    public void setArSession(ARSession arSession) {
        if (arSession == null) {
            LogUtil.error(TAG, "setSession error, arSession is null!");
            return;
        }
        mSession = arSession;
    }

    // Set ARWorldTrackingConfig to obtain the configuration mode.
    public void setArWorldTrackingConfig(ARWorldTrackingConfig arConfig) {
        if (arConfig == null) {
            LogUtil.error(TAG, "setArWorldTrackingConfig error, arConfig is null!");
            return;
        }
        mArWorldTrackingConfig = arConfig;
    }

    // Implement the onDrawFrame() method.
    @Override
    public void onDrawFrame(GL10 unused) {
        mSession.setCameraTextureName(mTextureDisplay.getExternalTextureId());
        ARFrame arFrame = mSession.update();
        ARCamera arCamera = arFrame.getCamera();
        ...
    }

    // Output the hit result.
    private ARHitResult hitTest4Result(ARFrame frame, ARCamera camera, MotionEvent event) {
        ARHitResult hitResult = null;
        List<ARHitResult> hitTestResults = frame.hitTest(event);
        for (int i = 0; i < hitTestResults.size(); i++) {
            ARHitResult hitResultTemp = hitTestResults.get(i);
            if (hitResultTemp == null) {
                continue;
            }
            ARTrackable trackable = hitResultTemp.getTrackable();
            // Determine whether the hit point is within the plane polygon.
            boolean isPlaneHitJudge = trackable instanceof ARPlane
                && ((ARPlane) trackable).isPoseInPolygon(hitResultTemp.getHitPose());
            // Determine whether the point cloud is tapped and whether the point faces the camera.
            boolean isPointHitJudge = trackable instanceof ARPoint
                && ((ARPoint) trackable).getOrientationMode() == ARPoint.OrientationMode.ESTIMATED_SURFACE_NORMAL;
            // Select points on the plane preferentially.
            if (isPlaneHitJudge || isPointHitJudge) {
                hitResult = hitResultTemp;
                if (trackable instanceof ARPlane) {
                    break;
                }
            }
        }
        return hitResult;
    }
}
2. Create a WorldActivity object. This example demonstrates how to use the world AR scenario of AR Engine.
Code:
public class WorldActivity extends BaseActivity {
    private ARSession mArSession;
    private GLSurfaceView mSurfaceView;
    private ARWorldTrackingConfig mConfig;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        LogUtil.info(TAG, "onCreate");
        super.onCreate(savedInstanceState);
        setContentView(R.layout.world_java_activity_main);
        mWorldRenderManager = new WorldRenderManager(this, this);
        mWorldRenderManager.setDisplayRotationManage(mDisplayRotationManager);
        mWorldRenderManager.setQueuedSingleTaps(mQueuedSingleTaps);
    }

    @Override
    protected void onResume() {
        super.onResume();
        if (!PermissionManager.hasPermission(this)) {
            this.finish();
        }
        errorMessage = null;
        if (mArSession == null) {
            try {
                if (!arEngineAbilityCheck()) {
                    finish();
                    return;
                }
                mArSession = new ARSession(this.getApplicationContext());
                mConfig = new ARWorldTrackingConfig(mArSession);
                refreshConfig(ARConfigBase.LIGHT_MODE_ENVIRONMENT_LIGHTING | ARConfigBase.LIGHT_MODE_ENVIRONMENT_TEXTURE);
            } catch (Exception capturedException) {
                setMessageWhenError(capturedException);
            }
            if (errorMessage != null) {
                stopArSession();
                return;
            }
        }
        ...
    }

    @Override
    protected void onPause() {
        LogUtil.info(TAG, "onPause start.");
        super.onPause();
        if (mArSession != null) {
            mDisplayRotationManager.unregisterDisplayListener();
            mSurfaceView.onPause();
            mArSession.pause();
        }
        LogUtil.info(TAG, "onPause end.");
    }

    @Override
    protected void onDestroy() {
        LogUtil.info(TAG, "onDestroy start.");
        if (mArSession != null) {
            mArSession.stop();
            mArSession = null;
        }
        if (mWorldRenderManager != null) {
            mWorldRenderManager.releaseARAnchor();
        }
        super.onDestroy();
        LogUtil.info(TAG, "onDestroy end.");
    }
    ...
}
Summary​
If you've ever done any interior decorating, I'm sure you've wished you could see what furniture would look like in your home without having to purchase it first. After all, most furniture isn't cheap, and delivery and assembly can be quite a hassle. That's why apps that let users place and view virtual furniture in their real homes are such game changers. HMS Core AR Engine can greatly streamline the development of such apps. With its plane detection and hit test capabilities, the kit enables your app to accurately detect planes in the real world and then blend virtual objects naturally into the scene. Beyond virtual home decoration, this powerful kit has a broad range of other applications. For example, you can leverage its capabilities to develop an AR video game, an AR-based teaching app that lets students view historical artifacts in 3D, or an e-commerce app with a virtual try-on feature. Try AR Engine now and explore the possibilities it opens up.
Reference​
AR Engine Development Guide

3D Product Model: See How to Create One in 5 Minutes

Quick question: How do 3D models help e-commerce apps?
The most obvious answer is that they make the shopping experience more immersive, but that's just one of a whole host of benefits they bring.
To begin with, a 3D model is a more impressive way of showcasing a product to potential customers. It displays richer details, allowing customers to rotate the product and view it from every angle, which helps them make more informed purchasing decisions. On top of that, customers can virtually try on 3D products, recreating the experience of shopping in a physical store. In short, all of these factors contribute to boosting user conversion.
As useful as they are, 3D models have yet to be widely adopted, even among businesses that want them. A major reason is that building a 3D model with existing advanced 3D modeling technology is very costly, due to the following factors:
Technical requirements: Building a 3D model requires someone with expertise, which can take time to master.
Time: It takes at least several hours to build a low-polygon model for a simple object, not to mention a high-polygon one.
Spending: The average cost of building just a simple model can reach hundreds of dollars.
Fortunately for us, the 3D object reconstruction capability found in HMS Core 3D Modeling Kit makes 3D model creation easy-peasy. This capability automatically generates a texturized 3D model for an object, via images shot from multiple angles with a standard RGB camera on a phone. And what's more, the generated model can be previewed. Let's check out a shoe model created using the 3D object reconstruction capability.
Shoe Model Images​
Technical Solutions​
3D object reconstruction involves both the device and the cloud. Images of an object are captured on the device, covering it from multiple angles. The images are then uploaded to the cloud for model creation. The on-cloud modeling process and key technologies include object detection and segmentation, feature detection and matching, sparse/dense point cloud computing, and texture reconstruction. Once the model is created, the cloud outputs an OBJ file (a commonly used 3D model file format) of the generated 3D model with 40,000 to 200,000 patches.
Now the boring part is out of the way. Let's move on to the exciting part: how to integrate the 3D object reconstruction capability.
Integrating the 3D Object Reconstruction Capability
Preparations
1. Configure the build dependency for the 3D Modeling SDK.
Add the build dependency for the 3D Modeling SDK in the dependencies block in the app-level build.gradle file.
Code:
// Build dependency for the 3D Modeling SDK.
implementation 'com.huawei.hms:modeling3d-object-reconstruct:1.0.0.300'
2. Configure AndroidManifest.xml.
Open the AndroidManifest.xml file in the main folder. Add the following information before <application> to apply for the storage read and write permissions and camera permission as needed:
Code:
<!-- Write into and read from external storage. -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<!-- Use the camera. -->
<uses-permission android:name="android.permission.CAMERA" />
Function Development
1. Configure the storage permission application.
In the onCreate() method of MainActivity, check whether the storage read and write permissions have been granted; if not, apply for them by using requestPermissions.
Code:
if (EasyPermissions.hasPermissions(MainActivity.this, PERMISSIONS)) {
Log.i(TAG, "Permissions OK");
} else {
EasyPermissions.requestPermissions(MainActivity.this, "To use this app, you need to enable the permission.",
RC_CAMERA_AND_EXTERNAL_STORAGE, PERMISSIONS);
}
Check the application result. If the permissions are granted, initialize the UI; if the permissions are not granted, prompt the user to grant them.
Code:
@Override
public void onPermissionsGranted(int requestCode, @NonNull List<String> perms) {
Log.i(TAG, "permissions = " + perms);
if (requestCode == RC_CAMERA_AND_EXTERNAL_STORAGE && PERMISSIONS.length == perms.size()) {
initView();
initListener();
}
}
@Override
public void onPermissionsDenied(int requestCode, @NonNull List<String> perms) {
if (EasyPermissions.somePermissionPermanentlyDenied(this, perms)) {
new AppSettingsDialog.Builder(this)
.setRequestCode(RC_CAMERA_AND_EXTERNAL_STORAGE)
.setRationale("To use this app, you need to enable the permission.")
.setTitle("Insufficient permissions")
.build()
.show();
}
}
2. Create a 3D object reconstruction configurator.​
Code:
// PICTURE mode.
Modeling3dReconstructSetting setting = new Modeling3dReconstructSetting.Factory()
.setReconstructMode(Modeling3dReconstructConstants.ReconstructMode.PICTURE)
.create();
3. Create a 3D object reconstruction engine and initialize the task.
Call getInstance() of Modeling3dReconstructEngine and pass the current context to create an instance of the 3D object reconstruction engine.
Code:
// Initialize the engine.
modeling3dReconstructEngine = Modeling3dReconstructEngine.getInstance(mContext);
Use the engine to initialize the task.
Code:
// Create a 3D object reconstruction task.
modeling3dReconstructInitResult = modeling3dReconstructEngine.initTask(setting);
// Obtain the task ID.
String taskId = modeling3dReconstructInitResult.getTaskId();
4. Create a listener callback to process the image upload result.
Create a listener callback in which you can configure the operations triggered upon upload success and failure.
Code:
// Create a listener callback for the image upload task.
private final Modeling3dReconstructUploadListener uploadListener = new Modeling3dReconstructUploadListener() {
@Override
public void onUploadProgress(String taskId, double progress, Object ext) {
// Upload progress
}
@Override
public void onResult(String taskId, Modeling3dReconstructUploadResult result, Object ext) {
if (result.isComplete()) {
isUpload = true;
ScanActivity.this.runOnUiThread(new Runnable() {
@Override
public void run() {
progressCustomDialog.dismiss();
Toast.makeText(ScanActivity.this, getString(R.string.upload_text_success), Toast.LENGTH_SHORT).show();
}
});
TaskInfoAppDbUtils.updateTaskIdAndStatusByPath(new Constants(ScanActivity.this).getCaptureImageFile() + manager.getSurfaceViewCallback().getCreateTime(), taskId, 1);
}
}
@Override
public void onError(String taskId, int errorCode, String message) {
isUpload = false;
runOnUiThread(new Runnable() {
@Override
public void run() {
progressCustomDialog.dismiss();
Toast.makeText(ScanActivity.this, "Upload failed." + message, Toast.LENGTH_SHORT).show();
LogUtil.e("taskid" + taskId + "errorCode: " + errorCode + " errorMessage: " + message);
}
});
}
};
5. Set the image upload listener for the 3D object reconstruction engine and upload the captured images.
Pass the upload callback to the engine. Call uploadFile(), pass the task ID obtained in step 3 and the path of the images to be uploaded, and upload the images to the cloud server.
Code:
// Set the upload listener.
modeling3dReconstructEngine.setReconstructUploadListener(uploadListener);
// Upload captured images.
modeling3dReconstructEngine.uploadFile(taskId, filePath);
6. Query the task status.
Call getInstance() of Modeling3dReconstructTaskUtils to create a task processing instance. Pass the current context.
Code:
// Initialize the task processing class.
modeling3dReconstructTaskUtils = Modeling3dReconstructTaskUtils.getInstance(Modeling3dDemo.getApp());
Call queryTask to query the status of the 3D object reconstruction task.
Code:
// Query the reconstruction task execution result. The options are as follows: 0: To be uploaded; 1: Generating; 3: Completed; 4: Failed.
Modeling3dReconstructQueryResult queryResult = modeling3dReconstructTaskUtils.queryTask(task.getTaskId());
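The query result carries the integer status described in the comment above. A minimal way to react to it might look like the following sketch; getStatus() is assumed to expose that integer, and fileSavePath is an illustrative variable for the local path used in step 8.
Code:
// React to the task status (0: to be uploaded; 1: generating; 3: completed; 4: failed).
int status = queryResult.getStatus();
if (status == 3) {
    // The model is ready: trigger the download flow shown in steps 7 and 8.
    modeling3dReconstructEngine.setReconstructDownloadListener(modeling3dReconstructDownloadListener);
    modeling3dReconstructEngine.downloadModel(task.getTaskId(), fileSavePath);
} else if (status == 4) {
    Log.e(TAG, "Reconstruction failed for task " + task.getTaskId());
} else {
    // Still uploading or generating; query again later, for example with a Handler or WorkManager.
}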
7. Create a listener callback to process the model file download result.
Create a listener callback in which you can configure the operations triggered upon download success and failure.
Code:
// Create a download callback listener
private Modeling3dReconstructDownloadListener modeling3dReconstructDownloadListener = new Modeling3dReconstructDownloadListener() {
@Override
public void onDownloadProgress(String taskId, double progress, Object ext) {
((Activity) mContext).runOnUiThread(new Runnable() {
@Override
public void run() {
dialog.show();
}
});
}
@Override
public void onResult(String taskId, Modeling3dReconstructDownloadResult result, Object ext) {
((Activity) mContext).runOnUiThread(new Runnable() {
@Override
public void run() {
Toast.makeText(getContext(), "Download complete", Toast.LENGTH_SHORT).show();
TaskInfoAppDbUtils.updateDownloadByTaskId(taskId, 1);
dialog.dismiss();
}
});
}
@Override
public void onError(String taskId, int errorCode, String message) {
LogUtil.e(taskId + " <---> " + errorCode + message);
((Activity) mContext).runOnUiThread(new Runnable() {
@Override
public void run() {
Toast.makeText(getContext(), "Download failed." + message, Toast.LENGTH_SHORT).show();
dialog.dismiss();
}
});
}
};
8. Pass the download listener callback to the engine to download the generated model file.
Pass the download listener callback to the engine, then call downloadModel(), passing the task ID obtained in step 3 and the path for saving the model file, to download it.
Code:
// Set the listener for the model file download task.
modeling3dReconstructEngine.setReconstructDownloadListener(modeling3dReconstructDownloadListener);
// Download the model file.
modeling3dReconstructEngine.downloadModel(appDb.getTaskId(), appDb.getFileSavePath());
Notes
1. To deliver an ideal modeling result, 3D object reconstruction has some requirements on the object to be modeled. For example, the object should have rich textures and a fixed shape. The object is expected to be non-reflective and medium-sized. Transparency or semi-transparency is not recommended. An object that meets these requirements may fall into one of the following types: goods (including plush toys, bags, and shoes), furniture (like sofas), and cultural relics (like bronzes, stone artifacts, and wooden artifacts).
2. The object dimensions should be within the range of 15 x 15 x 15 cm to 150 x 150 x 150 cm. (Larger dimensions require a longer modeling time.)
3. Modeling for the human body or face is not yet supported by the capability.
4. Suggestions for image capture: Place a single object on a stable plane of pure color. The environment should be well lit and uncluttered. Keep all images in focus, free from blur caused by motion or shaking, and take pictures of the object from various angles, including the bottom, front, and top. Uploading more than 50 images per object is recommended. Move the camera as slowly as possible, and do not suddenly change the shooting angle. The object should fill as much of each image as possible, with no part of it cut off.
With these notes in mind, and the development procedure described above, we are now ready to create a 3D model like the shoe model shown earlier. I'm looking forward to seeing the models you create with this capability in the comments section below.
Reference​
Home page of 3D Modeling Kit
Service introduction to 3D Modeling Kit
Detailed information about 3D object reconstruction
