Being creative is hard, and coming up with a truly fantastic idea is even harder. And once you've done that, the hardest part is expressing that idea in an attractive way.
This, I think, is the very reason why templates are gaining popularity in text, image, audio, video editing, and more. Of all these templates, video templates are probably the most in demand, because a video takes a long time to produce and can be costly to make. It is therefore much more convenient to create a video from a template rather than from scratch, which is particularly true for video editing amateurs.
The video template solution I chose for my app is the template capability of HMS Core Video Editor Kit. This capability comes preloaded with a library of templates that my users can use directly to quickly create a short video, make a vlog of their journey, create a product display video, generate a news video, and more.
On top of this, the capability comes with a platform where I can manage the templates easily, like this.
Template management platform — AppGallery Connect
To be honest, one of the things that I really like about this capability is that it's easy to integrate, thanks to its straightforward code, as well as a whole set of APIs and clear descriptions of how to use them. Below is the process I followed to integrate the capability into my app.
Development Procedure
Preparations
Configure the app information.
Integrate the SDK.
Set up the obfuscation scripts (a sample configuration is sketched right after this list).
Declare necessary permissions, including: device vibration permission, microphone use permission, storage read permission, and storage write permission.
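For reference, the obfuscation configuration for HMS Core SDKs usually amounts to keeping the Huawei packages and a few attributes. The rules below are a common baseline seen in HMS integration guides; verify the exact rules against the kit's official documentation before relying on them.
Code:
-ignorewarnings
-keepattributes *Annotation*
-keepattributes Exceptions
-keepattributes InnerClasses
-keepattributes Signature
-keep class com.huawei.hms.**{*;}
-keep class com.huawei.agconnect.**{*;}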
Project Configuration
Setting Authentication Information
Set the authentication information by using either of the following:
An access token. The setting is required only once during app initialization.
Code:
MediaApplication.getInstance().setAccessToken("your access token");
Or, an API key, which also needs to be set only once during app initialization.
Code:
MediaApplication.getInstance().setApiKey("your ApiKey");
Configuring a License ID
Since this ID is used to manage the usage quotas of the service, it must be unique.
Code:
MediaApplication.getInstance().setLicenseId("License ID");
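The SDK does not dictate where the license ID comes from. One possible approach, purely illustrative and built only on standard Android APIs, is to generate a UUID once per installation and persist it so the same ID is reused across launches (the preference file and key names below are assumptions):
Code:
// Illustrative only: generate a per-installation license ID and reuse it across launches.
// "context" can be any Context, e.g. getApplicationContext().
SharedPreferences prefs = context.getSharedPreferences("video_editor_prefs", Context.MODE_PRIVATE);
String licenseId = prefs.getString("license_id", null);
if (licenseId == null) {
    licenseId = java.util.UUID.randomUUID().toString();
    prefs.edit().putString("license_id", licenseId).apply();
}
MediaApplication.getInstance().setLicenseId(licenseId);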
Initialize the runtime environment for HuaweiVideoEditor.
During project configuration, a HuaweiVideoEditor object must be created first and its runtime environment initialized. Release the object when exiting the project.
1. Create a HuaweiVideoEditor object.
Code:
HuaweiVideoEditor editor = HuaweiVideoEditor.create(getApplicationContext());
2. Specify the preview area position. This area renders video images and is implemented via a SurfaceView created within the SDK. Specify the area's position before it is created.
Code:
<LinearLayout
android:id="@+id/video_content_layout"
android:layout_width="0dp"
android:layout_height="0dp"
android:background="@color/video_edit_main_bg_color"
android:gravity="center"
android:orientation="vertical" />
// Specify a preview area.
LinearLayout mSdkPreviewContainer = view.findViewById(R.id.video_content_layout);
// Set the preview area layout.
editor.setDisplay(mSdkPreviewContainer);
3. Initialize the runtime environment. LicenseException will be thrown if license verification fails.
When a HuaweiVideoEditor object is created, it does not yet occupy any system resources. You decide when its runtime environment is initialized; at that point, the required threads and timers are created within the SDK.
Code:
try {
editor.initEnvironment();
} catch (LicenseException error) {
SmartLog.e(TAG, "initEnvironment failed: " + error.getErrorMsg());
finish();
return;
}
Capability Integration
In this part, I use HVETemplateManager to obtain the on-cloud template list, and then provide the list to my app users.
Code:
// Obtain the template column list.
final HVEColumnInfo[] column = new HVEColumnInfo[1];
HVETemplateManager.getInstance().getColumnInfos(new HVETemplateManager.HVETemplateColumnsCallback() {
@Override
public void onSuccess(List<HVEColumnInfo> result) {
// Called when the list is successfully obtained.
column[0] = result.get(0);
}
@Override
public void onFail(int error) {
// Called when the list failed to be obtained.
}
});
// Obtain the list details.
final String[] templateIds = new String[1];
// size: number of on-cloud templates to request (must be greater than 0). offset: offset of the templates to request (must be greater than or equal to 0). true: forcibly obtain the on-cloud template data.
HVETemplateManager.getInstance().getTemplateInfos(column[0].getColumnId(), size, offset, true, new HVETemplateManager.HVETemplateInfosCallback() {
@Override
public void onSuccess(List<HVETemplateInfo> result, boolean hasMore) {
// Called when the list details are successfully obtained.
HVETemplateInfo templateInfo = result.get(0);
// Obtain the template ID.
templateIds[0] = templateInfo.getId();
}
@Override
public void onFail(int errorCode) {
// Called when the list details failed to be obtained.
}
});
// Obtain the template ID when the list details are obtained.
String templateId = templateIds[0];
// Obtain a template project.
final List<HVETemplateElement>[] editableElementList = new ArrayList[1];
HVETemplateManager.getInstance().getTemplateProject(templateId, new HVETemplateManager.HVETemplateProjectCallback() {
@Override
public void onSuccess(List<HVETemplateElement> editableElements) {
// Direct to the material selection screen when the project is successfully obtained. Update editableElements with the paths of the selected local materials.
editableElementList[0] = editableElements;
}
@Override
public void onProgress(int progress) {
// Called when the progress of obtaining the project is received.
}
@Override
public void onFail(int errorCode) {
// Called when the project failed to be obtained.
}
});
// Prepare a template project.
HVETemplateManager.getInstance().prepareTemplateProject(templateId, new HVETemplateManager.HVETemplateProjectPrepareCallback() {
@Override
public void onSuccess() {
// Called when the preparation is successful. Create an instance of HuaweiVideoEditor, for operations like playback, preview, and export.
}
@Override
public void onProgress(int progress) {
// Called when the preparation progress is received.
}
@Override
public void onFail(int errorCode) {
// Called when the preparation failed.
}
});
// Create an instance of HuaweiVideoEditor.
// Such an instance will be used for operations like playback and export.
HuaweiVideoEditor editor = HuaweiVideoEditor.create(templateId, editableElementList[0]);
try {
editor.initEnvironment();
} catch (LicenseException e) {
SmartLog.e(TAG, "editor initEnvironment ERROR.");
}
Once you've completed this process, you'll have created an app just like in the demo displayed below.
Conclusion
An eye-catching video, for all the good it can bring, can be difficult to create. But with the help of video templates, users can create great-looking videos in less time, leaving them more time to create even more videos.
This article illustrates a video template solution for mobile apps. The template capability offers various out-of-the-box preset templates that can be easily managed on a platform. Better still, the whole integration process is easy. So easy, in fact, that even I could create a video app with templates.
References
Tips for Using Templates to Create Amazing Videos
Integrating the Template Capability
For more information like this, you can visit the HUAWEI Developer Forum.
Original link: https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0201257812100840239&fid=0101187876626530001
This is an application-level development article, so we won't go through the image segmentation algorithm itself. Instead, we use Huawei ML Kit, which provides the image segmentation capability, to develop the app. Developers will learn how to quickly build an ID photo DIY applet using this SDK.
Background
I don't know if you have had this experience: all of a sudden, your school or company asks for one-inch or two-inch ID photos, for example to apply for a passport or student card, and the photos must have a specific background color. Many people don't have time to visit a photo studio, or the photos they already have don't meet the background requirements. I had a similar experience. My school asked for a passport photo while the school photo studio was closed, so I hurriedly took a photo with my phone, using a bedspread as the background. As a result, I was scolded by the teacher.
Many years later, ML Kit's machine learning now offers an image segmentation function. Using this SDK to develop a small ID photo DIY program would have perfectly solved that embarrassment.
Here is a demo of the result.
How effective is it? See for yourself: all it takes is a small program that can be written quickly!
Core tip: this SDK is free, and all Android models are covered!
ID Photo Development in Practice
1. Preparation
1.1 Add the Huawei Maven Repository in the Project-Level Gradle File
Open the project-level build.gradle file in Android Studio.
Add the following Maven repository address:
Code:
buildscript {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}
allprojects {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}
1.2 Add SDK Dependency in Application Level build.gradle
Introduce the base vision SDK and the image segmentation body model package:
Code:
dependencies {
    implementation 'com.huawei.hms:ml-computer-vision:1.0.2.300'
    implementation 'com.huawei.hms:ml-computer-vision-image-segmentation-body-model:1.0.2.301'
}
1.3 Add the Model in the AndroidManifest.xml File
To enable the application to automatically update the latest machine learning model to the user's device after the app is installed from Huawei AppGallery, add the following statement to the application's AndroidManifest.xml file:
Code:
<manifest ...>
    <application ...>
        <meta-data
            android:name="com.huawei.hms.ml.DEPENDENCY"
            android:value="imgseg" />
    </application>
</manifest>
1.4 Apply for Camera and Storage Permissions in the AndroidManifest.xml File
Code:
<!-- Storage permission. -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
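The snippet above only declares the write permission. Since this step also covers camera access and the dynamic permission code below checks the read permission, the declaration would presumably also include the following standard Android permissions (added here for completeness):
Code:
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.CAMERA" />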
2. Two Key Steps of Code Development
2.1 Dynamic Permission Application
Code:
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    if (!allPermissionsGranted()) {
        getRuntimePermissions();
    }
}

@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
        @NonNull int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    if (requestCode != PERMISSION_REQUESTS) {
        return;
    }
    boolean isNeedShowDiag = false;
    for (int i = 0; i < permissions.length; i++) {
        if (permissions[i].equals(Manifest.permission.READ_EXTERNAL_STORAGE)
                && grantResults[i] != PackageManager.PERMISSION_GRANTED) {
            isNeedShowDiag = true;
        }
    }
    if (isNeedShowDiag && !ActivityCompat.shouldShowRequestPermissionRationale(this, Manifest.permission.READ_EXTERNAL_STORAGE)) {
        AlertDialog dialog = new AlertDialog.Builder(this)
                .setMessage(getString(R.string.camera_permission_rationale))
                .setPositiveButton(getString(R.string.settings), new DialogInterface.OnClickListener() {
                    @Override
                    public void onClick(DialogInterface dialog, int which) {
                        // Open the app's details page in system settings based on the package name.
                        Intent intent = new Intent(Settings.ACTION_APPLICATION_DETAILS_SETTINGS);
                        intent.setData(Uri.parse("package:" + getPackageName()));
                        startActivityForResult(intent, 200);
                    }
                })
                .setNegativeButton(getString(R.string.cancel), new DialogInterface.OnClickListener() {
                    @Override
                    public void onClick(DialogInterface dialog, int which) {
                        finish();
                    }
                }).create();
        dialog.show();
    }
}
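The snippet above calls two helpers, allPermissionsGranted() and getRuntimePermissions(), that are not shown in the post. A minimal sketch of what they might look like, built only on standard Android APIs, is given below; the PERMISSION_REQUESTS constant value and the exact permission list are assumptions based on the call sites above.
Code:
private static final int PERMISSION_REQUESTS = 1;

private String[] getRequiredPermissions() {
    // Permissions this sample relies on; adjust to match your manifest.
    return new String[]{
            Manifest.permission.CAMERA,
            Manifest.permission.READ_EXTERNAL_STORAGE,
            Manifest.permission.WRITE_EXTERNAL_STORAGE};
}

private boolean allPermissionsGranted() {
    for (String permission : getRequiredPermissions()) {
        if (ContextCompat.checkSelfPermission(this, permission) != PackageManager.PERMISSION_GRANTED) {
            return false;
        }
    }
    return true;
}

private void getRuntimePermissions() {
    // Request every permission that has not been granted yet.
    List<String> toRequest = new ArrayList<>();
    for (String permission : getRequiredPermissions()) {
        if (ContextCompat.checkSelfPermission(this, permission) != PackageManager.PERMISSION_GRANTED) {
            toRequest.add(permission);
        }
    }
    if (!toRequest.isEmpty()) {
        ActivityCompat.requestPermissions(this, toRequest.toArray(new String[0]), PERMISSION_REQUESTS);
    }
}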
2.2 Creating an Image Segmentation Detector
The image segmentation detector can be created through the image segmentation detection configurator MLImageSegmentationSetting.
Code:
MLImageSegmentationSetting setting = new MLImageSegmentationSetting.Factory()
.setAnalyzerType(MLImageSegmentationSetting.BODY_SEG)
.setExact(true)
.create();
this.analyzer = MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer(setting);
2.3 Create an MLFrame Object from android.graphics.Bitmap for the Analyzer to Detect Pictures
An MLFrame object is created from an android.graphics.Bitmap and passed to the analyzer for detection.
Code:
MLFrame mlFrame = new MLFrame.Creator().setBitmap(this.originBitmap).create();
2.4 Call the asyncAnalyseFrame Method for Image Segmentation
Code:
// Create a task to asynchronously process the result returned by the image segmentation detector.
Task<MLImageSegmentation> task = this.analyzer.asyncAnalyseFrame(mlFrame);
task.addOnSuccessListener(new OnSuccessListener<MLImageSegmentation>() {
@Override public void onSuccess(MLImageSegmentation mlImageSegmentationResults) {
// Transacting logic for segment success.
if (mlImageSegmentationResults != null) {
StillCutPhotoActivity.this.foreground = mlImageSegmentationResults.getForeground();
StillCutPhotoActivity.this.preview.setImageBitmap(StillCutPhotoActivity.this.foreground);
StillCutPhotoActivity.this.processedImage = ((BitmapDrawable) ((ImageView) StillCutPhotoActivity.this.preview).getDrawable()).getBitmap();
StillCutPhotoActivity.this.changeBackground();
} else {
StillCutPhotoActivity.this.displayFailure();
}
}
}).addOnFailureListener(new OnFailureListener() {
@Override public void onFailure(Exception e) {
// Transacting logic for segment failure.
StillCutPhotoActivity.this.displayFailure();
return;
}
});
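Once segmentation is done (for example, when the activity is destroyed), the analyzer's resources should be released. A minimal sketch is shown below; it assumes the analyzer exposes a stop() method that may throw IOException, as in the ML Kit sample code.
Code:
@Override
protected void onDestroy() {
    super.onDestroy();
    if (this.analyzer != null) {
        try {
            // Release the detector's resources when the activity is destroyed.
            this.analyzer.stop();
        } catch (IOException e) {
            Log.e(TAG, "Failed to stop the analyzer: " + e.getMessage());
        }
    }
}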
2.5 Change the Picture Background
Code:
this.backgroundBitmap = BitmapUtils.loadFromPath(StillCutPhotoActivity.this, id, targetedSize.first, targetedSize.second);
BitmapDrawable drawable = new BitmapDrawable(backgroundBitmap);
this.preview.setDrawingCacheEnabled(true);
this.preview.setBackground(drawable);
this.preview.setImageBitmap(this.foreground);
this.processedImage = Bitmap.createBitmap(this.preview.getDrawingCache());
this.preview.setDrawingCacheEnabled(false);
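If you also want to keep the composited photo, one straightforward approach using standard Android APIs is to compress the processedImage bitmap into a file. The helper below is purely illustrative; the file name and save location are assumptions.
Code:
// Illustrative helper: save the composited bitmap to the app-specific pictures directory.
private void saveProcessedImage(Bitmap bitmap) {
    File outFile = new File(getExternalFilesDir(Environment.DIRECTORY_PICTURES),
            "id_photo_" + System.currentTimeMillis() + ".jpg");
    try (FileOutputStream out = new FileOutputStream(outFile)) {
        bitmap.compress(Bitmap.CompressFormat.JPEG, 100, out);
    } catch (IOException e) {
        Log.e(TAG, "Failed to save image: " + e.getMessage());
    }
}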
Conclusion
In this way, a small ID photo DIY program has been made. Let's see the demo.
If you are good with your hands, you can also add features such as changing outfits. The source code has been uploaded to GitHub, and you are welcome to improve it there as well.
https://github.com/HMS-MLKit/HUAWEI-HMS-MLKit-Sample
See the GitHub repository above for the source code (the project directory is id-photo-diy).
Based on the image segmentation capability, you can not only build an ID photo DIY program, but also implement the following related functions:
Cut out people's portraits from everyday photos, create interesting pictures by changing the background, or blur the background to get more beautiful and artistic photos.
Identify the sky, plants, food, cats and dogs, flowers, water surfaces, sand, buildings, mountains, and other elements in an image, and apply targeted beautification to them, such as making the sky bluer and the water clearer.
Identify the objects in the video stream, edit the special effects of the video stream, and change the background.
Feel free to brainstorm other functions!
For a more detailed development guide, refer to the official website of the Huawei Developer Alliance:
https://developer.huawei.com/consumer/en/doc/development/HMS-Guides/ml-introduction-4
Previous links:
NO. 1: One article to understand Huawei HMS ML Kit text recognition, bank card recognition, general card identification
NO. 2: Integrating MLkit and publishing ur app on Huawei AppGallery
NO. 3: Comparison Between Zxing and Huawei HMS Scan Kit
NO. 4: How to use Huawei HMS MLKit service to quickly develop a photo translation app
Displaying products with 3D models is something an e-commerce app can't afford to ignore. With such fancy visuals, an app can give users a fresh first impression of its products!
The 3D model plays an important role in boosting user conversion. It allows users to carefully view a product from every angle, before they make a purchase. Together with the AR technology, which gives users an insight into how the product will look in reality, the 3D model brings a fresher online shopping experience that can rival offline shopping.
Despite its advantages, the 3D model has yet to be widely adopted. The underlying reason for this is that applying current 3D modeling technology is expensive:
Technical requirements: Learning how to build a 3D model is time-consuming.
Time: It takes at least several hours to build a low polygon model for a simple object, and even longer for a high polygon one.
Spending: The average cost of building a simple model can be more than one hundred dollars, and even higher for building a complex one.
Luckily, 3D object reconstruction, a capability in 3D Modeling Kit newly launched in HMS Core, makes 3D model building straightforward. This capability automatically generates a 3D model with a texture for an object, via images shot from different angles with a common RGB-Cam. It gives an app the ability to build and preview 3D models. For instance, when an e-commerce app has integrated 3D object reconstruction, it can generate and display 3D models of shoes. Users can then freely zoom in and out on the models for a more immersive shopping experience.
Actual Effect
Technical Solutions
3D object reconstruction is implemented on both the device and cloud. RGB images of an object are collected on the device and then uploaded to the cloud. Key technologies involved in the on-cloud modeling process include object detection and segmentation, feature detection and matching, sparse/dense point cloud computing, and texture reconstruction. Finally, the cloud outputs an OBJ file (a commonly used 3D model file format) of the generated 3D model with 40,000 to 200,000 patches.
Preparations
1. Configuring a Dependency on the 3D Modeling SDK
Open the app-level build.gradle file and add a dependency on the 3D Modeling SDK in the dependencies block.
Code:
// Build a dependency on the 3D Modeling SDK.
implementation 'com.huawei.hms:modeling3d-object-reconstruct:1.0.0.300'
2. Configuring AndroidManifest.xml
Open the AndroidManifest.xml file in the main folder. Add the following information before <application> to apply for the storage read and write permissions and camera permission.
Code:
<!-- Permission to read data from and write data into storage. -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<!-- Permission to use the camera. -->
<uses-permission android:name="android.permission.CAMERA" />
Development Procedure
1. Configuring the Storage Permission Application
In the onCreate() method of MainActivity, check whether the storage read and write permissions have been granted; if not, apply for them by using requestPermissions.
Code:
if (EasyPermissions.hasPermissions(MainActivity.this, PERMISSIONS)) {
Log.i(TAG, "Permissions OK");
} else {
EasyPermissions.requestPermissions(MainActivity.this, "To use this app, you need to enable the permission.",
RC_CAMERA_AND_EXTERNAL_STORAGE, PERMISSIONS);
}
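The snippet relies on a PERMISSIONS array and an RC_CAMERA_AND_EXTERNAL_STORAGE request code that are not defined in the post. They would presumably be declared along these lines; the request code value is an assumption, while the permission names come from the manifest configuration above.
Code:
private static final int RC_CAMERA_AND_EXTERNAL_STORAGE = 0x01;
private static final String[] PERMISSIONS = new String[]{
        Manifest.permission.CAMERA,
        Manifest.permission.WRITE_EXTERNAL_STORAGE,
        Manifest.permission.READ_EXTERNAL_STORAGE};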
Check the application result. If the permissions are not granted, prompt the user to grant them.
Code:
@Override
public void onPermissionsGranted(int requestCode, @NonNull List<String> perms) {
Log.i(TAG, "permissions = " + perms);
if (requestCode == RC_CAMERA_AND_EXTERNAL_STORAGE && PERMISSIONS.length == perms.size()) {
initView();
initListener();
}
}
@Override
public void onPermissionsDenied(int requestCode, @NonNull List<String> perms) {
if (EasyPermissions.somePermissionPermanentlyDenied(this, perms)) {
new AppSettingsDialog.Builder(this)
.setRequestCode(RC_CAMERA_AND_EXTERNAL_STORAGE)
.setRationale("To use this app, you need to enable the permission.")
.setTitle("Insufficient permissions")
.build()
.show();
}
}
2. Creating a 3D Object Reconstruction Configurator
Code:
// Set the PICTURE mode.
Modeling3dReconstructSetting setting = new Modeling3dReconstructSetting.Factory()
.setReconstructMode(Modeling3dReconstructConstants.ReconstructMode.PICTURE)
.create();
3. Creating a 3D Object Reconstruction Engine and Initializing the Task
Call getInstance() of Modeling3dReconstructEngine and pass the current context to create an instance of the 3D object reconstruction engine.
Code:
// Create an engine.
modeling3dReconstructEngine = Modeling3dReconstructEngine.getInstance(mContext);
Use the engine to initialize the task.
Code:
// Initialize the 3D object reconstruction task.
modeling3dReconstructInitResult = modeling3dReconstructEngine.initTask(setting);
// Obtain the task ID.
String taskId = modeling3dReconstructInitResult.getTaskId();
4. Creating a Listener Callback to Process the Image Upload Result
Create a listener callback that allows you to configure the operations triggered upon upload success and failure.
Code:
// Create an upload listener callback.
private final Modeling3dReconstructUploadListener uploadListener = new Modeling3dReconstructUploadListener() {
@Override
public void onUploadProgress(String taskId, double progress, Object ext) {
// Upload progress.
}
@Override
public void onResult(String taskId, Modeling3dReconstructUploadResult result, Object ext) {
if (result.isComplete()) {
isUpload = true;
ScanActivity.this.runOnUiThread(new Runnable() {
@Override
public void run() {
progressCustomDialog.dismiss();
Toast.makeText(ScanActivity.this, getString(R.string.upload_text_success), Toast.LENGTH_SHORT).show();
}
});
TaskInfoAppDbUtils.updateTaskIdAndStatusByPath(new Constants(ScanActivity.this).getCaptureImageFile() + manager.getSurfaceViewCallback().getCreateTime(), taskId, 1);
}
}
@Override
public void onError(String taskId, int errorCode, String message) {
isUpload = false;
runOnUiThread(new Runnable() {
@Override
public void run() {
progressCustomDialog.dismiss();
Toast.makeText(ScanActivity.this, "Upload failed." + message, Toast.LENGTH_SHORT).show();
LogUtil.e("taskid" + taskId + "errorCode: " + errorCode + " errorMessage: " + message);
}
});
}
};
5. Passing the Upload Listener Callback to the Engine to Upload Images
Pass the upload listener callback to the engine. Call uploadFile(), pass the task ID obtained in step 3 and the path of the images to be uploaded, and upload the images to the cloud server.
Code:
// Pass the listener callback to the engine.
modeling3dReconstructEngine.setReconstructUploadListener(uploadListener);
// Start uploading.
modeling3dReconstructEngine.uploadFile(taskId, filePath);
6. Querying the Task Status
Call getInstance of Modeling3dReconstructTaskUtils to create a task processing instance. Pass the current context.
Code:
// Create a task processing instance.
modeling3dReconstructTaskUtils = Modeling3dReconstructTaskUtils.getInstance(Modeling3dDemo.getApp());
Call queryTask of the task processing instance to query the status of the 3D object reconstruction task.
Code:
// Query the task status: 0 (images to be uploaded), 1 (image upload completed), 2 (model being generated), 3 (model generation completed), 4 (model generation failed).
Modeling3dReconstructQueryResult queryResult = modeling3dReconstructTaskUtils.queryTask(task.getTaskId());
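Since modeling runs on the cloud, the status usually needs to be polled off the main thread until it reaches a terminal value. The rough sketch below reuses the status codes from the comment above and assumes the query result exposes the status integer via a getStatus() accessor, so treat it as an outline rather than the kit's definitive API.
Code:
// Illustrative polling loop; run it on a worker thread, not the UI thread.
new Thread(() -> {
    while (true) {
        Modeling3dReconstructQueryResult queryResult = modeling3dReconstructTaskUtils.queryTask(task.getTaskId());
        int status = queryResult.getStatus(); // Assumption: status accessor returning the codes listed above.
        if (status == 3 || status == 4) {
            // 3: model generation completed; 4: model generation failed.
            break;
        }
        try {
            Thread.sleep(5000); // Wait a few seconds between queries.
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            break;
        }
    }
}).start();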
7. Creating a Listener Callback to Process the Model File Download Result
Create a listener callback that allows you to configure the operations triggered upon download success and failure.
Code:
// Create a download listener callback.
private Modeling3dReconstructDownloadListener modeling3dReconstructDownloadListener = new Modeling3dReconstructDownloadListener() {
@Override
public void onDownloadProgress(String taskId, double progress, Object ext) {
((Activity) mContext).runOnUiThread(new Runnable() {
@Override
public void run() {
dialog.show();
}
});
}
@Override
public void onResult(String taskId, Modeling3dReconstructDownloadResult result, Object ext) {
((Activity) mContext).runOnUiThread(new Runnable() {
@Override
public void run() {
Toast.makeText(getContext(), "Download complete", Toast.LENGTH_SHORT).show();
TaskInfoAppDbUtils.updateDownloadByTaskId(taskId, 1);
dialog.dismiss();
}
});
}
@Override
public void onError(String taskId, int errorCode, String message) {
LogUtil.e(taskId + " <---> " + errorCode + message);
((Activity) mContext).runOnUiThread(new Runnable() {
@Override
public void run() {
Toast.makeText(getContext(), "Download failed." + message, Toast.LENGTH_SHORT).show();
dialog.dismiss();
}
});
}
};
8. Passing the Download Listener Callback to the Engine to Download the File of the Generated Model
Pass the download listener callback to the engine. Call downloadModel, pass the task ID obtained in step 3 and the path for saving the model file to download it.
Code:
// Pass the download listener callback to the engine.
modeling3dReconstructEngine.setReconstructDownloadListener(modeling3dReconstructDownloadListener);
// Download the model file.
modeling3dReconstructEngine.downloadModel(appDb.getTaskId(), appDb.getFileSavePath());
More Information
The object should have rich texture, be medium-sized, and be a rigid body. The object should not be reflective, transparent, or semi-transparent. Supported object types include goods (like plush toys, bags, and shoes), furniture (like sofas), and cultural relics (such as bronzes, stone artifacts, and wooden artifacts).
The object dimension should be within the range from 15 x 15 x 15 cm to 150 x 150 x 150 cm. (A larger dimension requires a longer time for modeling.)
3D object reconstruction does not support modeling for the human body and face.
Ensure the following requirements are met during image collection: Put a single object on a stable plane in pure color. The environment shall not be dark or dazzling. Keep all images in focus, free from blur caused by motion or shaking. Ensure images are taken from various angles including the bottom, flat, and top (it is advised that you upload more than 50 images for an object). Move the camera as slowly as possible. Do not change the angle during shooting. Lastly, ensure the object-to-image ratio is as big as possible, and all parts of the object are present.
These are all about the sample code of 3D object reconstruction. Try to integrate it into your app and build your own 3D models!
ReferencesFor more details, you can go to:
3D Modeling Kit official website
3D Modeling Kit Development Documentation page, to find the documents you need
Reddit to join our developer discussion
GitHub to download 3D Modeling Kit sample codes
Stack Overflow to solve any integration problems
Background
Videos are memories, so why not spend more time making them look better? Many mobile apps on the market simply offer basic editing functions, such as applying filters and adding stickers. That is not enough for those who want to create dynamic videos in which a moving person stays in focus. Traditionally, this requires adding a keyframe and manually adjusting the video image, which could scare off many amateur video editors.
I am one of those people and I've been looking for an easier way of implementing this kind of feature. Fortunately for me, I stumbled across the track person capability from HMS Core Video Editor Kit, which automatically generates a video that centers on a moving person, as the images below show.
Before using the capability
After using the capability
Thanks to the capability, I can now confidently create a video with the person tracking effect.
Let's see how the function is developed.
Development Process
Preparations
Configure the app information in AppGallery Connect.
Project Configuration
1. Set the authentication information for the app via an access token or API key.
Use the setAccessToken method to set an access token during app initialization. This needs setting only once.
Code:
MediaApplication.getInstance().setAccessToken("your access token");
Or, use setApiKey to set an API key during app initialization. The API key needs to be set only once.
Code:
MediaApplication.getInstance().setApiKey("your ApiKey");
2. Set a unique License ID.
Code:
MediaApplication.getInstance().setLicenseId("License ID");
3. Initialize the runtime environment for HuaweiVideoEditor.
When creating a video editing project, first create a HuaweiVideoEditor object and initialize its runtime environment. Release this object when exiting a video editing project.
(1) Create a HuaweiVideoEditor object.
Code:
HuaweiVideoEditor editor = HuaweiVideoEditor.create(getApplicationContext());
(2) Specify the preview area position.
The area renders video images. This process is implemented via SurfaceView creation in the SDK. The preview area position must be specified before the area is created.
Code:
<LinearLayout
android:id="@+id/video_content_layout"
android:layout_width="0dp"
android:layout_height="0dp"
android:background="@color/video_edit_main_bg_color"
android:gravity="center"
android:orientation="vertical" />
// Specify the preview area position.
LinearLayout mSdkPreviewContainer = view.findViewById(R.id.video_content_layout);
// Configure the preview area layout.
editor.setDisplay(mSdkPreviewContainer);
(3) Initialize the runtime environment. LicenseException will be thrown if license verification fails.
Creating the HuaweiVideoEditor object will not occupy any system resources. The initialization time for the runtime environment has to be manually set. Then, necessary threads and timers will be created in the SDK.
Code:
try {
editor.initEnvironment();
} catch (LicenseException error) {
SmartLog.e(TAG, "initEnvironment failed: " + error.getErrorMsg());
finish();
return;
}
4. Add a video or an image.
Create a video lane. Add a video or an image to the lane using the file path.
Code:
// Obtain the HVETimeLine object.
HVETimeLine timeline = editor.getTimeLine();
// Create a video lane.
HVEVideoLane videoLane = timeline.appendVideoLane();
// Add a video to the end of the lane.
HVEVideoAsset videoAsset = videoLane.appendVideoAsset("test.mp4");
// Add an image to the end of the video lane.
HVEImageAsset imageAsset = videoLane.appendImageAsset("test.jpg");
Function Building
Code:
// Initialize the capability engine.
visibleAsset.initHumanTrackingEngine(new HVEAIInitialCallback() {
@Override
public void onProgress(int progress) {
// Initialization progress.
}
@Override
public void onSuccess() {
// The initialization is successful.
}
@Override
public void onError(int errorCode, String errorMessage) {
// The initialization failed.
}
});
// Track a person using the coordinates. Coordinates of two vertices that define the rectangle containing the person are returned.
List<Float> rects = visibleAsset.selectHumanTrackingPerson(bitmap, position2D);
// Enable the effect of person tracking.
visibleAsset.addHumanTrackingEffect(new HVEAIProcessCallback() {
@Override
public void onProgress(int progress) {
// Handling progress.
}
@Override
public void onSuccess() {
// Handling successful.
}
@Override
public void onError(int errorCode, String errorMessage) {
// Handling failed.
}
});
// Interrupt the effect.
visibleAsset.interruptHumanTracking();
// Remove the effect.
visibleAsset.removeHumanTrackingEffect();
References
The Importance of Visual Effects
Track Person
Have you ever watched a video of the northern lights? Mesmerizing light rays that swirl and dance through the star-encrusted sky. It's even more stunning when they are backdropped by crystal-clear waters that flow smoothly between and under ice crusts. Complementing each other, the moving sky and water compose a dynamic scene that reflects the constant rhythm of the mother nature.
Now imagine that the video is frozen into an image: It still looks beautiful, but lacks the dynamism of the video. Such a contrast between still and moving images shows how videos are sometimes better than still images when it comes to capturing majestic scenery, since the former can convey more information and thus be more engaging.
This may be the reason why we sometimes regret just taking photos instead of capturing a video when we encounter beautiful scenery or a memorable moment.
In addition to this, when we try to add a static image to a short video, we will find that the transition between the image and other segments of the video appears very awkward, since the image is the only static segment in the whole video.
If we want to turn a static image into a dynamic video by adding some motion effects to the sky and water, one way to do this is to use a professional PC program to modify the image. However, this process is often very complicated and time-consuming: It requires adjustment of the timeline, frames, and much more, which can be a daunting prospect for amateur image editors.
Luckily, there are now numerous AI-driven capabilities that can automatically create time-lapse videos for users. I chose to use the auto-timelapse capability provided by HMS Core Video Editor Kit. It can automatically detect the sky and water in an image and produce vivid dynamic effects for them, just like this:
The movement speed and angle of the sky and water are customizable.
Now let's take a look at the detailed integration procedure for this capability, to better understand how such a dynamic effect is created.
Integration Procedure
Preparations
1. Configure necessary app information. This step requires you to register a developer account, create an app, generate a signing certificate fingerprint, configure the fingerprint, and enable the required services.
2. Integrate the SDK of the kit.
3. Configure the obfuscation scripts.
4. Declare necessary permissions.
Project Configuration
1. Set the app authentication information. This can be done via an API key or an access token.
Set an API key via the setApiKey method: You only need to set the app authentication information once during app initialization.
Code:
MediaApplication.getInstance().setApiKey("your ApiKey");
Or, set an access token by using the setAccessToken method: You only need to set the app authentication information once during app initialization.
Code:
MediaApplication.getInstance().setAccessToken("your access token");
2. Set a License ID. This ID should be unique because it is used to manage the usage quotas of the service.
Code:
MediaApplication.getInstance().setLicenseId("License ID");
3. Initialize the runtime environment for the HuaweiVideoEditor object. Remember to release the HuaweiVideoEditor object when exiting the project.
Create a HuaweiVideoEditor object.
Code:
HuaweiVideoEditor editor = HuaweiVideoEditor.create(getApplicationContext());
Specify the preview area position. Such an area is used to render video images, which is implemented by SurfaceView created within the SDK. Before creating such an area, specify its position in the app first.
Code:
<LinearLayout
android:id="@+id/video_content_layout"
android:layout_width="0dp"
android:layout_height="0dp"
android:background="@color/video_edit_main_bg_color"
android:gravity="center"
android:orientation="vertical" />
// Specify the preview area position.
LinearLayout mSdkPreviewContainer = view.findViewById(R.id.video_content_layout);
// Specify the preview area layout.
editor.setDisplay(mSdkPreviewContainer);
Initialize the runtime environment. If license verification fails, LicenseException will be thrown.
After it is created, the HuaweiVideoEditor object will not occupy any system resources. You need to manually set when the runtime environment of the object will be initialized. Once you have done this, necessary threads and timers will be created within the SDK.
Code:
try {
editor.initEnvironment();
} catch (LicenseException error) {
SmartLog.e(TAG, "initEnvironment failed: " + error.getErrorMsg());
finish();
return;
}
Function Development
Code:
// Initialize the auto-timelapse engine.
imageAsset.initTimeLapseEngine(new HVEAIInitialCallback() {
@Override
public void onProgress(int progress) {
// Callback when the initialization progress is received.
}
@Override
public void onSuccess() {
// Callback when the initialization is successful.
}
@Override
public void onError(int errorCode, String errorMessage) {
// Callback when the initialization failed.
}
});
// When the initialization is successful, check whether there is sky or water in the image.
// Use a single-element array so the anonymous callback can write the detected state
// (local variables captured by anonymous classes must be effectively final).
final int[] motionType = {-1};
imageAsset.detectTimeLapse(new HVETimeLapseDetectCallback() {
@Override
public void onResult(int state) {
// Record the state parameter, which is used to define a motion effect.
motionType[0] = state;
}
});
// skySpeed indicates the speed at which the sky moves; skyAngle indicates the direction to which the sky moves; waterSpeed indicates the speed at which the water moves; waterAngle indicates the direction to which the water moves.
HVETimeLapseEffectOptions options =
new HVETimeLapseEffectOptions.Builder().setMotionType(motionType[0])
.setSkySpeed(skySpeed)
.setSkyAngle(skyAngle)
.setWaterAngle(waterAngle)
.setWaterSpeed(waterSpeed)
.build();
// Add the auto-timelapse effect.
imageAsset.addTimeLapseEffect(options, new HVEAIProcessCallback() {
@Override
public void onProgress(int progress) {
}
@Override
public void onSuccess() {
}
@Override
public void onError(int errorCode, String errorMessage) {
}
});
// Stop applying the auto-timelapse effect.
imageAsset.interruptTimeLapse();
// Remove the auto-timelapse effect.
imageAsset.removeTimeLapseEffect();
Now, the auto-timelapse capability has been successfully integrated into an app.
Conclusion
When capturing scenic vistas, videos, which can show the dynamic nature of the world around us, are often a better choice than static images. In addition, when creating videos with multiple shots, dynamic pictures deliver a smoother transition effect than static ones.
However, users who are not familiar with the process of animating static images may find the results unsatisfying if they try to do so manually using computer software.
The good news is that there are now mobile apps integrated with capabilities such as Video Editor Kit's auto-timelapse feature that can create time-lapse effects for users. The generated effect appears authentic and natural, the capability is easy to use, and its integration is straightforward. With such capabilities in place, a video/image app can provide users with a more captivating user experience.
In addition to video/image editing apps, I believe the auto-timelapse capability can also be utilized by many other types of apps. What other kinds of apps do you think would benefit from such a feature? Let me know in the comments section.
Quick question: How do 3D models help e-commerce apps?
The most obvious answer is that it makes the shopping experience more immersive, and there are a whole host of other benefits they bring.
To begin with, a 3D model is a more impressive way of showcasing a product to potential customers. One way it does this is by displaying richer details (allowing potential customers to rotate the product and view it from every angle), helping customers make more informed purchasing decisions. Not only that, customers can virtually try on 3D products, recreating the experience of shopping in a physical store. In short, all these factors contribute to boosting user conversion.
As great as it is, the 3D model has not been widely adopted among those who want it. A major reason is that the cost of building a 3D model with existing advanced 3D modeling technology is very high, due to:
Technical requirements: Building a 3D model requires someone with expertise, which can take time to master.
Time: It takes at least several hours to build a low-polygon model for a simple object, not to mention a high-polygon one.
Spending: The average cost of building just a simple model can reach hundreds of dollars.
Fortunately for us, the 3D object reconstruction capability found in HMS Core 3D Modeling Kit makes 3D model creation easy-peasy. This capability automatically generates a texturized 3D model for an object, via images shot from multiple angles with a standard RGB camera on a phone. And what's more, the generated model can be previewed. Let's check out a shoe model created using the 3D object reconstruction capability.
Shoe Model Images
Technical Solutions
3D object reconstruction requires both the device and cloud. Images of an object are captured on a device, covering multiple angles of the object. And then the images are uploaded to the cloud for model creation. The on-cloud modeling process and key technologies include object detection and segmentation, feature detection and matching, sparse/dense point cloud computing, and texture reconstruction. Once the model is created, the cloud outputs an OBJ file (a commonly used 3D model file format) of the generated 3D model with 40,000 to 200,000 patches.
Now the boring part is out of the way. Let's move on to the exciting part: how to integrate the 3D object reconstruction capability.
Integrating the 3D Object Reconstruction Capability
Preparations
1. Configure the build dependency for the 3D Modeling SDK.
Add the build dependency for the 3D Modeling SDK in the dependencies block in the app-level build.gradle file.
Code:
// Build dependency for the 3D Modeling SDK.
implementation 'com.huawei.hms:modeling3d-object-reconstruct:1.0.0.300'
2. Configure AndroidManifest.xml.
Open the AndroidManifest.xml file in the main folder. Add the following information before <application> to apply for the storage read and write permissions and camera permission as needed:
Code:
<!-- Write into and read from external storage. -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<!-- Use the camera. -->
<uses-permission android:name="android.permission.CAMERA" />
Function Development
1. Configure the storage permission application.
In the onCreate() method of MainActivity, check whether the storage read and write permissions have been granted; if not, apply for them by using requestPermissions.
Code:
if (EasyPermissions.hasPermissions(MainActivity.this, PERMISSIONS)) {
Log.i(TAG, "Permissions OK");
} else {
EasyPermissions.requestPermissions(MainActivity.this, "To use this app, you need to enable the permission.",
RC_CAMERA_AND_EXTERNAL_STORAGE, PERMISSIONS);
}
Check the application result. If the permissions are granted, initialize the UI; if the permissions are not granted, prompt the user to grant them.
Code:
@Override
public void onPermissionsGranted(int requestCode, @NonNull List<String> perms) {
Log.i(TAG, "permissions = " + perms);
if (requestCode == RC_CAMERA_AND_EXTERNAL_STORAGE && PERMISSIONS.length == perms.size()) {
initView();
initListener();
}
}
@Override
public void onPermissionsDenied(int requestCode, @NonNull List<String> perms) {
if (EasyPermissions.somePermissionPermanentlyDenied(this, perms)) {
new AppSettingsDialog.Builder(this)
.setRequestCode(RC_CAMERA_AND_EXTERNAL_STORAGE)
.setRationale("To use this app, you need to enable the permission.")
.setTitle("Insufficient permissions")
.build()
.show();
}
}
2. Create a 3D object reconstruction configurator.
Code:
// PICTURE mode.
Modeling3dReconstructSetting setting = new Modeling3dReconstructSetting.Factory()
.setReconstructMode(Modeling3dReconstructConstants.ReconstructMode.PICTURE)
.create();
3. Create a 3D object reconstruction engine and initialize the task.
Call getInstance() of Modeling3dReconstructEngine and pass the current context to create an instance of the 3D object reconstruction engine.
Code:
// Initialize the engine.
modeling3dReconstructEngine = Modeling3dReconstructEngine.getInstance(mContext);
Use the engine to initialize the task.
Code:
// Create a 3D object reconstruction task.
modeling3dReconstructInitResult = modeling3dReconstructEngine.initTask(setting);
// Obtain the task ID.
String taskId = modeling3dReconstructInitResult.getTaskId();
4. Create a listener callback to process the image upload result.
Create a listener callback in which you can configure the operations triggered upon upload success and failure.
Code:
// Create a listener callback for the image upload task.
private final Modeling3dReconstructUploadListener uploadListener = new Modeling3dReconstructUploadListener() {
@Override
public void onUploadProgress(String taskId, double progress, Object ext) {
// Upload progress
}
@Override
public void onResult(String taskId, Modeling3dReconstructUploadResult result, Object ext) {
if (result.isComplete()) {
isUpload = true;
ScanActivity.this.runOnUiThread(new Runnable() {
@Override
public void run() {
progressCustomDialog.dismiss();
Toast.makeText(ScanActivity.this, getString(R.string.upload_text_success), Toast.LENGTH_SHORT).show();
}
});
TaskInfoAppDbUtils.updateTaskIdAndStatusByPath(new Constants(ScanActivity.this).getCaptureImageFile() + manager.getSurfaceViewCallback().getCreateTime(), taskId, 1);
}
}
@Override
public void onError(String taskId, int errorCode, String message) {
isUpload = false;
runOnUiThread(new Runnable() {
@Override
public void run() {
progressCustomDialog.dismiss();
Toast.makeText(ScanActivity.this, "Upload failed." + message, Toast.LENGTH_SHORT).show();
LogUtil.e("taskid" + taskId + "errorCode: " + errorCode + " errorMessage: " + message);
}
});
}
};
5. Set the image upload listener for the 3D object reconstruction engine and upload the captured images.
Pass the upload callback to the engine. Call uploadFile(), pass the task ID obtained in step 3 and the path of the images to be uploaded, and upload the images to the cloud server.
Code:
// Set the upload listener.
modeling3dReconstructEngine.setReconstructUploadListener(uploadListener);
// Upload captured images.
modeling3dReconstructEngine.uploadFile(taskId, filePath);
6. Query the task status.
Call getInstance of Modeling3dReconstructTaskUtils to create a task processing instance. Pass the current context.
Code:
// Initialize the task processing class.
modeling3dReconstructTaskUtils = Modeling3dReconstructTaskUtils.getInstance(Modeling3dDemo.getApp());
Call queryTask to query the status of the 3D object reconstruction task.
Code:
// Query the reconstruction task execution result: 0 (images to be uploaded), 1 (image upload completed), 2 (model being generated), 3 (model generation completed), 4 (model generation failed).
Modeling3dReconstructQueryResult queryResult = modeling3dReconstructTaskUtils.queryTask(task.getTaskId());
7. Create a listener callback to process the model file download result.
Create a listener callback in which you can configure the operations triggered upon download success and failure.
Code:
// Create a download callback listener
private Modeling3dReconstructDownloadListener modeling3dReconstructDownloadListener = new Modeling3dReconstructDownloadListener() {
@Override
public void onDownloadProgress(String taskId, double progress, Object ext) {
((Activity) mContext).runOnUiThread(new Runnable() {
@Override
public void run() {
dialog.show();
}
});
}
@Override
public void onResult(String taskId, Modeling3dReconstructDownloadResult result, Object ext) {
((Activity) mContext).runOnUiThread(new Runnable() {
@Override
public void run() {
Toast.makeText(getContext(), "Download complete", Toast.LENGTH_SHORT).show();
TaskInfoAppDbUtils.updateDownloadByTaskId(taskId, 1);
dialog.dismiss();
}
});
}
@Override
public void onError(String taskId, int errorCode, String message) {
LogUtil.e(taskId + " <---> " + errorCode + message);
((Activity) mContext).runOnUiThread(new Runnable() {
@Override
public void run() {
Toast.makeText(getContext(), "Download failed." + message, Toast.LENGTH_SHORT).show();
dialog.dismiss();
}
});
}
};
8. Pass the download listener callback to the engine to download the generated model file.
Pass the download listener callback to the engine. Call downloadModel, and pass the task ID obtained in step 3 and the path for saving the model file to download it.
Code:
// Set the listener for the model file download task.
modeling3dReconstructEngine.setReconstructDownloadListener(modeling3dReconstructDownloadListener);
// Download the model file.
modeling3dReconstructEngine.downloadModel(appDb.getTaskId(), appDb.getFileSavePath());
Notes
1. To deliver an ideal modeling result, 3D object reconstruction has some requirements on the object to be modeled. For example, the object should have rich textures and a fixed shape. The object is expected to be non-reflective and medium-sized. Transparency or semi-transparency is not recommended. An object that meets these requirements may fall into one of the following types: goods (including plush toys, bags, and shoes), furniture (like sofas), and cultural relics (like bronzes, stone artifacts, and wooden artifacts).
2. The object dimensions should be within the range of 15 x 15 x 15 cm to 150 x 150 x 150 cm. (Larger dimensions require a longer modeling time.)
3. Modeling for the human body or face is not yet supported by the capability.
4. Suggestions for image capture: Put a single object on a stable plane in pure color. The environment should be well lit and plain. Keep all images in focus, free from blur caused by motion or shaking, and take pictures of the object from various angles including the bottom, face, and top. Uploading more than 50 images for an object is recommended. Move the camera as slowly as possible, and do not suddenly alter the angle when taking pictures. The object-to-image ratio should be as big as possible, and no part of the object should be missing.
With all these in mind, as well as the development procedure of the capability, now we are ready to create a 3D model like the shoe model above. Looking forward to seeing your own models created using this capability in the comments section below.
Reference
Home page of 3D Modeling Kit
Service introduction to 3D Modeling Kit
Detailed information about 3D object reconstruction