Background
Videos are memories — so why not spend more time making them look better? Many mobile apps on the market offer only basic editing functions, such as applying filters and adding stickers. That is not enough for users who want to create dynamic videos in which a moving person stays in focus. Traditionally, this requires adding keyframes and manually adjusting the video image, which can scare off many amateur video editors.
I am one of those people and I've been looking for an easier way of implementing this kind of feature. Fortunately for me, I stumbled across the track person capability from HMS Core Video Editor Kit, which automatically generates a video that centers on a moving person, as the images below show.
Before using the capability
After using the capability
Thanks to the capability, I can now confidently create a video with the person tracking effect.
Let's see how the function is developed.
Development Process
Preparations
Configure the app information in AppGallery Connect.
Project Configuration
1. Set the authentication information for the app via an access token or API key.
Use the setAccessToken method to set an access token during app initialization. The access token needs to be set only once.
Code:
MediaApplication.getInstance().setAccessToken("your access token");
Or, use setApiKey to set an API key during app initialization. The API key needs to be set only once.
Code:
MediaApplication.getInstance().setApiKey("your ApiKey");
2. Set a unique License ID.
Code:
MediaApplication.getInstance().setLicenseId("License ID");
3. Initialize the runtime environment for HuaweiVideoEditor.
When creating a video editing project, first create a HuaweiVideoEditor object and initialize its runtime environment. Release this object when exiting a video editing project.
(1) Create a HuaweiVideoEditor object.
Code:
HuaweiVideoEditor editor = HuaweiVideoEditor.create(getApplicationContext());
(2) Specify the preview area position.
The area renders video images. This process is implemented via SurfaceView creation in the SDK. The preview area position must be specified before the area is created.
Code:
<LinearLayout
android:id="@+id/video_content_layout"
android:layout_width="0dp"
android:layout_height="0dp"
android:background="@color/video_edit_main_bg_color"
android:gravity="center"
android:orientation="vertical" />
// Specify the preview area position.
LinearLayout mSdkPreviewContainer = view.findViewById(R.id.video_content_layout);
// Configure the preview area layout.
editor.setDisplay(mSdkPreviewContainer);
(3) Initialize the runtime environment. LicenseException will be thrown if license verification fails.
Creating the HuaweiVideoEditor object does not occupy any system resources. You decide when to initialize its runtime environment; at that point, the necessary threads and timers are created in the SDK.
Code:
try {
editor.initEnvironment();
} catch (LicenseException error) {
SmartLog.e(TAG, "initEnvironment failed: " + error.getErrorMsg());
finish();
return;
}
4. Add a video or an image.
Create a video lane. Add a video or an image to the lane using the file path.
Code:
// Obtain the HVETimeLine object.
HVETimeLine timeline = editor.getTimeLine();
// Create a video lane.
HVEVideoLane videoLane = timeline.appendVideoLane();
// Add a video to the end of the lane.
HVEVideoAsset videoAsset = videoLane.appendVideoAsset("test.mp4");
// Add an image to the end of the video lane.
HVEImageAsset imageAsset = videoLane.appendImageAsset("test.jpg");
Function Building
The code below initializes the person tracking engine, selects the person to track based on the tapped position in the preview, and then applies, interrupts, or removes the tracking effect.
Code:
// Initialize the capability engine.
visibleAsset.initHumanTrackingEngine(new HVEAIInitialCallback() {
@Override
public void onProgress(int progress) {
// Initialization progress.
}
@Override
public void onSuccess() {
// The initialization is successful.
}
@Override
public void onError(int errorCode, String errorMessage) {
// The initialization failed.
}
});
// Track a person using the coordinates. Coordinates of two vertices that define the rectangle containing the person are returned.
List<Float> rects = visibleAsset.selectHumanTrackingPerson(bitmap, position2D);
// Enable the effect of person tracking.
visibleAsset.addHumanTrackingEffect(new HVEAIProcessCallback() {
@Override
public void onProgress(int progress) {
// Handling progress.
}
@Override
public void onSuccess() {
// Handling success.
}
@Override
public void onError(int errorCode, String errorMessage) {
// Handling failed.
}
});
// Interrupt the effect.
visibleAsset.interruptHumanTracking();
// Remove the effect.
visibleAsset.removeHumanTrackingEffect();
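For clarity, here is a minimal sketch of the calling order, reusing the calls above: initialize the engine first, select the person only after initialization succeeds, and then apply the effect. The bitmap and position2D variables are assumed to be the current preview frame and the point the user tapped, obtained by your own UI code.
Code:
// Minimal sketch of the calling order; bitmap and position2D are assumed to come
// from the app's own preview/touch handling code.
visibleAsset.initHumanTrackingEngine(new HVEAIInitialCallback() {
    @Override
    public void onProgress(int progress) {
        // Initialization progress; could drive a progress bar here.
    }

    @Override
    public void onSuccess() {
        // Select the person from the tapped frame only after initialization succeeds.
        List<Float> rects = visibleAsset.selectHumanTrackingPerson(bitmap, position2D);
        // Optionally draw rects over the preview so the user can confirm the selection,
        // then apply the tracking effect.
        visibleAsset.addHumanTrackingEffect(new HVEAIProcessCallback() {
            @Override
            public void onProgress(int progress) {
                // Handling progress.
            }

            @Override
            public void onSuccess() {
                // The tracking effect has been applied; refresh the preview here.
            }

            @Override
            public void onError(int errorCode, String errorMessage) {
                // Applying the effect failed.
            }
        });
    }

    @Override
    public void onError(int errorCode, String errorMessage) {
        // Initialization failed; the tracking effect cannot be applied.
    }
});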
References
The Importance of Visual Effects
Track Person
Related
AR placement apps have enhanced daily life in a myriad of ways, from AR furniture placement for home furnishing and interior decorating and AR fitting in retail, to AR-based 3D models in education that give students an in-depth look at the internal workings of objects.
By integrating HUAWEI Scene Kit, you're only eight steps away from launching an AR placement app of your own. ARView, a set of AR-oriented scenario-based APIs in Scene Kit, uses the plane detection capability of AR Engine, along with the graphics rendering capability of Scene Kit, to load and render 3D materials in common AR scenes.
ARView Functions
With ARView, you'll be able to:
1. Load and render 3D materials in AR scenes.
2. Set whether to display the lattice plane (consisting of white lattice points) to help select a plane in a real-world view.
3. Tap an object placed on the lattice plane to select it. Once selected, the object will turn red. You can then move, resize, or rotate it as needed.
Development Procedure
Before using ARView, you'll need to integrate the Scene Kit SDK into your Android Studio project. For details, please refer to Integrating the HMS Core SDK.
ARView inherits from GLSurfaceView and overrides lifecycle-related methods, so you can create an ARView-based app in just eight steps:
1. Create an ARViewActivity that inherits from Activity. Add a Button to load materials.
Code:
public class ARViewActivity extends Activity {
private ARView mARView;
private Button mButton;
private boolean isLoadResource = false;
}
2. Add an ARView to the layout.
Code:
<com.huawei.hms.scene.sdk.ARView
android:id="@+id/ar_view"
android:layout_width="match_parent"
android:layout_height="match_parent">
</com.huawei.hms.scene.sdk.ARView>
Note: To achieve the desired experience offered by ARView, your app should not support screen orientation changes or split-screen mode; thus, add the following configuration in the AndroidManifest.xml file:
Code:
android:screenOrientation="portrait"
android:resizeableActivity="false"
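For reference, those two attributes sit on the activity declaration in AndroidManifest.xml. A possible declaration (the activity name here is just an example) is:
Code:
<activity
    android:name=".ARViewActivity"
    android:screenOrientation="portrait"
    android:resizeableActivity="false" />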
3. Override the onCreate method of ARViewActivity and obtain the ARView.
Code:
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_ar_view);
mARView = findViewById(R.id.ar_view);
mButton = findViewById(R.id.button);
}
4. Add a Switch button in the onCreate method to set whether or not to display the lattice plane.
Code:
Switch mSwitch = findViewById(R.id.show_plane_view);
mSwitch.setChecked(true);
mSwitch.setOnCheckedChangeListener(new CompoundButton.OnCheckedChangeListener() {
@Override
public void onCheckedChanged(CompoundButton buttonView, boolean isChecked) {
mARView.enablePlaneDisplay(isChecked);
}
});
Note: Add the Switch button in the layout before using it.
Code:
<Switch
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:id="@+id/show_plane_view"
android:layout_alignParentTop="true"
android:layout_marginTop="15dp"
android:layout_alignParentEnd="true"
android:layout_marginEnd="15dp"
android:layout_gravity="end"
android:text="@string/show_plane"
android:theme="@style/AppTheme"
tools:ignore="RelativeOverlap" />
5. Add a button callback method. Tapping the button once will load a material, and tapping it again will clear the material.
Code:
public void onBtnClearResourceClicked(View view) {
if (!isLoadResource) {
mARView.loadAsset("ARView/scene.gltf");
isLoadResource = true;
mButton.setText(R.string.btn_text_clear_resource);
} else {
mARView.clearResource();
mARView.loadAsset("");
isLoadResource = false;
mButton.setText(R.string.btn_text_load);
}
}
Note: The onBtnClearResourceClicked method must be registered in the onClick attribute of the button in the layout, so that tapping the button loads or clears a material. A possible button declaration is sketched below.
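The following is only an illustrative sketch of that button declaration; the ID and text string are assumed to match the ones used in the Java code above.
Code:
<Button
    android:id="@+id/button"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="@string/btn_text_load"
    android:onClick="onBtnClearResourceClicked" />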
6. Override the onPause method of ARViewActivity and call the onPause method of ARView.
Code:
@Override
protected void onPause() {
super.onPause();
mARView.onPause();
}
7. Override the onResume method of ARViewActivity and call the onResume method of ARView.
Code:
@Override
protected void onResume() {
super.onResume();
mARView.onResume();
}
8. Override the onDestroy method for ARViewActivity and call the destroy method of ARView.
Code:
@Override
protected void onDestroy() {
super.onDestroy();
mARView.destroy();
}
9. (Optional) After the material is loaded, use setInitialPose to set its initial status (scale and rotation).
Code:
float[] scale = new float[] { 0.1f, 0.1f, 0.1f };
float[] rotation = new float[] { 0.707f, 0.0f, -0.707f, 0.0f };
mARView.setInitialPose(scale, rotation);
Effects
You can develop a basic AR placement app, simply by calling ARView from Scene Kit, as detailed in the eight steps above. If you are interested in this implementation method, you can view the Scene Kit demo on GitHub.
The ARView capability can be used to do far more than just develop AR placement apps; it can also help you implement a range of engaging functions, such as AR games, virtual exhibitions, and AR navigation features.
GitHub to download demos and sample codes
Displaying products with 3D models is something e-commerce apps can hardly afford to ignore. With such models, an app can give users a fresh first impression of its products.
A 3D model plays an important role in boosting user conversion: it allows users to carefully view a product from every angle before they make a purchase. Together with AR technology, which gives users an insight into how the product will look in reality, a 3D model brings a fresh online shopping experience that can rival offline shopping.
Despite its advantages, the 3D model has yet to be widely adopted. The underlying reason for this is that applying current 3D modeling technology is expensive:
Technical requirements: Learning how to build a 3D model is time-consuming.
Time: It takes at least several hours to build a low polygon model for a simple object, and even longer for a high polygon one.
Spending: The average cost of building a simple model can be more than one hundred dollars, and even higher for building a complex one.
Luckily, 3D object reconstruction, a capability newly launched in HMS Core 3D Modeling Kit, makes 3D model building straightforward. This capability automatically generates a textured 3D model of an object from images shot at different angles with a standard RGB camera. It gives an app the ability to build and preview 3D models. For instance, when an e-commerce app has integrated 3D object reconstruction, it can generate and display 3D models of shoes. Users can then freely zoom in and out on the models for a more immersive shopping experience.
Actual Effect
Technical Solutions
3D object reconstruction is implemented on both the device and cloud. RGB images of an object are collected on the device and then uploaded to the cloud. Key technologies involved in the on-cloud modeling process include object detection and segmentation, feature detection and matching, sparse/dense point cloud computing, and texture reconstruction. Finally, the cloud outputs an OBJ file (a commonly used 3D model file format) of the generated 3D model with 40,000 to 200,000 patches.
Preparations
1. Configuring a Dependency on the 3D Modeling SDK
Open the app-level build.gradle file and add a dependency on the 3D Modeling SDK in the dependencies block.
Code:
// Build a dependency on the 3D Modeling SDK.
implementation 'com.huawei.hms:modeling3d-object-reconstruct:1.0.0.300'
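If your project does not yet reference the Huawei Maven repository, the dependency above cannot be resolved. It is typically added to the project-level build.gradle file, for example:
Code:
// Project-level build.gradle: add the Huawei Maven repository so HMS Core SDKs can be resolved.
buildscript {
    repositories {
        google()
        mavenCentral()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
allprojects {
    repositories {
        google()
        mavenCentral()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}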
2. Configuring AndroidManifest.xml
Open the AndroidManifest.xml file in the main folder. Add the following information before <application> to apply for the storage read and write permissions and camera permission.
Code:
<!-- Permission to read data from and write data into storage. -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<!-- Permission to use the camera. -->
<uses-permission android:name="android.permission.CAMERA" />
Development Procedure
1. Configuring the Storage Permission Application
In the onCreate() method of MainActivity, check whether the storage read and write permissions have been granted; if not, apply for them by using requestPermissions.
Code:
if (EasyPermissions.hasPermissions(MainActivity.this, PERMISSIONS)) {
Log.i(TAG, "Permissions OK");
} else {
EasyPermissions.requestPermissions(MainActivity.this, "To use this app, you need to enable the permission.",
RC_CAMERA_AND_EXTERNAL_STORAGE, PERMISSIONS);
}
Check the application result. If the permissions are not granted, prompt the user to grant them.
Code:
@Override
public void onPermissionsGranted(int requestCode, @NonNull List<String> perms) {
Log.i(TAG, "permissions = " + perms);
if (requestCode == RC_CAMERA_AND_EXTERNAL_STORAGE && PERMISSIONS.length == perms.size()) {
initView();
initListener();
}
}
@Override
public void onPermissionsDenied(int requestCode, @NonNull List<String> perms) {
if (EasyPermissions.somePermissionPermanentlyDenied(this, perms)) {
new AppSettingsDialog.Builder(this)
.setRequestCode(RC_CAMERA_AND_EXTERNAL_STORAGE)
.setRationale("To use this app, you need to enable the permission.")
.setTitle("Insufficient permissions")
.build()
.show();
}
}
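For these two callbacks to be invoked, the system permission result also needs to be forwarded to EasyPermissions, which is usually done as follows:
Code:
@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    // Forward the result so that onPermissionsGranted/onPermissionsDenied are called.
    EasyPermissions.onRequestPermissionsResult(requestCode, permissions, grantResults, this);
}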
2. Creating a 3D Object Reconstruction Configurator
Code:
// Set the PICTURE mode.
Modeling3dReconstructSetting setting = new Modeling3dReconstructSetting.Factory()
.setReconstructMode(Modeling3dReconstructConstants.ReconstructMode.PICTURE)
.create();
3. Creating a 3D Object Reconstruction Engine and Initializing the Task
Call getInstance() of Modeling3dReconstructEngine and pass the current context to create an instance of the 3D object reconstruction engine.
Code:
// Create an engine.
modeling3dReconstructEngine = Modeling3dReconstructEngine.getInstance(mContext);
Use the engine to initialize the task.
Code:
// Initialize the 3D object reconstruction task.
modeling3dReconstructInitResult = modeling3dReconstructEngine.initTask(setting);
// Obtain the task ID.
String taskId = modeling3dReconstructInitResult.getTaskId();
4. Creating a Listener Callback to Process the Image Upload Result
Create a listener callback that allows you to configure the operations triggered upon upload success and failure.
Code:
// Create an upload listener callback.
private final Modeling3dReconstructUploadListener uploadListener = new Modeling3dReconstructUploadListener() {
@Override
public void onUploadProgress(String taskId, double progress, Object ext) {
// Upload progress.
}
@Override
public void onResult(String taskId, Modeling3dReconstructUploadResult result, Object ext) {
if (result.isComplete()) {
isUpload = true;
ScanActivity.this.runOnUiThread(new Runnable() {
@Override
public void run() {
progressCustomDialog.dismiss();
Toast.makeText(ScanActivity.this, getString(R.string.upload_text_success), Toast.LENGTH_SHORT).show();
}
});
TaskInfoAppDbUtils.updateTaskIdAndStatusByPath(new Constants(ScanActivity.this).getCaptureImageFile() + manager.getSurfaceViewCallback().getCreateTime(), taskId, 1);
}
}
@Override
public void onError(String taskId, int errorCode, String message) {
isUpload = false;
runOnUiThread(new Runnable() {
@Override
public void run() {
progressCustomDialog.dismiss();
Toast.makeText(ScanActivity.this, "Upload failed." + message, Toast.LENGTH_SHORT).show();
LogUtil.e("taskid" + taskId + "errorCode: " + errorCode + " errorMessage: " + message);
}
});
}
};
5. Passing the Upload Listener Callback to the Engine to Upload Images
Pass the upload listener callback to the engine, then call uploadFile() with the task ID obtained in step 3 and the path of the images to be uploaded. The images will be uploaded to the cloud server.
Code:
// Pass the listener callback to the engine.
modeling3dReconstructEngine.setReconstructUploadListener(uploadListener);
// Start uploading.
modeling3dReconstructEngine.uploadFile(taskId, filePath);
6. Querying the Task Status
Call getInstance of Modeling3dReconstructTaskUtils to create a task processing instance. Pass the current context.
Code:
// Create a task processing instance.
modeling3dReconstructTaskUtils = Modeling3dReconstructTaskUtils.getInstance(Modeling3dDemo.getApp());
Call queryTask of the task processing instance to query the status of the 3D object reconstruction task.
Code:
// Query the task status: 0 (images to be uploaded), 1 (image upload completed), 2 (model being generated), 3 (model generation completed), 4 (model generation failed).
Modeling3dReconstructQueryResult queryResult = modeling3dReconstructTaskUtils.queryTask(task.getTaskId());
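queryTask sends a network request, so it should be called off the main thread. A rough polling sketch based on the status values in the comment above (assuming the query result exposes the status as an int, and that taskId is the ID obtained in step 3) could look like this:
Code:
// Rough polling sketch; run on a worker thread, never on the main thread.
new Thread(() -> {
    while (true) {
        Modeling3dReconstructQueryResult queryResult = modeling3dReconstructTaskUtils.queryTask(taskId);
        int status = queryResult.getStatus(); // assumed to return the 0-4 values described above
        if (status == 3) {
            // Model generation completed: the model file can now be downloaded.
            break;
        }
        if (status == 4) {
            // Model generation failed: surface the error to the user.
            break;
        }
        try {
            Thread.sleep(10_000); // wait a while before querying again
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            break;
        }
    }
}).start();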
7. Creating a Listener Callback to Process the Model File Download Result
Create a listener callback that allows you to configure the operations triggered upon download success and failure.
Code:
// Create a download listener callback.
private Modeling3dReconstructDownloadListener modeling3dReconstructDownloadListener = new Modeling3dReconstructDownloadListener() {
@Override
public void onDownloadProgress(String taskId, double progress, Object ext) {
((Activity) mContext).runOnUiThread(new Runnable() {
@Override
public void run() {
dialog.show();
}
});
}
@Override
public void onResult(String taskId, Modeling3dReconstructDownloadResult result, Object ext) {
((Activity) mContext).runOnUiThread(new Runnable() {
@Override
public void run() {
Toast.makeText(getContext(), "Download complete", Toast.LENGTH_SHORT).show();
TaskInfoAppDbUtils.updateDownloadByTaskId(taskId, 1);
dialog.dismiss();
}
});
}
@Override
public void onError(String taskId, int errorCode, String message) {
LogUtil.e(taskId + " <---> " + errorCode + message);
((Activity) mContext).runOnUiThread(new Runnable() {
@Override
public void run() {
Toast.makeText(getContext(), "Download failed." + message, Toast.LENGTH_SHORT).show();
dialog.dismiss();
}
});
}
};
8. Passing the Download Listener Callback to the Engine to Download the File of the Generated Model
Pass the download listener callback to the engine. Call downloadModel, pass the task ID obtained in step 3 and the path for saving the model file to download it.
Code:
// Pass the download listener callback to the engine.
modeling3dReconstructEngine.setReconstructDownloadListener(modeling3dReconstructDownloadListener);
// Download the model file.
modeling3dReconstructEngine.downloadModel(appDb.getTaskId(), appDb.getFileSavePath());
More Information
The object should have rich texture, be medium-sized, and be a rigid body. The object should not be reflective, transparent, or semi-transparent. Supported object types include goods (like plush toys, bags, and shoes), furniture (like sofas), and cultural relics (such as bronze, stone, and wooden artifacts).
The object dimension should be within the range from 15 x 15 x 15 cm to 150 x 150 x 150 cm. (A larger dimension requires a longer time for modeling.)
3D object reconstruction does not support modeling for the human body and face.
Ensure the following requirements are met during image collection: Place a single object on a stable plane in a pure color. The environment should not be too dark or too bright. Keep all images in focus, free from blur caused by motion or shaking. Take images from various angles, including from below, at eye level, and from above (it is advised that you upload more than 50 images per object). Move the camera as slowly as possible, and do not change the shooting angle abruptly. Lastly, ensure the object fills as much of each image as possible and that all parts of the object are captured.
That covers the sample code of 3D object reconstruction. Try integrating it into your app and build your own 3D models!
References
For more details, you can go to:
3D Modeling Kit official website
3D Modeling Kit Development Documentation page, to find the documents you need
Reddit to join our developer discussion
GitHub to download 3D Modeling Kit sample codes
Stack Overflow to solve any integration problems
Text to speech (TTS) is highly sought after by audio/video editors, thanks to its ability to automatically turn text into natural-sounding speech, as a low-cost alternative to human dubbing. It can be used on all kinds of videos, long or short.
I recently stumbled upon the AI dubbing capability of HMS Core Audio Editor Kit, which does just that. It can turn input text into speech with just a tap, and comes loaded with a selection of smooth, natural-sounding male and female timbres.
This is ideal for developing apps that involve e-books, creating audio content, and editing audio/video. Below describes how I integrated this capability.
Making Preparations
Complete all necessary preparations by following the official guide.
Configuring the Project
1. Set the app authentication information
The information can be set via an API key or access token (recommended).
Use setAccessToken to set an access token during app initialization.
Java:
HAEApplication.getInstance().setAccessToken("your access token");
Or, use setApiKey to set an API key during app initialization. The API key needs to be set only once.
Java:
HAEApplication.getInstance().setApiKey("your ApiKey");
2. Initialize the runtime environment
Initialize HuaweiAudioEditor, and create a timeline and necessary lanes.
Java:
// Create a HuaweiAudioEditor instance.
HuaweiAudioEditor mEditor = HuaweiAudioEditor.create(mContext);
// Initialize the runtime environment of HuaweiAudioEditor.
mEditor.initEnvironment();
// Create a timeline.
HAETimeLine mTimeLine = mEditor.getTimeLine();
// Create a lane.
HAEAudioLane audioLane = mTimeLine.appendAudioLane();
Import audio.
Java:
// Add an audio asset to the end of the lane.
HAEAudioAsset audioAsset = audioLane.appendAudioAsset("/sdcard/download/test.mp3", mTimeLine.getCurrentTime());
3. Integrate AI dubbing.
Call HAEAiDubbingEngine to implement AI dubbing.
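The configuration code below references a few app-defined values (volumeVal, speedVal, and defaultSpeakerType). These are placeholders in the sample; illustrative definitions might look like the following, with the valid ranges and speaker type IDs taken from the Audio Editor Kit documentation.
Java:
// Illustrative placeholder values only; check the kit's documentation for
// the valid volume/speed ranges and the available speaker (timbre) type IDs.
int volumeVal = 100;        // assumed playback volume
int speedVal = 100;         // assumed speech speed
int defaultSpeakerType = 1; // assumed speaker (timbre) type ID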
Java:
// Configure the AI dubbing engine.
HAEAiDubbingConfig haeAiDubbingConfig = new HAEAiDubbingConfig()
// Set the volume.
.setVolume(volumeVal)
// Set the speech speed.
.setSpeed(speedVal)
// Set the speaker.
.setType(defaultSpeakerType);
// Create a callback for an AI dubbing task.
HAEAiDubbingCallback callback = new HAEAiDubbingCallback() {
@Override
public void onError(String taskId, HAEAiDubbingError err) {
// Callback when an error occurs.
}
@Override
public void onWarn(String taskId, HAEAiDubbingWarn warn) {}
@Override
public void onRangeStart(String taskId, int start, int end) {}
@Override
public void onAudioAvailable(String taskId, HAEAiDubbingAudioInfo haeAiDubbingAudioFragment, int i, Pair<Integer, Integer> pair, Bundle bundle) {
// Start receiving and then saving the file.
}
@Override
public void onEvent(String taskId, int eventID, Bundle bundle) {
// Synthesis is complete.
if (eventID == HAEAiDubbingConstants.EVENT_SYNTHESIS_COMPLETE) {
// The AI dubbing task has been completed, that is, the synthesized audio data has been fully processed.
}
}
@Override
public void onSpeakerUpdate(List<HAEAiDubbingSpeaker> speakerList, List<String> lanList,
List<String> lanDescList) { }
};
// AI dubbing engine.
HAEAiDubbingEngine mHAEAiDubbingEngine = new HAEAiDubbingEngine(haeAiDubbingConfig);
// Set the listener for the playback process of an AI dubbing task.
mHAEAiDubbingEngine.setAiDubbingCallback(callback);
// Convert text to speech and play the speech. In the method, text indicates the text to be converted to speech, and mode indicates the mode for playing the converted audio.
String taskId = mHAEAiDubbingEngine.speak(text, mode);
// Pause playback.
mHAEAiDubbingEngine.pause();
// Resume playback.
mHAEAiDubbingEngine.resume();
// Stop AI dubbing.
mHAEAiDubbingEngine.stop();
Result
In the demo below, I successfully implemented the AI dubbing function in my app. Now, I can convert text into emotionally expressive speech, with default and custom timbres.
To learn more, please visit:
>> Audio Editor Kit official website
>> Audio Editor Kit Development Guide
>> Reddit to join developer discussions
>> GitHub to download the sample code
>> Stack Overflow to solve integration problems
Follow our official account for the latest HMS Core-related news and updates.
Being creative is hard, but thinking of a fantastic idea is even more challenging. And once you've done that, the hardest part is expressing that idea in an attractive way.
This, I think, is the very reason why templates are gaining popularity in text, image, audio, and video editing. Of all these templates, video templates are probably the most in demand, because creating a video from scratch is lengthy and can be costly. It is therefore much more convenient to create a video from a template, which is particularly true for video editing amateurs.
The video template solution I chose for my app is the template capability of HMS Core Video Editor Kit. This capability comes preloaded with a library of templates that my users can use directly to quickly create a short video, make a vlog of their journey, build a product display video, generate a news video, and more.
On top of this, the capability comes with a platform where I can easily manage the templates, as shown below.
Template management platform — AppGallery Connect
To be honest, one of the things that I really like about the capability is that it's easy to integrate, thanks to its straightforward code, as well as a whole set of APIs and descriptions of how to use them. Below is the process I followed to integrate the capability into my app.
Development Procedure
Preparations
Configure the app information.
Integrate the SDK.
Set up the obfuscation scripts.
Declare necessary permissions, including: device vibration permission, microphone use permission, storage read permission, and storage write permission (see the manifest sketch after this list).
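In AndroidManifest.xml, those four permissions could be declared with the standard Android permission names, for example:
Code:
<!-- Device vibration permission. -->
<uses-permission android:name="android.permission.VIBRATE" />
<!-- Microphone use permission. -->
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<!-- Storage read and write permissions. -->
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />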
Project Configuration
Setting Authentication Information
Set the authentication information by using:
An access token. The setting is required only once during app initialization.
Code:
MediaApplication.getInstance().setAccessToken("your access token");
Or, an API key, which also needs to be set only once during app initialization.
Code:
MediaApplication.getInstance().setApiKey("your ApiKey");
Configuring a License ID
Since this ID is used to manage the usage quotas of the service, the ID must be unique.
Code:
MediaApplication.getInstance().setLicenseId("License ID");
Initializing the Runtime Environment for HuaweiVideoEditor
During project configuration, an object of HuaweiVideoEditor must be created first, and its runtime environment must be initialized. When exiting the project, the object should be released.
1. Create a HuaweiVideoEditor object.
Code:
HuaweiVideoEditor editor = HuaweiVideoEditor.create(getApplicationContext());
2. Specify the preview area position. Such an area renders video images and is implemented via a SurfaceView created within the SDK. Specify the area's position before it is created.
Code:
<LinearLayout
android:id="@+id/video_content_layout"
android:layout_width="0dp"
android:layout_height="0dp"
android:background="@color/video_edit_main_bg_color"
android:gravity="center"
android:orientation="vertical" />
// Specify a preview area.
LinearLayout mSdkPreviewContainer = view.findViewById(R.id.video_content_layout);
// Set the preview area layout.
editor.setDisplay(mSdkPreviewContainer);
3. Initialize the runtime environment. LicenseException will be thrown, if the license verification fails.
Creating a HuaweiVideoEditor object does not occupy any system resources. You decide when to initialize its runtime environment; at that point, the required threads and timers are created in the SDK.
Code:
try {
editor.initEnvironment();
} catch (LicenseException error) {
SmartLog.e(TAG, "initEnvironment failed: " + error.getErrorMsg());
finish();
return;
}
Capability Integration
In this part, I use HVETemplateManager to obtain the on-cloud template list, and then provide the list to my app users.
Code:
// Obtain the template column list.
final HVEColumnInfo[] column = new HVEColumnInfo[1];
HVETemplateManager.getInstance().getColumnInfos(new HVETemplateManager.HVETemplateColumnsCallback() {
@Override
public void onSuccess(List<HVEColumnInfo> result) {
// Called when the list is successfully obtained.
column[0] = result.get(0);
}
@Override
public void onFail(int error) {
// Called when the list failed to be obtained.
}
});
// Obtain the list details.
final String[] templateIds = new String[1];
// size indicates the number of the to-be-requested on-cloud templates. The size value must be greater than 0. Offset indicates the offset of the to-be-requested on-cloud templates. The offset value must be greater than or equal to 0. true indicates to forcibly obtain the data of the on-cloud templates.
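// Example values only (assumption): request the first 20 templates in the column (size > 0, offset >= 0).
int size = 20;
int offset = 0;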
HVETemplateManager.getInstance().getTemplateInfos(column[0].getColumnId(), size, offset, true, new HVETemplateManager.HVETemplateInfosCallback() {
@Override
public void onSuccess(List<HVETemplateInfo> result, boolean hasMore) {
// Called when the list details are successfully obtained.
HVETemplateInfo templateInfo = result.get(0);
// Obtain the template ID.
templateIds[0] = templateInfo.getId();
}
@Override
public void onFail(int errorCode) {
// Called when the list details failed to be obtained.
}
});
// Obtain the template ID when the list details are obtained.
String templateId = templateIds[0];
// Obtain a template project.
final List<HVETemplateElement>[] editableElementList = new ArrayList[1];
HVETemplateManager.getInstance().getTemplateProject(templateId, new HVETemplateManager.HVETemplateProjectCallback() {
@Override
public void onSuccess(List<HVETemplateElement> editableElements) {
// Direct to the material selection screen when the project is successfully obtained. Update editableElements with the paths of the selected local materials.
editableElementList[0] = editableElements;
}
@Override
public void onProgress(int progress) {
// Called when the progress of obtaining the project is received.
}
@Override
public void onFail(int errorCode) {
// Called when the project failed to be obtained.
}
});
// Prepare a template project.
HVETemplateManager.getInstance().prepareTemplateProject(templateId, new HVETemplateManager.HVETemplateProjectPrepareCallback() {
@Override
public void onSuccess() {
// Called when the preparation is successful. Create an instance of HuaweiVideoEditor, for operations like playback, preview, and export.
}
@Override
public void onProgress(int progress) {
// Called when the preparation progress is received.
}
@Override
public void onFail(int errorCode) {
// Called when the preparation failed.
}
});
// Create an instance of HuaweiVideoEditor.
// Such an instance will be used for operations like playback and export.
HuaweiVideoEditor editor = HuaweiVideoEditor.create(templateId, editableElementList[0]);
try {
editor.initEnvironment();
} catch (LicenseException e) {
SmartLog.e(TAG, "editor initEnvironment ERROR.");
}
Once you've completed this process, you'll have created an app just like in the demo displayed below.
Conclusion
An eye-catching video, for all the good it can bring, can be difficult to create. But with the help of video templates, users can create great-looking videos in even less time, so they can spend more time creating more videos.
This article illustrates a video template solution for mobile apps. The template capability offers various out-of-the-box preset templates that can be easily managed on a platform. And what's better is that the whole integration process is easy. So easy, in fact, that even I could create a video app with templates.
References
Tips for Using Templates to Create Amazing Videos
Integrating the Template Capability
Have you ever watched a video of the northern lights? Mesmerizing light rays that swirl and dance through the star-encrusted sky. It's even more stunning when they are backdropped by crystal-clear waters that flow smoothly between and under ice crusts. Complementing each other, the moving sky and water compose a dynamic scene that reflects the constant rhythm of mother nature.
Now imagine that the video is frozen into an image: It still looks beautiful, but lacks the dynamism of the video. Such a contrast between still and moving images shows how videos are sometimes better than still images at capturing majestic scenery, since videos convey more information and are thus more engaging.
This may be the reason why we sometimes regret just taking photos instead of capturing a video when we encounter beautiful scenery or a memorable moment.
In addition to this, when we try to add a static image to a short video, we will find that the transition between the image and other segments of the video appears very awkward, since the image is the only static segment in the whole video.
If we want to turn a static image into a dynamic video by adding some motion effects to the sky and water, one way to do this is to use a professional PC program to modify the image. However, this process is often very complicated and time-consuming: It requires adjustment of the timeline, frames, and much more, which can be a daunting prospect for amateur image editors.
Luckily, there are now numerous AI-driven capabilities that can automatically create time-lapse videos for users. I chose to use the auto-timelapse capability provided by HMS Core Video Editor Kit. It can automatically detect the sky and water in an image and produce vivid dynamic effects for them, just like this:
The movement speed and angle of the sky and water are customizable.
Now let's take a look at the detailed integration procedure for this capability, to better understand how such a dynamic effect is created.
Integration Procedure
Preparations
1. Configure necessary app information. This step requires you to register a developer account, create an app, generate a signing certificate fingerprint, configure the fingerprint, and enable the required services.
2. Integrate the SDK of the kit.
3. Configure the obfuscation scripts.
4. Declare necessary permissions.
Project Configuration
1. Set the app authentication information. This can be done via an API key or an access token.
Set an API key via the setApiKey method: You only need to set the app authentication information once during app initialization.
Code:
MediaApplication.getInstance().setApiKey("your ApiKey");
Or, set an access token by using the setAccessToken method: You only need to set the app authentication information once during app initialization.
Code:
MediaApplication.getInstance().setAccessToken("your access token");
2. Set a License ID. This ID should be unique because it is used to manage the usage quotas of the service.
Code:
MediaApplication.getInstance().setLicenseId("License ID");
3. Initialize the runtime environment for the HuaweiVideoEditor object. Remember to release the HuaweiVideoEditor object when exiting the project.
Create a HuaweiVideoEditor object.
Code:
HuaweiVideoEditor editor = HuaweiVideoEditor.create(getApplicationContext());
Specify the preview area position. Such an area is used to render video images, which is implemented by SurfaceView created within the SDK. Before creating such an area, specify its position in the app first.
Code:
<LinearLayout
android:id="@+id/video_content_layout"
android:layout_width="0dp"
android:layout_height="0dp"
android:background="@color/video_edit_main_bg_color"
android:gravity="center"
android:orientation="vertical" />
// Specify the preview area position.
LinearLayout mSdkPreviewContainer = view.findViewById(R.id.video_content_layout);
// Specify the preview area layout.
editor.setDisplay(mSdkPreviewContainer);
Initialize the runtime environment. If license verification fails, LicenseException will be thrown.
After it is created, the HuaweiVideoEditor object does not occupy any system resources. You need to decide when to initialize its runtime environment; once you have done this, the necessary threads and timers will be created within the SDK.
Code:
try {
editor.initEnvironment();
} catch (LicenseException error) {
SmartLog.e(TAG, "initEnvironment failed: " + error.getErrorMsg());
finish();
return;
}
Function Development
The code below initializes the auto-timelapse engine, detects whether the image contains sky or water, and then applies, interrupts, or removes the time-lapse effect.
Code:
// Initialize the auto-timelapse engine.
imageAsset.initTimeLapseEngine(new HVEAIInitialCallback() {
@Override
public void onProgress(int progress) {
// Callback when the initialization progress is received.
}
@Override
public void onSuccess() {
// Callback when the initialization is successful.
}
@Override
public void onError(int errorCode, String errorMessage) {
// Callback when the initialization failed.
}
});
// When the initialization is successful, check whether there is sky or water in the image.
// Use a single-element array so the callback can record the detected state
// (a plain local variable cannot be reassigned from the anonymous callback).
final int[] motionType = {-1};
imageAsset.detectTimeLapse(new HVETimeLapseDetectCallback() {
@Override
public void onResult(int state) {
// Record the state parameter, which is used to define a motion effect.
motionType[0] = state;
}
});
// skySpeed indicates the speed at which the sky moves; skyAngle indicates the direction to which the sky moves; waterSpeed indicates the speed at which the water moves; waterAngle indicates the direction to which the water moves.
HVETimeLapseEffectOptions options =
new HVETimeLapseEffectOptions.Builder().setMotionType(motionType[0])
.setSkySpeed(skySpeed)
.setSkyAngle(skyAngle)
.setWaterAngle(waterAngle)
.setWaterSpeed(waterSpeed)
.build();
// Add the auto-timelapse effect.
imageAsset.addTimeLapseEffect(options, new HVEAIProcessCallback() {
@Override
public void onProgress(int progress) {
}
@Override
public void onSuccess() {
}
@Override
public void onError(int errorCode, String errorMessage) {
}
});
// Stop applying the auto-timelapse effect.
imageAsset.interruptTimeLapse();
// Remove the auto-timelapse effect.
imageAsset.removeTimeLapseEffect();
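After the effect has been added successfully, you would typically refresh the preview so the motion becomes visible. A minimal sketch, assuming the editor created earlier and the kit's timeline playback APIs (playTimeLine and the timeline's start/end time getters are assumptions here), might look like this:
Code:
// Preview the result after the auto-timelapse effect has been added (sketch only;
// playTimeLine/getStartTime/getEndTime are assumed timeline playback APIs of the kit).
HVETimeLine timeLine = editor.getTimeLine();
editor.playTimeLine(timeLine.getStartTime(), timeLine.getEndTime());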
Now, the auto-timelapse capability has been successfully integrated into an app.
Conclusion
When capturing scenic vistas, videos, which can show the dynamic nature of the world around us, are often a better choice than static images. In addition, when creating videos with multiple shots, dynamic pictures deliver a smoother transition effect than static ones.
However, users who are not familiar with the process of animating static images may find the results unsatisfying if they try to do so manually using computer software.
The good news is that there are now mobile apps integrated with capabilities such as Video Editor Kit's auto-timelapse feature that can create time-lapse effects for users. The generated effect appears authentic and natural, the capability is easy to use, and its integration is straightforward. With such capabilities in place, a video/image app can provide users with a more captivating user experience.
In addition to video/image editing apps, I believe the auto-timelapse capability can also be utilized by many other types of apps. What other kinds of apps do you think would benefit from such a feature? Let me know in the comments section.