Using Motion Capture to Animate a Model

It's so rewarding to set the model you've created into motion. If only there were an easy way to do this… well, actually there is!
I had long sought out this kind of solution, and then voila! I got my hands on motion capture, a capability from HMS Core 3D Modeling Kit, which comes with technologies like human body detection, model acceleration, and model compression, as well as a deep learning-based monocular human pose estimation algorithm.
Crucially, this capability does NOT require advanced devices: a mobile phone with an RGB camera is good enough on its own. From the camera feed, the capability calculates 3D data for 24 key skeletal points on the body, which it uses to seamlessly animate a model.
What makes the motion capture capability even better is its straightforward integration process, which I'd like to share with you.
Application Scenarios
Motion capture is ideal for 3D content creation for gaming, film & TV, and healthcare, among other similar fields. It can be used to animate characters and create videos for user generated content (UGC) games, animate virtual streamers in real time, and provide injury rehab, to cite just a few examples.
Integration Process
Preparations
Refer to the official instructions to complete all necessary preparations.
Configuring the Project
Before developing the app, there are a few more things you'll need to do: configure app information in AppGallery Connect, make sure that the Maven repository address of the 3D Modeling SDK has been configured in the project, and integrate the SDK. A minimal sketch of that Gradle configuration is shown below.
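The sketch assumes the standard HMS Core Maven repository; the SDK artifact name and version are illustrative placeholders, so copy the exact coordinates from the 3D Modeling Kit Development Guide.
Java:
// Project-level build.gradle: add the HMS Core Maven repository.
allprojects {
    repositories {
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
// App-level build.gradle: integrate the 3D Modeling SDK.
// NOTE: the artifact coordinates below are placeholders; use the exact name and latest version
// listed in the official Development Guide.
dependencies {
    implementation 'com.huawei.hms:modeling3d-motion-capture:<latest version>'
}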
1. Create a motion capture engine
Java:
// Set necessary parameters as needed.
Modeling3dMotionCaptureEngineSetting setting = new Modeling3dMotionCaptureEngineSetting.Factory()
        // Set the detection mode.
        // Modeling3dMotionCaptureEngineSetting.TYPE_3DSKELETON_QUATERNION: skeleton point quaternions of a human pose.
        // Modeling3dMotionCaptureEngineSetting.TYPE_3DSKELETON: skeleton point coordinates of a human pose.
        .setAnalyzeType(Modeling3dMotionCaptureEngineSetting.TYPE_3DSKELETON_QUATERNION
                | Modeling3dMotionCaptureEngineSetting.TYPE_3DSKELETON)
        .create();
Modeling3dMotionCaptureEngine engine = Modeling3dMotionCaptureEngineFactory.getInstance().getMotionCaptureEngine(setting);
Modeling3dFrame encapsulates video frame or static image data sourced from a camera, as well as related data processing logic.
Customize the logic for processing input video frames to convert them into Modeling3dFrame objects for detection. The video frame format can be NV21.
Use android.graphics.Bitmap to convert the input image to the Modeling3dFrame object for detection. The image format can be JPG, JPEG, or PNG.
Java:
// Create a Modeling3dFrame object using a bitmap.
Modeling3dFrame frame = Modeling3dFrame.fromBitmap(bitmap);
// Create a Modeling3dFrame object using a video frame.
Modeling3dFrame.Property property = new Modeling3dFrame.Property.Creator()
        // Set the frame format.
        .setFormatType(ImageFormat.NV21)
        // Set the frame width.
        .setWidth(width)
        // Set the frame height.
        .setHeight(height)
        // Set the video frame rotation angle.
        .setQuadrant(quadrant)
        // Set the video frame number.
        .setItemIdentity(frameIndex)
        .create();
Modeling3dFrame frame = Modeling3dFrame.fromByteBuffer(byteBuffer, property);
2. Call the asynchronous or synchronous API for motion detection.
Sample code for calling the asynchronous API asyncAnalyseFrame:
Java:
Task<List<Modeling3dMotionCaptureSkeleton>> task = engine.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<List<Modeling3dMotionCaptureSkeleton>>() {
    @Override
    public void onSuccess(List<Modeling3dMotionCaptureSkeleton> results) {
        // Detection success.
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Detection failure.
    }
});
Sample code for calling the synchronous API analyseFrame:
Java:
SparseArray<Modeling3dMotionCaptureSkeleton> sparseArray = engine.analyseFrame(frame);
for (int i = 0; i < sparseArray.size(); i++) {
    // Process the detection result for each skeleton.
    Modeling3dMotionCaptureSkeleton skeleton = sparseArray.valueAt(i);
}
3. Stop the motion capture engine to release detection resources once the detection is complete.
Java:
try {
    if (engine != null) {
        engine.stop();
    }
} catch (IOException e) {
    // Handle exceptions.
}
Result
To learn more, please visit:
>> 3D Modeling Kit official website
>> 3D Modeling Kit Development Guide
>> Reddit to join developer discussions
>> GitHub to download the sample code
>> Stack Overflow to solve integration problems
Follow our official account for the latest HMS Core-related news and updates.

Related

How to Integrate HUAWEI ML Kit's Image Super-Resolution Capability

Have you ever been sent compressed images that have poor definition? Even when you zoom in, the image is still blurry. I recently received a ZIP file of travel photos from a trip I went on with a friend. After opening it, I found to my dismay that each image was either too dark, too dim or too blurry. How am I going to show off with such terrible photos? So, I sought help from the Internet, and luckily, I came across HUAWEI ML Kit's image super-resolution capability. The amazing thing is that this SDK is free of charge, and can be used with all Android phones.
Background
ML Kit's image super-resolution capability is backed by a deep neural network and provides two super-resolution capabilities for mobile apps:
1x super-resolution: Doesn’t change the size but removes compression noise, for vivid, natural images.
3x super-resolution: Provides 3x magnification (with 9x higher pixel resolution) while also removing compression noise, for clear details and textures.
Of course, these capabilities are not easy for non-professionals to develop themselves. But here is the sample code, so you can download and experience it yourself!
Application Scenarios
Image super-resolution is not just about optimizing faces and text; it’s useful in a huge range of situations. When shopping apps integrate 3x image super-resolution, they can give shoppers the option to zoom in on a product, without losing any image quality. For news and reading apps, the 1x image super-resolution capability can give users clearer images without changing the image resolution. Then, for photography apps which integrate image super-resolution, users can capture more vivid and natural images.
Code Development
Before we get started with API development, we need to make necessary development preparations. Also, ensure that the Maven repository address of the HMS Core SDK has been configured in your project, and the SDK of this service has been integrated.
Note
When using the image super-resolution capability, you need to set targetSdkVersion to a value less than 29 in the app-level build.gradle file, as in the sketch below.
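This is only a minimal sketch of that one setting; everything else in your defaultConfig stays as it already is.
Code:
android {
    defaultConfig {
        // The image super-resolution capability requires targetSdkVersion to be lower than 29.
        targetSdkVersion 28
    }
}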
1. Create an image super-resolution analyzer.
You can create the analyzer using the MLImageSuperResolutionAnalyzerSetting class.
Code:
// Method 1: Use default parameter settings, that is, the 1x super-resolution capability.
MLImageSuperResolutionAnalyzer analyzer = MLImageSuperResolutionAnalyzerFactory.getInstance()
        .getImageSuperResolutionAnalyzer();
// Method 2: Use custom parameter settings. Currently, only 1x super-resolution is supported. More capabilities will be supported in the future.
MLImageSuperResolutionAnalyzerSetting setting = new MLImageSuperResolutionAnalyzerSetting.Factory()
        // Set the image super-resolution multiplier to 1x.
        .setScale(MLImageSuperResolutionAnalyzerSetting.ISR_SCALE_1X)
        .create();
MLImageSuperResolutionAnalyzer analyzer = MLImageSuperResolutionAnalyzerFactory.getInstance()
        .getImageSuperResolutionAnalyzer(setting);
2. Create an MLFrame object by using android.graphics.Bitmap. (Note that the bitmap type must be ARGB_8888; a conversion sketch follows the code below.)
Code:
// Create an MLFrame object using the bitmap. The parameter bitmap indicates the input image.
MLFrame frame = new MLFrame.Creator().setBitmap(bitmap).create();
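If the source bitmap is not already ARGB_8888, a minimal conversion sketch using the standard Android Bitmap API could look like this (srcBitmap is a placeholder for your input image):
Code:
// Ensure the bitmap uses the ARGB_8888 configuration before building the MLFrame.
Bitmap argbBitmap = (srcBitmap.getConfig() == Bitmap.Config.ARGB_8888)
        ? srcBitmap
        : srcBitmap.copy(Bitmap.Config.ARGB_8888, false);
MLFrame frame = new MLFrame.Creator().setBitmap(argbBitmap).create();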
3. Perform super-resolution processing on the image. (For details about the error codes, please refer to ML Kit Error Codes).
Code:
Task<MLImageSuperResolutionResult> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<MLImageSuperResolutionResult>() {
    @Override
    public void onSuccess(MLImageSuperResolutionResult result) {
        // Processing logic for detection success.
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Processing logic for detection failure.
        if (e instanceof MLException) {
            MLException mlException = (MLException) e;
            // Obtain the error code. You can process the error code and customize the messages users see.
            int errorCode = mlException.getErrCode();
            // Obtain the error message. You can quickly locate the fault based on the error code.
            String errorMessage = mlException.getMessage();
        } else {
            // Other errors.
        }
    }
});
4. Stop the analyzer to release recognition resources after the recognition is complete.
Code:
if (analyzer != null) {
    analyzer.stop();
}
Demo
Now, let's compare some before and after images, to see what this capability can do.
Original image
Image after 1x super-resolution
Original image
Image after 3x super-resolution
Related Links
With HUAWEI ML Kit, you can integrate a wide range of capabilities and services into your apps, including text recognition, card recognition, text translation, facial recognition, speech recognition, text to speech, image classification, image segmentation, and product visual search.
If you’re interested, go to HUAWEI ML Kit - About the Service to learn more.
To find out more about these capabilities and services, here are some useful resources:
https://developer.huawei.com/consumer/cn/doc/31403
How much time does it take to integrate ML Kit into a project? What is the use case of ML Kit's super-resolution capability?

How to Add AI Dubbing to App

Text to speech (TTS) is highly sought after by audio/video editors, thanks to its ability to automatically turn text into natural-sounding speech, as a low-cost alternative to human dubbing. It can be used on all kinds of videos, whether long or short.
I recently stumbled upon the AI dubbing capability of HMS Core Audio Editor Kit, which does just that. It is able to turn input text into speech with just a tap, and comes loaded with a selection of smooth, natural-sounding male and female timbres.
This is ideal for apps that involve e-books, audio content creation, and audio/video editing. The following describes how I integrated this capability.
Making Preparations
Complete all necessary preparations by following the official guide.
Configuring the Project
1. Set the app authentication information
The information can be set via an API key or access token (recommended).
Use setAccessToken to set an access token during app initialization.
Java:
HAEApplication.getInstance().setAccessToken("your access token");
Or, use setApiKey to set an API key during app initialization. The API key needs to be set only once.
Java:
HAEApplication.getInstance().setApiKey("your ApiKey");
2. Initialize the runtime environment
Initialize HuaweiAudioEditor, and create a timeline and necessary lanes.
Java:
// Create a HuaweiAudioEditor instance.
HuaweiAudioEditor mEditor = HuaweiAudioEditor.create(mContext);
// Initialize the runtime environment of HuaweiAudioEditor.
mEditor.initEnvironment();
// Create a timeline.
HAETimeLine mTimeLine = mEditor.getTimeLine();
// Create a lane.
HAEAudioLane audioLane = mTimeLine.appendAudioLane();
Import audio.
Java:
// Add an audio asset to the end of the lane.
HAEAudioAsset audioAsset = audioLane.appendAudioAsset("/sdcard/download/test.mp3", mTimeLine.getCurrentTime());
3. Integrate AI dubbing.
Call HAEAiDubbingEngine to implement AI dubbing.
Java:
// Configure the AI dubbing engine.
HAEAiDubbingConfig haeAiDubbingConfig = new HAEAiDubbingConfig()
        // Set the volume.
        .setVolume(volumeVal)
        // Set the speech speed.
        .setSpeed(speedVal)
        // Set the speaker.
        .setType(defaultSpeakerType);
// Create a callback for an AI dubbing task.
HAEAiDubbingCallback callback = new HAEAiDubbingCallback() {
    @Override
    public void onError(String taskId, HAEAiDubbingError err) {
        // Callback when an error occurs.
    }

    @Override
    public void onWarn(String taskId, HAEAiDubbingWarn warn) {}

    @Override
    public void onRangeStart(String taskId, int start, int end) {}

    @Override
    public void onAudioAvailable(String taskId, HAEAiDubbingAudioInfo haeAiDubbingAudioFragment, int i, Pair<Integer, Integer> pair, Bundle bundle) {
        // Start receiving the synthesized audio and save it to a file (see the sketch after this code block).
    }

    @Override
    public void onEvent(String taskId, int eventID, Bundle bundle) {
        // Synthesis is complete.
        if (eventID == HAEAiDubbingConstants.EVENT_SYNTHESIS_COMPLETE) {
            // The AI dubbing task is complete, that is, the synthesized audio data has been completely processed.
        }
    }

    @Override
    public void onSpeakerUpdate(List<HAEAiDubbingSpeaker> speakerList, List<String> lanList,
            List<String> lanDescList) {}
};
// Create the AI dubbing engine.
HAEAiDubbingEngine mHAEAiDubbingEngine = new HAEAiDubbingEngine(haeAiDubbingConfig);
// Set the listener for the playback process of an AI dubbing task.
mHAEAiDubbingEngine.setAiDubbingCallback(callback);
// Convert text to speech and play the speech. In the method, text indicates the text to be converted to speech, and mode indicates the mode for playing the converted audio.
String taskId = mHAEAiDubbingEngine.speak(text, mode);
// Pause playback.
mHAEAiDubbingEngine.pause();
// Resume playback.
mHAEAiDubbingEngine.resume();
// Stop AI dubbing.
mHAEAiDubbingEngine.stop();
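The onAudioAvailable callback is where the synthesized audio arrives in fragments. Below is a minimal sketch of appending those fragments to a PCM file; it assumes the fragment exposes its raw bytes through getAudioData() (verify this getter against the SDK version you integrate), and the output file is a placeholder you choose.
Java:
// Hypothetical helper for saving synthesized audio fragments; adjust the getter and output path to your project.
private void saveAudioFragment(HAEAiDubbingAudioInfo audioFragment, File outFile) {
    // Append each received fragment to the output PCM file.
    try (FileOutputStream fos = new FileOutputStream(outFile, true)) {
        fos.write(audioFragment.getAudioData()); // Assumed getter for the fragment's PCM bytes.
    } catch (IOException e) {
        // Handle the I/O error, for example by logging it.
    }
}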
Result
I successfully implemented the AI dubbing function in my demo app: it converts text into emotionally expressive speech, with both default and custom timbres.
To learn more, please visit:
>> Audio Editor Kit official website
>> Audio Editor Kit Development Guide
>> Reddit to join developer discussions
>> GitHub to download the sample code
>> Stack Overflow to solve integration problems
Follow our official account for the latest HMS Core-related news and updates.

How to Develop an AR-Based Health Check App

Now that spring has arrived, it's time to get out and stretch your legs! As programmers, many of us are used to being seated for hours and hours at a time, which can lead to back pain and aches. We're all aware that building a workout plan and keeping track of health indicators round the clock can have enormous benefits for body, mind, and soul.
Fortunately, AR Engine makes that remarkably easy. It comes with face tracking capabilities, and will soon support body tracking as well. Thanks to core AR algorithms, AR Engine is able to monitor heart rate, respiratory rate, facial health status, and heart rate waveform signals in real time during your workouts. You can also use it to build an app that tracks real-time workout status, performs real-time health checks for patients, or monitors real-time health indicators of vulnerable users, such as the elderly or people with disabilities. With AR Engine, you can make your health or fitness app more engaging and visually immersive than you might have believed possible.
Advantages and Device Model Restrictions
1. Monitors core health indicators like heart rate, respiratory rate, facial health status, and heart rate waveform signals in real time.
2. Enables devices to better understand their users. Thanks to technologies like Simultaneous Localization and Mapping (SLAM) and 3D reconstruction, AR Engine renders images to build 3D human faces on mobile phones, resulting in seamless virtual-physical cohesion.
3. Supports all of the device models listed in Software and Hardware Requirements of AR Engine Features.
Demo Introduction
A simple demo is available to give you a grasp of how to integrate AR Engine, and use its human body and face tracking capabilities.
ENABLE_HEALTH_DEVICE: indicates whether to enable health check.
HealthParameter: health check parameter, including heart rate, respiratory rate, age and gender probability based on facial features, and heart rate waveform signals.
FaceDetectMode: face detection mode, including heart rate checking, respiratory rate checking, real-time health checking, and a combination of all three.
Effect
The following details how you can run the demo using the source code.
Key Steps
1. Add the Huawei Maven repository to the project-level build.gradle file.
Java:
buildscript {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
    dependencies {
        ...
        // Add the AppGallery Connect plugin configuration.
        classpath 'com.huawei.agconnect:agcp:1.4.2.300'
    }
}
allprojects {
    repositories {
        maven { url 'http://developer.huawei.com/repo/' }
    }
}
2. Add dependencies on the SDK to the app-level build.gradle file.
Java:
implementation 'com.huawei.hms:arenginesdk:3.7.0.3'
3. Declare system permissions in the AndroidManifest.xml file.
Java:
<uses-permission android:name="android.permission.CAMERA" />
4. Check whether AR Engine has been installed on the current device. If yes, the app can run properly. If not, the app automatically redirects the user to AppGallery to install AR Engine.
Java:
boolean isInstallArEngineApk = AREnginesApk.isAREngineApkReady(this);
if (!isInstallArEngineApk && isRemindInstall) {
    Toast.makeText(this, "Please agree to install.", Toast.LENGTH_LONG).show();
    finish();
}
if (!isInstallArEngineApk) {
    startActivity(new Intent(this, ConnectAppMarketActivity.class));
    isRemindInstall = true;
}
return AREnginesApk.isAREngineApkReady(this);
Key Code
1. Create an ARSession object and an ARFaceTrackingConfig object. Then, set the human face detection mode, configure AR parameters for motion tracking, and enable motion tracking.
Java:
mArSession = new ARSession(this);
mArFaceTrackingConfig = new ARFaceTrackingConfig(mArSession);
mArFaceTrackingConfig.setEnableItem(ARConfigBase.ENABLE_HEALTH_DEVICE);
mArFaceTrackingConfig
.setFaceDetectMode(ARConfigBase.FaceDetectMode.HEALTH_ENABLE_DEFAULT.getEnumValue());
2. Add a FaceHealthServiceListener to the session so that your app receives the health check status and progress. The handleProcessProgressEvent() callback reports the health check progress.
Java:
mArSession.addServiceListener(new FaceHealthServiceListener() {
    @Override
    public void handleEvent(EventObject eventObject) {
        if (!(eventObject instanceof FaceHealthCheckStateEvent)) {
            return;
        }
        final FaceHealthCheckState faceHealthCheckState =
                ((FaceHealthCheckStateEvent) eventObject).getFaceHealthCheckState();
        runOnUiThread(new Runnable() {
            @Override
            public void run() {
                mHealthCheckStatusTextView.setText(faceHealthCheckState.toString());
            }
        });
    }

    @Override
    public void handleProcessProgressEvent(final int progress) {
        mHealthRenderManager.setHealthCheckProgress(progress);
        runOnUiThread(new Runnable() {
            @Override
            public void run() {
                setProgressTips(progress);
            }
        });
    }
});

private void setProgressTips(int progress) {
    String progressTips = "processing";
    if (progress >= MAX_PROGRESS) {
        progressTips = "finish";
    }
    mProgressTips.setText(progressTips);
    mHealthProgressBar.setProgress(progress);
}
Update data in real time and display the health check result.
Java:
mActivity.runOnUiThread(new Runnable() {
    @Override
    public void run() {
        mHealthParamTable.removeAllViews();
        TableRow heartRateTableRow = initTableRow(ARFace.HealthParameter.PARAMETER_HEART_RATE.toString(),
                healthParams.getOrDefault(ARFace.HealthParameter.PARAMETER_HEART_RATE, 0.0f).toString());
        mHealthParamTable.addView(heartRateTableRow);
        TableRow breathRateTableRow = initTableRow(ARFace.HealthParameter.PARAMETER_BREATH_RATE.toString(),
                healthParams.getOrDefault(ARFace.HealthParameter.PARAMETER_BREATH_RATE, 0.0f).toString());
        mHealthParamTable.addView(breathRateTableRow);
    }
});
References
>> AR Engine official website
>> AR Engine Development Guide
>> Reddit to join developer discussions
>> GitHub to download the sample code
>> Stack Overflow to solve integration problems

Top Tips for Developing a Recordist Function

Efficient records management is more relevant now than ever. In our digital age, an ever-growing volume of information, including audio and video, must be handled in a limited time. This makes a real-time transcription function essential, because it is useful in many scenarios.
In audio or video conferencing, this function records meeting minutes that I can refer to later, which is more convenient than writing them all down myself. I've seen my kids struggling to take notes during their online courses, so I know this process can be so much easier with the help of the transcription function. In short, it removes the job of writing down everything the teacher says, allowing the kids to focus on the lecture itself and easily review the content again later. Live captions likewise provide viewers with real-time subtitles, for a better watching experience.
As a coder, I am a believer in "Actions speak louder than words". That's why I developed a real-time transcription function with the help of the real-time transcription capability from ML Kit.
Demo
This function transcribes up to 5 hours of speech in real time into Chinese, English, a mix of Chinese and English, or French. In addition, the output text is punctuated and contains timestamps.
This function has some requirements: support for French depends on the mobile phone model, whereas Chinese and English are available on all phone models. Also, the function requires an Internet connection.
Okay, let's move on to the point of this article: How I developed this real-time transcription function.
Development Procedure​1. Make necessary preparations. This is described in detail in the References section.
2. Create and then configure a speech recognizer.
Code:
MLSpeechRealTimeTranscriptionConfig config = new MLSpeechRealTimeTranscriptionConfig.Factory()
        // Set the language, which can be Chinese, English, both Chinese and English, or French.
        .setLanguage(MLSpeechRealTimeTranscriptionConstants.LAN_ZH_CN)
        // Punctuate the text recognized from the speech.
        .enablePunctuation(true)
        // Set the sentence offset.
        .enableSentenceTimeOffset(true)
        // Set the word offset.
        .enableWordTimeOffset(true)
        .create();
MLSpeechRealTimeTranscription mSpeechRecognizer = MLSpeechRealTimeTranscription.getInstance();
3. Create a callback for the speech recognition result listener.
Code:
// Use the callback to implement the MLSpeechRealTimeTranscriptionListener API and methods in the API.
protected class SpeechRecognitionListener implements MLSpeechRealTimeTranscriptionListener {
    @Override
    public void onStartListening() {
        // The recorder starts to receive speech.
    }

    @Override
    public void onStartingOfSpeech() {
        // The speech recognizer detects the user speaking.
    }

    @Override
    public void onVoiceDataReceived(byte[] data, float energy, Bundle bundle) {
        // Return the original PCM stream and audio power to the user. The API does not run in the main thread, and the return result is processed in a sub-thread.
    }

    @Override
    public void onRecognizingResults(Bundle partialResults) {
        // Receive recognized text from MLSpeechRealTimeTranscription.
    }

    @Override
    public void onError(int error, String errorMessage) {
        // Callback when an error occurs during recognition.
    }

    @Override
    public void onState(int state, Bundle params) {
        // Notify the app of the recognizer status change.
    }
}
4. Bind the speech recognizer.
Code:
mSpeechRecognizer.setRealTimeTranscriptionListener(new SpeechRecognitionListener());
5. Call startRecognizing to begin speech recognition.
Code:
mSpeechRecognizer.startRecognizing(config);
6. Stop recognition and release resources occupied by the recognizer when the recognition is complete.
Code:
if (mSpeechRecognizer != null) {
    mSpeechRecognizer.destroy();
}
References
Audio Transcription: What It Is, What It Is Not, and Why It's in High Demand
Configuring Necessary Information During Preparation
Adding a Plug-In and the Maven Repository Address, and Configuring the Building Dependencies

How to Automatically Create a Scenic Timelapse Video

Have you ever watched a video of the northern lights? Mesmerizing light rays that swirl and dance through the star-encrusted sky. It's even more stunning when they are backdropped by crystal-clear waters that flow smoothly between and under ice crusts. Complementing each other, the moving sky and water compose a dynamic scene that reflects the constant rhythm of Mother Nature.
Now imagine that the video is frozen into an image: It still looks beautiful, but lacks the dynamism of the video. Such a contrast between still and moving images shows how videos are sometimes better than still images when it comes to capturing majestic scenery, since the former can convey more information and thus be more engaging.
This may be the reason why we sometimes regret just taking photos instead of capturing a video when we encounter beautiful scenery or a memorable moment.
In addition to this, when we try to add a static image to a short video, we will find that the transition between the image and other segments of the video appears very awkward, since the image is the only static segment in the whole video.
If we want to turn a static image into a dynamic video by adding some motion effects to the sky and water, one way to do this is to use a professional PC program to modify the image. However, this process is often very complicated and time-consuming: It requires adjustment of the timeline, frames, and much more, which can be a daunting prospect for amateur image editors.
Luckily, there are now numerous AI-driven capabilities that can automatically create time-lapse videos for users. I chose to use the auto-timelapse capability provided by HMS Core Video Editor Kit. It can automatically detect the sky and water in an image and produce vivid dynamic effects for them.
The movement speed and angle of the sky and water are customizable.
Now let's take a look at the detailed integration procedure for this capability, to better understand how such a dynamic effect is created.
Integration Procedure
Preparations
1. Configure necessary app information. This step requires you to register a developer account, create an app, generate a signing certificate fingerprint, configure the fingerprint, and enable the required services.
2. Integrate the SDK of the kit.
3. Configure the obfuscation scripts.
4. Declare necessary permissions.
Project Configuration
1. Set the app authentication information. This can be done via an API key or an access token.
Set an API key via the setApiKey method: You only need to set the app authentication information once during app initialization.
Code:
MediaApplication.getInstance().setApiKey("your ApiKey");
Or, set an access token by using the setAccessToken method: You only need to set the app authentication information once during app initialization.
Code:
MediaApplication.getInstance().setAccessToken("your access token");
2. Set a License ID. This ID should be unique because it is used to manage the usage quotas of the service.
Code:
MediaApplication.getInstance().setLicenseId("License ID");
3. Initialize the runtime environment for the HuaweiVideoEditor object. Remember to release the HuaweiVideoEditor object when exiting the project.
Create a HuaweiVideoEditor object.
Code:
HuaweiVideoEditor editor = HuaweiVideoEditor.create(getApplicationContext());
Specify the preview area position. The area is used to render video images and is implemented by a SurfaceView created within the SDK. Before creating the area, specify its position in the app.
Code:
<LinearLayout
    android:id="@+id/video_content_layout"
    android:layout_width="0dp"
    android:layout_height="0dp"
    android:background="@color/video_edit_main_bg_color"
    android:gravity="center"
    android:orientation="vertical" />

// Specify the preview area position.
LinearLayout mSdkPreviewContainer = view.findViewById(R.id.video_content_layout);
// Specify the preview area layout.
editor.setDisplay(mSdkPreviewContainer);
Initialize the runtime environment. If license verification fails, LicenseException will be thrown.
After it is created, the HuaweiVideoEditor object will not occupy any system resources. You need to manually set when the runtime environment of the object will be initialized. Once you have done this, necessary threads and timers will be created within the SDK.
Code:
try {
    editor.initEnvironment();
} catch (LicenseException error) {
    SmartLog.e(TAG, "initEnvironment failed: " + error.getErrorMsg());
    finish();
    return;
}
Function Development
The following sample operates on imageAsset, an image asset (for example, an HVEImageAsset added to a lane on the timeline).
Code:
// Initialize the auto-timelapse engine.
imageAsset.initTimeLapseEngine(new HVEAIInitialCallback() {
    @Override
    public void onProgress(int progress) {
        // Callback when the initialization progress is received.
    }

    @Override
    public void onSuccess() {
        // Callback when the initialization is successful.
    }

    @Override
    public void onError(int errorCode, String errorMessage) {
        // Callback when the initialization failed.
    }
});
// When the initialization is successful, check whether there is sky or water in the image.
// Note: in real code, store motionType somewhere the callback can write to (for example, a member variable),
// because a local variable cannot be reassigned from inside the anonymous callback.
int motionType = -1;
imageAsset.detectTimeLapse(new HVETimeLapseDetectCallback() {
    @Override
    public void onResult(int state) {
        // Record the state parameter, which is used to define a motion effect.
        motionType = state;
    }
});
// skySpeed indicates the speed at which the sky moves; skyAngle indicates the direction in which the sky moves;
// waterSpeed indicates the speed at which the water moves; waterAngle indicates the direction in which the water moves.
HVETimeLapseEffectOptions options =
        new HVETimeLapseEffectOptions.Builder().setMotionType(motionType)
                .setSkySpeed(skySpeed)
                .setSkyAngle(skyAngle)
                .setWaterAngle(waterAngle)
                .setWaterSpeed(waterSpeed)
                .build();
// Add the auto-timelapse effect.
imageAsset.addTimeLapseEffect(options, new HVEAIProcessCallback() {
    @Override
    public void onProgress(int progress) {
    }

    @Override
    public void onSuccess() {
    }

    @Override
    public void onError(int errorCode, String errorMessage) {
    }
});
// Stop applying the auto-timelapse effect.
imageAsset.interruptTimeLapse();
// Remove the auto-timelapse effect.
imageAsset.removeTimeLapseEffect();
Now, the auto-timelapse capability has been successfully integrated into an app.
Conclusion
When capturing scenic vistas, videos, which can show the dynamic nature of the world around us, are often a better choice than static images. In addition, when creating videos with multiple shots, dynamic pictures deliver a smoother transition effect than static ones.
However, users who are not familiar with the process of animating static images may find the results unsatisfying if they try to do so manually using computer software.
The good news is that there are now mobile apps integrated with capabilities such as Video Editor Kit's auto-timelapse feature that can create time-lapse effects for users. The generated effect appears authentic and natural, the capability is easy to use, and its integration is straightforward. With such capabilities in place, a video/image app can provide users with a more captivating user experience.
In addition to video/image editing apps, I believe the auto-timelapse capability can also be utilized by many other types of apps. What other kinds of apps do you think would benefit from such a feature? Let me know in the comments section.
