Travel and life vlogs are popular among app users: they compile the most attractive moments of a journey or a day. Creating such a video first requires a great editing effort to cut out the trivial and meaningless segments of the original footage, which used to be the preserve of video editing pros.
This is no longer the case. We now have an array of intelligent mobile apps that can automatically extract highlights from a video, so we can focus on spicing up the video by adding special effects, for example. I opted to use the highlight capability from HMS Core Video Editor Kit to create my own vlog editor.
How It Works
This capability assesses how appealing video frames are and then extracts the most suitable ones. To this end, the capability takes into account the video properties that users care about most, a conclusion drawn from user surveys and experience assessments. On this basis, the highlight capability applies a comprehensive frame assessment scheme that covers several aspects. For example:
Aesthetics evaluation. This aspect relies on a data set built upon composition, lighting, color, and more, and is the essential part of the capability.
Tags and facial expressions. These mark the frames that are detected and likely to be extracted by the highlight capability, such as frames containing people, animals, or laughter.
Frame quality and camera movement mode. The capability discards low-quality frames that are blurry, out-of-focus, overexposed, or shaky, so that such frames will not degrade the quality of the finished video. Remarkably, despite all of this analysis, the highlight capability completes the extraction process in just 2 seconds.
See for yourself how a video produced by the highlight capability compares with the original.
Backing Technology
The highlight capability stands out from the crowd by adopting models and a frame assessment scheme that are iteratively optimized. Technically speaking:
The capability introduces AMediaCodec for hardware decoding and the Open Graphics Library (OpenGL) for rendering frames and automatically adjusting the frame dimensions to the screen dimensions. The capability's algorithm uses multiple neural network models: it checks the model of the device it runs on and then automatically chooses to run on the NPU, CPU, or GPU, delivering higher running performance.
To provide the extraction result more quickly, the highlight capability uses a two-stage algorithm that moves from sparse sampling to dense sampling, checks how content is distributed across the video, and adopts a frame buffer. All of these contribute to determining the most attractive video frames more efficiently. To keep the algorithm performing well, the capability adopts thread pool scheduling and the producer-consumer model, ensuring that the video decoder and the models can run at the same time.
During the sparse sampling stage, the capability decodes and processes up to 15 key frames in a video, with an interval of no less than 2 seconds between key frames. During the dense sampling stage, the algorithm picks out the best key frame and then extracts the frames before and after it to further analyze the highlighted part of the video.
The extraction result is closely related to the key frame positions. The result will not be ideal when the sampling points are not dense enough, for example, because the video does not have enough key frames or its duration is too long (over 1 minute). For optimal performance, it is recommended that the input video be shorter than 60 seconds.
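Since the 60-second recommendation is easy to enforce before extraction, here is a minimal sketch, using Android's standard MediaMetadataRetriever, that checks a video's duration before handing it to the capability:
Code:
// Check that the input video is within the recommended 60-second duration.
public boolean isWithinRecommendedDuration(String filePath) {
    MediaMetadataRetriever retriever = new MediaMetadataRetriever();
    try {
        retriever.setDataSource(filePath);
        // METADATA_KEY_DURATION is returned as a string, in milliseconds.
        String durationStr = retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
        return durationStr != null && Long.parseLong(durationStr) <= 60_000L;
    } catch (RuntimeException e) {
        return false; // Unreadable file: treat it as unsuitable.
    } finally {
        retriever.release();
    }
}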
Let's now move on to how this capability can be integrated.
Integration Process
Preparations
Make necessary preparations before moving on to the next part. Required steps include:
Configure the app information in AppGallery Connect.
Integrate the SDK of HMS Core.
Configure obfuscation scripts.
Declare necessary permissions.
Setting Up the Video Editing Project
1. Configure the app authentication information by using either an access token or an API key.
Method 1: Call setAccessToken to set an access token, which is required only once during app startup.
Code:
MediaApplication.getInstance().setAccessToken("your access token");
Method 2: Call setApiKey to set an API key, which is required only once during app startup.
Code:
MediaApplication.getInstance().setApiKey("your ApiKey");
2. Set a License ID.
This ID is used to manage the usage quotas of Video Editor Kit and must be unique.
Code:
MediaApplication.getInstance().setLicenseId("License ID");
3. Initialize the runtime environment of HuaweiVideoEditor.
When creating a video editing project, we first need to create an instance of HuaweiVideoEditor and initialize its runtime environment. The instance should be released when you exit the project.
Create an instance of HuaweiVideoEditor.
Code:
HuaweiVideoEditor editor = HuaweiVideoEditor.create(getApplicationContext());
Determine the layout of the preview area.
This area renders video images, which is implemented by a SurfaceView within the fundamental capability SDK. Before the area is created, we need to specify its layout.
Code:
<LinearLayout
android:id="@+id/video_content_layout"
android:layout_width="0dp"
android:layout_height="0dp"
android:background="@color/video_edit_main_bg_color"
android:gravity="center"
android:orientation="vertical" />
// Specify a preview area.
LinearLayout mSdkPreviewContainer = view.findViewById(R.id.video_content_layout);
// Design the layout of the area.
editor.setDisplay(mSdkPreviewContainer);
Initialize the runtime environment. If the license verification fails, LicenseException will be thrown.
After the HuaweiVideoEditor instance is created, it will not occupy any system resources; we need to choose when to initialize its runtime environment. The fundamental capability SDK will then internally create the necessary threads and timers.
Code:
try {
editor.initEnvironment();
} catch (LicenseException error) {
SmartLog.e(TAG, "initEnvironment failed: " + error.getErrorMsg());
finish();
return;
}
Integrating the Highlight Capability
Code:
// Create an object that will be processed by the highlight capability.
HVEVideoSelection hveVideoSelection = new HVEVideoSelection();
// Initialize the engine of the highlight capability.
hveVideoSelection.initVideoSelectionEngine(new HVEAIInitialCallback() {
@Override
public void onProgress(int progress) {
// Callback when the initialization progress is received.
}
@Override
public void onSuccess() {
// Callback when the initialization is successful.
}
@Override
public void onError(int errorCode, String errorMessage) {
// Callback when the initialization failed.
}
});
// After the initialization is successful, extract the highlighted video. filePath indicates the video file path, and duration indicates the desired duration for the highlighted video.
hveVideoSelection.getHighLight(filePath, duration, new HVEVideoSelectionCallback() {
@Override
public void onResult(long start) {
// The highlighted video is successfully extracted.
}
});
// Release the highlight engine.
hveVideoSelection.releaseVideoSelectionEngine();
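To see how these calls fit together, below is a minimal sketch that initializes the engine and starts extraction only once initialization succeeds. It assumes that duration is given in milliseconds and that the long passed to onResult is the start position of the extracted highlight within the source video; verify both against the kit's reference.
Code:
final HVEVideoSelection videoSelection = new HVEVideoSelection();
videoSelection.initVideoSelectionEngine(new HVEAIInitialCallback() {
    @Override
    public void onProgress(int progress) {
        // Optionally surface the initialization progress in the UI.
    }

    @Override
    public void onSuccess() {
        // Extract a 10-second highlight only after the engine is ready.
        videoSelection.getHighLight(filePath, 10000, new HVEVideoSelectionCallback() {
            @Override
            public void onResult(long start) {
                // Assumption: start is the highlight's start position in the source video.
                // Use it together with duration to trim or export the segment.
                videoSelection.releaseVideoSelectionEngine();
            }
        });
    }

    @Override
    public void onError(int errorCode, String errorMessage) {
        videoSelection.releaseVideoSelectionEngine();
    }
});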
Conclusion
The vlog has played a vital part in the we-media era since its appearance. In the past, only a handful of people could create a vlog, because picking out the most interesting parts of the original video could be so demanding.
Thanks to smart mobile app technology, even video editing amateurs can now create a vlog, because much of the process can be completed automatically by an app with a highlight extraction function.
The highlight capability from Video Editor Kit provides one such function. It builds on a set of technologies to deliver impressive results, including AMediaCodec, OpenGL, neural networks, and a two-stage (sparse-to-dense sampling) algorithm. The capability can power either a standalone highlight extractor or a highlight extraction feature within a larger app.
I recently read an article that explained how we as human beings are hardwired to enter fight-or-flight mode when we realize that we are being watched. This feeling is especially strong when somebody is trying to take a picture of us, which is why many of us find it difficult to smile in photos. The effect is so strong that we've all had the experience of looking at a photo right after it was taken and noticing straight away that it needs to be retaken because our smile wasn't wide enough or didn't look natural. So, the next time someone criticizes my smile in a photo, I'm just going to tell them, "It's not my fault. It's literally an evolutionary trait!"
Or, instead of making such an excuse, why not turn to technology for help? I have actually tried using photo editor apps to modify my portrait photos and make my facial expression look nicer by, for example, removing my braces, whitening my teeth, and erasing my smile lines. However, perhaps because of my rusty image editing skills, the modified images often turned out looking strange.
My lack of success with photo editing made me wonder: Wouldn't it be great if there was a function specially designed for people like me, who find it difficult to smile naturally in photos and who aren't good at photo editing, which could automatically give us picture-perfect smiles?
I then remembered an interesting function called smile filter that has been going viral across apps and platforms. A smile filter is an app feature that automatically adds a natural-looking smile to a face detected in an image. I had tried it before and was amazed by the result, so I decided to create a demo app with a similar function in order to figure out the principle behind it.
To provide my app with a smile filter, I chose to use the auto-smile capability provided by HMS Core Video Editor Kit. This capability automatically detects people in an image and then lightens up the detected faces with a smile (either closed- or open-mouth) that perfectly blends in with each person's facial structure. With the help of such a capability, a mobile app can create the perfect smile in seconds and save users from the hassle of having to use a professional image editing program.
Check the result out for yourselves:
Looks pretty natural, right? This is the result offered by my demo app integrated with the auto-smile capability. The original image looks like this:
Next, I will explain how I integrated the auto-smile capability into my app and share the relevant source code from my demo app.
Integration Procedure
Preparations
1. Configure necessary app information. This step requires you to register a developer account, create an app, generate a signing certificate fingerprint, configure the fingerprint, and enable the required services.
2. Integrate the SDK of the kit.
3. Configure the obfuscation scripts.
4. Declare necessary permissions.
Project Configuration
1. Set the app authentication information. This can be done via an API key or an access token.
Using an API key: You only need to set the app authentication information once during app initialization.
Code:
MediaApplication.getInstance().setApiKey("your ApiKey");
Or, using an access token: You only need to set the app authentication information once during app initialization.
Code:
MediaApplication.getInstance().setAccessToken("your access token");
2. Set a License ID, which must be unique because it is used to manage the usage quotas of the service.
Code:
MediaApplication.getInstance().setLicenseId("License ID");
3. Initialize the runtime environment for the HuaweiVideoEditor object. Remember to release the HuaweiVideoEditor object when exiting the project.
Create a HuaweiVideoEditor object.
Code:
HuaweiVideoEditor editor = HuaweiVideoEditor.create(getApplicationContext());
Specify the preview area position. This area is used to render video images, which is implemented by a SurfaceView created within the SDK. Before creating the area, specify its position in the app.
Code:
<LinearLayout
android:id="@+id/video_content_layout"
android:layout_width="0dp"
android:layout_height="0dp"
android:background="@color/video_edit_main_bg_color"
android:gravity="center"
android:orientation="vertical" />
// Specify the preview area position.
LinearLayout mSdkPreviewContainer = view.findViewById(R.id.video_content_layout);
// Specify the preview area layout.
editor.setDisplay(mSdkPreviewContainer);
Initialize the runtime environment. If license verification fails, LicenseException will be thrown.
After it is created, the HuaweiVideoEditor object will not occupy any system resources. You need to manually set when the runtime environment of the object will be initialized. Once you have done this, necessary threads and timers will be created within the SDK.
Code:
try {
editor.initEnvironment();
} catch (LicenseException error) {
SmartLog.e(TAG, "initEnvironment failed: " + error.getErrorMsg());
finish();
return;
}
Function Development
Code:
// Apply the auto-smile effect. Currently, this effect only supports image assets.
imageAsset.addFaceSmileAIEffect(new HVEAIProcessCallback() {
@Override
public void onProgress(int progress) {
// Callback when the handling progress is received.
}
@Override
public void onSuccess() {
// Callback when the handling is successful.
}
@Override
public void onError(int errorCode, String errorMessage) {
// Callback when the handling failed.
}
});
// Stop applying the auto-smile effect.
imageAsset.interruptFaceSmile();
// Remove the auto-smile effect.
imageAsset.removeFaceSmileAIEffect();
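For context, imageAsset above is an HVEImageAsset placed on the editor's timeline. The following sketch shows one way it might be obtained, assuming the fundamental SDK's timeline APIs (getTimeLine, appendVideoLane, and appendImageAsset) work as their names suggest; the image path is purely illustrative:
Code:
// Obtain the timeline of the editor created earlier.
HVETimeLine timeline = editor.getTimeLine();
// Append a video lane to hold the image.
HVEVideoLane videoLane = timeline.appendVideoLane();
// Append the portrait image to the lane. The returned asset is what
// addFaceSmileAIEffect is called on. The path here is hypothetical.
HVEImageAsset imageAsset = videoLane.appendImageAsset("/sdcard/demo/portrait.jpg");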
And with that, I successfully integrated the auto-smile capability into my demo app, which can now automatically add smiles to faces detected in an input image.
Conclusion
Research has demonstrated that it is normal for people to behave unnaturally when being photographed, and this unnaturalness becomes even more obvious when we try to smile. This explains why numerous social media apps and video/image editing apps have introduced smile filter functions, which allow users to easily and quickly add a natural-looking smile to faces in an image.
Among various solutions to such a function, HMS Core Video Editor Kit's auto-smile capability stands out by providing excellent, natural-looking results and featuring straightforward and quick integration.
What's better, the auto-smile capability can be used together with other capabilities from the same kit to further enhance the image editing experience. For example, when used in conjunction with the kit's AI color capability, you can add color to an old black-and-white photo and then use auto-smile to brighten the sullen expressions of the people in it. It's a great way to freshen up dreary old photos from the past.
And that's just one way of using the auto-smile capability in conjunction with other capabilities. What ideas do you have? Looking forward to knowing your thoughts in the comments section.
References
How to Overcome Camera Shyness or Phobia
Introduction to Auto-Smile
In photography, cutout is a function often used when editing images, such as for removing the background. This is universally achieved with a technique known as green screen, also called chroma keying, which requires a green background to be added manually.
This, however, makes green screen-dependent cutout a challenge for those new to video/image editing: most images and videos do not come with a green background, and adding one is actually quite complex.
Luckily, a number of mobile apps on the market help with this by automatically cutting out the desired object for users to edit later. To create an app capable of doing this, I turned to the recently released object segmentation capability from HMS Core Video Editor Kit. This capability uses an AI algorithm, instead of a green screen, to intelligently separate an object from the rest of an image or video, delivering a segmentation result that is ideal for background removal and many other editing operations.
This is what my demo has achieved with the help of the capability.
It is a perfect cutout, right? As you can see, the cut-out object has a smooth edge, and no unwanted parts of the original video appear in the result.
Before moving on to how I created this cutout tool with the help of object segmentation, let's see what lies behind the capability.
How It Works
The object segmentation capability cuts out objects interactively. A user first taps or draws a line on the object to be cut out; the interactive segmentation algorithm then checks the track of the user's taps, intelligently identifies their intent, and selects and cuts out the object. Specifically, the capability performs interactive segmentation on the first video frame to obtain a mask of the object. The underlying model then traverses the subsequent frames, applies the first frame's mask to each of them, and matches the mask with the object in those frames before cutting the object out.
The model assigns different weights to frames according to the segmentation accuracy of each frame. It then blends the weighted segmentation result of an intermediate frame with the mask obtained from the first frame in order to segment the desired object in the remaining frames. Consequently, the capability manages to cut out an object as completely as possible, delivering higher segmentation accuracy.
What makes the capability even better is that it places no restrictions on object types. As long as an object is distinct from the other parts of the image or video and sits against a simple background, the capability can cleanly cut it out.
Now, let's check out how the capability can be integrated.
Integration Procedure
Making Preparations
There are a few necessary steps to complete before the next part:
Configure app information in AppGallery Connect.
Integrate the SDK of HMS Core.
Configure obfuscation scripts.
Apply for necessary permissions.
Setting Up the Video Editing Project
1. Configure the app authentication information. Available options include:
Call setAccessToken to set an access token, which is required only once during app startup.
Code:
MediaApplication.getInstance().setAccessToken("your access token");
Or, call setApiKey to set an API key, which is required only once during app startup.
Code:
MediaApplication.getInstance().setApiKey("your ApiKey");
2. Set a License ID.
Because this ID is used to manage the usage quotas of the service, it must be unique.
Code:
MediaApplication.getInstance().setLicenseId("License ID");
3. Initialize the runtime environment for HuaweiVideoEditor.
When creating a video editing project, we first need to create an instance of HuaweiVideoEditor and initialize its runtime environment. The instance should be released when you exit the project.
Create a HuaweiVideoEditor object.
Code:
HuaweiVideoEditor editor = HuaweiVideoEditor.create(getApplicationContext());
Determine the layout of the preview area.
This area renders video images, which is implemented by a SurfaceView within the fundamental capability SDK. Before the area is created, we need to specify its layout.
Code:
<LinearLayout
android:id="@+id/video_content_layout"
android:layout_width="0dp"
android:layout_height="0dp"
android:background="@color/video_edit_main_bg_color"
android:gravity="center"
android:orientation="vertical" />
// Specify a preview area.
LinearLayout mSdkPreviewContainer = view.findViewById(R.id.video_content_layout);
// Design the layout for the area.
editor.setDisplay(mSdkPreviewContainer);
Initialize the runtime environment. If the license verification fails, LicenseException will be thrown.
After the HuaweiVideoEditor instance is created, it will not occupy any system resources; we need to choose when to initialize its runtime environment. The fundamental capability SDK will then internally create the necessary threads and timers.
Code:
try {
editor.initEnvironment();
} catch (LicenseException error) {
SmartLog.e(TAG, "initEnvironment failed: " + error.getErrorMsg());
finish();
return;
}
Integrating Object Segmentation
Code:
// Initialize the engine of object segmentation.
videoAsset.initSegmentationEngine(new HVEAIInitialCallback() {
@Override
public void onProgress(int progress) {
// Initialization progress.
}
@Override
public void onSuccess() {
// Callback when the initialization is successful.
}
@Override
public void onError(int errorCode, String errorMessage) {
// Callback when the initialization failed.
}
});
// After the initialization is successful, segment a specified object and then return the segmentation result.
// bitmap: video frame containing the object to be segmented.
// timeStamp: timestamp of the video frame on the timeline.
// points: set of coordinates determined according to the video frame, with the upper left vertex of the frame as the coordinate origin. It is recommended that at least two coordinates be provided, all within the object to be segmented. The object is determined according to the track of the coordinates.
int result = videoAsset.selectSegmentationObject(bitmap, timeStamp, points);
// After the handling is successful, apply the object segmentation effect.
videoAsset.addSegmentationEffect(new HVEAIProcessCallback() {
@Override
public void onProgress(int progress) {
// Progress of object segmentation.
}
@Override
public void onSuccess() {
// The object segmentation effect is successfully added.
}
@Override
public void onError(int errorCode, String errorMessage) {
// The object segmentation effect failed to be added.
}
});
// Stop applying the object segmentation effect.
videoAsset.interruptSegmentation();
// Remove the object segmentation effect.
videoAsset.removeSegmentationEffect();
// Release the engine of object segmentation.
videoAsset.releaseSegmentationEngine();
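The points parameter above comes from the user's interaction with the preview. Below is a minimal sketch that collects a touch track on the preview view (using android.graphics.Point); it assumes the preview maps one-to-one onto the video frame, so a real app would scale view coordinates into frame coordinates, and it reuses bitmap and timeStamp from the snippet above:
Code:
// Collect the track of the user's finger on the preview as segmentation coordinates.
final List<Point> points = new ArrayList<>();
previewView.setOnTouchListener((view, event) -> {
    switch (event.getActionMasked()) {
        case MotionEvent.ACTION_DOWN:
            points.clear();
            // Fall through to record the first point.
        case MotionEvent.ACTION_MOVE:
            points.add(new Point((int) event.getX(), (int) event.getY()));
            return true;
        case MotionEvent.ACTION_UP:
            // Hand the collected track to the capability once the gesture ends.
            int result = videoAsset.selectSegmentationObject(bitmap, timeStamp, points);
            return true;
        default:
            return false;
    }
});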
And this concludes the integration process; an ideal cutout function for an image/video editing app has just been created.
Object segmentation can help in a bunch of fields, like live commerce, online education, e-conferencing, and more.
In live commerce, the capability helps replace the live stream background with product details, letting viewers conveniently learn about the product while watching a live stream.
In online education and e-conferencing, the capability lets users replace their video background with an icon, or an image of a classroom or meeting room, making online lessons and meetings feel more professional.
The capability is also ideal for video editing apps. Take my demo app for example. I used it to add myself to a vlog that my friend created, which made me feel like I was traveling with her.
I think the capability can also be used together with other video editing functions, to realize effects like copying an object, deleting an object, or even adjusting the timeline of an object. I'm sure you've also got some great ideas for using this capability. Let me know in the comments section.
Conclusion
Cutting out objects used to be the preserve of people with editing experience, in a process that required a green screen.
Luckily, things have changed thanks to the cutout function found in many mobile apps. It has become a basic function in mobile apps that support video/image editing and is essential for some advanced functions like background removal.
Object segmentation from Video Editor Kit is a straightforward way to implement a cutout feature in your app. The capability leverages an elaborate AI algorithm built around interactive segmentation, delivering an ideal, highly accurate object cutout result.
I dare say there are two types of people in this world: people who love Toy Story and people who have not watched it.
Well, this is just the opinion of a huge fan of the animated film. When I was a child, I always dreamed of having toys that could move and play with me, like my own Buzz Lightyear. Thanks to a fancy technique called rigging, I can now bring my toys to life, although I'm probably too old for them now.
What Is Rigging in 3D Animation and Why Do We Need It?
Put simply, rigging is the process of creating a skeleton for a 3D model so that it can move. In other words, rigging creates a set of connected virtual bones that are used to control a 3D model.
It paves the way for animation because it enables a model to be deformed, making it moveable, which is the very reason that rigging is necessary for 3D animation.
What Is Auto Rigging
3D animation has been adopted by mobile apps in a number of fields (gaming, e-commerce, video, and more) to achieve more realistic animations than 2D.
However, this graphics technique has daunted many developers (like me) because rigging, one of its major prerequisites, is difficult and time-consuming for anyone unfamiliar with modeling. Most high-performing rigging solutions come with many requirements: the input model should be in a standard position, seven or eight key skeletal points should be marked, inverse kinematics must be added to the bones, and so on.
Luckily, there are solutions that can automatically complete rigging, such as the auto rigging solution from HMS Core 3D Modeling Kit.
This capability delivers a wholly automated rigging process, requiring just a biped humanoid model generated from images taken with a mobile phone camera. After the model is input, auto rigging uses AI algorithms for limb rigging and generates the model skeleton and skin weights (which determine the degree to which a bone can influence a part of the mesh). The capability then adjusts the orientation and position of the skeleton so that the model can perform a range of preset actions, like walking, running, and jumping. Besides, the rigged model can be driven by an action generated with motion capture technology, or imported into major 3D engines for animation.
Lower requirements do not compromise rigging accuracy. Auto rigging is built upon hundreds of thousands of 3D model rigging data records, and this carefully fine-tuned data gives the algorithm high accuracy and strong generalization.
I know that words alone are no proof, so check out the animated model I've created using the capability.
Movement is smooth, making the cute panda move almost like a real one. Now I'd like to show you how I created this model and how I integrated auto rigging into my demo app.
Integration Procedure
Preparations
Before moving on to the real integration work, make the necessary preparations:
Configure app information in AppGallery Connect.
Integrate the HMS Core SDK with the app project, which includes Maven repository address configuration.
Configure obfuscation scripts.
Declare necessary permissions.
Capability Integration
1. Set an access token or an API key (the API key can be found in agconnect-services.json) during app initialization for app authentication.
Using the access token: Call setAccessToken to set an access token. This task is required only once during app initialization.
Code:
ReconstructApplication.getInstance().setAccessToken("your AccessToken");
Using the API key: Call setApiKey to set an API key. This task is also required only once during app initialization.
Code:
ReconstructApplication.getInstance().setApiKey("your api_key");
The access token is recommended. If you prefer the API key, note that it is assigned to the app when the app is created in AppGallery Connect.
2. Create a 3D object reconstruction engine and initialize it. Then, create an auto rigging configurator.
Code:
// Create a 3D object reconstruction engine.
Modeling3dReconstructEngine modeling3dReconstructEngine = Modeling3dReconstructEngine.getInstance(context);
// Create an auto rigging configurator.
Modeling3dReconstructSetting setting = new Modeling3dReconstructSetting.Factory()
// Set the working mode of the engine to PICTURE.
.setReconstructMode(Modeling3dReconstructConstants.ReconstructMode.PICTURE)
// Set the task type to auto rigging.
.setTaskType(Modeling3dReconstructConstants.TaskType.AUTO_RIGGING)
.create();
3. Create a listener for the result of uploading images of an object.
Code:
private Modeling3dReconstructUploadListener uploadListener = new Modeling3dReconstructUploadListener() {
@Override
public void onUploadProgress(String taskId, double progress, Object ext) {
// Callback when the upload progress is received.
}
@Override
public void onResult(String taskId, Modeling3dReconstructUploadResult result, Object ext) {
// Callback when the upload is successful.
}
@Override
public void onError(String taskId, int errorCode, String message) {
// Callback when the upload failed.
}
};
4. Use the 3D object reconstruction configurator to initialize the task, set an upload listener for the engine created in step 2, and upload images.
Code:
// Use the configurator to initialize the task, which should be done in a sub-thread.
Modeling3dReconstructInitResult modeling3dReconstructInitResult = modeling3dReconstructEngine.initTask(setting);
String taskId = modeling3dReconstructInitResult.getTaskId();
// Set an upload listener.
modeling3dReconstructEngine.setReconstructUploadListener(uploadListener);
// Call the uploadFile API of the 3D object reconstruction engine to upload images.
modeling3dReconstructEngine.uploadFile(taskId, filePath);
5. Query the status of the auto rigging task.
Code:
// Initialize the task processing class.
Modeling3dReconstructTaskUtils modeling3dReconstructTaskUtils = Modeling3dReconstructTaskUtils.getInstance(context);
// Call queryTask in a sub-thread to query the task status.
Modeling3dReconstructQueryResult queryResult = modeling3dReconstructTaskUtils.queryTask(taskId);
// Obtain the task status.
int status = queryResult.getStatus();
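Because queryTask returns a snapshot of the task, an app typically polls until rigging finishes. Here is a minimal polling sketch to run in a sub-thread; it assumes Modeling3dReconstructConstants.ProgressStatus exposes a COMPLETED value, so verify the constant names against the kit's reference:
Code:
// Poll the task status in a sub-thread until the rigging task completes.
while (true) {
    Modeling3dReconstructQueryResult queryResult = modeling3dReconstructTaskUtils.queryTask(taskId);
    // Assumption: ProgressStatus.COMPLETED marks a finished task.
    if (queryResult.getStatus() == Modeling3dReconstructConstants.ProgressStatus.COMPLETED) {
        break; // The rigged model is ready to download.
    }
    try {
        Thread.sleep(5000); // Wait 5 seconds between queries to avoid hammering the service.
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        break;
    }
}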
6. Create a listener for the result of model file download.
Code:
private Modeling3dReconstructDownloadListener modeling3dReconstructDownloadListener = new Modeling3dReconstructDownloadListener() {
@Override
public void onDownloadProgress(String taskId, double progress, Object ext) {
// Callback when download progress is received.
}
@Override
public void onResult(String taskId, Modeling3dReconstructDownloadResult result, Object ext) {
// Callback when download is successful.
}
@Override
public void onError(String taskId, int errorCode, String message) {
// Callback when download failed.
}
};
7. Pass the download listener to the 3D object reconstruction engine, to download the rigged model.
Code:
// Set download configurations.
Modeling3dReconstructDownloadConfig downloadConfig = new Modeling3dReconstructDownloadConfig.Factory()
// Set the model file format to OBJ or glTF.
.setModelFormat(Modeling3dReconstructConstants.ModelFormat.OBJ)
// Set the texture map mode to normal mode or PBR mode.
.setTextureMode(Modeling3dReconstructConstants.TextureMode.PBR)
.create();
// Set the download listener.
modeling3dReconstructEngine.setReconstructDownloadListener(modeling3dReconstructDownloadListener);
// Call downloadModelWithConfig, passing the task ID, path to which the downloaded file will be saved, and download configurations, to download the rigged model.
modeling3dReconstructEngine.downloadModelWithConfig(taskId, fileSavePath, downloadConfig);
Where to Use
Auto rigging can be used in many scenarios, for example:
Gaming. The most direct way of using auto rigging is to create moveable characters in a 3D game. Or, we can combine it with AR to create animated models that appear in the camera display of a mobile device and interact with users.
Online education. We can use auto rigging to animate 3D models of popular toys and liven them up with dance moves, voice-overs, and nursery rhymes, creating educational videos that appeal more to kids.
E-commerce. Anime figurines look rather plain compared to how the characters behave in anime. To spice up the figurines, we can use auto rigging to animate their 3D models, making them look more engaging and dynamic.
Conclusion
3D animation is widely used in mobile apps because it presents objects in a more fun and interactive way.
A key technique for creating great 3D animations is rigging. Conventional rigging requires modeling know-how and expertise, which puts off many amateur modelers.
Auto rigging is the perfect solution to this challenge: its fully automated rigging process produces highly accurate rigged models that can be easily animated in the major engines on the market. Auto rigging not only lowers the cost and skill requirements of 3D model generation and animation, but also makes 3D models look more appealing.
What Is HDR and Why Does It Matter
Streaming technology has improved significantly, giving rise to ever higher video resolutions, from those at or below 480p (known as standard definition, or SD) to those at or above 720p (high definition, or HD).
Video resolution is vital for all apps. Research I recently came across backs this up: 62% of people are more likely to perceive a brand negatively if it provides a poor-quality video experience, while 57% of people are less likely to share a poor-quality video. With this in mind, it's no wonder there are so many emerging solutions for enhancing video quality.
One solution is HDR, or high dynamic range. It is a post-processing method used in imaging and photography that mimics what the human eye can see by bringing out details in dark areas and improving the contrast. When used in a video player, HDR delivers richer, more detailed images.
Many HDR solutions, however, are let down by annoying restrictions, including a lack of unified technical specifications, difficult implementation, and a requirement for ultra-high-definition video. I looked for a solution without such restrictions and luckily found one: the HDR Vivid SDK from HMS Core Video Kit. This SDK is packed with image-processing features like the opto-electronic transfer function (OETF), tone mapping, and HDR2SDR, which equip a video player with richer colors, a higher level of detail, and more.
I used the SDK together with the HDR Ability SDK (which can also be used independently) to try the latter's brightness adjustment feature, and found that together they deliver an even better HDR video playback experience. On that note, I'd like to share how I used these two SDKs to create a video player.
Before Development
1. Configure the app information as needed in AppGallery Connect.
2. Integrate the HMS Core SDK.
For Android Studio, the SDK can be integrated via the Maven repository. It needs to be integrated into the Android Studio project before the development procedure begins.
3. Configure the obfuscation scripts.
4. Add permissions, including those for accessing the Internet, obtaining the network status, accessing the Wi-Fi network, writing data to the external storage, reading data from the external storage, reading device information, checking whether a device is rooted, and obtaining the wake lock. (The last three permissions are optional.)
App Development
Preparations
1. Check whether the device is capable of decoding an HDR Vivid video. If it is, the following function will return true.
Code:
public boolean isSupportDecode() {
// Check whether the device supports MediaCodec.
MediaCodecList mcList = new MediaCodecList(MediaCodecList.ALL_CODECS);
MediaCodecInfo[] mcInfos = mcList.getCodecInfos();
for (MediaCodecInfo mci : mcInfos) {
// Filter out the encoder.
if (mci.isEncoder()) {
continue;
}
String[] types = mci.getSupportedTypes();
String typesArr = Arrays.toString(types);
// Filter out the non-HEVC decoder.
if (!typesArr.contains("hevc")) {
continue;
}
for (String type : types) {
// Check whether 10-bit HEVC decoding is supported.
MediaCodecInfo.CodecCapabilities codecCapabilities = mci.getCapabilitiesForType(type);
for (MediaCodecInfo.CodecProfileLevel codecProfileLevel : codecCapabilities.profileLevels) {
if (codecProfileLevel.profile == MediaCodecInfo.CodecProfileLevel.HEVCProfileMain10
|| codecProfileLevel.profile == MediaCodecInfo.CodecProfileLevel.HEVCProfileMain10HDR10
|| codecProfileLevel.profile == MediaCodecInfo.CodecProfileLevel.HEVCProfileMain10HDR10Plus) {
// true means supported.
return true;
}
}
}
}
// false means unsupported.
return false;
}
2. Parse the video to obtain its resolution, OETF, color space, and color format, and save the information in a custom class. In the example below, the class is named VideoInfo.
Code:
public class VideoInfo {
    private int width;        // Video width, in pixels.
    private int height;       // Video height, in pixels.
    private int tf;           // Transfer function (OETF) of the video.
    private int colorSpace;   // Color space of the video.
    private int colorFormat;  // Color format of the video.
    private long durationUs;  // Video duration, in microseconds.
}
3. Create a SurfaceView object that will be used by the SDK to process the rendered images.
Code:
// surface_view is defined in a layout file.
SurfaceView surfaceView = (SurfaceView) view.findViewById(R.id.surface_view);
4. Create a thread to parse video streams from a video.
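The parsing thread itself is not shown in this article, so here is a minimal sketch built on Android's standard MediaExtractor. It assumes simple setters on VideoInfo, and the color metadata keys require API level 24 or later:
Code:
// Parse the video on a worker thread and fill in VideoInfo.
new Thread(() -> {
    MediaExtractor extractor = new MediaExtractor();
    try {
        extractor.setDataSource(videoFilePath);
        for (int i = 0; i < extractor.getTrackCount(); i++) {
            MediaFormat format = extractor.getTrackFormat(i);
            String mime = format.getString(MediaFormat.KEY_MIME);
            if (mime != null && mime.startsWith("video/")) {
                videoInfo.setWidth(format.getInteger(MediaFormat.KEY_WIDTH));
                videoInfo.setHeight(format.getInteger(MediaFormat.KEY_HEIGHT));
                videoInfo.setDurationUs(format.getLong(MediaFormat.KEY_DURATION));
                // Color metadata is optional in the container, so guard each key.
                if (format.containsKey(MediaFormat.KEY_COLOR_TRANSFER)) {
                    videoInfo.setTf(format.getInteger(MediaFormat.KEY_COLOR_TRANSFER));
                }
                if (format.containsKey(MediaFormat.KEY_COLOR_STANDARD)) {
                    videoInfo.setColorSpace(format.getInteger(MediaFormat.KEY_COLOR_STANDARD));
                }
                break;
            }
        }
    } catch (IOException e) {
        // Handle the unreadable video source.
    } finally {
        extractor.release();
    }
}).start();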
Rendering and Transcoding a Video
1. Create and then initialize an instance of HdrVividRender.
Code:
HdrVividRender hdrVividRender = new HdrVividRender();
hdrVividRender.init();
2. Configure the OETF and resolution for the video source.
Code:
// Configure the OETF.
hdrVividRender.setTransFunc(2);
// Configure the resolution.
hdrVividRender.setInputVideoSize(3840, 2160);
When the SDK is used on an Android device, only the rendering mode for input is supported.
3. Configure the brightness for the output. This step is optional.
Code:
hdrVividRender.setBrightness(700);
4. Create a Surface object, which will serve as the input. This method is called when HdrVividRender works in rendering mode, and the created Surface object is passed as the inputSurface parameter of configure to the SDK.
Code:
Surface inputSurface = hdrVividRender.createInputSurface();
5. Configure the output parameters.
Set the dimensions of the rendered Surface object. This step is necessary in the rendering mode for output.
Code:
// surfaceView is the video playback window.
hdrVividRender.setOutputSurfaceSize(surfaceView.getWidth(), surfaceView.getHeight());
Set the color space for the buffered output video, which can be set in transcoding mode for output. This step is optional; when no color space is set, BT.709 is used by default.
Code:
hdrVividRender.setColorSpace(HdrVividRender.COLORSPACE_P3);
Set the color format for the buffered output video, which can be set in transcoding mode for output. This step is optional; when no color format is specified, R8G8B8A8 is used by default.
Code:
hdrVividRender.setColorFormat(HdrVividRender.COLORFORMAT_R8G8B8A8);
6. When the rendering mode is used as the output mode, the following APIs are required.
Code:
hdrVividRender.configure(inputSurface, new HdrVividRender.InputCallback() {
@Override
public int onGetDynamicMetaData(HdrVividRender hdrVividRender, long pts) {
// Set the static metadata, which needs to be obtained from the video source.
HdrVividRender.StaticMetaData lastStaticMetaData = new HdrVividRender.StaticMetaData();
hdrVividRender.setStaticMetaData(lastStaticMetaData);
// Set the dynamic metadata, which also needs to be obtained from the video source.
ByteBuffer dynamicMetaData = ByteBuffer.allocateDirect(10);
hdrVividRender.setDynamicMetaData(20000, dynamicMetaData);
return 0;
}
}, surfaceView.getHolder().getSurface(), null);
7. When the transcoding mode is used as the output mode, call the following APIs.
Code:
hdrVividRender.configure(inputSurface, new HdrVividRender.InputCallback() {
@Override
public int onGetDynamicMetaData(HdrVividRender hdrVividRender, long pts) {
// Set the static metadata, which needs to be obtained from the video source.
HdrVividRender.StaticMetaData lastStaticMetaData = new HdrVividRender.StaticMetaData();
hdrVividRender.setStaticMetaData(lastStaticMetaData);
// Set the dynamic metadata, which also needs to be obtained from the video source.
ByteBuffer dynamicMetaData = ByteBuffer.allocateDirect(10);
hdrVividRender.setDynamicMetaData(20000, dynamicMetaData);
return 0;
}
}, null, new HdrVividRender.OutputCallback() {
@Override
public void onOutputBufferAvailable(HdrVividRender hdrVividRender, ByteBuffer byteBuffer,
HdrVividRender.BufferInfo bufferInfo) {
// Process the buffered data.
}
});
HdrVividRender.OutputCallback is used to process the returned buffered data asynchronously. If this callback is not used, the read method can be used instead. For example:
Code:
hdrVividRender.read(new BufferInfo(), 10); // 10 is a timestamp, which is determined by your app.
8. Start the processing flow.
Code:
hdrVividRender.start();
9. Stop the processing flow.
Code:
hdrVividRender.stop();
10. Release the resources that have been occupied.
Code:
hdrVividRender.release();
hdrVividRender = null;
During the above steps, I noticed that when the dimensions of the Surface change, setOutputSurfaceSize has to be called to reconfigure the dimensions of the Surface output.
Besides, in rendering mode for output, when WisePlayer is switched between the foreground and background, the Surface object will be destroyed and then re-created. In this case, the HdrVividRender instance may not have been destroyed; if so, the setOutputSurface API needs to be called so that a new Surface output can be set.
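A convenient place to handle both situations is a SurfaceHolder.Callback on the playback SurfaceView. Below is a minimal sketch; the exact signature of setOutputSurface is an assumption based on the description above, so verify it against the SDK reference:
Code:
surfaceView.getHolder().addCallback(new SurfaceHolder.Callback() {
    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        // The Surface was re-created (e.g., when returning to the foreground):
        // hand the new Surface to the still-alive HdrVividRender instance.
        hdrVividRender.setOutputSurface(holder.getSurface());
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
        // The Surface dimensions changed: reconfigure the output size.
        hdrVividRender.setOutputSurfaceSize(width, height);
    }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        // Nothing to do here; the instance is released in step 10.
    }
});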
Setting Up HDR Capabilities
HDR capabilities are provided in the HdrAbility class, which can be used to adjust brightness while the HDR Vivid SDK is rendering or transcoding an HDR Vivid video.
1. Initialize the function of brightness adjustment.
Code:
HdrAbility.init(getApplicationContext());
2. Enable the HDR feature on the device. Then, the maximum brightness of the device screen will increase.
Code:
HdrAbility.setHdrAbility(true);
3. Configure the alternative maximum brightness of white points in the output video image data.
Code:
HdrAbility.setBrightness(600);
4. Highlight the video layer.
Code:
HdrAbility.setHdrLayer(surfaceView, true);
5. Configure highlighting for the subtitle layer or the bullet comment layer.
Code:
HdrAbility.setCaptionsLayer(captionView, 1.5f);
Summary
Video resolution is an important influencer of user experience for mobile apps. HDR is often used to post-process video, but it is held back by a number of restrictions, which the HDR Vivid SDK from Video Kit resolves.
This SDK is loaded with image-processing features such as the OETF, tone mapping, and HDR2SDR, mimicking what human eyes can see to deliver immersive videos, which can be enhanced even further with the help of the HDR Ability SDK from the same kit. The functionality and straightforward integration process of these SDKs make them ideal for implementing an HDR feature in a mobile app.
Portrait Retouching Importance
Mobile phone camera technology keeps evolving, with wide-angle lenses and optical image stabilization, to name but a few advances. Thanks to this, video recording and mobile image editing apps keep emerging, using this technology to foster greater creativity.
Among these apps, live-streaming apps are growing with great momentum, thanks to an explosive number of streamers and viewers.
One function that a live-streaming app needs is portrait retouching. Although phone camera specs are already staggering, portraits captured by a camera can still be distorted for various reasons. In a dim environment, for example, a streamer's skin tone might appear dark, while factors such as the width of the camera lens and the shooting angle can make them look wide in videos. Issues like these can affect how viewers feel about a live video and how streamers feel about themselves, signaling the need for a portrait retouching function to address them.
I've developed a live-streaming demo app with such a function. Before developing it, I identified two challenges in building this function for a live-streaming app.
First, the function must process video images in real time, because a long delay between image input and output compromises the interaction between a streamer and their viewers.
Second, the function requires highly accurate face detection, to prevent the processed portrait from being deformed or retouched areas from appearing in unexpected places.
To solve these challenges, I tested several available portrait retouching solutions and settled on the beauty capability from HMS Core Video Editor Kit. Let's see how the capability works to understand how it manages to address the challenges.
How the Capability Addresses the Challenges
This capability adopts a CPU+NPU+GPU heterogeneous parallel framework, which allows it to process video images in real time: the algorithm runs fast while consuming little power.
Specifically, the beauty capability delivers a processing frequency of over 50 fps end to end on the device. For a video containing multiple faces, it can simultaneously process up to two faces, namely those with the biggest areas in the video, in as little as 10 milliseconds.
The capability uses 855 dense facial landmarks to recognize a face accurately, allowing it to adapt its effects even to a face that moves quickly or turns at a large angle during live streaming.
To ensure an excellent retouching effect, the beauty capability adopts detailed face segmentation and neutral gray for softening skin. As a result, the final effect looks very authentic.
Not only that, the capability comes with multiple configurable retouching parameters. This feature, I think, is considerate and helps the capability deliver an even better user experience, considering that no single retouching preset can satisfy all users. Developers like me can expose these parameters (for skin softening, skin tone adjustment, face contour adjustment, eye size adjustment, and eye brightness adjustment) directly to users, rather than struggling to design the parameters ourselves. This leaves more time for fine-tuning portraits in video images.
Knowing these features of the capability, I believed that it could help me create a portrait retouching function for my demo app. So let's move on to see how I developed my app.
Demo Development
Preparations
Make sure the development environment is ready. Then:
Configure app information in AppGallery Connect, including registering as a developer on the platform, creating an app, generating a signing certificate fingerprint, configuring the fingerprint, and enabling the kit.
Integrate the HMS Core SDK.
Configure obfuscation scripts.
Declare necessary permissions.
Capability Integration
1. Set up the app authentication information. Two methods are available, using an API key or an access token:
API key: Call the setApiKey method to set the key, which only needs to be done once during app initialization.
Code:
HVEAIApplication.getInstance().setApiKey("your ApiKey");
The API key is obtained from AppGallery Connect; it is generated when the app is registered on the platform.
It's worth noting that you should not hardcode the key in the app code or store it in the app's configuration file. The right way to handle it is to store it in the cloud and obtain it when the app is running.
Access token: Call the setAccessToken method to set the token. This is done only once during app initialization.
Code:
HVEAIApplication.getInstance().setAccessToken("your access token");
The access token is generated by the app itself. Specifically, call the
https://oauth-login.cloud.huawei.com/oauth2/v3/token
API to obtain an app-level access token.
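As a minimal sketch of that call, assuming the standard client-credentials flow with the client ID and secret taken from your app's details in AppGallery Connect, the request might look like the following (run it off the main thread; most error handling is omitted for brevity):
Code:
// Request an app-level access token. clientId and clientSecret are placeholders
// for the credentials of your app in AppGallery Connect.
public String requestAccessToken(String clientId, String clientSecret) throws IOException, JSONException {
    URL url = new URL("https://oauth-login.cloud.huawei.com/oauth2/v3/token");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setDoOutput(true);
    conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
    String body = "grant_type=client_credentials"
            + "&client_id=" + URLEncoder.encode(clientId, "UTF-8")
            + "&client_secret=" + URLEncoder.encode(clientSecret, "UTF-8");
    try (OutputStream os = conn.getOutputStream()) {
        os.write(body.getBytes(StandardCharsets.UTF_8));
    }
    StringBuilder json = new StringBuilder();
    try (BufferedReader reader = new BufferedReader(
            new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
        String line;
        while ((line = reader.readLine()) != null) {
            json.append(line);
        }
    }
    // The token is in the access_token field of the JSON response.
    return new JSONObject(json.toString()).getString("access_token");
}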
2. Integrate the beauty capability.
Code:
// Create an HVEAIBeauty instance.
HVEAIBeauty hveaiBeauty = new HVEAIBeauty();
// Initialize the engine of the capability.
hveaiBeauty.initEngine(new HVEAIInitialCallback() {
@Override
public void onProgress(int progress) {
// Callback when the initialization progress is received.
}
@Override
public void onSuccess() {
// Callback when engine initialization is successful.
}
@Override
public void onError(int errorCode, String errorMessage) {
// Callback when engine initialization failed.
}
});
// Initialize the runtime environment of the capability in OpenGL. The method is called in the rendering thread of OpenGL.
hveaiBeauty.prepare();
// Set the width (textureWidth) and height (textureHeight) of the texture to which the capability is applied. This method is called in the OpenGL rendering thread after initialization or a texture change.
// Both values must be greater than 0.
hveaiBeauty.resize(textureWidth, textureHeight);
// Configure the parameters for skin softening, skin tone adjustment, face contour adjustment, eye size adjustment, and eye brightness adjustment. The value of each parameter ranges from 0 to 1.
HVEAIBeautyOptions options = new HVEAIBeautyOptions.Builder().setBigEye(1)
.setBlurDegree(1)
.setBrightEye(1)
.setThinFace(1)
.setWhiteDegree(1)
.build();
// Update the parameters, after engine initialization or parameter change.
hveaiBeauty.updateOptions(options);
// Apply the capability, by calling the method in the rendering thread of OpenGL for each frame. inputTextureId: ID of the input texture; outputTextureId: ID of the output texture.
// The ID of the input texture should correspond to a face that faces upward.
int outputTextureId = hveaiBeauty.process(inputTextureId);
// Release the engine.
hveaiBeauty.releaseEngine();
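To put these calls in context, here is a minimal sketch of where they might sit in a GLSurfaceView.Renderer driving a live preview. The renderer itself and the inputTextureId wiring are assumptions of this sketch, not part of the kit:
Code:
public class BeautyRenderer implements GLSurfaceView.Renderer {
    private final HVEAIBeauty beauty; // Engine already initialized via initEngine.
    private int inputTextureId;       // Texture receiving camera frames (app-specific wiring).

    public BeautyRenderer(HVEAIBeauty beauty) {
        this.beauty = beauty;
    }

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        beauty.prepare(); // Prepare the capability once the GL context exists.
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        beauty.resize(width, height); // Assumes the texture matches the surface size.
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        // Retouch each frame; the output texture is then drawn to the screen.
        int outputTextureId = beauty.process(inputTextureId);
    }
}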
The development process ends here, so now we can check out how my demo works:
Not to brag, but I do think the retouching result is ideal and natural: With all the effects added, the processed portrait does not appear deformed.
I've got my desired solution for creating a portrait retouching function. I believe this solution can also play an important role in an image editing app or any app that requires portrait retouching. I'm quite curious as to how you will use it. Now I'm off to find a solution that can "retouch" music instead of photos for a music player app, one that can, for example, add more width to a song. Wish me luck!
Conclusion
The live-streaming app market is expanding rapidly, bringing with it various requirements from streamers and viewers. One of the most desired functions is portrait retouching, which addresses distorted portraits and the unfavorable watching experience they cause.
Compared with other kinds of apps, a live-streaming app has two distinct requirements for portrait retouching: real-time processing of video images and accurate face detection. The beauty capability from HMS Core Video Editor Kit addresses both effectively, using technologies such as a CPU+NPU+GPU heterogeneous parallel framework and 855 dense facial landmarks. The capability also offers several customizable parameters so that different users can retouch their portraits as needed. On top of this, the capability can be easily integrated, making it a solid choice for any app requiring portrait retouching.