In photography, cutout is a function commonly used when editing images, for example to remove the background. Traditionally, this is achieved with a technique known as green screen, also called chroma keying, which requires a green background to be set up manually.
This, however, makes green screen-dependent cutout a challenge for those new to video/image editing: most images and videos do not come with a green background, and adding one is actually quite complex.
Luckily, a number of mobile apps on the market can help with this, as they are able to automatically cut out the desired object for users to edit later. To create an app capable of doing this, I turned to the recently released object segmentation capability from HMS Core Video Editor Kit. This capability utilizes an AI algorithm, instead of a green screen, to intelligently separate an object from the other parts of an image or video, delivering an ideal segmentation result for background removal and many other editing operations.
This is what my demo has achieved with the help of the capability.
It is a perfect cutout, right? As you can see, the cut-out object has a smooth edge, without any unwanted parts from the original video left over.
Before moving on to how I created this cutout tool with the help of object segmentation, let's see what lies behind the capability.
How It Works

The object segmentation capability adopts an interactive approach to cutting out objects. The user first taps or draws a line on the object to be cut out, and the interactive segmentation algorithm analyzes the track of the user's taps to identify their intent, before the capability selects and cuts out the object. Specifically, the capability performs interactive segmentation on the first video frame to obtain a mask of the object to be cut out. The model behind the capability then traverses the subsequent frames, applying the first-frame mask to each of them and matching it with the object before cutting the object out.
The model assigns each frame a weight according to its segmentation accuracy, and blends the weighted segmentation result of the intermediate frame with the mask obtained from the first frame, in order to segment the desired object in the remaining frames. As a result, the capability cuts out an object as completely as possible, delivering higher segmentation accuracy.
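To make the blending step more concrete, here is a purely illustrative sketch in Java, not the kit's internal implementation: it blends the mask propagated to an intermediate frame with the first-frame mask, weighted by that frame's estimated segmentation accuracy.
Code:
// Illustrative only: blend an intermediate frame's mask with the first-frame mask.
float[] blendMasks(float[] firstFrameMask, float[] intermediateMask, float frameWeight) {
    float[] blended = new float[firstFrameMask.length];
    for (int i = 0; i < firstFrameMask.length; i++) {
        // A higher frameWeight trusts the intermediate frame's own segmentation more.
        blended[i] = frameWeight * intermediateMask[i] + (1 - frameWeight) * firstFrameMask[i];
    }
    return blended;
}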
What makes the capability even better is that it places no restrictions on object types. As long as an object is distinct from the other parts of the image or video and is set against a simple background, the capability can cleanly cut out this object.
Now, let's check out how the capability can be integrated.
Integration Procedure

Making Preparations

Complete the following steps before moving on to the next part:
Configure app information in AppGallery Connect.
Integrate the SDK of HMS Core.
Configure obfuscation scripts.
Apply for necessary permissions.
Setting Up the Video Editing Project

1. Configure the app authentication information. Available options include:
Call setAccessToken to set an access token, which is required only once during app startup.
Code:
MediaApplication.getInstance().setAccessToken("your access token");
Or, call setApiKey to set an API key, which is required only once during app startup.
Code:
MediaApplication.getInstance().setApiKey("your ApiKey");
2. Set a License ID.
Because this ID is used to manage the usage quotas of the mentioned service, the ID must be unique.
Code:
MediaApplication.getInstance().setLicenseId("License ID");
3. Initialize the runtime environment for HuaweiVideoEditor.
When creating a video editing project, we first need to create an instance of HuaweiVideoEditor and initialize its runtime environment. When exiting the project, the instance should be released.
Create a HuaweiVideoEditor object.
Code:
HuaweiVideoEditor editor = HuaweiVideoEditor.create(getApplicationContext());
Determine the layout of the preview area.
Such an area renders video images, and this is implemented by SurfaceView within the fundamental capability SDK. Before the area is created, we need to specify its layout.
Code:
<LinearLayout
android:id="@+id/video_content_layout"
android:layout_width="0dp"
android:layout_height="0dp"
android:background="@color/video_edit_main_bg_color"
android:gravity="center"
android:orientation="vertical" />
// Specify a preview area.
LinearLayout mSdkPreviewContainer = view.findViewById(R.id.video_content_layout);
// Design the layout for the area.
editor.setDisplay(mSdkPreviewContainer);
Initialize the runtime environment. If the license verification fails, LicenseException will be thrown.
After the HuaweiVideoEditor instance is created, it does not use any system resources; we need to decide when to initialize its runtime environment ourselves. Only then will the fundamental capability SDK internally create the necessary threads and timers.
Code:
try {
editor.initEnvironment();
} catch (LicenseException error) {
SmartLog.e(TAG, "initEnvironment failed: " + error.getErrorMsg());
finish();
return;
}
Integrating Object Segmentation
Code:
// Initialize the engine of object segmentation.
videoAsset.initSegmentationEngine(new HVEAIInitialCallback() {
    @Override
    public void onProgress(int progress) {
        // Initialization progress.
    }

    @Override
    public void onSuccess() {
        // Callback when the initialization is successful.
    }

    @Override
    public void onError(int errorCode, String errorMessage) {
        // Callback when the initialization failed.
    }
});
// After the initialization is successful, segment a specified object and then return the segmentation result.
// bitmap: video frame containing the object to be segmented.
// timeStamp: timestamp of the video frame on the timeline.
// points: set of coordinates determined according to the video frame, with the upper left vertex of the frame as the coordinate origin. It is recommended that at least two coordinates be passed, all of which must be within the object to be segmented. The object is determined according to the track of the coordinates.
int result = videoAsset.selectSegmentationObject(bitmap, timeStamp, points);
// After the handling is successful, apply the object segmentation effect.
videoAsset.addSegmentationEffect(new HVEAIProcessCallback() {
    @Override
    public void onProgress(int progress) {
        // Progress of object segmentation.
    }

    @Override
    public void onSuccess() {
        // The object segmentation effect is successfully added.
    }

    @Override
    public void onError(int errorCode, String errorMessage) {
        // The object segmentation effect failed to be added.
    }
});
// Stop applying the object segmentation effect.
videoAsset.interruptSegmentation();
// Remove the object segmentation effect.
videoAsset.removeSegmentationEffect();
// Release the engine of object segmentation.
videoAsset.releaseSegmentationEngine();
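To give a sense of how the selection step might be wired up in an app, below is a rough sketch from my experimentation, not official sample code. It attaches a touch listener to the preview container (mSdkPreviewContainer from the earlier snippet) to collect the user's taps; the element type of points (android.graphics.Point) and the one-to-one mapping from view coordinates to frame coordinates are simplifying assumptions you should verify against the kit's API reference.
Code:
// Rough sketch: gather the user's taps on the previewed frame and pass them to the API.
List<Point> points = new ArrayList<>();
mSdkPreviewContainer.setOnTouchListener((v, event) -> {
    if (event.getAction() == MotionEvent.ACTION_DOWN) {
        // Simplified: assumes view coordinates map 1:1 to frame coordinates (origin at the top left).
        points.add(new Point((int) event.getX(), (int) event.getY()));
    }
    return true;
});

// Once at least two points inside the target object have been collected:
int result = videoAsset.selectSegmentationObject(bitmap, timeStamp, points);
Log.d(TAG, "selectSegmentationObject returned: " + result);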
And this concludes the integration process. A cutout function ideal for an image/video editing app was just created.
Object segmentation can help in a whole range of fields, such as live commerce, online education, video conferencing, and more.
In live commerce, the capability helps replace the live stream background with product details, letting viewers conveniently learn about the product while watching the stream.
In online education and video conferencing, the capability lets users replace the video background with an icon, or an image of a classroom or meeting room, making online lessons and meetings feel more professional.
The capability is also ideal for video editing apps. Take my demo app for example. I used it to add myself to a vlog that my friend created, which made me feel like I was traveling with her.
I think the capability can also be used together with other video editing functions, to realize effects like copying an object, deleting an object, or even adjusting the timeline of an object. I'm sure you've also got some great ideas for using this capability. Let me know in the comments section.
Conclusion

Cutting out objects used to be the preserve of people with editing experience, requiring the use of a green screen.
Luckily, things have changed thanks to the cutout function found in many mobile apps. It has become a basic function in mobile apps that support video/image editing and is essential for some advanced functions like background removal.
Object segmentation from Video Editor Kit is a straightforward way of implementing the cutout feature into your app. This capability leverages an elaborate AI algorithm and depends on the interactive segmentation method, delivering an ideal and highly accurate object cutout result.
I recently read an article that explained how we as human beings are hardwired to enter fight-or-flight mode when we realize that we are being watched. This feeling is especially strong when somebody is trying to take a picture of us, which is why many of us find it difficult to smile in photos. This effect is so strong that we've all had the experience of looking at a photo right after it was taken and noticing straight away that it needs to be retaken because our smile wasn't wide enough or didn't look natural. So, the next time someone criticizes my smile in a photo, I'm just going to tell them, "It's not my fault. It's literally an evolutionary trait!"
Or, instead of making such an excuse, what about turning to technology for help? I have tried using some photo editor apps to modify my portrait photos, making my facial expression look nicer by, for example, removing my braces, whitening my teeth, and erasing my smile lines. However, perhaps because of my rusty image editing skills, the modified images often turned out looking strange.
My lack of success with photo editing made me wonder: Wouldn't it be great if there was a function specially designed for people like me, who find it difficult to smile naturally in photos and who aren't good at photo editing, which could automatically give us picture-perfect smiles?
I then suddenly remembered an interesting function called smile filter that has been going viral on different apps and platforms. A smile filter is an app feature that can automatically add a natural-looking smile to a face detected in an image. I had tried it before and was really amazed by the result. With that in mind, I decided to create a demo app with a similar function, in order to figure out the principle behind it.
To provide my app with a smile filter, I chose to use the auto-smile capability provided by HMS Core Video Editor Kit. This capability automatically detects people in an image and then lightens up the detected faces with a smile (either closed- or open-mouth) that perfectly blends in with each person's facial structure. With the help of such a capability, a mobile app can create the perfect smile in seconds and save users from the hassle of having to use a professional image editing program.
Check the result out for yourselves:
Looks pretty natural, right? This is the result offered by my demo app integrated with the auto-smile capability. The original image looks like this:
Next, I will explain how I integrated the auto-smile capability into my app and share the relevant source code from my demo app.
Integration Procedure

Preparations

1. Configure necessary app information. This step requires you to register a developer account, create an app, generate a signing certificate fingerprint, configure the fingerprint, and enable required services.
2. Integrate the SDK of the kit.
3. Configure the obfuscation scripts.
4. Declare necessary permissions.
Project Configuration

1. Set the app authentication information. This can be done via an API key or an access token.
Using an API key: You only need to set the app authentication information once during app initialization.
Code:
MediaApplication.getInstance().setApiKey("your ApiKey");
Or, using an access token: You only need to set the app authentication information once during app initialization.
Code:
MediaApplication.getInstance().setAccessToken("your access token");
2. Set a License ID, which must be unique because it is used to manage the usage quotas of the service.
Code:
MediaApplication.getInstance().setLicenseId("License ID");
3. Initialize the runtime environment for the HuaweiVideoEditor object. Remember to release the HuaweiVideoEditor object when exiting the project.
Create a HuaweiVideoEditor object.
Code:
HuaweiVideoEditor editor = HuaweiVideoEditor.create(getApplicationContext());
Specify the preview area position. Such an area is used to render video images, which is implemented by SurfaceView created within the SDK. Before creating such an area, specify its position in the app first.
Code:
<LinearLayout
android:id="@+id/video_content_layout"
android:layout_width="0dp"
android:layout_height="0dp"
android:background="@color/video_edit_main_bg_color"
android:gravity="center"
android:orientation="vertical" />
// Specify the preview area position.
LinearLayout mSdkPreviewContainer = view.findViewById(R.id.video_content_layout);
// Specify the preview area layout.
editor.setDisplay(mSdkPreviewContainer);
Initialize the runtime environment. If license verification fails, LicenseException will be thrown.
After it is created, the HuaweiVideoEditor object will not occupy any system resources. You need to manually set when the runtime environment of the object will be initialized. Once you have done this, necessary threads and timers will be created within the SDK.
Code:
try {
editor.initEnvironment();
} catch (LicenseException error) {
SmartLog.e(TAG, "initEnvironment failed: " + error.getErrorMsg());
finish();
return;
}
Function Development
Code:
// Apply the auto-smile effect. Currently, this effect only supports image assets.
imageAsset.addFaceSmileAIEffect(new HVEAIProcessCallback() {
    @Override
    public void onProgress(int progress) {
        // Callback when the handling progress is received.
    }

    @Override
    public void onSuccess() {
        // Callback when the handling is successful.
    }

    @Override
    public void onError(int errorCode, String errorMessage) {
        // Callback when the handling failed.
    }
});
// Stop applying the auto-smile effect.
imageAsset.interruptFaceSmile();
// Remove the auto-smile effect.
imageAsset.removeFaceSmileAIEffect();
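Since addFaceSmileAIEffect reports progress through HVEAIProcessCallback, it is easy to surface that progress to the user. The snippet below is my own wiring rather than official sample code; it assumes the code runs inside an Activity that holds a ProgressBar named progressBar, and that the callback may arrive on a worker thread.
Code:
// Illustrative only: forward the auto-smile progress to a ProgressBar in the UI.
imageAsset.addFaceSmileAIEffect(new HVEAIProcessCallback() {
    @Override
    public void onProgress(int progress) {
        runOnUiThread(() -> progressBar.setProgress(progress));
    }

    @Override
    public void onSuccess() {
        runOnUiThread(() -> Toast.makeText(getApplicationContext(), "Smile added", Toast.LENGTH_SHORT).show());
    }

    @Override
    public void onError(int errorCode, String errorMessage) {
        runOnUiThread(() -> Toast.makeText(getApplicationContext(), "Auto-smile failed: " + errorMessage, Toast.LENGTH_SHORT).show());
    }
});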
And with that, I successfully integrated the auto-smile capability into my demo app, and now it can automatically add smiles to faces detected in the input image.
Conclusion

Research has demonstrated that it is normal for people to behave unnaturally when being photographed. Such unnaturalness becomes even more obvious when we try to smile. This explains why numerous social media apps and video/image editing apps have introduced smile filter functions, which allow users to easily and quickly add a natural-looking smile to faces in an image.
Among various solutions to such a function, HMS Core Video Editor Kit's auto-smile capability stands out by providing excellent, natural-looking results and featuring straightforward and quick integration.
What's more, the auto-smile capability can be used together with other capabilities from the same kit to further enhance users' image editing experience. For example, when used in conjunction with the kit's AI color capability, you can add color to an old black-and-white photo and then use auto-smile to add smiles to the sullen expressions of the people in the photo. It's a great way to freshen up old and dreary photos from the past.
And that's just one way of using the auto-smile capability in conjunction with other capabilities. What ideas do you have? I'm looking forward to hearing your thoughts in the comments section.
References

How to Overcome Camera Shyness or Phobia
Introduction to Auto-Smile
Have you ever watched a video of the northern lights? Mesmerizing light rays that swirl and dance through the star-encrusted sky. It's even more stunning when they are backdropped by crystal-clear waters that flow smoothly between and under ice crusts. Complementing each other, the moving sky and water compose a dynamic scene that reflects the constant rhythm of mother nature.
Now imagine that the video is frozen into an image: It still looks beautiful, but lacks the dynamism of the video. Such a contrast between still and moving images shows how videos are sometimes better than still images when it comes to capturing majestic scenery, since the former can convey more information and thus be more engaging.
This may be the reason why we sometimes regret just taking photos instead of capturing a video when we encounter beautiful scenery or a memorable moment.
In addition to this, when we try to add a static image to a short video, we will find that the transition between the image and other segments of the video appears very awkward, since the image is the only static segment in the whole video.
If we want to turn a static image into a dynamic video by adding some motion effects to the sky and water, one way to do this is to use a professional PC program to modify the image. However, this process is often very complicated and time-consuming: It requires adjustment of the timeline, frames, and much more, which can be a daunting prospect for amateur image editors.
Luckily, there are now numerous AI-driven capabilities that can automatically create time-lapse videos for users. I chose to use the auto-timelapse capability provided by HMS Core Video Editor Kit. It can automatically detect the sky and water in an image and produce vivid dynamic effects for them, just like this:
The movement speed and angle of the sky and water are customizable.
Now let's take a look at the detailed integration procedure for this capability, to better understand how such a dynamic effect is created.
Integration Procedure

Preparations

1. Configure necessary app information. This step requires you to register a developer account, create an app, generate a signing certificate fingerprint, configure the fingerprint, and enable the required services.
2. Integrate the SDK of the kit.
3. Configure the obfuscation scripts.
4. Declare necessary permissions.
Project Configuration

1. Set the app authentication information. This can be done via an API key or an access token.
Set an API key via the setApiKey method: You only need to set the app authentication information once during app initialization.
Code:
MediaApplication.getInstance().setApiKey("your ApiKey");
Or, set an access token by using the setAccessToken method: You only need to set the app authentication information once during app initialization.
Code:
MediaApplication.getInstance().setAccessToken("your access token");
2. Set a License ID. This ID should be unique because it is used to manage the usage quotas of the service.
Code:
MediaApplication.getInstance().setLicenseId("License ID");
3. Initialize the runtime environment for the HuaweiVideoEditor object. Remember to release the HuaweiVideoEditor object when exiting the project.
Create a HuaweiVideoEditor object.
Code:
HuaweiVideoEditor editor = HuaweiVideoEditor.create(getApplicationContext());
Specify the preview area position. Such an area is used to render video images, which is implemented by SurfaceView created within the SDK. Before creating such an area, specify its position in the app first.
Code:
<LinearLayout
android:id="@+id/video_content_layout"
android:layout_width="0dp"
android:layout_height="0dp"
android:background="@color/video_edit_main_bg_color"
android:gravity="center"
android:orientation="vertical" />
// Specify the preview area position.
LinearLayout mSdkPreviewContainer = view.findViewById(R.id.video_content_layout);
// Specify the preview area layout.
editor.setDisplay(mSdkPreviewContainer);
Initialize the runtime environment. If license verification fails, LicenseException will be thrown.
After it is created, the HuaweiVideoEditor object will not occupy any system resources. You need to manually set when the runtime environment of the object will be initialized. Once you have done this, necessary threads and timers will be created within the SDK.
Code:
try {
editor.initEnvironment();
} catch (LicenseException error) {
SmartLog.e(TAG, "initEnvironment failed: " + error.getErrorMsg());
finish();
return;
}
Function Development
Code:
// Initialize the auto-timelapse engine.
imageAsset.initTimeLapseEngine(new HVEAIInitialCallback() {
    @Override
    public void onProgress(int progress) {
        // Callback when the initialization progress is received.
    }

    @Override
    public void onSuccess() {
        // Callback when the initialization is successful.
    }

    @Override
    public void onError(int errorCode, String errorMessage) {
        // Callback when the initialization failed.
    }
});
// When the initialization is successful, check whether there is sky or water in the image.
// Note: declare motionType as a member field (not a local variable) so that it can be
// assigned from inside the callback below.
private int motionType = -1;

imageAsset.detectTimeLapse(new HVETimeLapseDetectCallback() {
    @Override
    public void onResult(int state) {
        // Record the state parameter, which is used to define a motion effect.
        motionType = state;
    }
});

// Build the effect options once the motion type is known.
// skySpeed: speed at which the sky moves; skyAngle: direction in which the sky moves.
// waterSpeed: speed at which the water moves; waterAngle: direction in which the water moves.
HVETimeLapseEffectOptions options =
        new HVETimeLapseEffectOptions.Builder().setMotionType(motionType)
                .setSkySpeed(skySpeed)
                .setSkyAngle(skyAngle)
                .setWaterAngle(waterAngle)
                .setWaterSpeed(waterSpeed)
                .build();
// Add the auto-timelapse effect.
imageAsset.addTimeLapseEffect(options, new HVEAIProcessCallback() {
    @Override
    public void onProgress(int progress) {
    }

    @Override
    public void onSuccess() {
    }

    @Override
    public void onError(int errorCode, String errorMessage) {
    }
});
// Stop applying the auto-timelapse effect.
imageAsset.interruptTimeLapse();
// Remove the auto-timelapse effect.
imageAsset.removeTimeLapseEffect();
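Because both the initialization and detection results arrive through callbacks, in my demo I chained the calls so that the effect options are only built after the motion type is known. The sketch below reuses only the APIs shown above; whether the callbacks arrive synchronously or on a worker thread is an assumption you should verify against the official documentation.
Code:
// Sketch from my demo: detect the motion type first, then build the options and apply the effect.
imageAsset.initTimeLapseEngine(new HVEAIInitialCallback() {
    @Override
    public void onProgress(int progress) {
    }

    @Override
    public void onSuccess() {
        // Only start detection once the engine is ready.
        imageAsset.detectTimeLapse(new HVETimeLapseDetectCallback() {
            @Override
            public void onResult(int state) {
                HVETimeLapseEffectOptions options = new HVETimeLapseEffectOptions.Builder()
                        .setMotionType(state)
                        .setSkySpeed(skySpeed)
                        .setSkyAngle(skyAngle)
                        .setWaterAngle(waterAngle)
                        .setWaterSpeed(waterSpeed)
                        .build();
                // timeLapseProcessCallback is an HVEAIProcessCallback like the one shown above.
                imageAsset.addTimeLapseEffect(options, timeLapseProcessCallback);
            }
        });
    }

    @Override
    public void onError(int errorCode, String errorMessage) {
    }
});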
Now, the auto-timelapse capability has been successfully integrated into an app.
Conclusion

When capturing scenic vistas, videos, which can show the dynamic nature of the world around us, are often a better choice than static images. In addition, when creating videos with multiple shots, dynamic pictures deliver a smoother transition effect than static ones.
However, users who are not familiar with the process of animating static images may find the results unsatisfying if they try to do so manually using computer software.
The good news is that there are now mobile apps integrated with capabilities such as Video Editor Kit's auto-timelapse feature that can create time-lapse effects for users. The generated effect appears authentic and natural, the capability is easy to use, and its integration is straightforward. With such capabilities in place, a video/image app can provide users with a more captivating user experience.
In addition to video/image editing apps, I believe the auto-timelapse capability can also be utilized by many other types of apps. What other kinds of apps do you think would benefit from such a feature? Let me know in the comments section.
Travel and life vlogs are popular among app users: such videos are compelling, covering the most attractive parts of a journey or a day. Creating one, however, requires considerable editing effort to cut out the trivial and meaningless segments of the original footage, which used to be the domain of video editing pros.
This is no longer the case. Now we have an array of intelligent mobile apps that can help us automatically extract highlights from a video, so we can focus more on spicing up the video by adding special effects, for example. I opted to use the highlight capability from HMS Core Video Editor Kit to create my own vlog editor.
How It Works

This capability assesses how appealing video frames are and then extracts the most suitable ones. To this end, the capability takes into consideration the video properties that users care about most, a conclusion drawn from user surveys and experience assessments. On this basis, the highlight capability applies a comprehensive frame assessment scheme that covers various aspects. For example:
Aesthetics evaluation. This aspect relies on a data set built upon composition, lighting, color, and more, and is the essential part of the capability.
Tags and facial expressions. These indicate frames that are likely to be detected and extracted by the highlight capability, such as frames containing people, animals, or laughter.
Frame quality and camera movement mode. The capability discards low-quality frames that are blurry, out of focus, overexposed, or shaky, to ensure such frames will not impact the quality of the finished video.
Amazingly, despite all of this analysis, the highlight capability is able to complete the extraction process in just 2 seconds.
See for yourself how the finished video by the highlight capability compares with the original video.
Backing Technology

The highlight capability stands out from the crowd by adopting models and a frame assessment scheme that are iteratively optimized. Technically speaking:
The capability uses AMediaCodec for hardware decoding and Open Graphics Library (OpenGL) for rendering frames and automatically adjusting the frame dimensions to the screen dimensions. Its algorithm uses multiple neural network models: the capability checks the device model it runs on and then automatically chooses to run on the NPU, CPU, or GPU. Consequently, the capability delivers higher running performance.
To deliver the extraction result more quickly, the highlight capability uses a two-stage algorithm that moves from sparse sampling to dense sampling, analyzes how content is distributed across the video, and adopts a frame buffer. All of these contribute to determining the most attractive video frames more efficiently. To ensure high algorithm performance, the capability adopts thread pool scheduling and a producer-consumer model, so that the video decoder and the models can run at the same time.
During the sparse sampling stage, the capability decodes and processes some (up to 15) key frames in a video. The interval between the key frames is no less than 2 seconds. During the dense sampling stage, the algorithm picks out the best key frame and then extracts frames before and after to further analyze the highlighted part of the video.
The extraction result is closely related to the key frame positions. The processing result of the highlight capability will not be ideal when the sampling points are not dense enough, for example because the video does not have enough key frames or the duration is too long (greater than 1 minute). For the capability to deliver optimal performance, it is recommended that the duration of the input video be less than 60 seconds.
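To make the two-stage idea more tangible, here is a purely illustrative sketch, not the kit's internal code, of how sparse sampling timestamps might be chosen: at most 15 samples, spaced at least 2 seconds apart, spread across the video duration.
Code:
// Illustrative only: choose up to 15 sparse sampling timestamps (in milliseconds),
// keeping at least 2 seconds between consecutive samples.
List<Long> sparseSampleTimestamps(long durationMs) {
    final int maxSamples = 15;
    final long minIntervalMs = 2000;
    int sampleCount = (int) Math.min(maxSamples, durationMs / minIntervalMs + 1);
    long interval = Math.max(minIntervalMs, durationMs / Math.max(sampleCount, 1));
    List<Long> timestamps = new ArrayList<>();
    for (long t = 0; t < durationMs && timestamps.size() < maxSamples; t += interval) {
        timestamps.add(t);
    }
    return timestamps;
}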
Let's now move on to how this capability can be integrated.
Integration Process

Preparations

Make the necessary preparations before moving on to the next part. Required steps include:
Configure the app information in AppGallery Connect.
Integrate the SDK of HMS Core.
Configure obfuscation scripts.
Declare necessary permissions.
Setting up the Video Editing Project

1. Configure the app authentication information by using either an access token or API key.
Method 1: Call setAccessToken to set an access token, which is required only once during app startup.
Code:
MediaApplication.getInstance().setAccessToken("your access token");
Method 2: Call setApiKey to set an API key, which is required only once during app startup.
Code:
MediaApplication.getInstance().setApiKey("your ApiKey");
2. Set a License ID.
This ID is used to manage the usage quotas of Video Editor Kit and must be unique.
Code:
MediaApplication.getInstance().setLicenseId("License ID");
3. Initialize the runtime environment of HuaweiVideoEditor.
When creating a video editing project, we first need to create an instance of HuaweiVideoEditor and initialize its runtime environment. When exiting the project, the instance should be released.
Create an instance of HuaweiVideoEditor.
Code:
HuaweiVideoEditor editor = HuaweiVideoEditor.create(getApplicationContext());
Determine the layout of the preview area.
Such an area renders video images, and this is implemented by SurfaceView within the fundamental capability SDK. Before the area is created, we need to specify its layout.
Code:
<LinearLayout
android:id="@+id/video_content_layout"
android:layout_width="0dp"
android:layout_height="0dp"
android:background="@color/video_edit_main_bg_color"
android:gravity="center"
android:orientation="vertical" />
// Specify a preview area.
LinearLayout mSdkPreviewContainer = view.findViewById(R.id.video_content_layout);
// Design the layout of the area.
editor.setDisplay(mSdkPreviewContainer);
Initialize the runtime environment. If the license verification fails, LicenseException will be thrown.
After the HuaweiVideoEditor instance is created, it does not use any system resources; we need to decide when to initialize its runtime environment ourselves. Only then will the fundamental capability SDK internally create the necessary threads and timers.
Code:
try {
editor.initEnvironment();
} catch (LicenseException error) {
SmartLog.e(TAG, "initEnvironment failed: " + error.getErrorMsg());
finish();
return;
}
Integrating the Highlight Capability
Code:
// Create an object that will be processed by the highlight capability.
HVEVideoSelection hveVideoSelection = new HVEVideoSelection();
// Initialize the engine of the highlight capability.
hveVideoSelection.initVideoSelectionEngine(new HVEAIInitialCallback() {
    @Override
    public void onProgress(int progress) {
        // Callback when the initialization progress is received.
    }

    @Override
    public void onSuccess() {
        // Callback when the initialization is successful.
    }

    @Override
    public void onError(int errorCode, String errorMessage) {
        // Callback when the initialization failed.
    }
});
// After the initialization is successful, extract the highlighted video. filePath indicates the video file path, and duration indicates the desired duration for the highlighted video.
hveVideoSelection.getHighLight(filePath, duration, new HVEVideoSelectionCallback() {
    @Override
    public void onResult(long start) {
        // The highlighted video is successfully extracted.
    }
});
// Release the highlight engine.
hveVideoSelection.releaseVideoSelectionEngine();
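The onResult callback only returns the start time of the highlighted segment, so in my demo I combined it with the duration I had requested to derive the segment range. Below is a minimal sketch of that; the assumption that start and duration are in milliseconds is mine, so confirm the unit in the API reference.
Code:
// Sketch: derive the highlighted segment from the returned start time and the requested duration.
hveVideoSelection.getHighLight(filePath, duration, new HVEVideoSelectionCallback() {
    @Override
    public void onResult(long start) {
        long highlightStart = start;
        long highlightEnd = start + duration;
        Log.i(TAG, "Highlight segment: " + highlightStart + " - " + highlightEnd);
        // Hand this range to whatever trimming or playback logic the app uses.
    }
});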
Conclusion

Vlogs have played a vital part in the we-media era since their appearance. In the past, only a handful of people could create a vlog, because picking out the most interesting parts of the original footage was so demanding.
Thanks to smart mobile app technology, even video editing amateurs can now create a vlog, because much of the process can be completed automatically by an app with a highlighted video extraction function.
The highlight capability from Video Editor Kit is one such function. This capability relies on a set of technologies to deliver impressive results, such as AMediaCodec, OpenGL, neural networks, a two-stage algorithm (sparse sampling to dense sampling), and more. It can be used to build either a standalone highlight extractor or a highlight extraction feature within an app.
I dare say there are two types of people in this world: people who love Toy Story and people who have not watched it.
Well, this is just the opinion of a huge fan of the animated film. When I was a child, I always dreamed of having toys that could move and play with me, like my own Buzz Lightyear. Thanks to a fancy technique called rigging, I can now bring my toys to life, although I'm probably too old for them now.
What Is Rigging in 3D Animation and Why Do We Need It?

Put simply, rigging is a process whereby a skeleton is created for a 3D model to make it move. In other words, rigging creates a set of connected virtual bones that are used to control a 3D model.
It paves the way for animation because it enables a model to be deformed, making it moveable, which is the very reason that rigging is necessary for 3D animation.
What Is Auto Rigging

3D animation has been adopted by mobile apps in a number of fields (gaming, e-commerce, video, and more) to achieve more realistic animations than 2D.
However, this graphics technique has daunted many developers (like me) because rigging, one of its major prerequisites, is difficult and time-consuming for people who are unfamiliar with modeling. Specifically, most high-performing rigging solutions have many requirements: for example, the input model should be in a standard position, seven or eight key skeletal points need to be added, inverse kinematics must be applied to the bones, and more.
Luckily, there are solutions that can automatically complete rigging, such as the auto rigging solution from HMS Core 3D Modeling Kit.
This capability delivers a wholly automated rigging process, requiring just a biped humanoid model that is generated using images taken from a mobile phone camera. After the model is input, auto rigging uses AI algorithms for limb rigging and generates the model skeleton and skin weights (which determine the degree to which a bone can influence a part of the mesh). Then, the capability changes the orientation and position of the skeleton so that the model can perform a range of preset actions, like walking, running, and jumping. Besides, the rigged model can also be moved according to an action generated by using motion capture technology, or be imported into major 3D engines for animation.
Lower requirements do not compromise rigging accuracy. Auto rigging is built upon hundreds of thousands of 3D model rigging data records. Thanks to some fine-tuned data records, the capability delivers ideal algorithm accuracy and generalization.
I know that words alone are no proof, so check out the animated model I've created using the capability.
Movement is smooth, making the cute panda move almost like a real one. Now I'd like to show you how I created this model and how I integrated auto rigging into my demo app.
Integration Procedure

Preparations

Before moving on to the real integration work, make the necessary preparations, which include:
Configure app information in AppGallery Connect.
Integrate the HMS Core SDK with the app project, which includes Maven repository address configuration.
Configure obfuscation scripts.
Declare necessary permissions.
Capability Integration

1. Set an access token or API key (both can be found in agconnect-services.json) during app initialization for app authentication.
Using the access token: Call setAccessToken to set an access token. This task is required only once during app initialization.
Code:
ReconstructApplication.getInstance().setAccessToken("your AccessToken");
Using the API key: Call setApiKey to set an API key. This task is also required only once during app initialization.
Code:
ReconstructApplication.getInstance().setApiKey("your api_key");
Using the access token is recommended. If you prefer the API key, note that it is assigned to the app when the app is created in AppGallery Connect.
2. Create a 3D object reconstruction engine and initialize it. Then, create an auto rigging configurator.
Code:
// Create a 3D object reconstruction engine.
Modeling3dReconstructEngine modeling3dReconstructEngine = Modeling3dReconstructEngine.getInstance(context);
// Create an auto rigging configurator.
Modeling3dReconstructSetting setting = new Modeling3dReconstructSetting.Factory()
// Set the working mode of the engine to PICTURE.
.setReconstructMode(Modeling3dReconstructConstants.ReconstructMode.PICTURE)
// Set the task type to auto rigging.
.setTaskType(Modeling3dReconstructConstants.TaskType.AUTO_RIGGING)
.create();
3. Create a listener for the result of uploading images of an object.
Code:
private Modeling3dReconstructUploadListener uploadListener = new Modeling3dReconstructUploadListener() {
    @Override
    public void onUploadProgress(String taskId, double progress, Object ext) {
        // Callback when the upload progress is received.
    }

    @Override
    public void onResult(String taskId, Modeling3dReconstructUploadResult result, Object ext) {
        // Callback when the upload is successful.
    }

    @Override
    public void onError(String taskId, int errorCode, String message) {
        // Callback when the upload failed.
    }
};
4. Use the 3D object reconstruction configurator to initialize the task, set an upload listener for the engine created in step 2, and upload images.
Code:
// Use the configurator to initialize the task, which should be done in a sub-thread.
Modeling3dReconstructInitResult modeling3dReconstructInitResult = modeling3dReconstructEngine.initTask(setting);
String taskId = modeling3dReconstructInitResult.getTaskId();
// Set an upload listener.
modeling3dReconstructEngine.setReconstructUploadListener(uploadListener);
// Call the uploadFile API of the 3D object reconstruction engine to upload images.
modeling3dReconstructEngine.uploadFile(taskId, filePath);
5. Query the status of the auto rigging task.
Code:
// Initialize the task processing class.
Modeling3dReconstructTaskUtils modeling3dReconstructTaskUtils = Modeling3dReconstructTaskUtils.getInstance(context);
// Call queryTask in a sub-thread to query the task status.
Modeling3dReconstructQueryResult queryResult = modeling3dReconstructTaskUtils.queryTask(taskId);
// Obtain the task status.
int status = queryResult.getStatus();
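Since queryTask must be called in a sub-thread, here is a minimal sketch of running the status check on a worker thread and logging the result; interpret the returned value using the status constants defined in Modeling3dReconstructConstants, which I have not hard-coded here.
Code:
// Illustrative only: query the auto rigging task status off the main thread and log it.
new Thread(() -> {
    Modeling3dReconstructQueryResult queryResult = modeling3dReconstructTaskUtils.queryTask(taskId);
    Log.d(TAG, "Auto rigging task status: " + queryResult.getStatus());
}).start();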
6. Create a listener for the result of model file download.
Code:
private Modeling3dReconstructDownloadListener modeling3dReconstructDownloadListener = new Modeling3dReconstructDownloadListener() {
    @Override
    public void onDownloadProgress(String taskId, double progress, Object ext) {
        // Callback when download progress is received.
    }

    @Override
    public void onResult(String taskId, Modeling3dReconstructDownloadResult result, Object ext) {
        // Callback when download is successful.
    }

    @Override
    public void onError(String taskId, int errorCode, String message) {
        // Callback when download failed.
    }
};
7. Pass the download listener to the 3D object reconstruction engine, to download the rigged model.
Code:
// Set download configurations.
Modeling3dReconstructDownloadConfig downloadConfig = new Modeling3dReconstructDownloadConfig.Factory()
// Set the model file format to OBJ or glTF.
.setModelFormat(Modeling3dReconstructConstants.ModelFormat.OBJ)
// Set the texture map mode to normal mode or PBR mode.
.setTextureMode(Modeling3dReconstructConstants.TextureMode.PBR)
.create();
// Set the download listener.
modeling3dReconstructEngine.setReconstructDownloadListener(modeling3dReconstructDownloadListener);
// Call downloadModelWithConfig, passing the task ID, path to which the downloaded file will be saved, and download configurations, to download the rigged model.
modeling3dReconstructEngine.downloadModelWithConfig(taskId, fileSavePath, downloadConfig);
Where to Use

Auto rigging is used in many scenarios, for example:
Gaming. The most direct way of using auto rigging is to create moveable characters in a 3D game. Or, I think we can combine it with AR to create animated models that can appear in the camera display of a mobile device, which will interact with users.
Online education. We can use auto rigging to animate 3D models of popular toys, and liven them up with dance moves, voice-overs, and nursery rhymes to create educational videos. These models can be used in educational videos to appeal to kids more.
E-commerce. Anime figurines look rather plain compared to how they behave in animes. To spice up the figurines, we can use auto rigging to animate 3D models that will look more engaging and dynamic.
Conclusion

3D animation is widely used in mobile apps, because it presents objects in a more fun and interactive way.
A key technique for creating great 3D animations is rigging. Conventional rigging requires modeling know-how and expertise, which puts off many amateur modelers.
Auto rigging is the perfect solution to this challenge because its fully automated rigging process can produce highly accurate rigged models that can be easily animated using major engines available on the market. Auto rigging not only lowers the costs and requirements of 3D model generation and animation, but also helps 3D models look more appealing.
Changing an image/video background has always been a hassle, and the trickiest part is extracting the elements other than the background.
Traditionally, it requires us to use a PC image-editing program that allows us to select the element, add a mask, replace the canvas, and more. If the element has an extremely uneven border, then the whole process can be very time-consuming.
Luckily, ML Kit from HMS Core offers a solution that streamlines the process: the image segmentation service, which supports both images and videos. This service draws upon a deep learning framework, as well as detection and recognition technology. The service can automatically recognize — within seconds — the elements and scenario of an image or a video, delivering a pixel-level recognition accuracy. By using a novel framework of semantic segmentation, image segmentation labels each and every pixel in an image and supports 11 element categories including humans, the sky, plants, food, buildings, and mountains.
This service is a great choice for entertainment apps. For example, an image editing app can use the service to achieve swift background replacement. A photo-taking app can count on this service to optimize different elements (for example, green plants) and make them appear more attractive.
Below is an example showing how the service works in an app.
Cutout is another field where image segmentation plays a role. Most cutout algorithms, however, cannot delicately handle fine border details such as those of hair. The team behind ML Kit's image segmentation has been working on its algorithms for handling hair and highly hollowed-out subjects. As a result, the capability can now retain hair details during live streaming and image processing, delivering a better cutout effect.
Development Procedure
Before app development, there are some necessary preparations. In addition, the Maven repository address should be configured for the SDK, and the SDK should be integrated into the app project.
The image segmentation service offers three capabilities: human body segmentation, multiclass segmentation, and hair segmentation.
Human body segmentation: supports videos and images. The capability segments the human body from its background and is ideal for those who only need to segment the human body and background. The return value of this capability contains the coordinate array of the human body, human body image with a transparent background, and gray-scale image with a white human body and black background. Based on the return value, your app can further process an image to, for example, change the video background or cut out the human body.
Multiclass segmentation: offers the return value of the coordinate array of each element. For example, when the image processed by the capability contains four elements (human body, sky, plant, and cat & dog), the return value is the coordinate array of the four elements. Your app can further process these elements, such as replacing the sky.
Hair segmentation: segments hair from the background, with only images supported. The return value is a coordinate array of the hair element. For example, when the image processed by the capability is a selfie, the return value is the coordinate array of the hair element. Your app can then further process the element by, for example, changing the hair color.
Static Image Segmentation
1. Create an image segmentation analyzer.
Integrate the human body segmentation model package.
Code:
// Method 1: Use default parameter settings to configure the image segmentation analyzer.
// The default mode is human body segmentation in fine mode. All segmentation results of human body segmentation are returned (pixel-level label information, human body image with a transparent background, gray-scale image with a white human body and black background, and an original image for segmentation).
MLImageSegmentationAnalyzer analyzer = MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer();
// Method 2: Use MLImageSegmentationSetting to customize the image segmentation analyzer.
MLImageSegmentationSetting setting = new MLImageSegmentationSetting.Factory()
// Set whether to use fine segmentation. true indicates yes, and false indicates no (fast segmentation).
.setExact(false)
// Set the segmentation mode to human body segmentation.
.setAnalyzerType(MLImageSegmentationSetting.BODY_SEG)
// Set the returned result types.
// MLImageSegmentationScene.ALL: All segmentation results are returned (pixel-level label information, human body image with a transparent background, gray-scale image with a white human body and black background, and an original image for segmentation).
// MLImageSegmentationScene.MASK_ONLY: Only pixel-level label information and an original image for segmentation are returned.
// MLImageSegmentationScene.FOREGROUND_ONLY: A human body image with a transparent background and an original image for segmentation are returned.
// MLImageSegmentationScene.GRAYSCALE_ONLY: A gray-scale image with a white human body and black background and an original image for segmentation are returned.
.setScene(MLImageSegmentationScene.FOREGROUND_ONLY)
.create();
MLImageSegmentationAnalyzer analyzer = MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer(setting);
Integrate the multiclass segmentation model package.
When the multiclass segmentation model package is used for processing an image, an image segmentation analyzer can be created only by using MLImageSegmentationSetting.
Code:
MLImageSegmentationSetting setting = new MLImageSegmentationSetting
.Factory()
// Set whether to use fine segmentation. true indicates yes, and false indicates no (fast segmentation).
.setExact(true)
// Set the segmentation mode to image segmentation.
.setAnalyzerType(MLImageSegmentationSetting.IMAGE_SEG)
.create();
MLImageSegmentationAnalyzer analyzer = MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer(setting);
Integrate the hair segmentation model package.
When the hair segmentation model package is used for processing an image, a hair segmentation analyzer can be created only by using MLImageSegmentationSetting.
Code:
MLImageSegmentationSetting setting = new MLImageSegmentationSetting
.Factory()
// Set the segmentation mode to hair segmentation.
.setAnalyzerType(MLImageSegmentationSetting.HAIR_SEG)
.create();
MLImageSegmentationAnalyzer analyzer = MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer(setting);
2. Create an MLFrame object by using android.graphics.Bitmap for the analyzer to detect images. JPG, JPEG, and PNG images are supported. It is recommended that the image size range from 224 x 224 px to 1280 x 1280 px.
Code:
// Create an MLFrame object using the bitmap, which is the image data in bitmap format.
MLFrame frame = MLFrame.fromBitmap(bitmap);
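Because very large images fall outside the recommended size range, I added a small helper of my own (not part of the SDK) that downscales the bitmap so its longer side stays within 1280 px before building the MLFrame.
Code:
// Optional helper: downscale oversized bitmaps to the recommended range before creating the MLFrame.
private MLFrame buildFrame(Bitmap source) {
    int maxSide = Math.max(source.getWidth(), source.getHeight());
    Bitmap scaled = source;
    if (maxSide > 1280) {
        float ratio = 1280f / maxSide;
        scaled = Bitmap.createScaledBitmap(source,
                Math.round(source.getWidth() * ratio),
                Math.round(source.getHeight() * ratio), true);
    }
    return MLFrame.fromBitmap(scaled);
}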
3. Call asyncAnalyseFrame for image segmentation.
Code:
// Create a task to process the result returned by the analyzer.
Task<MLImageSegmentation> task = analyzer.asyncAnalyseFrame(frame);
// Asynchronously process the result returned by the analyzer.
task.addOnSuccessListener(new OnSuccessListener<MLImageSegmentation>() {
    @Override
    public void onSuccess(MLImageSegmentation segmentation) {
        // Callback when recognition is successful.
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Callback when recognition failed.
    }
});
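When the scene is set to MLImageSegmentationScene.FOREGROUND_ONLY (or ALL), the result carries a human body image with a transparent background, which can be composited onto a new background. The sketch below is my own illustration: it assumes the result exposes the foreground via a getForeground() method returning a Bitmap (check the ML Kit API reference for the exact accessor) and that newBackground is a Bitmap you supply.
Code:
// Illustrative sketch: draw the new background first, then the cut-out person on top of it.
private Bitmap replaceBackground(MLImageSegmentation segmentation, Bitmap newBackground) {
    Bitmap foreground = segmentation.getForeground();
    Bitmap output = Bitmap.createBitmap(newBackground.getWidth(), newBackground.getHeight(),
            Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(output);
    canvas.drawBitmap(newBackground, 0f, 0f, null);
    canvas.drawBitmap(Bitmap.createScaledBitmap(foreground, newBackground.getWidth(),
            newBackground.getHeight(), true), 0f, 0f, null);
    return output;
}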
4. Stop the analyzer and release the recognition resources when recognition ends.
Code:
if (analyzer != null) {
try {
analyzer.stop();
} catch (IOException e) {
// Exception handling.
}
}
The asynchronous call mode is used in the preceding example. Image segmentation also supports synchronous call of the analyseFrame function to obtain the detection result:
Code:
SparseArray<MLImageSegmentation> segmentations = analyzer.analyseFrame(frame);
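Since the synchronous call blocks the calling thread, it is best run off the UI thread. Below is a minimal sketch of doing so and iterating the returned SparseArray; the threading choice is mine, not a requirement stated by the SDK.
Code:
// Illustrative only: run the synchronous analysis on a worker thread and iterate the results.
new Thread(() -> {
    SparseArray<MLImageSegmentation> results = analyzer.analyseFrame(frame);
    for (int i = 0; i < results.size(); i++) {
        MLImageSegmentation result = results.valueAt(i);
        // Process each segmentation result here, for example by extracting the foreground.
    }
}).start();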
References
Home page of HMS Core ML Kit
Development Guide of HMS Core ML Kit