Ads Kit - Adding a Banner Ad - Huawei Developers

Banner ads are rectangular images that occupy a spot at the top, middle, or bottom of an app's layout. Banner ads refresh automatically at regular intervals. When a user taps a banner ad, the user is redirected to the advertiser's page in most cases.
1. Adding a Banner Ad
Step 1: Adding BannerView
Add BannerView using either of the following methods to display banner ads:
Editing the XML layout file
Add BannerView to the XML layout file, and set the hwads:adId and hwads:bannerSize attributes to the ad slot ID and ad size respectively.
The following sample code shows how to add BannerView to an XML layout file:
Code:
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout
    xmlns:hwads="http://schemas.android.com/apk/res-auto"
    ... >
    <com.huawei.hms.ads.banner.BannerView
        android:id="@+id/hw_banner_view"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_alignParentBottom="true"
        android:layout_centerHorizontal="true"
        hwads:adId="testw6vs28auh3"
        hwads:bannerSize="BANNER_SIZE_320_50"/>
</RelativeLayout>
The following sample code shows how to obtain BannerView:
BannerView bannerView = findViewById(R.id.hw_banner_view);
Coding
Add BannerView to the code, and set the ad slot ID and ad size.
Code:
BannerView bannerView = new BannerView(this);
// "testw6vs28auh3" is a dedicated test ad slot ID.
bannerView.setAdId("testw6vs28auh3");
bannerView.setBannerAdSize(BannerAdSize.BANNER_SIZE_320_50);
FrameLayout adFrameLayout = findViewById(R.id.ad_frame);
adFrameLayout.addView(bannerView);
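The code above assumes that the activity's layout contains a container with the ID ad_frame. A minimal sketch of such a container (the ID comes from the code above; the remaining attributes are illustrative):
Code:
<FrameLayout
    android:id="@+id/ad_frame"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:layout_gravity="bottom" />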
Step 2: Loading an Ad
After adding BannerView, load an ad using the loadAd() method in the BannerView class.
Code:
...
import com.huawei.hms.ads.AdParam;
import com.huawei.hms.ads.BannerAdSize;
import com.huawei.hms.ads.banner.BannerView;

public class MainActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        // Obtain BannerView.
        BannerView bannerView = findViewById(R.id.hw_banner_view);
        // Set the ad slot ID and ad size. "testw6vs28auh3" is a dedicated test ad slot ID.
        bannerView.setAdId("testw6vs28auh3");
        bannerView.setBannerAdSize(BannerAdSize.BANNER_SIZE_320_50);
        // Create an ad request to load an ad.
        AdParam adParam = new AdParam.Builder().build();
        bannerView.loadAd(adParam);
    }
}
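When the hosting activity is destroyed, it is good practice to release the banner as well. A minimal sketch, assuming bannerView is kept as an activity field and that BannerView exposes a destroy() method (check the Ads Kit API Reference for the exact lifecycle methods):
Code:
@Override
protected void onDestroy() {
    super.onDestroy();
    // Assumption: destroy() releases the ad resources held by the banner view.
    if (bannerView != null) {
        bannerView.destroy();
    }
}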
(Optional) Step 3: Listening for Ad Events
You can listen for ad events using the methods in the AdListener class.
Code:
bannerView.setAdListener(adListener);

private AdListener adListener = new AdListener() {
    @Override
    public void onAdLoaded() {
        // Called when an ad is loaded successfully.
    }

    @Override
    public void onAdFailed(int errorCode) {
        // Called when an ad fails to be loaded.
    }

    @Override
    public void onAdOpened() {
        // Called when an ad is opened.
    }

    @Override
    public void onAdClicked() {
        // Called when a user taps an ad.
    }

    @Override
    public void onAdLeave() {
        // Called when a user has left the app.
    }

    @Override
    public void onAdClosed() {
        // Called when an ad is closed.
    }
};
For details about the methods, please refer to AdListener in the API Reference.
2. Ad Sizes
The standard banner ad sizes applicable to phones are defined by the BannerAdSize constants.
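Commonly documented BannerAdSize values for phones include the following; treat this list as indicative and confirm the exact set against the current Ads Kit reference, since it may vary by SDK version:
BANNER_SIZE_320_50: 320 x 50 dp
BANNER_SIZE_320_100: 320 x 100 dp
BANNER_SIZE_300_250: 300 x 250 dp
BANNER_SIZE_360_57: 360 x 57 dp
BANNER_SIZE_360_144: 360 x 144 dp
BANNER_SIZE_SMART: width matches the screen, height adaptive (see Smart Banner below)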
3. Smart Banner
Smart banner ads match the screen width in any orientation, on devices of any size. When loading an ad, the HUAWEI Ads SDK creates an ad view whose width equals the screen width in the current orientation; the ad height is determined by the screen height in that orientation.
The smart banner ad height depends on the device and orientation: on a mobile phone, it is 50 dp in portrait mode and 32 dp in landscape mode; on a tablet, it is usually 90 dp, regardless of whether the screen is in portrait or landscape mode.
When an ad image is not wide enough to fill the allocated space, the system centers the image and fills the remaining space on both sides.
When using a smart banner ad, set the ad size to BANNER_SIZE_SMART and the width to match_parent. Set the height to wrap_content, because the actual height may vary between devices, as shown in the sample below.
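A minimal layout sketch for a smart banner, reusing the hwads attributes and the dedicated test ad slot ID from Step 1 (the view ID is illustrative):
Code:
<com.huawei.hms.ads.banner.BannerView
    android:id="@+id/hw_smart_banner_view"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    hwads:adId="testw6vs28auh3"
    hwads:bannerSize="BANNER_SIZE_SMART"/>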
Caution:
Smart banner ads are not supported in the Chinese mainland, and users there may not receive them.
4. Testing a Banner Ad
When testing banner ads, use the dedicated test ad slot ID to obtain test ads. This avoids invalid ad clicks during the test. The test ad slot ID is used only for function commissioning. Before releasing your app, apply for a formal ad slot ID and replace the test ad slot ID with the formal one.
The dedicated ad slot ID for banner ad testing is testw6vs28auh3, the same ID used in the samples above.
You can download the sample code for banner ads and run it to see the test banner displayed.

Related

Scene Kit Puts AR Placement Apps at Your Fingertips

AR placement apps have enhanced daily life in a myriad of ways, from AR furniture placement for home furnishing and interior decorating, to AR fitting in retail, to AR-based 3D models in education, which give students an opportunity to take an in-depth look at the internal workings of objects.
By integrating HUAWEI Scene Kit, you're only eight steps away from launching an AR placement app of your own. ARView, a set of AR-oriented scenario-based APIs in Scene Kit, uses the plane detection capability of AR Engine, along with the graphics rendering capability of Scene Kit, to load and render 3D materials in common AR scenes.
ARView Functions
With ARView, you'll be able to:
1. Load and render 3D materials in AR scenes.
2. Set whether to display the lattice plane (consisting of white lattice points) to help select a plane in a real-world view.
3. Tap an object placed on the lattice plane to select it. Once selected, the object will turn red. You can then move, resize, or rotate it as needed.
Development Procedure
Before using ARView, you'll need to integrate the Scene Kit SDK into your Android Studio project. For details, please refer to Integrating the HMS Core SDK.
ARView inherits from GLSurfaceView, and overrides lifecycle-related methods. It can facilitate the creation of an ARView-based app in just eight steps:
1. Create an ARViewActivity that inherits from Activity. Add a Button to load materials.
Code:
public class ARViewActivity extends Activity {
    private ARView mARView;
    private Button mButton;
    // Indicates whether a material is currently loaded.
    private boolean isLoadResource = false;
}
2. Add an ARView to the layout.
Code:
<com.huawei.hms.scene.sdk.ARView
    android:id="@+id/ar_view"
    android:layout_width="match_parent"
    android:layout_height="match_parent">
</com.huawei.hms.scene.sdk.ARView>
Note: To achieve the desired ARView experience, your app should not support screen orientation changes or split-screen mode. Add the following configuration to the activity element in the AndroidManifest.xml file:
Code:
android:screenOrientation="portrait"
android:resizeableActivity="false"
3. Override the onCreate method of ARViewActivity and obtain the ARView.
Code:
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_ar_view);
    mARView = findViewById(R.id.ar_view);
    mButton = findViewById(R.id.button);
}
4. Add a Switch button in the onCreate method to set whether or not to display the lattice plane.
Code:
Switch mSwitch = findViewById(R.id.show_plane_view);
mSwitch.setChecked(true);
mSwitch.setOnCheckedChangeListener(new CompoundButton.OnCheckedChangeListener() {
    @Override
    public void onCheckedChanged(CompoundButton buttonView, boolean isChecked) {
        mARView.enablePlaneDisplay(isChecked);
    }
});
Note: Add the Switch button in the layout before using it.
Code:
<Switch
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:id="@+id/show_plane_view"
    android:layout_alignParentTop="true"
    android:layout_marginTop="15dp"
    android:layout_alignParentEnd="true"
    android:layout_marginEnd="15dp"
    android:layout_gravity="end"
    android:text="@string/show_plane"
    android:theme="@style/AppTheme"
    tools:ignore="RelativeOverlap" />
5. Add a button callback method. Tapping the button once will load a material, and tapping it again will clear the material.
Code:
public void onBtnClearResourceClicked(View view) {
    if (!isLoadResource) {
        mARView.loadAsset("ARView/scene.gltf");
        isLoadResource = true;
        mButton.setText(R.string.btn_text_clear_resource);
    } else {
        mARView.clearResource();
        mARView.loadAsset("");
        isLoadResource = false;
        mButton.setText(R.string.btn_text_load);
    }
}
Note: The onBtnClearResourceClicked method must be registered in the button's onClick layout attribute, so that tapping the button loads or clears the material, as shown below.
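A minimal sketch of such a button declaration (the onClick value matches the callback above; the ID and initial text string are assumed from the sample's resources):
Code:
<Button
    android:id="@+id/button"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="@string/btn_text_load"
    android:onClick="onBtnClearResourceClicked" />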
6. Override the onPause method of ARViewActivity and call the onPause method of ARView.
Code:
@Override
protected void onPause() {
    super.onPause();
    mARView.onPause();
}
7. Override the onResume method of ARViewActivity and call the onResume method of ARView.
Code:
@Override
protected void onResume() {
    super.onResume();
    mARView.onResume();
}
8. Override the onDestroy method for ARViewActivity and call the destroy method of ARView.
Code:
@Override
protected void onDestroy() {
    super.onDestroy();
    mARView.destroy();
}
9. (Optional) After the material is loaded, call setInitialPose to set its initial state (scale and rotation).
Code:
float[] scale = new float[] { 0.1f, 0.1f, 0.1f };
float[] rotation = new float[] { 0.707f, 0.0f, -0.707f, 0.0f };
mARView.setInitialPose(scale, rotation);
Effects
You can develop a basic AR placement app simply by calling ARView from Scene Kit, as detailed in the steps above. If you are interested in this implementation method, you can view the Scene Kit demo on GitHub.
The ARView capability can be used to do far more than just develop AR placement apps; it can also help you implement a range of engaging functions, such as AR games, virtual exhibitions, and AR navigation features.
GitHub to download demos and sample code

AR Engine and Scene Kit Deliver Virtual Try-On of Glasses

Background
The ubiquity of the Internet and smart devices has made e-commerce the preferred choice for countless consumers. However, many longtime users have grown wary of the stagnant shopping model, so enhancing the user experience is critical to stimulating further growth in e-commerce and attracting a broader user base. HMS Core offers intelligent graphics processing capabilities that identify a user's facial and physical features; combined with a new display paradigm, they enable users to try on products virtually through their phones, for a groundbreaking digital shopping experience.
Scenarios
AR Engine and Scene Kit allow users to virtually try on products found on shopping apps and shopping list sharing apps, which in turn leads to greater customer satisfaction and fewer returns and replacements.
Effects
A user opens a shopping app, then taps a product's picture to view its 3D model, which they can rotate, enlarge, and shrink for interactive viewing.
Getting Started
Configuring the Maven Repository Address for the HMS Core SDK
Open the project-level build.gradle file in your Android Studio project. Go to buildscript > repositories and allprojects > repositories to configure the Maven repository address for the HMS Core SDK.
Code:
buildscript {
    repositories {
        ...
        maven { url 'https://developer.huawei.com/repo/' }
    }
}

allprojects {
    repositories {
        ...
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
Adding Build Dependencies for the HMS Core SDK
Open the app-level build.gradle file of your project. Add build dependencies in the dependencies block, using the Full-SDK of Scene Kit and the AR Engine SDK.
Code:
dependencies {
    ...
    implementation 'com.huawei.scenekit:full-sdk:5.0.2.302'
    implementation 'com.huawei.hms:arenginesdk:2.13.0.4'
}
For details about the preceding steps, please refer to the development guide for Scene Kit on HUAWEI Developers.
Adding Permissions in the AndroidManifest.xml File
Open the AndroidManifest.xml file in the main directory and add the camera permission above the <application> element.
Code:
<!--Camera permission-->
<uses-permission android:name="android.permission.CAMERA" />
Development Procedure
Configuring MainActivity
Add two buttons to the layout configuration file of MainActivity. Set the background of the onBtnShowProduct button to the preview image of the product, and add the text "Try it on!" to the onBtnTryProductOn button to guide the user to the feature.
Code:
<Button
    android:layout_width="260dp"
    android:layout_height="160dp"
    android:background="@drawable/sunglasses"
    android:onClick="onBtnShowProduct" />

<Button
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="Try it on!"
    android:textAllCaps="false"
    android:textSize="24sp"
    android:onClick="onBtnTryProductOn" />
If the user taps the onBtnShowProduct button, the 3D model of the product will be loaded. After tapping the onBtnTryProductOn button, the user will enter the AR fitting screen.
Configuring the 3D Model Display for a Product
1. Create a SceneSampleView inherited from SceneView.
Code:
public class SceneSampleView extends SceneView {
    public SceneSampleView(Context context) {
        super(context);
    }

    public SceneSampleView(Context context, AttributeSet attributeSet) {
        super(context, attributeSet);
    }
}
Override the surfaceCreated method to create and initialize SceneView. Then call loadScene to load the materials, which should be in glTF or GLB format, so that they are rendered and displayed. Call loadSkyBox to load skybox materials, loadSpecularEnvTexture to load specular maps, and loadDiffuseEnvTexture to load diffuse maps. These environment files should be in DDS (cubemap) format.
All loaded materials are stored in the src > main > assets > SceneView folder.
Code:
@Override
public void surfaceCreated(SurfaceHolder holder) {
    super.surfaceCreated(holder);
    // Load the materials to be rendered.
    loadScene("SceneView/sunglasses.glb");
    // Call loadSkyBox to load skybox texture materials.
    loadSkyBox("SceneView/skyboxTexture.dds");
    // Call loadSpecularEnvTexture to load specular texture materials.
    loadSpecularEnvTexture("SceneView/specularEnvTexture.dds");
    // Call loadDiffuseEnvTexture to load diffuse texture materials.
    loadDiffuseEnvTexture("SceneView/diffuseEnvTexture.dds");
}
2. Create a SceneViewActivity inherited from Activity.
Call setContentView in the onCreate method, passing in the layout file that declares the SceneSampleView you created via an XML tag.
Code:
public class SceneViewActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_sample);
    }
}
Create SceneSampleView in the layout file as follows:
Code:
<com.huawei.scene.demo.sceneview.SceneSampleView
    android:layout_width="match_parent"
    android:layout_height="match_parent"/>
3. Create an onBtnShowProduct method in MainActivity.
When the user taps the onBtnShowProduct button, SceneViewActivity is called to load, render, and finally display the 3D model of the product.
Code:
public void onBtnShowProduct(View view) {
    startActivity(new Intent(this, SceneViewActivity.class));
}
Configuring AR Fitting for a Product
Product virtual try-on is easily accessible, thanks to the facial recognition, graphics rendering, and AR display capabilities offered by HMS Core.
1. Create a FaceViewActivity inherited from Activity, and create the corresponding layout file.
Create face_view in the layout file to display the try-on effect.
Code:
<com.huawei.hms.scene.sdk.FaceView
    android:id="@+id/face_view"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:sdk_type="AR_ENGINE">
</com.huawei.hms.scene.sdk.FaceView>
Create a switch. When the user taps it, they can check the difference between the appearances with and without the virtual glasses.
Code:
<Switch
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:id="@+id/switch_view"
    android:layout_alignParentTop="true"
    android:layout_marginTop="15dp"
    android:layout_alignParentEnd="true"
    android:layout_marginEnd="15dp"
    android:text="Try it on"
    android:theme="@style/AppTheme"
    tools:ignore="RelativeOverlap" />
2. Override the onCreate method in FaceViewActivity to obtain FaceView.
Code:
public class FaceViewActivity extends Activity {
    private FaceView mFaceView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_face_view);
        mFaceView = findViewById(R.id.face_view);
    }
}
3. Create a listener method for the switch. When the switch is enabled, the loadAsset method is called to load the 3D model of the product. Set the position for facial recognition in LandmarkType.
Code:
mSwitch.setOnCheckedChangeListener(new CompoundButton.OnCheckedChangeListener() {
    @Override
    public void onCheckedChanged(CompoundButton buttonView, boolean isChecked) {
        mFaceView.clearResource();
        if (isChecked) {
            // Load materials.
            int index = mFaceView.loadAsset("FaceView/sunglasses.glb", LandmarkType.TIP_OF_NOSE);
        }
    }
});
Use setInitialPose to adjust the size and position of the model. Create the position, rotation, and scale arrays and pass values to them.
Code:
final float[] position = { 0.0f, 0.0f, -0.15f };
final float[] rotation = { 0.0f, 0.0f, 0.0f, 0.0f };
final float[] scale = { 2.0f, 2.0f, 0.3f };
Put the following code below the loadAsset line:
Code:
mFaceView.setInitialPose(index, position, scale, rotation);
4. Create an onBtnTryProductOn method in MainActivity. When the user taps this button, FaceViewActivity is launched, enabling the user to view the try-on effect.
Code:
public void onBtnTryProductOn(View view) {
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
            != PackageManager.PERMISSION_GRANTED) {
        // FACE_VIEW_REQUEST_CODE is a request code constant defined in MainActivity.
        ActivityCompat.requestPermissions(
                this, new String[]{ Manifest.permission.CAMERA }, FACE_VIEW_REQUEST_CODE);
    } else {
        startActivity(new Intent(this, FaceViewActivity.class));
    }
}
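If the camera permission has to be requested, the result arrives in onRequestPermissionsResult. A minimal sketch of handling it (this override is not part of the original sample; FACE_VIEW_REQUEST_CODE is the same constant used above):
Code:
@Override
public void onRequestPermissionsResult(int requestCode, String[] permissions, int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    // Start the try-on screen only if the user granted the camera permission.
    if (requestCode == FACE_VIEW_REQUEST_CODE
            && grantResults.length > 0
            && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
        startActivity(new Intent(this, FaceViewActivity.class));
    }
}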
References
For more details, you can go to:
AR Engine official website; Scene Kit official website
Reddit to join our developer discussion
GitHub to download sample code
Stack Overflow to solve any integration problems
Does it support Quick App?
Hello sir,
Where can I get the assets folder for the code above (virtual glasses try-on)?
Mamta23 said:
Hello sir,
Where can I get the assets folder for the code above (virtual glasses try-on)?
Hi,
You can get the demo here: https://github.com/HMS-Core/hms-AREngine-demo
Hope this helps!

Note on Developing a Person Tracking Function

Background
Videos are memories — so why not spend more time making them look better? Many mobile apps on the market offer only basic editing functions, such as applying filters and adding stickers. That is not enough for those who want to create dynamic videos in which a moving person stays in focus. Traditionally, this requires adding keyframes and manually adjusting the video image, which can scare off many amateur video editors.
I am one of those people and I've been looking for an easier way of implementing this kind of feature. Fortunately for me, I stumbled across the track person capability from HMS Core Video Editor Kit, which automatically generates a video that centers on a moving person, as the images below show.
Before using the capability
After using the capability
Thanks to the capability, I can now confidently create a video with the person tracking effect.
Let's see how the function is developed.
Development Process
Preparations
Configure the app information in AppGallery Connect.
Project Configuration
1. Set the authentication information for the app via an access token or API key.
Use the setAccessToken method to set an access token during app initialization. The access token needs to be set only once.
Code:
MediaApplication.getInstance().setAccessToken("your access token");
Or, use setApiKey to set an API key during app initialization. The API key needs to be set only once.
Code:
MediaApplication.getInstance().setApiKey("your ApiKey");
2. Set a unique License ID.
Code:
MediaApplication.getInstance().setLicenseId("License ID");
3. Initialize the runtime environment for HuaweiVideoEditor.
When creating a video editing project, first create a HuaweiVideoEditor object and initialize its runtime environment. Release this object when exiting a video editing project.
(1) Create a HuaweiVideoEditor object.
Code:
HuaweiVideoEditor editor = HuaweiVideoEditor.create(getApplicationContext());
(2) Specify the preview area position.
The preview area renders video images. It is implemented via a SurfaceView created in the SDK, and its position must be specified before the area is created.
Code:
<LinearLayout
    android:id="@+id/video_content_layout"
    android:layout_width="0dp"
    android:layout_height="0dp"
    android:background="@color/video_edit_main_bg_color"
    android:gravity="center"
    android:orientation="vertical" />

// Specify the preview area position.
LinearLayout mSdkPreviewContainer = view.findViewById(R.id.video_content_layout);
// Configure the preview area layout.
editor.setDisplay(mSdkPreviewContainer);
(3) Initialize the runtime environment. A LicenseException will be thrown if license verification fails.
Creating the HuaweiVideoEditor object does not occupy any system resources. You must manually set the time at which the runtime environment is initialized; the necessary threads and timers will then be created in the SDK.
Code:
try {
    editor.initEnvironment();
} catch (LicenseException error) {
    SmartLog.e(TAG, "initEnvironment failed: " + error.getErrorMsg());
    finish();
    return;
}
4. Add a video or an image.
Create a video lane. Add a video or an image to the lane using the file path.
Code:
// Obtain the HVETimeLine object.
HVETimeLine timeline = editor.getTimeLine();
// Create a video lane.
HVEVideoLane videoLane = timeline.appendVideoLane();
// Add a video to the end of the lane.
HVEVideoAsset videoAsset = videoLane.appendVideoAsset("test.mp4");
// Add an image to the end of the video lane.
HVEImageAsset imageAsset = videoLane.appendImageAsset("test.jpg");
Function Building
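Note: In the snippet below, visibleAsset is assumed to be the asset to which person tracking is applied (for example, the video asset added to the lane above), bitmap is assumed to be the frame the user tapped, and position2D the tap coordinates; these variables are not defined in the snippet itself.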
Code:
// Initialize the capability engine.
visibleAsset.initHumanTrackingEngine(new HVEAIInitialCallback() {
    @Override
    public void onProgress(int progress) {
        // Initialization progress.
    }

    @Override
    public void onSuccess() {
        // The initialization is successful.
    }

    @Override
    public void onError(int errorCode, String errorMessage) {
        // The initialization failed.
    }
});

// Track a person using the coordinates. Coordinates of the two vertices that define the rectangle containing the person are returned.
List<Float> rects = visibleAsset.selectHumanTrackingPerson(bitmap, position2D);

// Enable the person tracking effect.
visibleAsset.addHumanTrackingEffect(new HVEAIProcessCallback() {
    @Override
    public void onProgress(int progress) {
        // Handling progress.
    }

    @Override
    public void onSuccess() {
        // Handling success.
    }

    @Override
    public void onError(int errorCode, String errorMessage) {
        // Handling failure.
    }
});

// Interrupt the effect.
visibleAsset.interruptHumanTracking();

// Remove the effect.
visibleAsset.removeHumanTrackingEffect();
References
The Importance of Visual Effects
Track Person

Why and How: Adding Templates to a Video Editor

Being creative is hard, but thinking of a fantastic idea is even more challenging. And once you've done that, the hardest part is expressing that idea in an attractive way.
This, I think, is the very reason why templates are gaining popularity in text, image, audio, and video editing. Of these, video templates are probably the most in demand, because producing a video takes time and can be costly. It is therefore much more convenient to create a video from a template than from scratch, particularly for video editing amateurs.
The video template solution I chose for my app is the template capability of HMS Core Video Editor Kit. This capability comes preloaded with a library of templates that my users can use directly to quickly create a short video, make a vlog during a journey, create a product display video, generate a news video, and more.
On top of this, the capability comes with a platform where I can manage the templates easily, like this.
Template management platform — AppGallery Connect
To be honest, one of the things I really like about the capability is how easy it is to integrate, thanks to its straightforward code and a whole set of APIs with clear usage descriptions. Below is the process I followed to integrate the capability into my app.
Development Procedure
Preparations
Configure the app information.
Integrate the SDK.
Set up the obfuscation scripts.
Declare necessary permissions, including: device vibration permission, microphone use permission, storage read permission, and storage write permission (see the manifest sketch after this list).
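A minimal sketch of the corresponding AndroidManifest.xml declarations, assuming the standard Android permissions are meant (confirm the exact list against the kit's documentation):
Code:
<uses-permission android:name="android.permission.VIBRATE" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />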
Project Configuration
Setting Authentication Information
Set the authentication information by using:
An access token. The setting is required only once during app initialization.
Code:
MediaApplication.getInstance().setAccessToken("your access token");
Or, an API key, which also needs to be set only once during app initialization.
Code:
MediaApplication.getInstance().setApiKey("your ApiKey");
Configuring a License ID
Since this ID is used to manage the usage quotas of the service, the ID must be unique.
Code:
MediaApplication.getInstance().setLicenseId("License ID");
Initialize the runtime environment for HuaweiVideoEditor.
During project configuration, a HuaweiVideoEditor object must be created first, and its runtime environment must be initialized. When exiting the project, release the object.
1. Create a HuaweiVideoEditor object.
Code:
HuaweiVideoEditor editor = HuaweiVideoEditor.create(getApplicationContext());
2. Specify the preview area position. The area renders video images and is implemented via a SurfaceView created within the SDK. Specify the area's position before creating it.
Code:
<LinearLayout
    android:id="@+id/video_content_layout"
    android:layout_width="0dp"
    android:layout_height="0dp"
    android:background="@color/video_edit_main_bg_color"
    android:gravity="center"
    android:orientation="vertical" />

// Specify a preview area.
LinearLayout mSdkPreviewContainer = view.findViewById(R.id.video_content_layout);
// Set the preview area layout.
editor.setDisplay(mSdkPreviewContainer);
3. Initialize the runtime environment. A LicenseException will be thrown if license verification fails.
When a HuaweiVideoEditor object is created, no system resources are used yet. Manually set the time at which its runtime environment is initialized; the required threads and timers will then be created in the SDK.
Code:
try {
    editor.initEnvironment();
} catch (LicenseException error) {
    SmartLog.e(TAG, "initEnvironment failed: " + error.getErrorMsg());
    finish();
    return;
}
Capability Integration
In this part, I use HVETemplateManager to obtain the on-cloud template list, which my app then provides to its users.
Code:
// Obtain the template column list.
final HVEColumnInfo[] column = new HVEColumnInfo[1];
HVETemplateManager.getInstance().getColumnInfos(new HVETemplateManager.HVETemplateColumnsCallback() {
    @Override
    public void onSuccess(List<HVEColumnInfo> result) {
        // Called when the list is successfully obtained.
        column[0] = result.get(0);
    }

    @Override
    public void onFail(int error) {
        // Called when the list failed to be obtained.
    }
});

// Obtain the list details.
final String[] templateIds = new String[1];
// size indicates the number of on-cloud templates to request and must be greater than 0.
// offset indicates the offset of the requested on-cloud templates and must be greater than or equal to 0.
// true indicates that the data of the on-cloud templates is forcibly obtained.
HVETemplateManager.getInstance().getTemplateInfos(column[0].getColumnId(), size, offset, true, new HVETemplateManager.HVETemplateInfosCallback() {
    @Override
    public void onSuccess(List<HVETemplateInfo> result, boolean hasMore) {
        // Called when the list details are successfully obtained.
        HVETemplateInfo templateInfo = result.get(0);
        // Obtain the template ID.
        templateIds[0] = templateInfo.getId();
    }

    @Override
    public void onFail(int errorCode) {
        // Called when the list details failed to be obtained.
    }
});

// Obtain the template ID once the list details are available.
String templateId = templateIds[0];

// Obtain a template project.
final List<HVETemplateElement>[] editableElementList = new ArrayList[1];
HVETemplateManager.getInstance().getTemplateProject(templateId, new HVETemplateManager.HVETemplateProjectCallback() {
    @Override
    public void onSuccess(List<HVETemplateElement> editableElements) {
        // Direct the user to the material selection screen when the project is successfully obtained.
        // Update editableElements with the paths of the selected local materials.
        editableElementList[0] = editableElements;
    }

    @Override
    public void onProgress(int progress) {
        // Called when the progress of obtaining the project is received.
    }

    @Override
    public void onFail(int errorCode) {
        // Called when the project failed to be obtained.
    }
});

// Prepare a template project.
HVETemplateManager.getInstance().prepareTemplateProject(templateId, new HVETemplateManager.HVETemplateProjectPrepareCallback() {
    @Override
    public void onSuccess() {
        // Called when the preparation is successful. Create an instance of HuaweiVideoEditor
        // for operations like playback, preview, and export.
    }

    @Override
    public void onProgress(int progress) {
        // Called when the preparation progress is received.
    }

    @Override
    public void onFail(int errorCode) {
        // Called when the preparation failed.
    }
});

// Create an instance of HuaweiVideoEditor.
// Such an instance will be used for operations like playback and export.
HuaweiVideoEditor editor = HuaweiVideoEditor.create(templateId, editableElementList[0]);
try {
    editor.initEnvironment();
} catch (LicenseException e) {
    SmartLog.e(TAG, "editor initEnvironment ERROR.");
}
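To summarize the flow in the snippet above: getColumnInfos fetches the template columns, getTemplateInfos lists the templates in a column, getTemplateProject returns a template's editable elements (which you update with the user's local materials), prepareTemplateProject downloads the template resources, and HuaweiVideoEditor.create, called with the template ID and the edited elements, yields an editor instance for preview and export.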
Once you've completed this process, you'll have created an app just like the one shown in the demo.
Conclusion
An eye-catching video, for all the good it can bring, can be difficult to create. But with the help of video templates, users can create great-looking videos in less time, leaving them more time to create even more videos.
This article illustrates a video template solution for mobile apps. The template capability offers various out-of-the-box preset templates that can be easily managed on a platform, and the whole integration process is easy. So easy, in fact, that even I could create a video app with templates.
References
Tips for Using Templates to Create Amazing Videos
Integrating the Template Capability

How to Automatically Create a Scenic Timelapse Video

Have you ever watched a video of the northern lights? Mesmerizing light rays that swirl and dance through the star-encrusted sky. It's even more stunning when they are backdropped by crystal-clear waters that flow smoothly between and under ice crusts. Complementing each other, the moving sky and water compose a dynamic scene that reflects the constant rhythm of mother nature.
Now imagine that the video is frozen into an image: It still looks beautiful, but lacks the dynamism of the video. Such a contrast between still and moving images shows how videos are sometimes better than still images when it comes to capturing majestic scenery, since the former can convey more information and thus be more engaging.
This may be the reason why we sometimes regret just taking photos instead of capturing a video when we encounter beautiful scenery or a memorable moment.
In addition to this, when we try to add a static image to a short video, we will find that the transition between the image and other segments of the video appears very awkward, since the image is the only static segment in the whole video.
If we want to turn a static image into a dynamic video by adding some motion effects to the sky and water, one way to do this is to use a professional PC program to modify the image. However, this process is often very complicated and time-consuming: It requires adjustment of the timeline, frames, and much more, which can be a daunting prospect for amateur image editors.
Luckily, there are now numerous AI-driven capabilities that can automatically create time-lapse videos for users. I chose to use the auto-timelapse capability provided by HMS Core Video Editor Kit. It can automatically detect the sky and water in an image and produce vivid dynamic effects for them, just like this:
The movement speed and angle of the sky and water are customizable.
Now let's take a look at the detailed integration procedure for this capability, to better understand how such a dynamic effect is created.
Integration Procedure
Preparations
1. Configure necessary app information. This step requires you to register a developer account, create an app, generate a signing certificate fingerprint, configure the fingerprint, and enable the required services.
2. Integrate the SDK of the kit.
3. Configure the obfuscation scripts.
4. Declare necessary permissions.
Project Configuration
1. Set the app authentication information. This can be done via an API key or an access token.
Set an API key via the setApiKey method: You only need to set the app authentication information once during app initialization.
Code:
MediaApplication.getInstance().setApiKey("your ApiKey");
Or, set an access token by using the setAccessToken method: You only need to set the app authentication information once during app initialization.
Code:
MediaApplication.getInstance().setAccessToken("your access token");
2. Set a License ID. This ID should be unique because it is used to manage the usage quotas of the service.
Code:
MediaApplication.getInstance().setLicenseId("License ID");
3. Initialize the runtime environment for the HuaweiVideoEditor object. Remember to release the HuaweiVideoEditor object when exiting the project.
Create a HuaweiVideoEditor object.
Code:
HuaweiVideoEditor editor = HuaweiVideoEditor.create(getApplicationContext());
Specify the preview area position. The area is used to render video images and is implemented by a SurfaceView created within the SDK. Before creating the area, specify its position in the app.
Code:
<LinearLayout
    android:id="@+id/video_content_layout"
    android:layout_width="0dp"
    android:layout_height="0dp"
    android:background="@color/video_edit_main_bg_color"
    android:gravity="center"
    android:orientation="vertical" />

// Specify the preview area position.
LinearLayout mSdkPreviewContainer = view.findViewById(R.id.video_content_layout);
// Specify the preview area layout.
editor.setDisplay(mSdkPreviewContainer);
Initialize the runtime environment. If license verification fails, a LicenseException will be thrown.
After it is created, the HuaweiVideoEditor object does not occupy any system resources. You need to manually set when the runtime environment of the object is initialized; once you have done this, the necessary threads and timers will be created within the SDK.
Code:
try {
    editor.initEnvironment();
} catch (LicenseException error) {
    SmartLog.e(TAG, "initEnvironment failed: " + error.getErrorMsg());
    finish();
    return;
}
Function Development
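Note: In the snippet below, imageAsset is assumed to be the HVEImageAsset to which the auto-timelapse effect is applied (for example, an image appended to the editor's video lane); it is not defined in the snippet itself.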
Code:
// Initialize the auto-timelapse engine.
imageAsset.initTimeLapseEngine(new HVEAIInitialCallback() {
    @Override
    public void onProgress(int progress) {
        // Callback when the initialization progress is received.
    }

    @Override
    public void onSuccess() {
        // Callback when the initialization is successful.
    }

    @Override
    public void onError(int errorCode, String errorMessage) {
        // Callback when the initialization failed.
    }
});

// When the initialization is successful, check whether there is sky or water in the image.
// A single-element array is used so the callback can record the result: a plain local
// variable cannot be reassigned from inside an anonymous inner class in Java.
final int[] motionType = { -1 };
imageAsset.detectTimeLapse(new HVETimeLapseDetectCallback() {
    @Override
    public void onResult(int state) {
        // Record the state parameter, which is used to define a motion effect.
        motionType[0] = state;
    }
});

// skySpeed indicates the speed at which the sky moves; skyAngle indicates the direction in which the sky moves;
// waterSpeed indicates the speed at which the water moves; waterAngle indicates the direction in which the water moves.
HVETimeLapseEffectOptions options =
        new HVETimeLapseEffectOptions.Builder().setMotionType(motionType[0])
                .setSkySpeed(skySpeed)
                .setSkyAngle(skyAngle)
                .setWaterAngle(waterAngle)
                .setWaterSpeed(waterSpeed)
                .build();

// Add the auto-timelapse effect.
imageAsset.addTimeLapseEffect(options, new HVEAIProcessCallback() {
    @Override
    public void onProgress(int progress) {
        // Callback when the processing progress is received.
    }

    @Override
    public void onSuccess() {
        // Callback when the processing is successful.
    }

    @Override
    public void onError(int errorCode, String errorMessage) {
        // Callback when the processing failed.
    }
});

// Stop applying the auto-timelapse effect.
imageAsset.interruptTimeLapse();

// Remove the auto-timelapse effect.
imageAsset.removeTimeLapseEffect();
Now, the auto-timelapse capability has been successfully integrated into an app.
Conclusion
When capturing scenic vistas, videos, which can show the dynamic nature of the world around us, are often a better choice than static images. In addition, when creating videos with multiple shots, dynamic pictures deliver a smoother transition effect than static ones.
However, users who are not familiar with the process of animating static images may find the results unsatisfying if they try to do so manually using computer software.
The good news is that there are now mobile apps integrated with capabilities such as Video Editor Kit's auto-timelapse feature that can create time-lapse effects for users. The generated effect appears authentic and natural, the capability is easy to use, and its integration is straightforward. With such capabilities in place, a video/image app can provide users with a more captivating user experience.
In addition to video/image editing apps, I believe the auto-timelapse capability can also be utilized by many other types of apps. What other kinds of apps do you think would benefit from such a feature? Let me know in the comments section.
