Immersive audio is becoming an increasingly important factor for enhancing user experience in the music, gaming, and audio/video editing fields. The spatial audio function is ideal for meetings, sports rehabilitation, and particularly for exhibitions, as it helps deliver a more immersive experience. For users who suffer from visual impairments, the function can serve as a helpful guide.
In this article, I am going to reuse the sample code from this GitHub repo. I will implement the spatial audio function in my Android app to deliver 3D surround sound.
Development Practice
Preparations
Prepare the audio for 2D-to-3D conversion, preferably as an MP3 file. If the audio is in another format, follow the instructions described later to convert it to MP3 first. If the audio is part of a video file, extract the audio first by referring to the instructions described later.
1. Configure the Maven repository address in the project-level build.gradle file.
Code:
buildscript {
    repositories {
        google()
        jcenter()
        // Configure the Maven repository address for the HMS Core SDK.
        maven {url 'https://developer.huawei.com/repo/'}
    }
    dependencies {
        ...
        // Add the AppGallery Connect plugin configuration.
        classpath 'com.huawei.agconnect:agcp:1.4.2.300'
    }
}
allprojects {
    repositories {
        google()
        jcenter()
        // Configure the Maven repository address for the HMS Core SDK.
        maven {url 'https://developer.huawei.com/repo/'}
    }
}
Add the following configuration under the declaration in the file header:
Code:
apply plugin: 'com.huawei.agconnect'
2. Add the build dependency on the Audio Editor SDK in the app-level build.gradle file.
Code:
dependencies{
implementation 'com.huawei.hms:audio-editor-ui:{version}'
}
3. Apply for the following permissions in the AndroidManifest.xml file:
Code:
<!-- Vibrate -->
<uses-permission android:name="android.permission.VIBRATE" />
<!-- Microphone -->
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<!-- Write into storage -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<!-- Read from storage -->
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<!-- Connect to the Internet -->
<uses-permission android:name="android.permission.INTERNET" />
<!-- Obtain the network status -->
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<!-- Obtain the changed network connectivity state -->
<uses-permission android:name="android.permission.CHANGE_NETWORK_STATE" />
Code Development
1. Create your app's custom activity for selecting one or more audio files, and return their paths to the SDK.
Code:
// Return the audio file paths to the audio editing screen.
private void sendAudioToSdk() {
    // Set filePath to the obtained audio file path.
    String filePath = "/sdcard/AudioEdit/audio/music.aac";
    ArrayList<String> audioList = new ArrayList<>();
    audioList.add(filePath);
    // Return the path to the audio editing screen.
    Intent intent = new Intent();
    // Use HAEConstant.AUDIO_PATH_LIST provided by the SDK.
    intent.putExtra(HAEConstant.AUDIO_PATH_LIST, audioList);
    // Use HAEConstant.RESULT_CODE provided by the SDK as the result code.
    this.setResult(HAEConstant.RESULT_CODE, intent);
    finish();
}
2. Register the activity in the AndroidManifest.xml file as described in the following code. When you choose to import the selected audio files, the SDK will send an intent whose action value is com.huawei.hms.audioeditor.chooseaudio to jump to the activity.
Code:
<activity android:name="Activity ">
    <intent-filter>
        <action android:name="com.huawei.hms.audioeditor.chooseaudio"/>
        <category android:name="android.intent.category.DEFAULT"/>
    </intent-filter>
</activity>
Launch the audio editing screen. When you tap Add audio, the SDK will automatically call the activity defined earlier. Then operations like editing and adding special effects can be performed on the audio. After such operations are complete, the edited audio can be exported.
Code:
HAEUIManager.getInstance().launchEditorActivity(this);
3. (Optional) Convert the file format to MP3.
Call transformAudioUseDefaultPath to convert the format and save the converted audio to the default directory.
Code:
// Convert the audio format.
HAEAudioExpansion.getInstance().transformAudioUseDefaultPath(context,inAudioPath, audioFormat, new OnTransformCallBack() {
// Callback when the progress is received. The value ranges from 0 to 100.
@Override
public void onProgress(int progress) {
}
// Callback when the conversion fails.
@Override
public void onFail(int errorCode) {
}
// Callback when the conversion succeeds.
@Override
public void onSuccess(String outPutPath) {
}
// Callback when the conversion is canceled.
@Override
public void onCancel() {
}
});
// Cancel format conversion.
HAEAudioExpansion.getInstance().cancelTransformAudio();
Call transformAudio to convert audio and save the converted audio to a specified directory.
Code:
// Convert the audio format.
HAEAudioExpansion.getInstance().transformAudio(context,inAudioPath, outAudioPath, new OnTransformCallBack(){
// Callback when the progress is received. The value ranges from 0 to 100.
@Override
public void onProgress(int progress) {
}
// Callback when the conversion fails.
@Override
public void onFail(int errorCode) {
}
// Callback when the conversion succeeds.
@Override
public void onSuccess(String outPutPath) {
}
// Callback when the conversion is canceled.
@Override
public void onCancel() {
}
});
// Cancel format conversion.
HAEAudioExpansion.getInstance().cancelTransformAudio();
(Optional) Call extractAudio to extract audio from a video to a specified directory.
Code:
// outAudioDir (optional): directory path for storing extracted audio.
// outAudioName (optional): name of extracted audio, which does not contain the file name extension.
HAEAudioExpansion.getInstance().extractAudio(context,inVideoPath,outAudioDir, outAudioName,new AudioExtractCallBack() {
@Override
public void onSuccess(String audioPath) {
Log.d(TAG, "ExtractAudio onSuccess : " + audioPath);
}
@Override
public void onProgress(int progress) {
Log.d(TAG, "ExtractAudio onProgress : " + progress);
}
@Override
public void onFail(int errCode) {
Log.i(TAG, "ExtractAudio onFail : " + errCode);
}
@Override
public void onCancel() {
Log.d(TAG, "ExtractAudio onCancel.");
}
});
// Cancel audio extraction.
HAEAudioExpansion.getInstance().cancelExtractAudio();
Call getInstruments and startSeparationTasks for audio source separation.
Code:
// Obtain the accompaniment ID using getInstruments and pass the ID to startSeparationTasks.
HAEAudioSeparationFile haeAudioSeparationFile = new HAEAudioSeparationFile();
haeAudioSeparationFile.getInstruments(new SeparationCloudCallBack<List<SeparationBean>>() {
@Override
public void onFinish(List<SeparationBean> response) {
// Callback when the separation data is received. The data includes the accompaniment ID.
}
@Override
public void onError(int errorCode) {
// Callback when the separation fails.
}
});
// Set the parameter for accompaniment separation.
List<String> instruments = new ArrayList<>();
instruments.add("accompaniment ID");
haeAudioSeparationFile.setInstruments(instruments);
// Start separating.
haeAudioSeparationFile.startSeparationTasks(inAudioPath, outAudioDir, outAudioName, new AudioSeparationCallBack() {
@Override
public void onResult(SeparationBean separationBean) { }
@Override
public void onFinish(List<SeparationBean> separationBeans) {}
@Override
public void onFail(int errorCode) {}
@Override
public void onCancel() {}
});
// Cancel separating.
haeAudioSeparationFile.cancel();
Call applyAudioFile to apply spatial audio.
Code:
// Apply spatial audio.
// Fixed position mode.
HAESpaceRenderFile haeSpaceRenderFile = new HAESpaceRenderFile(SpaceRenderMode.POSITION);
haeSpaceRenderFile.setSpacePositionParams(
new SpaceRenderPositionParams(x, y, z));
// Dynamic rendering mode.
HAESpaceRenderFile haeSpaceRenderFile = new HAESpaceRenderFile(SpaceRenderMode.ROTATION);
haeSpaceRenderFile.setRotationParams( new SpaceRenderRotationParams(
x, y, z, surroundTime, surroundDirection));
// Extension.
HAESpaceRenderFile haeSpaceRenderFile = new HAESpaceRenderFile(SpaceRenderMode.EXTENSION);
haeSpaceRenderFile.setExtensionParams(new SpaceRenderExtensionParams(radiusVal, angledVal));
// Call the API.
haeSpaceRenderFile.applyAudioFile(inAudioPath, outAudioDir, outAudioName, callBack);
// Cancel applying spatial audio.
haeSpaceRenderFile.cancel();
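The callBack argument above is the completion callback for the file-based rendering API. A minimal sketch of what it can look like is shown below; the interface name ChangeSoundCallback is an assumption based on the kit's other file APIs, so verify it against the Audio Editor Kit API reference for your SDK version.
Code:
ChangeSoundCallback callBack = new ChangeSoundCallback() {
    @Override
    public void onSuccess(String outAudioPath) {
        // The rendered audio has been written to outAudioPath.
        Log.d(TAG, "Spatial render onSuccess: " + outAudioPath);
    }
    @Override
    public void onProgress(int progress) {
        // Progress ranges from 0 to 100.
        Log.d(TAG, "Spatial render onProgress: " + progress);
    }
    @Override
    public void onFail(int errorCode) {
        Log.e(TAG, "Spatial render onFail: " + errorCode);
    }
    @Override
    public void onCancel() {
        Log.d(TAG, "Spatial render onCancel.");
    }
};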
After completing these steps, you can now implement the 2D-to-3D conversion effect for your app.
Utilize the function according to your needs. To find out more, check out:
Official website of Audio Editor Kit
Development guide to the kit
Can this conversion be done offline?
Does it support all audio formats?
Which audio APIs can we use for volume management?
muraliameakula said:
Can this conversion be done offline?
The conversion itself cannot be done offline, but some functions can be used offline: AI dubbing, spatial rendering, audio source separation, and material-related functions.
ProManojKumar said:
Does it support all audio formats?
It supports common formats such as MP3, WAV, FLAC, and AAC.
vivek_yadav said:
Which audio APIs can we use for volume management?
https://developer.huawei.com/consum...unctions-0000001224604517#section171179354277
Introduction
React Native is a convenient tool for cross-platform development, and though it has become more and more powerful through updates, it still has limits, for example its capability to interact with and use native components. Bridging native code with JavaScript is one of the most popular and effective ways to solve this problem. Best of both worlds!
Currently, not all HMS kits have official RN support yet. This article will walk you through how to create an Android native bridge to connect your RN app with HMS kits, using Scan Kit as the example.
The tutorial is based on https://github.com/clementf-hw/rn_integration_demo/tree/4b2262aa2110041f80cb41ebd7caa1590a48528a, you can find more details about the sample project in this article: https://forums.developer.huawei.com...d=0201230857831870061&fid=0101187876626530001.
Prerequisites
Basic Android development
Basic React Native development
These areas have already been covered extensively on RN's official site, this forum, and other sources
HMS properly configured
You can also reference the above article for this matter
Major dependencies
RN Version: 0.62.2 (released on 9th April, 2020)
Gradle Version: 5.6.4
Gradle Plugin Version: 3.6.1
agcp: 1.2.1.301
This tutorial is broken into 3 parts:
Pt. 1: Create a simple native UI component as intro and warm up
Pt. 2: Bridging HMS Scan Kit into React Native
Pt. 3: Make Scan Kit into a stand alone React Native Module that you can import into other projects or even upload to npm.
Bridging HMS Scan Kit
Now that we have some fundamental knowledge of how to bridge, let's bridge something meaningful. We will bridge the Scan Kit Default View as a QR code scanner, and also learn how to communicate from the native side to the React Native side.
First, we’ll have to configure the project following the guide to set Scan Kit up on the native side: https://developer.huawei.com/consumer/en/doc/development/HMS-Guides/scan-preparation-4
Put agconnect-services.json in place
Add to allprojects > repositories in root level build.gradle
Code:
allprojects {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
}
Add to buildscript > repositories
Code:
buildscript {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
}
Add to buildscript > dependencies
Code:
buildscript{
dependencies {
classpath 'com.huawei.agconnect:agcp:1.2.1.301'
}
}
Go to app/build.gradle and add this to the header
Code:
apply plugin: 'com.huawei.agconnect'
Add this to dependencies
Code:
dependencies {
implementation 'com.huawei.hms:scanplus:1.1.3.300'
}
Add in proguard-rules.pro
Code:
-ignorewarnings
-keepattributes *Annotation*
-keepattributes Exceptions
-keepattributes InnerClasses
-keepattributes Signature
-keepattributes SourceFile,LineNumberTable
-keep class com.hianalytics.android.**{*;}
-keep class com.huawei.**{*;}
Now do a Gradle sync. You can also try to build and run the app to see if everything is OK, even though we have not done any actual development yet.
Add these to AndroidManifest.xml
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
<application
…
<activity android:name="com.huawei.hms.hmsscankit.ScanKitActivity" />
</application>
So the basic setup/configuration is done. Similar to the warm-up, we will create a Module file first. Note that for the sake of variety and wider adaptability of the end product, this time we'll make it a plain Native Module instead of a Native UI Component.
Code:
package com.cfdemo.d001rn;
import androidx.annotation.NonNull;
import com.facebook.react.bridge.ReactApplicationContext;
import com.facebook.react.bridge.ReactContextBaseJavaModule;
public class ReactNativeHmsScanModule extends ReactContextBaseJavaModule {
private static final String REACT_CLASS = "ReactNativeHmsScan";
private static ReactApplicationContext reactContext;
public ReactNativeHmsScanModule(ReactApplicationContext context) {
super(context);
reactContext = context;
}
@NonNull
@Override
public String getName() {
return REACT_CLASS;
}
}
We have seen how data flows from RN to native in the warm-up (e.g. the @ReactProp of our button). There are also several ways for data to flow from native to RN. Scan Kit uses startActivityForResult, so we need to implement the corresponding listener.
Code:
package com.cfdemo.d001rn;
import android.app.Activity;
import android.content.Intent;
import androidx.annotation.NonNull;
import com.facebook.react.bridge.ActivityEventListener;
import com.facebook.react.bridge.BaseActivityEventListener;
import com.facebook.react.bridge.ReactApplicationContext;
import com.facebook.react.bridge.ReactContextBaseJavaModule;
public class ReactNativeHmsScanModule extends ReactContextBaseJavaModule {
private static final String REACT_CLASS = "ReactNativeHmsScan";
private static ReactApplicationContext reactContext;
public ReactNativeHmsScanModule(ReactApplicationContext context) {
super(context);
reactContext = context;
reactContext.addActivityEventListener(mActivityEventListener);
}
@NonNull
@Override
public String getName() {
return REACT_CLASS;
}
private final ActivityEventListener mActivityEventListener = new BaseActivityEventListener() {
@Override
public void onActivityResult(Activity activity, int requestCode, int resultCode, Intent intent) {
}
};
}
There are a couple of small details we'll need to add. First, the React Native JavaScript side expects a Promise for the result.
Code:
private Promise mScannerPromise;
We also need to add a request code to identify that this is our Scan Kit activity. 567 here is just an example; the value is at your discretion.
Code:
private static final int REQUEST_CODE_SCAN = 567;
There will be several error/reject conditions, so let's identify and declare their codes first
Code:
private static final String E_ACTIVITY_DOES_NOT_EXIST = "E_ACTIVITY_DOES_NOT_EXIST";
private static final String E_SCANNER_CANCELLED = "E_SCANNER_CANCELLED";
private static final String E_FAILED_TO_SHOW_SCANNER = "E_FAILED_TO_SHOW_SCANNER";
private static final String E_INVALID_CODE = "E_INVALID_CODE";
At this moment, the module should look like this
Code:
package com.cfdemo.d001rn;
import android.app.Activity;
import android.content.Intent;
import androidx.annotation.NonNull;
import com.facebook.react.bridge.ActivityEventListener;
import com.facebook.react.bridge.BaseActivityEventListener;
import com.facebook.react.bridge.Promise;
import com.facebook.react.bridge.ReactApplicationContext;
import com.facebook.react.bridge.ReactContextBaseJavaModule;
public class ReactNativeHmsScanModule extends ReactContextBaseJavaModule {
private static final String REACT_CLASS = "ReactNativeHmsScan";
private static ReactApplicationContext reactContext;
private Promise mScannerPromise;
private static final int REQUEST_CODE_SCAN = 567;
private static final String E_ACTIVITY_DOES_NOT_EXIST = "E_ACTIVITY_DOES_NOT_EXIST";
private static final String E_SCANNER_CANCELLED = "E_SCANNER_CANCELLED";
private static final String E_FAILED_TO_SHOW_SCANNER = "E_FAILED_TO_SHOW_SCANNER";
private static final String E_INVALID_CODE = "E_INVALID_CODE";
public ReactNativeHmsScanModule(ReactApplicationContext context) {
super(context);
reactContext = context;
reactContext.addActivityEventListener(mActivityEventListener);
}
@NonNull
@Override
public String getName() {
return REACT_CLASS;
}
private final ActivityEventListener mActivityEventListener = new BaseActivityEventListener() {
@Override
public void onActivityResult(Activity activity, int requestCode, int resultCode, Intent intent) {
}
};
}
Now let’s implement the listener method
Code:
if (requestCode == REQUEST_CODE_SCAN) {
    if (mScannerPromise != null) {
        if (resultCode == Activity.RESULT_CANCELED) {
            mScannerPromise.reject(E_SCANNER_CANCELLED, "Scanner was cancelled");
        } else if (resultCode == Activity.RESULT_OK) {
            Object obj = intent.getParcelableExtra(ScanUtil.RESULT);
            if (obj instanceof HmsScan) {
                if (!TextUtils.isEmpty(((HmsScan) obj).getOriginalValue())) {
                    mScannerPromise.resolve(((HmsScan) obj).getOriginalValue().toString());
                } else {
                    mScannerPromise.reject(E_INVALID_CODE, "Invalid Code");
                }
                return;
            }
        }
    }
}
Let’s walk through what this does
When the listener receives an activity result, it checks if this is our request by checking the request code.
Afterwards, it checks that the promise object is not null. We will cover the promise object later, but briefly speaking it is passed from RN to native, and we rely on it to send the data back to RN.
Then, if the result is a CANCELED situation, we tell RN that the scanner is canceled, for example closed by user, by calling promise.reject()
If the result indicates OK, we’ll get the data by calling getParcelableExtra()
Now we’ll see if the resulting data matches our data type and is not empty, and then we’ll call promise.resolve()
Otherwise we reject with a general error message. Of course, you can expand here and give a more detailed breakdown and resolution if you wish
This is a lot of checking and validation, but one can never be too safe, right? The assembled listener is sketched right after this walkthrough.
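For reference, here is how the snippet can be assembled into the listener declared earlier. This is only a sketch: the one change from the code above is that mScannerPromise is cleared once it has been resolved or rejected, so a stale promise is not reused on the next scan (add the ScanUtil, HmsScan, and TextUtils imports to the module).
Code:
private final ActivityEventListener mActivityEventListener = new BaseActivityEventListener() {
    @Override
    public void onActivityResult(Activity activity, int requestCode, int resultCode, Intent intent) {
        if (requestCode != REQUEST_CODE_SCAN || mScannerPromise == null) {
            return;
        }
        if (resultCode == Activity.RESULT_CANCELED) {
            mScannerPromise.reject(E_SCANNER_CANCELLED, "Scanner was cancelled");
        } else if (resultCode == Activity.RESULT_OK) {
            Object obj = intent.getParcelableExtra(ScanUtil.RESULT);
            if (obj instanceof HmsScan && !TextUtils.isEmpty(((HmsScan) obj).getOriginalValue())) {
                mScannerPromise.resolve(((HmsScan) obj).getOriginalValue());
            } else {
                mScannerPromise.reject(E_INVALID_CODE, "Invalid Code");
            }
        }
        // Clear the stored promise so the next scan starts fresh (not in the original snippet).
        mScannerPromise = null;
    }
};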
Cool, now that we have finished the listener, let's work on the caller! This is the method we'll be calling on the RN side, indicated by the @ReactMethod annotation.
Code:
@ReactMethod
public void startScan(final Promise promise) {
}
Give it some content
Code:
@ReactMethod
public void startScan(final Promise promise) {
    Activity currentActivity = getCurrentActivity();
    if (currentActivity == null) {
        promise.reject(E_ACTIVITY_DOES_NOT_EXIST, "Activity doesn't exist");
        return;
    }
    // Store the promise to resolve/reject when the scanner returns data.
    mScannerPromise = promise;
    try {
        ScanUtil.startScan(currentActivity, REQUEST_CODE_SCAN, new HmsScanAnalyzerOptions.Creator().setHmsScanTypes(HmsScan.ALL_SCAN_TYPE).create());
    } catch (Exception e) {
        mScannerPromise.reject(E_FAILED_TO_SHOW_SCANNER, e);
        mScannerPromise = null;
    }
}
Let’s do a walk through again
First we get the current activity reference and check if it is valid
Then we take the input promise and assign it to mScannerPromise which we declared earlier, so we can refer and use it throughout the process
Now we call the Scan Kit! This part is the same as a normal Android implementation.
Of course we wrap it with a try-catch for safety purposes
At this point we have finished the Module. As in the warm-up, we'll need to create a Package. This time it is a Native Module, so we register it in createNativeModules() and give createViewManagers() an empty list.
Code:
package com.cfdemo.d001rn;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import com.facebook.react.ReactPackage;
import com.facebook.react.bridge.NativeModule;
import com.facebook.react.bridge.ReactApplicationContext;
import com.facebook.react.uimanager.ViewManager;
public class ReactNativeHmsScanPackage implements ReactPackage {
@Override
public List<NativeModule> createNativeModules(ReactApplicationContext reactContext) {
return Arrays.<NativeModule>asList(new ReactNativeHmsScanModule(reactContext));
}
@Override
public List<ViewManager> createViewManagers(ReactApplicationContext reactContext) {
return Collections.emptyList();
}
}
Same as before, we’ll add the package to our MainApplication.java, import the Package, and add it in the getPackages() function
Code:
import com.cfdemo.d001rn.ReactNativeWarmUpPackage;
import com.cfdemo.d001rn.ReactNativeHmsScanPackage;
public class MainApplication extends Application implements ReactApplication {
...
@Override
protected List<ReactPackage> getPackages() {
@SuppressWarnings("UnnecessaryLocalVariable")
List<ReactPackage> packages = new PackageList(this).getPackages();
// Packages that cannot be autolinked yet can be added manually here, for example:
// packages.add(new MyReactNativePackage());
packages.add(new ReactNativeWarmUpPackage());
packages.add(new ReactNativeHmsScanPackage());
return packages;
}
All set! Let’s head back to RN side. This is our app from the warm up exercise(with a bit style change for the things we are going to add)
Let’s add a Button and set its onPress property as this.onScan() which we’ll implement after this
Code:
render() {
const { displayText, region } = this.state
return (
<View style={styles.container}>
<Text style={styles.textBox}>
{displayText}
</Text>
<RNWarmUpView
style={styles.nativeModule}
text={"Render in Javascript"}
/>
<Button
style={styles.button}
title={'Scan Button'}
onPress={() => this.onScan()}
/>
<MapView
style={styles.map}
region={region}
showCompass={true}
showsUserLocation={true}
showsMyLocationButton={true}
>
</MapView>
</View>
);
}
Reload and see the button
Similar to the warm-up, we can declare the Native Module in this simple way
Code:
const RNWarmUpView = requireNativeComponent('RCTWarmUpView')
const RNHMSScan = NativeModules.ReactNativeHmsScan
Now we’ll implement onScan() which uses the async/await syntax for asynchronous coding
Code:
async onScan() {
try {
const data = await RNHMSScan.startScan();
// handle your data here
} catch (e) {
console.log(e);
}
}
Important! Scan Kit requires the CAMERA and READ_EXTERNAL_STORAGE permissions to function, so make sure you have handled this beforehand. One of the recommended ways to handle it is the react-native-permissions library: https://github.com/react-native-community/react-native-permissions. I will write another article on this topic, but for now you can refer to https://github.com/clementf-hw/rn_integration_demo if you need it.
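If you prefer to guard the call on the native side instead of (or in addition to) JavaScript, a minimal sketch using the standard Android runtime-permission APIs could look like this. It is not part of the Scan Kit SDK, the request code is a hypothetical value, and you would call hasScanPermissions() inside startScan() before launching the scanner.
Code:
// Additional imports for the module class:
// import android.Manifest;
// import android.content.pm.PackageManager;
// import androidx.core.app.ActivityCompat;
// import androidx.core.content.ContextCompat;

private static final int REQUEST_CODE_PERMISSIONS = 568; // hypothetical request code

private boolean hasScanPermissions(Activity activity) {
    // Check both permissions Scan Kit needs before launching the scanner.
    return ContextCompat.checkSelfPermission(activity, Manifest.permission.CAMERA)
                == PackageManager.PERMISSION_GRANTED
            && ContextCompat.checkSelfPermission(activity, Manifest.permission.READ_EXTERNAL_STORAGE)
                == PackageManager.PERMISSION_GRANTED;
}

private void requestScanPermissions(Activity activity) {
    // Ask the user for the missing permissions; the result arrives in the activity's permission callback.
    ActivityCompat.requestPermissions(activity,
            new String[]{Manifest.permission.CAMERA, Manifest.permission.READ_EXTERNAL_STORAGE},
            REQUEST_CODE_PERMISSIONS);
}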
Now we click…TADA!
In this demo, this is what onScan() contains
Code:
async onScan() {
try {
const data = await RNHMSScan.startScan();
const qrcodeData = {
message: (JSON.parse(data)).message,
location: (JSON.parse(data)).location,
my_location: (JSON.parse(data)).my_location
}
this.handleData(qrcodeData)
} catch (e) {
console.log(e);
}
}
Note: one minor modification is needed if you are basing your work on the branch of the demo project mentioned before
Code:
onLocationReceived(locationData) {
const location = typeof locationData === "object" ? locationData : JSON.parse(locationData)
…
Now let’s try scan this
The actual data contained in the QR Code is
Code:
{"message": "Auckland", "location": {"lat": "-36.848461","lng": "174.763336"}}
Which brings us to Auckland!
Now your HMS Scan Kit in React Native is up and running!
Pt. 2 of this tutorial is done, please feel free to ask questions. You can also check out the repo of the sample project on GitHub: https://github.com/clementf-hw/rn_integration_demo, and raise an issue if you have any questions or updates.
In the 3rd and final part of this tutorial, we'll go through how to make this RN HMS Scan Kit Bridge a standalone, downloadable and importable React Native Module, which you can use in multiple projects instead of creating the Native Module one by one, and you can even upload it to NPM to share with other fellow developers.
HMS Video Kit — 1
In this article, I will write about the features of Huawei’s Video Kit and we will develop a sample application that allows you to play streaming media from a third-party video address.
Why should we use it?
Nowadays, video apps are very popular. Due to the popularity of streaming media, many developers have introduced HD movie streaming apps for people who use devices such as tablets and smartphones for everyday purposes. With the Video Kit WisePlayer SDK, you can bring stable HD video experiences to your users.
Service Features
It provides a high definition video experience without any delay
Responds instantly to playback requests
Has intuitive controls and offers content on demand
It selects the most suitable bitrate for your app
URL anti-leeching, playback authentication, and other security mechanisms keep your videos completely secure
It supports streaming media in 3GP, MP4, or TS format and complies with HTTP/HTTPS, HLS, or DASH.
Integration Preparations
First of all, in order to start developing an app with most of the Huawei mobile services and the Video Kit as well, you need to integrate the HUAWEI HMS Core into your application.
Software Requirements
Android Studio 3.X
JDK 1.8 or later
HMS Core (APK) 5.0.0.300 or later
EMUI 3.0 or later
The integration flow will be like this :
For a detailed HMS Core integration process, you can refer to Preparations for Integrating HUAWEI HMS Core.
After creating the application in AppGallery Connect and completing the other required steps, please make sure that you copy the agconnect-services.json file to the app's root directory of your Android Studio project.
Adding SDK dependencies
Add the AppGallery Connect plug-in and the Maven repository in the project-level build.gradle file.
Code:
buildscript {
repositories {
......
maven {url 'https://developer.huawei.com/repo/'}
}
dependencies {
......
classpath 'com.huawei.agconnect:agcp:1.3.1.300' // HUAWEI agcp plugin
}
}
allprojects {
repositories {
......
maven {url 'https://developer.huawei.com/repo/'}
}
}
2. Open the build.gradle file in the app directory and add the AppGallery connect plug-in.
Code:
apply plugin: 'com.android.application'
// Add the following line
apply plugin: 'com.huawei.agconnect' // HUAWEI agconnect Gradle plugin
android {
......
}
3. Configure the Maven dependency in the app-level build.gradle file.
Code:
dependencies {
......
implementation "com.huawei.hms:videokit-player:1.0.1.300"
}
You can find all the version numbers of this kit in its Version Change History.
4. Configure the NDK in the app-level build.gradle file.
Code:
android {
    defaultConfig {
        ......
        ndk {
            abiFilters "armeabi-v7a", "arm64-v8a"
        }
    }
    ......
}
Here, we have used the abiFilters in order to reduce the .apk size by selecting the desired CPU architectures.
5. Add permissions in the AndroidManifest.xml file.
Code:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="com.huawei.permission.SECURITY_DIAGNOSE" />
Note: For Android 6.0 and later, Video Kit dynamically applies for the write permission on external storage.
6. Lastly, add configurations to exclude the HMS Core SDK from obfuscation.
The obfuscation configuration file is proguard-rules.pro for Android Studio
Open the obfuscation configuration file of your Android Studio project and add the configurations.
Code:
-ignorewarnings
-keepattributes *Annotation*
-keepattributes Exceptions
-keepattributes InnerClasses
-keepattributes Signature
-keepattributes SourceFile,LineNumberTable
-keep class com.hianalytics.android.**{*;}
-keep class com.huawei.updatesdk.**{*;}
-keep class com.huawei.hms.**{*;}
With these steps, we have completed the integration part. Now, let's get our hands dirty with some code…
Initializing WisePlayer
In order to initialize the player, we need to create a class that inherits from Application. The Application class is the base class of an Android app, containing components like activities and services. Application or its subclasses are instantiated before any activities or other application objects are created in the Android app.
We can add our own initialization logic to the Application class by extending it. We call the initialization API WisePlayerFactory.initFactory() of the WisePlayer SDK in the onCreate() method.
Java:
public class VideoKitPlayApplication extends Application {
private static final String TAG = "VideoKitPlayApplication";
private static WisePlayerFactory wisePlayerFactory = null;
@Override
public void onCreate() {
super.onCreate();
initPlayer();
}
private void initPlayer() {
// DeviceId test is used in the demo, specific access to incoming deviceId after encryption
Log.d(TAG, "initPlayer: VideoKitPlayApplication");
WisePlayerFactoryOptions factoryOptions = new WisePlayerFactoryOptions.Builder().setDeviceId("xxx").build();
WisePlayerFactory.initFactory(this, factoryOptions, initFactoryCallback);
}
/**
* Player initialization callback
*/
private static InitFactoryCallback initFactoryCallback = new InitFactoryCallback() {
@Override
public void onSuccess(WisePlayerFactory wisePlayerFactory) {
Log.d(TAG, "init player factory success");
setWisePlayerFactory(wisePlayerFactory);
}
@Override
public void onFailure(int errorCode, String reason) {
Log.d(TAG, "init player factory fail reason :" + reason + ", errorCode is " + errorCode);
}
};
/**
* Get WisePlayer Factory
*
* @return WisePlayer Factory
*/
public static WisePlayerFactory getWisePlayerFactory() {
return wisePlayerFactory;
}
private static void setWisePlayerFactory(WisePlayerFactory wisePlayerFactory) {
VideoKitPlayApplication.wisePlayerFactory = wisePlayerFactory;
}
}
Playing a Video
We need to create a PlayActivity that inherits from AppCompatActivity and implements the Callback and SurfaceTextureListener APIs. Currently, WisePlayer supports SurfaceView and TextureView. Make sure that your app has a valid view for video display; otherwise, the playback will fail. Therefore, in the layout file, we need to add a SurfaceView or TextureView to be used by WisePlayer. PlayActivity also implements OnPlayWindowListener and OnWisePlayerListener in order to get callbacks from WisePlayer.
Java:
import android.view.SurfaceHolder.Callback;
import android.view.TextureView.SurfaceTextureListener;
import com.videokitnative.huawei.contract.OnPlayWindowListener;
import com.videokitnative.huawei.contract.OnWisePlayerListener;
public class PlayActivity extends AppCompatActivity implements Callback,SurfaceTextureListener,OnWisePlayerListener,OnPlayWindowListener{
...
}
The WisePlayerFactory instance is returned when the initialization in Application is complete. We need to call createWisePlayer() to create a WisePlayer.
Java:
WisePlayer player = VideoKitPlayApplication.getWisePlayerFactory().createWisePlayer();
In order to keep the code modular and understandable, I have created a PlayControl.java class as in the official demo and created the WisePlayer in that class. Since we create the PlayControl object in our PlayActivity class through its constructor, wisePlayer will effectively be created in the onCreate() method of our PlayActivity.
Note: Before calling createWisePlayer() to create WisePlayer, make sure that Application has successfully initialized the WisePlayer SDK.
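To make the flow concrete, a minimal sketch of that wiring is shown below. The PlayControl constructor signature here is an assumption (it is not fixed by the SDK); adjust it to however your own PlayControl class is declared.
Java:
public class PlayActivity extends AppCompatActivity implements Callback, SurfaceTextureListener,
        OnWisePlayerListener, OnPlayWindowListener {
    private PlayControl playControl;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // PlayControl creates WisePlayer internally via
        // VideoKitPlayApplication.getWisePlayerFactory().createWisePlayer().
        playControl = new PlayControl(this, this); // assumed constructor: (Context, OnWisePlayerListener)
        // The view initialization described in the next step follows here.
    }
}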
Now, we need to initialize the WisePlayer layout and add layout listeners. I have created PlayView.java for creating the views and updating them, so we can create the PlayView instance in the onCreate() method of our PlayActivity.
Java:
/**
* init the layout
*/
private void initView() {
playView = new PlayView(this, this, this);
setContentView(playView.getContentView());
}
In the PlayView.java class, I have created a SurfaceView for displaying the video.
Java:
surfaceView = (SurfaceView) findViewById(R.id.surface_view);
SurfaceHolder surfaceHolder = surfaceView.getHolder();
surfaceHolder.addCallback(this);
surfaceHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
I will share the demo code that I have created. You can find the activity_play.xml layout and the PlayView.java files over there.
Registering the WisePlayer listeners is another important step, because the app will react based on the listener callbacks. I have done this in the PlayControl.java class with the method below.
Java:
/**
 * Set the play listener
 */
private void setPlayListener() {
    if (wisePlayer != null) {
        wisePlayer.setErrorListener(onWisePlayerListener);
        wisePlayer.setEventListener(onWisePlayerListener);
        wisePlayer.setResolutionUpdatedListener(onWisePlayerListener);
        wisePlayer.setReadyListener(onWisePlayerListener);
        wisePlayer.setLoadingListener(onWisePlayerListener);
        wisePlayer.setPlayEndListener(onWisePlayerListener);
        wisePlayer.setSeekEndListener(onWisePlayerListener);
    }
}
Here, OnWisePlayerListener is an interface that extends the required WisePlayer listener interfaces.
Java:
public interface OnWisePlayerListener extends WisePlayer.ErrorListener, WisePlayer.ReadyListener,
WisePlayer.EventListener, WisePlayer.PlayEndListener, WisePlayer.ResolutionUpdatedListener,
WisePlayer.SeekEndListener, WisePlayer.LoadingListener, SeekBar.OnSeekBarChangeListener {
}
Now, we need to set the URLs for our videos in our PlayControl.java class with the method below.
Java:
wisePlayer.setPlayUrl("http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4");
Since I have used CardViews in my MainActivity.java class, I pass the URLs and movie names through an intent from MainActivity to PlayControl in the click action. You can check it out in my source code as well.
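For illustration, the click handler could pass the data like this. The extra keys "video_url" and "video_name" are hypothetical; use whatever keys your PlayControl reads before it calls setPlayUrl().
Java:
// In MainActivity, inside the CardView's click listener:
Intent intent = new Intent(MainActivity.this, PlayActivity.class);
intent.putExtra("video_url", "http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4");
intent.putExtra("video_name", "Big Buck Bunny");
startActivity(intent);

// In PlayActivity (or PlayControl), read the extras back before setting the play URL:
String videoUrl = getIntent().getStringExtra("video_url");
wisePlayer.setPlayUrl(videoUrl);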
We’ve set a view to display the video with the code below. In my demo application I have used SurfaceView to display the video.
Java:
// SurfaceView listener callback
@Override
public void surfaceCreated(SurfaceHolder holder) {
    wisePlayer.setView(surfaceView);
}
In order to prepare for the playback and start requesting data, we need to call the wisePlayer.ready() method.
Lastly, we need to call the wisePlayer.start() method to start the playback upon a successful response in the onReady callback of this API.
Java:
@Override
public void onReady(WisePlayer wisePlayer) {
    wisePlayer.start();
}
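Putting these pieces together, the start-up sequence looks roughly like the sketch below. In the demo the calls are spread across PlayControl and the SurfaceView callbacks, so treat this as a condensed outline rather than the exact demo code.
Java:
@Override
public void surfaceCreated(SurfaceHolder holder) {
    wisePlayer.setPlayUrl("http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4"); // set the media URL
    wisePlayer.setView(surfaceView);  // attach the display surface
    wisePlayer.ready();               // prepare and start requesting data
}

@Override
public void onReady(WisePlayer wisePlayer) {
    wisePlayer.start();               // start playback once the player reports ready
}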
We have finished the development. Let's pick a movie and enjoy it!
Movie List
You can find the source code of the demo app here.
In this article, we developed a sample application using HUAWEI Video Kit. HMS Video Kit offers a lot of features; for the sake of simplicity, we implemented only a few of them. I will share another post showing more features of the Video Kit in the near future.
RESOURCES
Documentation
Video Kit Codelab
What is the minimum video resolution we can play?
What should I do if the signature fails to be verified on the server side?
shikkerimath said:
What is the minimum video resolution we can play?
The minimum resolution is 270p, and the maximum is 4K.
Very interesting.
This document describes how to integrate Analytics Kit using the official Unity asset. After the integration, your app can use the services of this Kit on HMS mobile phones.
For details about Analytics Kit, please visit HUAWEI Developers.
1.1 Preparations
1.1.1 Importing Unity Assets
1.1.2 Generating .gradle Files
1. Enable project gradle.
Go to Edit > Project Settings > Player in Unity, click the Android icon, and go to Publishing Settings > Build.
Enable Custom Base Gradle Template.
Enable Custom Launcher Gradle Template.
Enable Custom Main Gradle Template.
Enable Custom Main Manifest.
2. Signature
You can use an existing keystore file or create a new one to sign your app.
Go to Edit > Project Settings > Player in Unity, click the Android icon, and go to Publishing Settings > Keystore Manager > Keystore... > Create New.
Enter the password when you open Unity. Otherwise, you cannot build the APK.
1.1.3 Configuring .gradle Files and the AndroidManifest.xml File
1. Configure the BaseProjectTemplate.gradle file.
Code:
// Configure the Maven repository address.
buildscript {
repositories {**ARTIFACTORYREPOSITORY**
google()
jcenter()
maven { url 'https://developer.huawei.com/repo/' }
}
dependencies {
// If you are changing the Android Gradle Plugin version, make sure it is compatible with the Gradle version preinstalled with Unity.
// For the Gradle version preinstalled with Unity, please visit https://docs.unity3d.com/Manual/android-gradle-overview.html.
// For the official Gradle and Android Gradle Plugin compatibility table, please visit https://developer.android.com/studio/releases/gradle-plugin#updating-gradle.
// To specify a custom Gradle version in Unity, go to Preferences > External Tools, deselect Gradle Installed with Unity (recommended), and specify a path to a custom Gradle version.
classpath 'com.android.tools.build:gradle:3.4.0'
classpath 'com.huawei.agconnect:agcp:1.2.1.301'
**BUILD_SCRIPT_DEPS**
}
repositories {**ARTIFACTORYREPOSITORY**
google()
jcenter()
maven { url 'https://developer.huawei.com/repo/' }
flatDir {
dirs "${project(':unityLibrary').projectDir}/libs"
}
}
2. Configure the launcherTemplate.gradle file.
Code:
// Generated by Unity. Remove this comment to prevent overwriting when exporting again.
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
dependencies {
implementation project(':unityLibrary')
implementation 'com.huawei.hms:hianalytics:5.1.0.300'
implementation 'com.android.support:appcompat-v7:28.0.0'
implementation 'com.huawei.agconnect:agconnect-core:1.2.0.300'
}
3. Configure the mainTemplate.gradle file.
Code:
apply plugin: 'com.android.library'
apply plugin: 'com.huawei.agconnect'
dependencies {
implementation fileTree(dir: 'libs', include: ['*.jar'])
implementation 'com.huawei.agconnect:agconnect-core:1.2.0.300'
implementation 'com.huawei.hms:hianalytics:5.0.0.301'
**DEPS**}
4. Configure the AndroidManifest.xml file.
Code:
<?xml version="1.0" encoding="utf-8"?>
<!-- Generated by Unity. Remove this comment to prevent overwriting when exporting again. -->
<manifest
xmlns:android="http://schemas.android.com/apk/res/android"
package="com.unity3d.player"
xmlns:tools="http://schemas.android.com/tools">
<application>
<activity android:name="com.hms.hms_analytic_activity.HmsAnalyticActivity"
android:theme="@style/UnityThemeSelector">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
<intent-filter>
<action android:name="android.intent.action.VIEW" />
<category android:name="android.intent.category.DEFAULT" />
<category android:name="android.intent.category.BROWSABLE" />
<data
android:host="unity.cn"
android:scheme="https" />
</intent-filter>
<meta-data android:name="unityplayer.UnityActivity" android:value="true" />
</activity>
</application>
</manifest>
1.1.4 Adding the agconnect-services.json File
1. Create an app by following the instructions in Creating an AppGallery Connect Project and Adding an App to the Project.
Run keytool -list -v -keystore C:\TestApp.keyStore to generate the SHA-256 certificate fingerprint based on the keystore file of the app. Then, configure the fingerprint in AppGallery Connect.
2. Download the agconnect-services.json file and place it in the Assets/Plugins/Android directory of your Unity project.
1.1.5 Enabling HUAWEI Analytics
For details, please refer to the development guide.
1.1.6 Adding the HmsAnalyticActivity.java File
1. Destination directory:
2. File content:
Code:
package com.hms.hms_analytic_activity;
import android.os.Bundle;
import com.huawei.hms.analytics.HiAnalytics;
import com.huawei.hms.analytics.HiAnalyticsTools;
import com.unity3d.player.UnityPlayerActivity;
import com.huawei.agconnect.appmessaging.AGConnectAppMessaging;
import com.huawei.hms.aaid.HmsInstanceId;
import com.hw.unity.Agc.Auth.ThirdPartyLogin.LoginManager;
import android.content.Intent;
import java.lang.Boolean;
import com.unity3d.player.UnityPlayer;
import androidx.core.app.ActivityCompat;
public class HmsAnalyticActivity extends UnityPlayerActivity {
private AGConnectAppMessaging appMessaging;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
HiAnalyticsTools.enableLog();
HiAnalytics.getInstance(this);
appMessaging = AGConnectAppMessaging.getInstance();
if(appMessaging != null){
appMessaging.setFetchMessageEnable(true);
appMessaging.setDisplayEnable(true);
appMessaging.setForceFetch();
}
LoginManager.getInstance().initialize(this);
boolean pretendCallMain = false;
if(pretendCallMain == true){
main();
}
}
private static void callCrash() {
throwCrash();
}
private static void throwCrash() {
throw new NullPointerException();
}
public static void main(){
JavaCrash();
}
private static void JavaCrash(){
new Thread(new Runnable() {
@Override
public void run() { // Sub-thread.
UnityPlayer.currentActivity.runOnUiThread(new Runnable() {
@Override
public void run() {
callCrash();
}
});
}
}).start();
}
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data)
{
LoginManager.getInstance().onActivityResult(requestCode, resultCode, data);
}
}
1.2 App Development with the Official Asset
1.2.1 Sample Code
Code:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using HuaweiHms;
public class AnalyticTest : MonoBehaviour
{
private HiAnalyticsInstance instance;
private int level = 0;
// Start() is called before the first frame update.
void Start()
{
}
// Update() is called once per frame.
void Update()
{
}
public AnalyticTest()
{
// HiAnalyticsTools.enableLog();
// instance = HiAnalytics.getInstance(new Context());
}
public void AnalyticTestMethod()
{
HiAnalyticsTools.enableLog();
instance = HiAnalytics.getInstance(new Context());
instance.setAnalyticsEnabled(true);
Bundle b1 = new Bundle();
b1.putString("test", "123456");
instance.onEvent("debug", b1);
}
public void SetUserId()
{
instance.setUserId("unity test Id");
// Util.showToast("userId set");
}
public void SendProductId()
{
Bundle b1 = new Bundle();
b1.putString(HAParamType.PRODUCTID, "123456");
instance.onEvent(HAEventType.ADDPRODUCT2CART, b1);
// Util.showToast("product id set");
}
public void SendAnalyticEnable()
{
enabled = !enabled;
instance.setAnalyticsEnabled(enabled);
// TestTip.Inst.ShowText(enabled ? "ENABLED" : "DISABLED");
}
public void CreateClearCache()
{
instance.clearCachedData();
// Util.showToast("Clear Cache");
}
public void SetFavoriteSport()
{
instance.setUserProfile("favor_sport", "running");
// Util.showToast("set favorite");
}
public void SetPushToken()
{
instance.setPushToken("fffff");
// Util.showToast("set push token as ffff");
}
public void setMinActivitySessions()
{
instance.setMinActivitySessions(10000);
// Util.showToast("setMinActivitySessions 10000");
}
public void setSessionDuration()
{
instance.setSessionDuration(900000);
// Util.showToast("setMinActivitySessions 900000");
}
public void getUserProfiles()
{
getUserProfiles(false);
getUserProfiles(true);
}
public void getUserProfiles(bool preDefined)
{
var profiles = instance.getUserProfiles(preDefined);
var keySet = profiles.keySet();
var keyArray = keySet.toArray();
foreach (var key in keyArray)
{
// TestTip.Inst.ShowText($"{key}: {profiles.getOrDefault(key, "default")}");
}
}
public void pageStart()
{
instance.pageStart("page test", "page test");
// TestTip.Inst.ShowText("set page start: page test, page test");
}
public void pageEnd()
{
instance.pageEnd("page test");
// TestTip.Inst.ShowText("set page end: page test");
}
public void enableLog()
{
HiAnalyticsTools.enableLog(level + 3);
// TestTip.Inst.ShowText($"current level {level + 3}");
level = (level + 1) % 4;
}
}
1.2.2 Testing the APK
1. Generate the APK.
Go to File > Build Settings > Android, click Switch Platform, and then Build And Run.
2. Enable the debug mode.
3. Go to the real-time overview page of Analytics Kit in AppGallery Connect.
Sign in to AppGallery Connect and click My projects. Select one of your projects and go to HUAWEI Analytics > Overview > Real-time overview.
4. Call AnalyticTestMethod() and check that the reported analysis events are displayed.
Our official website
Demo for Analytics Kit
Our Development Documentation page, to find the documents you need:
Android SDK
Web SDK
Quick APP SDK
If you have any questions about HMS Core, you can post them in the community on the HUAWEI Developers website or submit a ticket online.
We’re looking forward to seeing what you can achieve with HUAWEI Analytics!
More Information
To join in on developer discussion forums
To download the demo app and sample code
For solutions to integration-related issues
"John, why the writing pad is missing again?"
John, a programmer at Huawei, has a grandma who loves novelty, and lately she's been obsessed with online shopping. Familiarizing herself with major shopping apps and their functions proved to be a piece of cake, and she had thought that her online shopping experience would be effortless. Unfortunately, however, she was hindered by product searching.
John's grandma tended to use handwriting input. When using it, she would often make mistakes, like switching to another input method she found unfamiliar, or tapping on undesired characters or signs.
It's not just shopping apps: most mobile apps feature interface designs that are oriented to younger users, so it's no wonder that elderly users often struggle to figure out how to use them.
John patiently helped his grandma search for products with handwriting input several times. But then, he decided to use his skills as a veteran coder to give his grandma the best possible online shopping experience. More specifically, instead of helping her adjust to the available input method, he was determined to create an input method that would conform to her usage habits.
Since his grandma tended to err during manual input, John developed an input method that converts speech into text. Grandma was enthusiastic about the new method, because it is remarkably easy to use. All she has to do is to tap on the recording button and say the product's name. The input method then recognizes what she has said, and converts her speech into text.
Actual Effects
Real-time speech recognition and speech to text are ideal for a broad range of apps, including:
Game apps (online): Real-time speech recognition comes to users' aid when they team up with others. It frees up users' hands for controlling the action, sparing them from having to type to communicate with their partners. It can also free users from any potential embarrassment related to voice chatting during gaming.
Work apps: Speech to text can play a vital role during long conferences, where typing to keep meeting minutes can be tedious and inefficient, with key details being missed. Using speech to text is much more efficient: during a conference, users can use this service to convert audio content into text; after the conference, they can simply retouch the text to make it more logical.
Learning apps: Speech to text can offer users an enhanced learning experience. Without the service, users often have to pause audio materials to take notes, resulting in a fragmented learning process. With speech to text, users can concentrate on listening intently to the material while it is being played, and rely on the service to convert the audio content into text. They can then review the text after finishing the entire course, to ensure that they've mastered the content.
How to Implement
Two services in HUAWEI ML Kit, automatic speech recognition (ASR) and audio file transcription, make it easy to implement the above functions.
ASR can recognize speech of up to 60s, and convert the input speech into text in real time, with recognition accuracy of over 95%. It currently supports Mandarin Chinese (including Chinese-English bilingual speech), English, French, German, Spanish, Italian, and Arabic.
- Real-time result output
- Available options: with and without speech pickup UI
- Endpoint detection: start and end points can be accurately located.
- Silence detection: no voice packet is sent for silent portions.
- Intelligent conversion to digital formats: for example, the year 2021 is recognized from voice input.
Audio file transcription can convert an audio file of up to five hours into text with punctuation, and automatically segment the text for greater clarity. In addition, this service can generate text with timestamps, facilitating further function development. In this version, both Chinese and English are supported.
Development Procedures
1. Preparations
(1) Configure the Huawei Maven repository address, and put the agconnect-services.json file under the app directory.
Open the build.gradle file in the root directory of your Android Studio project.
Add the AppGallery Connect plugin and the Maven repository.
- Go to allprojects > repositories and configure the Maven repository address for the HMS Core SDK.
- Go to buildscript > repositories and configure the Maven repository address for the HMS Core SDK.
- If the agconnect-services.json file has been added to the app, go to buildscript > dependencies and add the AppGallery Connect plugin configuration.
Code:
buildscript {
repositories {
google()
jcenter()
maven { url 'https://developer.huawei.com/repo/' }
}
dependencies {
classpath 'com.android.tools.build:gradle:3.5.4'
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
// NOTE: Do not place your app dependencies here; they belong
// in the individual module build.gradle files.
}
}
allprojects {
repositories {
google()
jcenter()
maven { url 'https://developer.huawei.com/repo/' }
}
}
(2) Add the build dependencies for the HMS Core SDK.
Code:
dependencies {
//The audio file transcription SDK.
implementation 'com.huawei.hms:ml-computer-voice-aft:2.2.0.300'
// The ASR SDK.
implementation 'com.huawei.hms:ml-computer-voice-asr:2.2.0.300'
// Plugin of ASR.
implementation 'com.huawei.hms:ml-computer-voice-asr-plugin:2.2.0.300'
...
}
apply plugin: 'com.huawei.agconnect' // AppGallery Connect plugin.
(3) Configure the signing certificate in the build.gradle file under the app directory.
Code:
signingConfigs {
release {
storeFile file("xxx.jks")
keyAlias xxx
keyPassword xxxxxx
storePassword xxxxxx
v1SigningEnabled true
v2SigningEnabled true
}
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
}
debug {
signingConfig signingConfigs.release
debuggable true
}
}
(4) Add permissions in the AndroidManifest.xml file.
Code:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<application
android:requestLegacyExternalStorage="true"
...
</application>
2. Integrating the ASR Service
(1) Dynamically apply for the permissions.
Code:
if (ActivityCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO) != PackageManager.PERMISSION_GRANTED) {
requestAudioPermission();
}
private void requestAudioPermission() {
final String[] permissions = new String[]{Manifest.permission.RECORD_AUDIO};
if (!ActivityCompat.shouldShowRequestPermissionRationale(this, Manifest.permission.RECORD_AUDIO)) {
ActivityCompat.requestPermissions(this, permissions, Constants.AUDIO_PERMISSION_CODE);
return;
}
}
(2) Create an Intent to set parameters.
Code:
// Set authentication information for your app.
MLApplication.getInstance().setApiKey(AGConnectServicesConfig.fromContext(this).getString("client/api_key"));
// Use Intent for recognition parameter settings.
Intent intentPlugin = new Intent(this, MLAsrCaptureActivity.class)
// Set the language that can be recognized to English. If this parameter is not set, English is recognized by default. Example: "zh-CN": Chinese; "en-US": English.
.putExtra(MLAsrCaptureConstants.LANGUAGE, MLAsrConstants.LAN_EN_US)
// Set whether to display the recognition result on the speech pickup UI.
.putExtra(MLAsrCaptureConstants.FEATURE, MLAsrCaptureConstants.FEATURE_WORDFLUX);
startActivityForResult(intentPlugin, 1);
(3) Override the onActivityResult method to process the result returned by ASR.
Code:
@Override
protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
super.onActivityResult(requestCode, resultCode, data);
String text = "";
if (null == data) {
addTagItem("Intent data is null.", true);
}
if (requestCode == 1) {
if (data == null) {
return;
}
Bundle bundle = data.getExtras();
if (bundle == null) {
return;
}
switch (resultCode) {
case MLAsrCaptureConstants.ASR_SUCCESS:
// Obtain the text information recognized from speech.
if (bundle.containsKey(MLAsrCaptureConstants.ASR_RESULT)) {
text = bundle.getString(MLAsrCaptureConstants.ASR_RESULT);
}
if (text == null || "".equals(text)) {
text = "Result is null.";
Log.e(TAG, text);
} else {
// Display the recognition result in the search box.
searchEdit.setText(text);
goSearch(text, true);
}
break;
// MLAsrCaptureConstants.ASR_FAILURE: Recognition fails.
case MLAsrCaptureConstants.ASR_FAILURE:
// Check whether an error code is contained.
if (bundle.containsKey(MLAsrCaptureConstants.ASR_ERROR_CODE)) {
text = text + bundle.getInt(MLAsrCaptureConstants.ASR_ERROR_CODE);
// Troubleshoot based on the error code.
}
// Check whether error information is contained.
if (bundle.containsKey(MLAsrCaptureConstants.ASR_ERROR_MESSAGE)) {
String errorMsg = bundle.getString(MLAsrCaptureConstants.ASR_ERROR_MESSAGE);
// Troubleshoot based on the error information.
if (errorMsg != null && !"".equals(errorMsg)) {
text = "[" + text + "]" + errorMsg;
}
}
// Check whether a sub-error code is contained.
if (bundle.containsKey(MLAsrCaptureConstants.ASR_SUB_ERROR_CODE)) {
int subErrorCode = bundle.getInt(MLAsrCaptureConstants.ASR_SUB_ERROR_CODE);
// Troubleshoot based on the sub-error code.
text = "[" + text + "]" + subErrorCode;
}
Log.e(TAG, text);
break;
default:
break;
}
}
}
3. Integrating the Audio File Transcription Service
(1) Dynamically apply for the permissions.
Code:
private static final int REQUEST_EXTERNAL_STORAGE = 1;
private static final String[] PERMISSIONS_STORAGE = {
Manifest.permission.READ_EXTERNAL_STORAGE,
Manifest.permission.WRITE_EXTERNAL_STORAGE };
public static void verifyStoragePermissions(Activity activity) {
// Check if the write permission has been granted.
int permission = ActivityCompat.checkSelfPermission(activity,
Manifest.permission.WRITE_EXTERNAL_STORAGE);
if (permission != PackageManager.PERMISSION_GRANTED) {
// The permission has not been granted. Prompt the user to grant it.
ActivityCompat.requestPermissions(activity, PERMISSIONS_STORAGE,
REQUEST_EXTERNAL_STORAGE);
}
}
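For example, this helper can be called before any local file is selected for transcription, such as in the activity's onCreate. A minimal usage sketch follows; the layout name is illustrative.
Code:
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_audio_transcription); // Illustrative layout.
// Make sure storage access is granted before reading local audio files.
verifyStoragePermissions(this);
}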
(2) Create and initialize an audio transcription engine, and create an audio file transcription configurator.
Code:
// Set the API key.
MLApplication.getInstance().setApiKey(AGConnectServicesConfig.fromContext(getApplication()).getString("client/api_key"));
MLRemoteAftSetting setting = new MLRemoteAftSetting.Factory()
// Set the transcription language code, complying with the BCP 47 standard. Currently, Mandarin Chinese and English are supported.
.setLanguageCode("zh")
// Set whether to automatically add punctuations to the converted text. The default value is false.
.enablePunctuation(true)
// Set whether to generate the text transcription result of each audio segment and the corresponding audio time shift. The default value is false. (This parameter needs to be set only when the audio duration is less than 1 minute.)
.enableWordTimeOffset(true)
// Set whether to output the time shift of a sentence in the audio file. The default value is false.
.enableSentenceTimeOffset(true)
.create();
// Create an audio transcription engine.
MLRemoteAftEngine engine = MLRemoteAftEngine.getInstance();
engine.init(this);
// Pass the listener callback to the audio transcription engine created beforehand.
engine.setAftListener(aftListener);
(3) Create a listener callback to process the audio file transcription result.
- Transcription of short audio files with a duration of 1 minute or shorter:
Code:
private MLRemoteAftListener aftListener = new MLRemoteAftListener() {
@Override
public void onResult(String taskId, MLRemoteAftResult result, Object ext) {
// Obtain the transcription result notification.
if (result.isComplete()) {
// Process the transcription result.
}
}
@Override
public void onError(String taskId, int errorCode, String message) {
// Callback upon a transcription error.
}
@Override
public void onInitComplete(String taskId, Object ext) {
// Reserved.
}
@Override
public void onUploadProgress(String taskId, double progress, Object ext) {
// Reserved.
}
@Override
public void onEvent(String taskId, int eventId, Object ext) {
// Reserved.
}
};
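Inside the isComplete() branch, the recognized text can be read with result.getText(), the same getter used in the long-audio callback below. A minimal sketch of the onResult body:
Code:
public void onResult(String taskId, MLRemoteAftResult result, Object ext) {
// Obtain the transcription result notification.
if (result != null && result.isComplete() && result.getText() != null) {
// Display the transcription of the short audio file.
Log.i(TAG, "Short audio transcription result: " + result.getText());
tvText.setText(result.getText());
}
}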
- Transcription of audio files with a duration longer than 1 minute:
Code:
private MLRemoteAftListener asrListener = new MLRemoteAftListener() {
@Override
public void onInitComplete(String taskId, Object ext) {
Log.e(TAG, "MLAsrCallBack onInitComplete");
// The long audio file is initialized and the transcription starts.
start(taskId);
}
@Override
public void onUploadProgress(String taskId, double progress, Object ext) {
Log.e(TAG, " MLAsrCallBack onUploadProgress");
}
@Override
public void onEvent(String taskId, int eventId, Object ext) {
// Used for the long audio file.
Log.e(TAG, "MLAsrCallBack onEvent" + eventId);
if (MLAftEvents.UPLOADED_EVENT == eventId) { // The file is uploaded successfully.
// Obtain the transcription result.
startQueryResult(taskId);
}
}
@Override
public void onResult(String taskId, MLRemoteAftResult result, Object ext) {
Log.e(TAG, "MLAsrCallBack onResult taskId is :" + taskId + " ");
if (result != null) {
Log.e(TAG, "MLAsrCallBack onResult isComplete: " + result.isComplete());
if (result.isComplete()) {
TimerTask timerTask = timerTaskMap.get(taskId);
if (null != timerTask) {
timerTask.cancel();
timerTaskMap.remove(taskId);
}
if (result.getText() != null) {
Log.e(TAG, taskId + " MLAsrCallBack onResult result is : " + result.getText());
tvText.setText(result.getText());
}
List<MLRemoteAftResult.Segment> words = result.getWords();
if (words != null && words.size() != 0) {
for (MLRemoteAftResult.Segment word : words) {
Log.e(TAG, "MLAsrCallBack word text is : " + word.getText() + ", startTime is : " + word.getStartTime() + ". endTime is : " + word.getEndTime());
}
}
List<MLRemoteAftResult.Segment> sentences = result.getSentences();
if (sentences != null && sentences.size() != 0) {
for (MLRemoteAftResult.Segment sentence : sentences) {
Log.e(TAG, "MLAsrCallBack sentence text is : " + sentence.getText() + ", startTime is : " + sentence.getStartTime() + ". endTime is : " + sentence.getEndTime());
}
}
}
}
}
@Override
public void onError(String taskId, int errorCode, String message) {
Log.i(TAG, "MLAsrCallBack onError: " + message + ", errorCode: " + errorCode);
switch (errorCode) {
case MLAftErrors.ERR_AUDIO_FILE_NOTSUPPORTED:
// The audio file format is not supported; convert the file and retry.
break;
}
}
};
// Upload a transcription task.
private void start(String taskId) {
Log.e(TAG, "start");
engine.setAftListener(asrListener);
engine.startTask(taskId);
}
// Obtain the transcription result.
private Map<String, TimerTask> timerTaskMap = new HashMap<>();
private void startQueryResult(final String taskId) {
Timer mTimer = new Timer();
TimerTask mTimerTask = new TimerTask() {
@Override
public void run() {
getResult(taskId);
}
};
// Periodically obtain the long audio file transcription result every 10s.
mTimer.schedule(mTimerTask, 5000, 10000);
// Clear timerTaskMap before destroying the UI.
timerTaskMap.put(taskId, mTimerTask);
}
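The timer task calls a getResult helper that the sample does not show. A possible sketch follows, assuming the engine's getLongAftResult method (used in the Kit's sample code) is what polls the server; its outcome is then delivered to asrListener.onResult.
Code:
// Query the transcription result of the long audio file; the result arrives in asrListener.onResult.
private void getResult(String taskId) {
Log.e(TAG, "getResult, taskId: " + taskId);
engine.getLongAftResult(taskId);
}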
(4) Obtain an audio file and upload it to the audio transcription engine.
Code:
// Obtain the URI of an audio file.
Uri uri = getFileUri();
// Obtain the audio duration.
Long audioTime = getAudioFileTimeFromUri(uri);
// Check whether the duration is longer than 60s.
if (audioTime < 60000) {
// uri indicates audio resources read from the local storage or recorder. Only local audio files with a duration not longer than 1 minute are supported.
this.taskId = this.engine.shortRecognize(uri, this.setting);
Log.i(TAG, "Short audio transcription.");
} else {
// longRecognize is an API used to convert audio files with a duration from 1 minute to 5 hours.
this.taskId = this.engine.longRecognize(uri, this.setting);
Log.i(TAG, "Long audio transcription.");
}
private Long getAudioFileTimeFromUri(Uri uri) {
Long time = null;
Cursor cursor = this.getContentResolver()
.query(uri, null, null, null, null);
if (cursor != null) {
if (cursor.moveToFirst()) {
time = cursor.getLong(cursor.getColumnIndexOrThrow(MediaStore.Video.Media.DURATION));
}
cursor.close();
} else {
// Fall back to MediaPlayer when the URI cannot be resolved through the content resolver.
MediaPlayer mediaPlayer = new MediaPlayer();
try {
mediaPlayer.setDataSource(String.valueOf(uri));
mediaPlayer.prepare();
time = Long.valueOf(mediaPlayer.getDuration());
} catch (IOException e) {
Log.e(TAG, "Failed to read the audio duration.");
} finally {
mediaPlayer.release();
}
}
return time;
}
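getFileUri is app-specific and not shown above. One simple possibility is to build the Uri from a local file path; this is a hypothetical helper, and the path is for illustration only (it would normally come from a file picker).
Code:
// Return the Uri of the audio file to transcribe.
private Uri getFileUri() {
// Illustrative path; replace with the file chosen by the user.
File audioFile = new File("/sdcard/AudioEdit/audio/music.mp3");
return Uri.fromFile(audioFile);
}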
For more details, you can go to:
- Reddit to join our developer discussion
- GitHub to download demos and sample codes
- Stack Overflow to solve any integration problems
Audio Editor Kit from HMS Core provides the audio source separation function, which allows you to separate human voices, accompaniments, and specific musical instrument sounds from a piece of audio. The image below shows the accompaniment separated from Dream It Possible.
[Image: the accompaniment separated from Dream It Possible]
Let's see how to implement this function.
Step 1: Prepare the File for Audio Source Separation
An MP3 audio file is recommended. If this is not possible, follow the instructions in Step 2 to convert your audio file to an MP3 file. What if the accompaniment to be separated is in a video file? No worries. Just extract the video's audio first by referring to the instructions in Step 2.
Step 2: Integrate Audio Editor Kit
Development Practice
Preparations
1. Configure the Maven repository address in the project-level build.gradle file.
Code:
buildscript {
repositories {
google()
jcenter()
// Configure the Maven repository address for the HMS Core SDK.
maven {url 'https://developer.huawei.com/repo/'}
}
dependencies {
...
// Add the AppGallery Connect plugin configuration.
classpath 'com.huawei.agconnect:agcp:1.4.2.300'
}
}
allprojects {
repositories {
google()
jcenter()
// Configure the Maven repository address for the HMS Core SDK.
maven {url 'https://developer.huawei.com/repo/'}
}
}
2. Add the following configuration under the declaration in the file header.
Code:
apply plugin: 'com.huawei.agconnect'
3. Add the build dependency on the Audio Editor SDK in the app-level build.gradle file.
Code:
dependencies{
implementation 'com.huawei.hms:audio-editor-ui:{version}'
}
4. Apply for the following permissions in the AndroidManifest.xml file:
Code:
<!-- Vibrate -->
<uses-permission android:name="android.permission.VIBRATE" />
<!-- Microphone -->
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<!-- Write into storage -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<!-- Read from storage -->
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<!-- Connect to Internet -->
<uses-permission android:name="android.permission.INTERNET" />
<!-- Obtain the network status -->
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<!-- Obtain the changed network connectivity state -->
<uses-permission android:name="android.permission.CHANGE_NETWORK_STATE" />
Code Development
1. Create your app's custom activity for selecting one or more audio files, and return their paths to the Audio Editor SDK in the following way:
Code:
// Return the audio file paths to the audio editing screen.
private void sendAudioToSdk() {
// Set filePath to the obtained audio file path.
String filePath = "/sdcard/AudioEdit/audio/music.aac";
ArrayList<String> audioList = new ArrayList<>();
audioList.add(filePath);
// Return the paths to the audio editing screen.
Intent intent = new Intent();
// Use HAEConstant.AUDIO_PATH_LIST provided by the Audio Editor SDK.
intent.putExtra(HAEConstant.AUDIO_PATH_LIST, audioList);
// Use HAEConstant.RESULT_CODE provided by the Audio Editor SDK as the result code.
this.setResult(HAEConstant.RESULT_CODE, intent);
finish();
}
2. Register the activity in the AndroidManifest.xml file as described in the following code. When you choose to import the selected audio files, the SDK will send an intent with the action value com.huawei.hms.audioeditor.chooseaudio to jump to the activity.
Code:
<activity android:name="Activity ">
<intent-filter>
<action android:name="com.huawei.hms.audioeditor.chooseaudio"/>
<category android:name="android.intent.category.DEFAULT"/>
</intent-filter>
</activity>
3. Launch the audio editing screen. When you tap Add audio, the SDK will automatically call the activity defined earlier. You can then edit the audio and add special effects to it. After these operations are complete, the edited audio can be exported.
Code:
HAEUIManager.getInstance().launchEditorActivity(this);
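In practice this call is usually wired to a UI control, for example a button's click listener. A minimal sketch follows; the view ID is illustrative.
Code:
// Open the audio editing screen when the user taps the edit button (view ID is illustrative).
findViewById(R.id.btn_edit_audio).setOnClickListener(view -> HAEUIManager.getInstance().launchEditorActivity(this));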
4. (Optional) Convert an audio file that is not in MP3 format to MP3.
Call transformAudioUseDefaultPath to convert the audio format and save the converted audio to the default directory.
Code:
// Convert the audio format.
HAEAudioExpansion.getInstance().transformAudioUseDefaultPath(context,inAudioPath, audioFormat, new OnTransformCallBack() {
// Called to receive the progress which ranges from 0 to 100.
@Override
public void onProgress(int progress) {
}
// Called when the conversion fails.
@Override
public void onFail(int errorCode) {
}
// Called when the conversion succeeds.
@Override
public void onSuccess(String outPutPath) {
}
// Called when the conversion is canceled.
@Override
public void onCancel() {
}
});
// Cancel format conversion.
HAEAudioExpansion.getInstance().cancelTransformAudio();
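For example, converting the AAC file used earlier in this article might look as follows. This is only a sketch: the path is illustrative, and "mp3" is assumed to be the accepted target-format string, in line with the MP3 recommendation above.
Code:
String inAudioPath = "/sdcard/AudioEdit/audio/music.aac"; // Illustrative source file.
HAEAudioExpansion.getInstance().transformAudioUseDefaultPath(context, inAudioPath, "mp3",
new OnTransformCallBack() {
@Override
public void onProgress(int progress) { }
@Override
public void onFail(int errorCode) { }
@Override
public void onSuccess(String outPutPath) {
// outPutPath points to the converted MP3 file, ready for audio source separation.
Log.d(TAG, "Converted to MP3: " + outPutPath);
}
@Override
public void onCancel() { }
});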
Call transformAudio to convert the audio format and save the converted audio to a specified directory.
Code:
// Convert the audio format.
HAEAudioExpansion.getInstance().transformAudio(context,inAudioPath, outAudioPath, new OnTransformCallBack(){
// Called to receive the progress which ranges from 0 to 100.
@Override
public void onProgress(int progress) {
}
// Called when the conversion fails.
@Override
public void onFail(int errorCode) {
}
// Called when the conversion succeeds.
@Override
public void onSuccess(String outPutPath) {
}
// Called when the conversion is canceled.
@Override
public void onCancel() {
}
});
// Cancel format conversion.
HAEAudioExpansion.getInstance().cancelTransformAudio();
5. (Optional) Call extractAudio to extract the audio containing the accompaniment to be separated from a video and save it to a specified directory.
Code:
// outAudioDir (optional): path of the directory for storing the extracted audio.
// outAudioName (optional): name of the extracted audio, which does not contain the file name extension.
HAEAudioExpansion.getInstance().extractAudio(context,inVideoPath,outAudioDir, outAudioName,new AudioExtractCallBack() {
@Override
public void onSuccess(String audioPath) {
Log.d(TAG, "ExtractAudio onSuccess : " + audioPath);
}
@Override
public void onProgress(int progress) {
Log.d(TAG, "ExtractAudio onProgress : " + progress);
}
@Override
public void onFail(int errCode) {
Log.i(TAG, "ExtractAudio onFail : " + errCode);
}
@Override
public void onCancel() {
Log.d(TAG, "ExtractAudio onCancel.");
}
});
// Cancel audio extraction.
HAEAudioExpansion.getInstance().cancelExtractAudio();
6. Call getInstruments and startSeparationTasks for audio source separation.
Code:
// Obtain the accompaniment ID using getInstruments and pass the ID to startSeparationTasks.
HAEAudioSeparationFile haeAudioSeparationFile = new HAEAudioSeparationFile();
haeAudioSeparationFile.getInstruments(new SeparationCloudCallBack<List<SeparationBean>>() {
@Override
public void onFinish(List<SeparationBean> response) {
// Called to receive the separation data including the accompaniment ID.
}
@Override
public void onError(int errorCode) {
// Called when an error occurs during separation.
}
});
// Set the parameter for separation.
List<String> instruments = new ArrayList<>();
instruments.add("accompaniment ID");
haeAudioSeparationFile.setInstruments(instruments);
// Start separating.
haeAudioSeparationFile.startSeparationTasks(inAudioPath, outAudioDir, outAudioName, new AudioSeparationCallBack() {
@Override
public void onResult(SeparationBean separationBean) { }
@Override
public void onFinish(List<SeparationBean> separationBeans) {}
@Override
public void onFail(int errorCode) {}
@Override
public void onCancel() {}
});
// Cancel audio source separation.
haeAudioSeparationFile.cancel();
After completing these steps, you can get the accompaniment you desire. To create something similar to the demo, you can use a video editing program to combine the accompaniment with images and lyrics.
References
For more details, you can go to:
Audio Editor Kit official website
Audio Editor Kit Development Documentation page, to find the documents you need
Reddit to join our developer discussion
GitHub to download AudioEditor Kit sample codes
Stack Overflow to solve any integration problems