Separate Audio Sources in a Tap with Audio Editor Kit

Audio Editor Kit from HMS Core provides an audio source separation function, which can separate human voice, accompaniment, and individual musical instrument sounds from an audio file. For example, it can extract the accompaniment from Dream It Possible.
Let's see how to implement this function.
Step 1 Prepare the File for Audio Source Separation
An MP3 audio file is recommended. If this is not possible, follow the instructions in Step 2 to convert your audio file to MP3. What if the accompaniment to be separated is in a video file? No worries: just extract the video's audio first by referring to the instructions in Step 2.
Step 2 Integrate Audio Editor Kit
Development Practice
Preparations
1. Configure the Maven repository address in the project-level build.gradle file.
Code:
buildscript {
repositories {
google()
jcenter()
// Configure the Maven repository address for the HMS Core SDK.
maven {url 'https://developer.huawei.com/repo/'}
}
dependencies {
...
// Add the AppGallery Connect plugin configuration.
classpath 'com.huawei.agconnect:agcp:1.4.2.300'
}
}
allprojects {
repositories {
google()
jcenter()
// Configure the Maven repository address for the HMS Core SDK.
maven {url 'https://developer.huawei.com/repo/'}
}
}
2. Add the following configuration under the declaration in the file header of the app-level build.gradle file.
Code:
apply plugin: 'com.huawei.agconnect'
3. Add the build dependency on the Audio Editor SDK in the app-level build.gradle file.
Code:
dependencies{
implementation 'com.huawei.hms:audio-editor-ui:{version}'
}
4. Apply for the following permissions in the AndroidManifest.xml file:
Code:
<!-- Vibrate -->
<uses-permission android:name="android.permission.VIBRATE" />
<!-- Microphone -->
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<!-- Write into storage -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<!-- Read from storage -->
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<!-- Connect to Internet -->
<uses-permission android:name="android.permission.INTERNET" />
<!-- Obtain the network status -->
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<!-- Obtain the changed network connectivity state -->
<uses-permission android:name="android.permission.CHANGE_NETWORK_STATE" />
Code Development
1. Create your app's custom activity and use it for selecting one or more audio files. Return their paths to the Audio Editor SDK in the following way:
Code:
// Return the audio file paths to the audio editing screen.
private void sendAudioToSdk() {
// Set filePath to the obtained audio file path.
String filePath = "/sdcard/AudioEdit/audio/music.aac";
ArrayList<String> audioList = new ArrayList<>();
audioList.add(filePath);
// Return the paths to the audio editing screen.
Intent intent = new Intent();
// Use HAEConstant.AUDIO_PATH_LIST provided by the Audio Editor SDK.
intent.putExtra(HAEConstant.AUDIO_PATH_LIST, audioList);
// Use HAEConstant.RESULT_CODE provided by the Audio Editor SDK as the result code.
this.setResult(HAEConstant.RESULT_CODE, intent);
finish();
}
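For context, here is a hypothetical skeleton of such a picker activity. The class name AudioPickActivity and the hard-coded path are placeholders; only the result contract (HAEConstant.AUDIO_PATH_LIST and HAEConstant.RESULT_CODE) comes from the SDK.
Code:
// Hypothetical picker skeleton; replace the hard-coded path with real file browsing.
public class AudioPickActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // In a real app, show a file list here and call sendAudioToSdk()
        // once the user confirms a selection.
        sendAudioToSdk();
    }

    private void sendAudioToSdk() {
        ArrayList<String> audioList = new ArrayList<>();
        audioList.add("/sdcard/AudioEdit/audio/music.aac");
        Intent intent = new Intent();
        intent.putExtra(HAEConstant.AUDIO_PATH_LIST, audioList);
        setResult(HAEConstant.RESULT_CODE, intent);
        finish();
    }
}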
2. Register the activity in the AndroidManifest.xml file as described in the following code. When you choose to import the selected audio files, the SDK will send an intent with the action value com.huawei.hms.audioeditor.chooseaudio to jump to the activity.
Code:
<activity android:name="Activity ">
<intent-filter>
<action android:name="com.huawei.hms.audioeditor.chooseaudio"/>
<category android:name="android.intent.category.DEFAULT"/>
</intent-filter>
</activity>
3. Launch the audio editing screen. When you tap Add audio, the SDK will automatically call the activity defined earlier. Then you can edit the audio and add special effects. Once these operations are complete, the edited audio can be exported.
Code:
HAEUIManager.getInstance().launchEditorActivity(this);
4. (Optional) Convert a non-MP3 audio file to MP3.
Call transformAudioUseDefaultPath to convert the audio format and save the converted audio to the default directory.
Code:
// Convert the audio format.
HAEAudioExpansion.getInstance().transformAudioUseDefaultPath(context,inAudioPath, audioFormat, new OnTransformCallBack() {
// Called to receive the progress which ranges from 0 to 100.
@Override
public void onProgress(int progress) {
}
// Called when the conversion fails.
@Override
public void onFail(int errorCode) {
}
// Called when the conversion succeeds.
@Override
public void onSuccess(String outPutPath) {
}
// Called when the conversion is canceled.
@Override
public void onCancel() {
}
});
// Cancel format conversion.
HAEAudioExpansion.getInstance().cancelTransformAudio();
Call transformAudio to convert the audio format and save the converted audio to a specified directory.
Code:
// Convert the audio format.
HAEAudioExpansion.getInstance().transformAudio(context,inAudioPath, outAudioPath, new OnTransformCallBack(){
// Called to receive the progress which ranges from 0 to 100.
@Override
public void onProgress(int progress) {
}
// Called when the conversion fails.
@Override
public void onFail(int errorCode) {
}
// Called when the conversion succeeds.
@Override
public void onSuccess(String outPutPath) {
}
// Called when the conversion is canceled.
@Override
public void onCancel() {
}
});
// Cancel format conversion.
HAEAudioExpansion.getInstance().cancelTransformAudio();
5. (Optional) Call extractAudio to extract the audio containing the accompaniment to be separated from a video file to a specified directory.
Code:
// outAudioDir (optional): path of the directory for storing the extracted audio.
// outAudioName (optional): name of the extracted audio, which does not contain the file name extension.
HAEAudioExpansion.getInstance().extractAudio(context,inVideoPath,outAudioDir, outAudioName,new AudioExtractCallBack() {
@Override
public void onSuccess(String audioPath) {
Log.d(TAG, "ExtractAudio onSuccess : " + audioPath);
}
@Override
public void onProgress(int progress) {
Log.d(TAG, "ExtractAudio onProgress : " + progress);
}
@Override
public void onFail(int errCode) {
Log.i(TAG, "ExtractAudio onFail : " + errCode);
}
@Override
public void onCancel() {
Log.d(TAG, "ExtractAudio onCancel.");
}
});
// Cancel audio extraction.
HAEAudioExpansion.getInstance().cancelExtractAudio();
6. Call getInstruments and startSeparationTasks for audio source separation.
Code:
// Obtain the accompaniment ID using getInstruments and pass the ID to startSeparationTasks.
HAEAudioSeparationFile haeAudioSeparationFile = new HAEAudioSeparationFile();
haeAudioSeparationFile.getInstruments(new SeparationCloudCallBack<List<SeparationBean>>() {
@Override
public void onFinish(List<SeparationBean> response) {
// Called to receive the separation data including the accompaniment ID.
}
@Override
public void onError(int errorCode) {
// Called when an error occurs during separation.
}
});
// Set the parameters for separation.
List<String> instruments = new ArrayList<>();
instruments.add("accompaniment ID");
haeAudioSeparationFile.setInstruments(instruments);
// Start separating.
haeAudioSeparationFile.startSeparationTasks(inAudioPath, outAudioDir, outAudioName, new AudioSeparationCallBack() {
    @Override
    public void onResult(SeparationBean separationBean) {
        // Called when the result of a single separation task is returned.
    }
    @Override
    public void onFinish(List<SeparationBean> separationBeans) {
        // Called when all separation tasks are complete.
    }
    @Override
    public void onFail(int errorCode) {
        // Called when separation fails.
    }
    @Override
    public void onCancel() {
        // Called when separation is canceled.
    }
});
// Cancel audio source separation.
haeAudioSeparationFile.cancel();
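To make the flow above concrete, here is a minimal sketch that picks the accompaniment entry returned by getInstruments and feeds its ID to startSeparationTasks. The accessor names getInstrumentName() and getInstrumentId() are assumptions for illustration (verify them against the Audio Editor Kit API reference), and separationCallBack stands for an AudioSeparationCallBack like the one shown above.
Code:
haeAudioSeparationFile.getInstruments(new SeparationCloudCallBack<List<SeparationBean>>() {
    @Override
    public void onFinish(List<SeparationBean> response) {
        for (SeparationBean bean : response) {
            // getInstrumentName()/getInstrumentId() are assumed accessor names.
            if ("accompaniment".equals(bean.getInstrumentName())) {
                List<String> instruments = new ArrayList<>();
                instruments.add(bean.getInstrumentId());
                haeAudioSeparationFile.setInstruments(instruments);
                haeAudioSeparationFile.startSeparationTasks(
                        inAudioPath, outAudioDir, outAudioName, separationCallBack);
            }
        }
    }
    @Override
    public void onError(int errorCode) {
        Log.e(TAG, "getInstruments failed: " + errorCode);
    }
});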
After completing these steps, you can get the accompaniment you desire. To create something similar to the demo, you can use a video editing program to synthesize the accompaniment with images and lyrics.
References
For more details, you can go to:
Audio Editor Kit official website
Audio Editor Kit Development Documentation page, to find the documents you need
Reddit to join our developer discussion
GitHub to download Audio Editor Kit sample code
Stack Overflow to solve any integration problems


Related

How do You Equip Your App with Audio Playback Capabilities Using Audio Kit?

Unlike text or video, when users consume audio content, they can also do something else while they're listening. This is why users tend to choose audio, rather than text or video, when commuting or doing housework.
This makes audio playback a valuable addition for many apps. For example, fitness and health apps are more engaging when they have the ability to play music or audiobooks, while education apps are more effective when they provide useful audio courses, and ringtone apps need to be able to play a variety of ringtones.
So then, how do you build audio capabilities for your app?
The answer is, by using HUAWEI Audio Kit.
It provides you with a range of capabilities, including audio encoding and decoding at both the hardware level and system bottom layer.
Audio Kit provides the following functions:
- Play audio: apps can decode and play high-resolution audio files of up to 384 kHz/24 bit.
- Control playback: users can play, pause, play previous, play next, stop, and drag the progress bar.
- Adjust volume: users can increase or decrease the volume.
- Manage playlists: gives users the ability to view, save, and delete playlists, as well as add songs to a playlist.
- Manage play modes: apps can provide sequential playback, repeat a playlist, repeat a song, and shuffle songs.
- Save progress: users can save their playback progress and start from where they left off.
- Cache and encrypt audio content.
You can find the demo source code on GitHub.
Development Practice
1. Integrate the HMS Core Audio SDK
1.1 Configure the Maven Repository Address for the Audio SDK
Step 1 Open the build.gradle file in the root directory of your Android Studio project.
Step 2 Configure the Maven repository address and add the gradle configuration.
- Go to allprojects > repositories and configure the Maven repository address for the Audio SDK.
- Go to buildscript > repositories and configure the Maven repository address for the Audio SDK.
- Go to buildscript > dependencies and add the gradle configuration.
Code:
<p style="line-height: 1.5em;">buildscript {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
dependencies {
classpath 'com.android.tools.build:gradle:3.4.2'
// NOTE: Do not place your app dependencies here; you need to put them
// in the individual module build.gradle files
}
}
allprojects {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
}
1.2 Add Build Dependencies
Step 1 Open the build.gradle file in the app directory.
Step 2 Add build dependencies in the dependencies section.
Code:
<p style="line-height: 1.5em;">dependencies {
implementation 'com.huawei.hms:audiokit-player:{version}'
}
1.3 Synchronize the Project
Once you have completed the configuration above, click the synchronization icon on the toolbar to synchronize the build.gradle file.
2 Configure Obfuscation Scripts
Before you build the APK, configure the obfuscation file to prevent the HMS Core SDK from being obfuscated.
The obfuscation configuration file is proguard-rules.pro for Android Studio.
Step 1 Open the obfuscation configuration file proguard-rules.pro in the app directory.
Step 2 Remove the HMS Core SDK from obfuscation.
Code:
<p style="line-height: 1.5em;">-ignorewarnings
-keepattributes *Annotation*
-keepattributes Exceptions
-keepattributes InnerClasses
-keepattributes Signature
-keepattributes SourceFile,LineNumberTable
-keep class com.huawei.hianalytics.**{*;}
-keep class com.huawei.updatesdk.**{*;}
-keep class com.huawei.hms.**{*;}
Step 3 If you are using AndResGuard, add its trustlist to the obfuscation configuration file.
Code:
<p style="line-height: 1.5em;">"R.string.hms*",
"R.string.connect_server_fail_prompt_toast",
"R.string.getting_message_fail_prompt_toast",
"R.string.no_available_network_prompt_toast",
"R.string.third_app_*",
"R.string.upsdk_*",
"R.layout.hms*",
"R.layout.upsdk_*",
"R.drawable.upsdk*",
"R.color.upsdk*",
"R.dimen.upsdk*",
"R.style.upsdk*",
"R.string.agc*"
3 Adding Permissions
The Audio SDK requires permissions to access the network, obtain the network status, operate SD cards, and read data from the Android media library. Declare these permissions in the Manifest file:
Code:
<p style="line-height: 1.5em;">// Permission to access the network.
<uses-permission android:name="android.permission.INTERNET" />
// Permission to obtain the network status.
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
//Permission to write data into the SD card.
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
// Permission to read data from the SD card.
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
// Permission to read data from the Android media library.
<uses-permission android:name="android.permission.READ_MEDIA_STORAGE" />
</p>
4 Developing Your App
Step 1 Create an audio management instance by calling HwAudioManager. Then manage audio playback with HwAudioPlayerManager, audio queues with HwAudioQueueManager, and audio configurations with HwAudioConfigManager.
Code:
<p style="line-height: 1.5em;">private HwAudioPlayerManager mHwAudioPlayerManager;
private HwAudioConfigManager mHwAudioConfigManager;
private HwAudioQueueManager mHwAudioQueueManager;
public void createHwAudioManager() {
// Create a configuration instance, including various playback-related configurations.
HwAudioPlayerConfig hwAudioPlayerConfig = new HwAudioPlayerConfig(MainActivity.this);
// Create a control instance.
HwAudioManagerFactory.createHwAudioManager(hwAudioPlayerConfig, new HwAudioConfigCallBack() {
// Return the control instance through callback.
@Override
public void onSuccess(HwAudioManager hwAudioManager) {
try {
Log.i(TAG, "createHwAudioManager onSuccess");
// Obtain the playback control instance.
mHwAudioPlayerManager = hwAudioManager.getPlayerManager();
// Obtain the configuration control instance.
mHwAudioConfigManager = hwAudioManager.getConfigManager();
// Obtain the queue control instance.
mHwAudioQueueManager = hwAudioManager.getQueueManager();
} catch (Exception e) {
Log.i(TAG, "player init fail");
}
}
@Override
public void onError(int errorCode) {
Log.w(TAG, "init err:" + errorCode);
}
});
}
Step 2 Create a playlist and play songs.
Code:
<p style="line-height: 1.5em;">public void play() {
if (mHwAudioPlayerManager != null) {
// Create a playlist.
String path = "https://lfmusicservice.hwcloudtest.cn:18084/HMS/audio/Taoge-chengshilvren.mp3";
// Create an audio object and write audio information into the object.
HwAudioPlayItem item = new HwAudioPlayItem();
// Set the audio title.
item.setAudioTitle("Playing input song");
// Set the audio ID.
item.setAudioId(String.valueOf(path.hashCode()));
// Set whether audio is online.
item.setOnline(1);
// Set the online audio URL.
item.setOnlinePath(path);
List<HwAudioPlayItem> playItemList = new ArrayList<>();
playItemList.add(item);
// Play songs.
mHwAudioPlayerManager.playList(playItemList, 0, 0);
}
}
Step 3 Use instances. The following are examples. For more details, see Audio Kit's Management APIs.
- Clear the playback cache.
Code:
<p style="line-height: 1.5em;">public void clearPlayCache() {
if (mHwAudioConfigManager != null) {
mHwAudioConfigManager.clearPlayCache();
}
}
- Check whether the current playback queue is empty.
Code:
public boolean isQueueEmpty() {
if (mHwAudioQueueManager != null) {
return mHwAudioQueueManager.isQueueEmpty();
}
return false;
}
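The function list above also mentions playback control. As a complement to Step 3, here is a hedged sketch of the basic controls on HwAudioPlayerManager; the method names (pause, playNext, seekTo) follow Audio Kit's management APIs, but verify them against the official reference before use.
Code:
public void pause() {
    if (mHwAudioPlayerManager != null) {
        mHwAudioPlayerManager.pause();
    }
}

public void playNextSong() {
    if (mHwAudioPlayerManager != null) {
        mHwAudioPlayerManager.playNext();
    }
}

// Seek within the current song; the position is in milliseconds.
public void seekTo(int positionMs) {
    if (mHwAudioPlayerManager != null) {
        mHwAudioPlayerManager.seekTo(positionMs);
    }
}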
And that's it! You've equipped your app with audio playback capabilities.

How Can I Quickly Integrate AppGallery Connect Cloud Storage into a Unity App?

HUAWEI AppGallery Connect Cloud Storage provides maintenance-free cloud storage functions.
In this post, we’ll walk you through the steps required for integrating this service in Unity. You can access the Cloud Storage sample code on GitHub.
1. Test Environment
SDK Version: agconnect-storage:1.3.1.100
Platform: Unity 2019.4.17f1c1
Test Device: HONOR Magic 2
AppGallery Connect:
https://developer.huawei.com/consumer/en/service/josp/agc/index.html
2. Applying for Cloud Storage
Currently, Cloud Storage is still in beta status. To use the service, you need to send an application by email. For details, check:
https://developer.huawei.com/consum...Gallery-connect-Guides/agc-cloudstorage-apply
3. Importing the Unity Package
Unity materials:
https://docs.unity.cn/cn/Packages-cn/[email protected]/manual/cloudstorage.html
1. Download Unity Hub here and install Unity Editor.
2. Configure the Android environment.
3. Import the HuaweiServices package. Download the package from the Unity Asset Store. In Unity Editor, choose Assets > Import package.
Select the required package and click Import.
4. Completing Configurations in AppGallery Connect
1. Sign in to AppGallery Connect, and click the created app.
2. Go to My projects > Build > Cloud Storage and click Enable now. Set the default storage instance as required.
To permit Cloud Storage to read and write data without authorization, add the following code:
XML:
agc.cloud.storage[
match: /{bucket}/{path=**} {
allow read, write: if true;
}
]
3. Go to Project settings > General information and download the latest agconnect-services.json file.
4. Save the file to the Assets\Plugins\Android directory of your Unity project.
If the downloaded JSON file does not contain the default_storage parameter under cloudstorage, you need to add it manually. Set its value to the storage instance you just configured.
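For reference, the relevant fragment of agconnect-services.json would then look roughly like the sketch below; the surrounding fields are omitted and the instance name is a placeholder.
Code:
"cloudstorage": {
    "default_storage": "your-default-storage-instance"
}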
5. Completing Project Information in Unity Editor
1. Choose Edit > Project Settings > Player > Publish Settings. In the Build area, select the items for Android according to your requirements.
2. In Unity Editor, choose Edit > Project Settings > Player > Other Settings, and set the package name to match the package name you set in AppGallery Connect.
3. Add the following code to the project-level baseProjectTemplate.gradle file in the Assets\Plugins\Android directory:
XML:
allprojects {
    buildscript {
        repositories {
            maven { url 'https://developer.huawei.com/repo/' }
        }
        dependencies {
            classpath 'com.android.tools.build:gradle:3.4.0'
            classpath 'com.huawei.agconnect:agcp:1.4.2.301'
            **BUILD_SCRIPT_DEPS**
        }
    }
    repositories {
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
4. Add the following code to the app-level launcherTemplate.gradle file in the Assets\Plugins\Android directory:
XML:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
dependencies {
    implementation project(':unityLibrary')
    implementation "com.huawei.agconnect:agconnect-storage:1.3.1.100"
    implementation 'com.huawei.agconnect:agconnect-auth:1.4.2.301'
}
5. Configure the manifest file to add the corresponding permissions.
XML:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<application
    android:allowBackup="false"
    android:requestLegacyExternalStorage="true">
    ...
</application>
6. Using Cloud Storage
1. Configure the UI layout.
In Unity Editor, choose GameObject > UI > Button and create a button. Click the button, click Add Component on the right to create a script file, and then create the corresponding method.
2. Initialize the storage instance and apply for the read and write permissions.
C#:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using HuaweiService;
using HuaweiService.CloudStorage;
using System;
public delegate void SuccessCallBack<T>(T o);
public class HmsSuccessListener<T>:OnSuccessListener{
public SuccessCallBack<T> CallBack;
public HmsSuccessListener(SuccessCallBack<T> c){
CallBack = c;
}
public void onSuccess(T arg0)
{
Debug.Log("OnSuccessListener onSuccess");
if(CallBack != null)
{
CallBack.Invoke(arg0);
}
}
public override void onSuccess(AndroidJavaObject arg0){
Debug.Log("OnSuccessListener onSuccess");
if(CallBack !=null)
{
Type type = typeof(T);
IHmsBase ret = (IHmsBase)Activator.CreateInstance(type);
ret.obj = arg0;
CallBack.Invoke((T)ret);
}
}
}
public class testStorageDemo : MonoBehaviour
{
private AGCStorageManagement mAGCStorageManagement;
private string[] permissions =
{
"android.permission.WRITE_EXTERNAL_STORAGE",
"android.permission.READ_EXTERNAL_STORAGE",
};
// Start is called before the first frame update
void Start()
{
// Obtain the current Unity activity and request the storage permissions at startup.
AndroidJavaClass javaUnityPlayer = new AndroidJavaClass("com.unity3d.player.UnityPlayer");
AndroidJavaObject currentActivity = javaUnityPlayer.GetStatic<AndroidJavaObject>("currentActivity");
Activity activity = HmsUtil.GetHmsBase<Activity>(currentActivity);
ActivityCompat.requestPermissions(activity, permissions, 1);
}
// Update is called once per frame
void Update()
{
}
public void initAGCStorageManagement() {
// Pass the name of the storage instance configured in AppGallery Connect.
mAGCStorageManagement = AGCStorageManagement.getInstance("9105385871708601205-ffeon");
Debug.Log("Instance is: " + mAGCStorageManagement);
}
public class MySuccessListener : OnSuccessListener
{
private string m_name;
public MySuccessListener(string name)
{
m_name = name;
}
public MySuccessListener()
{
m_name = "default";
}
public override void onSuccess(AndroidJavaObject ex)
{
Debug.Log("download success: " + m_name);
}
}
}
3. Upload a file.
C#:
public void uploadFile() {
if (mAGCStorageManagement == null){
initAGCStorageManagement();
}
string fileName = "testUnity.jpg";
StorageReference reference = mAGCStorageManagement.getStorageReference(fileName);
string FileFolder = "/storage/emulated/0/AGCSdk/";
string FilePath = FileFolder + fileName;
Debug.Log("FilePath = " + FilePath);
File file = new File(FilePath);
Debug.Log("UploadFile = " + file);
UploadTask task = reference.putFile(file);
task.addOnSuccessListener(new MySuccessListener());
Debug.Log("UploadFile done:");
}
4. Download a file.
C#:
public void downloadFile() {
if (mAGCStorageManagement == null){
initAGCStorageManagement();
}
StorageReference reference = mAGCStorageManagement.getStorageReference("test.jpg");
string FileFolder = "/storage/emulated/0/AGCSdk/";
string FilePath = FileFolder + "test.jpg";
Debug.Log("FilePath = " + FilePath);
File file = new File(FilePath);
Debug.Log("File = " + file);
DownloadTask task = reference.getFile(file);
task.addOnSuccessListener(new MySuccessListener("NormalListener"));
Debug.Log("DownloadTask Result:");
}
5. Delete a file.
C#:
public void deleteFile() {
if (mAGCStorageManagement == null){
initAGCStorageManagement();
}
StorageReference reference = mAGCStorageManagement.getStorageReference("testUnity.jpg");
reference.delete();
Debug.Log("DeleteFileTest success.");
}
6. Package and test the app.
After you have packaged and installed the test app, you can tap each button, and view related logs in the Android Studio Logcat.
You can view the files you upload to or download from AppGallery Connect.
7. Summary
You can use Cloud Storage to store your Unity game data on the cloud without the hassle of server building and O&M. With AppGallery Connect, a web console, you can easily manage your files on the cloud side.
In addition to file upload, download, and deletion, Cloud Storage also offers a metadata setting function.
For reference:
Cloud Storage development guide:
https://developer.huawei.com/consum...-connect-Guides/agc-cloudstorage-introduction
Unity materials:
https://docs.unity.cn/cn/Packages-cn/[email protected]/manual/cloudstorage.html
Cloud Storage codelab:
https://github.com/AppGalleryConnect/agc-demos/tree/main/Android/agc-cloudstorage-demo-java

Integrating HUAWEI Analytics Kit Using Unity

This document describes how to integrate Analytics Kit using the official Unity asset. After the integration, your app can use the services of this Kit on HMS mobile phones.
For details about Analytics Kit, please visit HUAWEI Developers.
1.1 Preparations
1.1.1 Importing Unity Assets
1.1.2 Generating .gradle Files
1. Enable project gradle.
Go to Edit > Project Settings > Player in Unity, click the Android icon, and go to Publishing Settings > Build.
Enable Custom Base Gradle Template.
Enable Custom Launcher Gradle Template.
Enable Custom Main Gradle Template.
Enable Custom Main Manifest.
2. Signature
You can use an existing keystore file or create a new one to sign your app.
Go to Edit > Project Settings > Player in Unity, click the Android icon, and go to Publishing Settings > Keystore Manager > Keystore... > Create New.
Enter the keystore password each time you reopen Unity; otherwise, you cannot build the APK.
1.1.3 Configuring .gradle Files and the AndroidManifest.xml File
1. Configure the BaseProjectTemplate.gradle file.
Code:
<p style="line-height: 1.5em;">Configure the Maven repository address.
buildscript {
repositories {**ARTIFACTORYREPOSITORY**
google()
jcenter()
maven { url 'https://developer.huawei.com/repo/' }
}
dependencies {
// If you are changing the Android Gradle Plugin version, make sure it is compatible with the Gradle version preinstalled with Unity.
// For the Gradle version preinstalled with Unity, please visit https://docs.unity3d.com/Manual/android-gradle-overview.html.
// For the official Gradle and Android Gradle Plugin compatibility table, please visit https://developer.android.com/studio/releases/gradle-plugin#updating-gradle.
// To specify a custom Gradle version in Unity, go to Preferences > External Tools, deselect Gradle Installed with Unity (recommended), and specify a path to a custom Gradle version.
classpath 'com.android.tools.build:gradle:3.4.0'
classpath 'com.huawei.agconnect:agcp:1.2.1.301'
**BUILD_SCRIPT_DEPS**
}
}
repositories {**ARTIFACTORYREPOSITORY**
google()
jcenter()
maven { url 'https://developer.huawei.com/repo/' }
flatDir {
dirs "${project(':unityLibrary').projectDir}/libs"
}
}
}
2. Configure the launcherTemplate.gradle file.
Code:
<p style="line-height: 1.5em;">// Generated by Unity. Remove this comment to prevent overwriting when exporting again.
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
dependencies {
implementation project(':unityLibrary')
implementation 'com.huawei.hms:hianalytics:5.1.0.300'
implementation 'com.android.support:appcompat-v7:28.0.0'
implementation 'com.huawei.agconnect:agconnect-core:1.2.0.300'
}
3. Configure the mainTemplate.gradle file.
Code:
<p style="line-height: 1.5em;">apply plugin: 'com.android.library'
apply plugin: 'com.huawei.agconnect'
dependencies {
implementation fileTree(dir: 'libs', include: ['*.jar'])
implementation 'com.huawei.agconnect:agconnect-core:1.2.0.300'
implementation 'com.huawei.hms:hianalytics:5.0.0.301'
**DEPS**}
4. Configure the AndroidManifest.xml file.
Code:
<p style="line-height: 1.5em;"><?xml version="1.0" encoding="utf-8"?>
<!-- Generated by Unity. Remove this comment to prevent overwriting when exporting again. -->
<manifest
xmlns:android="http://schemas.android.com/apk/res/android"
package="com.unity3d.player"
xmlns:tools="http://schemas.android.com/tools">
<application>
<activity android:name="com.hms.hms_analytic_activity.HmsAnalyticActivity"
android:theme="@style/UnityThemeSelector">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
<intent-filter>
<action android:name="android.intent.action.VIEW" />
<category android:name="android.intent.category.DEFAULT" />
<category android:name="android.intent.category.BROWSABLE" />
<data
android:host="unity.cn"
android:scheme="https" />
</intent-filter>
<meta-data android:name="unityplayer.UnityActivity" android:value="true" />
</activity>
</application>
</manifest>
1.1.4 Adding the agconnect-services.json File
1. Create an app by following instructions in Creating an AppGallery Connect Project and Adding an App to the Project.
Run keytool -list -v -keystore C:\TestApp.keyStore to generate the SHA-256 certificate fingerprint based on the keystore file of the app. Then, configure the fingerprint in AppGallery Connect.
2. Download the agconnect-services.json file and place it in the Assets/Plugins/Android directory of your Unity project.
1.1.5 Enabling HUAWEI Analytics
For details, please refer to the development guide.
1.1.6 Adding the HmsAnalyticActivity.java File
1. Destination directory:
2. File content:
Code:
<p style="line-height: 1.5em;">package com.hms.hms_analytic_activity;
import android.os.Bundle;
import com.huawei.hms.analytics.HiAnalytics;
import com.huawei.hms.analytics.HiAnalyticsTools;
import com.unity3d.player.UnityPlayerActivity;
import com.huawei.agconnect.appmessaging.AGConnectAppMessaging;
import com.huawei.hms.aaid.HmsInstanceId;
import com.hw.unity.Agc.Auth.ThirdPartyLogin.LoginManager;
import android.content.Intent;
import java.lang.Boolean;
import com.unity3d.player.UnityPlayer;
import androidx.core.app.ActivityCompat;
public class HmsAnalyticActivity extends UnityPlayerActivity {
private AGConnectAppMessaging appMessaging;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
HiAnalyticsTools.enableLog();
HiAnalytics.getInstance(this);
appMessaging = AGConnectAppMessaging.getInstance();
if(appMessaging != null){
appMessaging.setFetchMessageEnable(true);
appMessaging.setDisplayEnable(true);
appMessaging.setForceFetch();
}
LoginManager.getInstance().initialize(this);
boolean pretendCallMain = false;
if(pretendCallMain == true){
main();
}
}
private static void callCrash() {
throwCrash();
}
private static void throwCrash() {
throw new NullPointerException();
}
public static void main(){
JavaCrash();
}
private static void JavaCrash(){
new Thread(new Runnable() {
@Override
public void run() { // Sub-thread.
UnityPlayer.currentActivity.runOnUiThread(new Runnable() {
@Override
public void run() {
callCrash();
}
});
}
}).start();
}
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data)
{
LoginManager.getInstance().onActivityResult(requestCode, resultCode, data);
}
}
1.2 App Development with the Official Asset
1.2.1 Sample Code
Code:
<p style="line-height: 1.5em;">using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using HuaweiHms;
public class AnalyticTest : MonoBehaviour
{
private HiAnalyticsInstance instance;
private int level = 0;
// Start() is called before the first frame update.
void Start()
{
}
// Update() is called once per frame.
void Update()
{
}
public AnalyticTest()
{
// HiAnalyticsTools.enableLog();
// instance = HiAnalytics.getInstance(new Context());
}
public void AnalyticTestMethod()
{
HiAnalyticsTools.enableLog();
instance = HiAnalytics.getInstance(new Context());
instance.setAnalyticsEnabled(true);
Bundle b1 = new Bundle();
b1.putString("test", "123456");
instance.onEvent("debug", b1);
}
public void SetUserId()
{
instance.setUserId("unity test Id");
// Util.showToast("userId set");
}
public void SendProductId()
{
Bundle b1 = new Bundle();
b1.putString(HAParamType.PRODUCTID, "123456");
instance.onEvent(HAEventType.ADDPRODUCT2CART, b1);
// Util.showToast("product id set");
}
public void SendAnalyticEnable()
{
enabled = !enabled;
instance.setAnalyticsEnabled(enabled);
// TestTip.Inst.ShowText(enabled ? "ENABLED" : "DISABLED");
}
public void CreateClearCache()
{
instance.clearCachedData();
// Util.showToast("Clear Cache");
}
public void SetFavoriteSport()
{
instance.setUserProfile("favor_sport", "running");
// Util.showToast("set favorite");
}
public void SetPushToken()
{
instance.setPushToken("fffff");
// Util.showToast("set push token as ffff");
}
public void setMinActivitySessions()
{
instance.setMinActivitySessions(10000);
// Util.showToast("setMinActivitySessions 10000");
}
public void setSessionDuration()
{
instance.setSessionDuration(900000);
// Util.showToast("setMinActivitySessions 900000");
}
public void getUserProfiles()
{
getUserProfiles(false);
getUserProfiles(true);
}
public void getUserProfiles(bool preDefined)
{
var profiles = instance.getUserProfiles(preDefined);
var keySet = profiles.keySet();
var keyArray = keySet.toArray();
foreach (var key in keyArray)
{
// TestTip.Inst.ShowText($"{key}: {profiles.getOrDefault(key, "default")}");
}
}
public void pageStart()
{
instance.pageStart("page test", "page test");
// TestTip.Inst.ShowText("set page start: page test, page test");
}
public void pageEnd()
{
instance.pageEnd("page test");
// TestTip.Inst.ShowText("set page end: page test");
}
public void enableLog()
{
HiAnalyticsTools.enableLog(level + 3);
// TestTip.Inst.ShowText($"current level {level + 3}");
level = (level + 1) % 4;
}
}
1.2.2 Testing the APK
1. Generate the APK.
Go to File > Build Settings > Android, click Switch Platform, and then Build And Run.
2. Enable the debug mode.
3. Go to the real-time overview page of Analytics Kit in AppGallery Connect.
Sign in to AppGallery Connect and click My projects. Select one of your projects and go to HUAWEI Analytics > Overview > Real-time overview.
4. Call AnalyticTestMethod() and view the reported analysis events.
Our official website
Demo for Analytics Kit
Our Development Documentation page, to find the documents you need:
Android SDK
Web SDK
Quick App SDK
If you have any questions about HMS Core, you can post them in the community on the HUAWEI Developers website or submit a ticket online.
We’re looking forward to seeing what you can achieve with HUAWEI Analytics!
More Information
To join in on developer discussion forums
To download the demo app and sample code
For solutions to integration-related issues

How a Programmer Used 300 Lines of Code to Help His Grandma Shop Online with Voice Input

"John, why the writing pad is missing again?"
John, a programmer at Huawei, has a grandma who loves novelty, and lately she's been obsessed with online shopping. Familiarizing herself with major shopping apps and their functions proved to be a piece of cake, and she thought her online shopping experience would be effortless. Unfortunately, however, she was hindered by product searching.
John's grandma tended to use handwriting input. When using it, she would often make mistakes, like switching to another input method she found unfamiliar, or tapping on undesired characters or signs.
And it's not just shopping apps: most mobile apps feature interface designs oriented to younger users, so it's no wonder that elderly users often struggle to figure out how to use them.
John patiently helped his grandma search for products with handwriting input several times. But then, he decided to use his skills as a veteran coder to give his grandma the best possible online shopping experience. More specifically, instead of helping her adjust to the available input method, he was determined to create an input method that would conform to her usage habits.
Since his grandma tended to err during manual input, John developed an input method that converts speech into text. Grandma was enthusiastic about the new method, because it is remarkably easy to use. All she has to do is tap the recording button and say the product's name. The input method then recognizes what she has said and converts her speech into text.
Actual Effects
Real-time speech recognition and speech to text are ideal for a broad range of apps, including:
Game apps (online): Real-time speech recognition comes to users' aid when they team up with others. It frees up users' hands for controlling the action, sparing them from having to type to communicate with their partners. It can also free users from any potential embarrassment related to voice chatting during gaming.
Work apps: Speech to text can play a vital role during long conferences, where typing to keep meeting minutes can be tedious and inefficient, with key details being missed. Using speech to text is much more efficient: during a conference, users can use this service to convert audio content into text; after the conference, they can simply retouch the text to make it more logical.
Learning apps: Speech to text can offer users an enhanced learning experience. Without the service, users often have to pause audio materials to take notes, resulting in a fragmented learning process. With speech to text, users can concentrate on listening intently to the material while it is being played, and rely on the service to convert the audio content into text. They can then review the text after finishing the entire course, to ensure that they've mastered the content.
How to Implement
Two services in HUAWEI ML Kit, automatic speech recognition (ASR) and audio file transcription, make it easy to implement the functions above.
ASR can recognize speech of up to 60s, and convert the input speech into text in real time, with recognition accuracy of over 95%. It currently supports Mandarin Chinese (including Chinese-English bilingual speech), English, French, German, Spanish, Italian, and Arabic.
- Real-time result output
- Available options: with and without speech pickup UI
- Endpoint detection: start and end points can be accurately located.
- Silence detection: no voice packet is sent for silent portions.
- Intelligent conversion to digital formats: for example, the year 2021 is recognized from voice input.
Audio file transcription can convert an audio file of up to five hours into text with punctuation, and automatically segment the text for greater clarity. In addition, this service can generate text with timestamps, facilitating further function development. In this version, both Chinese and English are supported.
Development Procedures
1. Preparations
(1) Configure the Huawei Maven repository address, and put the agconnect-services.json file under the app directory.
Open the build.gradle file in the root directory of your Android Studio project.
Add the AppGallery Connect plugin and the Maven repository.
- Go to allprojects > repositories and configure the Maven repository address for the HMS Core SDK.
- Go to buildscript > repositories and configure the Maven repository address for the HMS Core SDK.
- If the agconnect-services.json file has been added to the app, go to buildscript > dependencies and add the AppGallery Connect plugin configuration.
Code:
<p style="line-height: 1.5em;">buildscript {
repositories {
google()
jcenter()
maven { url 'https://developer.huawei.com/repo/' }
}
dependencies {
classpath 'com.android.tools.build:gradle:3.5.4'
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
// NOTE: Do not place your app dependencies here; they belong
// in the individual module build.gradle files.
}
}
allprojects {
repositories {
google()
jcenter()
maven { url 'https://developer.huawei.com/repo/' }
}
}
(2) Add the build dependencies for the HMS Core SDK.
Code:
<p style="line-height: 1.5em;">dependencies {
//The audio file transcription SDK.
implementation 'com.huawei.hms:ml-computer-voice-aft:2.2.0.300'
// The ASR SDK.
implementation 'com.huawei.hms:ml-computer-voice-asr:2.2.0.300'
// Plugin of ASR.
implementation 'com.huawei.hms:ml-computer-voice-asr-plugin:2.2.0.300'
...
}
apply plugin: 'com.huawei.agconnect' // AppGallery Connect plugin.
(3) Configure the signing certificate in the build.gradle file under the app directory.
Code:
<p style="line-height: 1.5em;">signingConfigs {
release {
storeFile file("xxx.jks")
keyAlias xxx
keyPassword xxxxxx
storePassword xxxxxx
v1SigningEnabled true
v2SigningEnabled true
}
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
}
debug {
signingConfig signingConfigs.release
debuggable true
}
}
(4) Add permissions in the AndroidManifest.xml file.
Code:
<p style="line-height: 1.5em;"><uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<application
android:requestLegacyExternalStorage="true"
...
</application>
2. Integrating the ASR Service
(1) Dynamically apply for the permissions.
Code:
<p style="line-height: 1.5em;">if (ActivityCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO) != PackageManager.PERMISSION_GRANTED) {
requestCameraPermission();
}
private void requestCameraPermission() {
final String[] permissions = new String[]{Manifest.permission.RECORD_AUDIO};
if (!ActivityCompat.shouldShowRequestPermissionRationale(this, Manifest.permission.RECORD_AUDIO)) {
ActivityCompat.requestPermissions(this, permissions, Constants.AUDIO_PERMISSION_CODE);
return;
}
}
(2) Create an Intent to set parameters.
Code:
<p style="line-height: 1.5em;">// Set authentication information for your app.
MLApplication.getInstance().setApiKey(AGConnectServicesConfig.fromContext(this).getString("client/api_key"));
//// Use Intent for recognition parameter settings.
Intent intentPlugin = new Intent(this, MLAsrCaptureActivity.class)
// Set the language that can be recognized to English. If this parameter is not set, English is recognized by default. Example: "zh-CN": Chinese; "en-US": English.
.putExtra(MLAsrCaptureConstants.LANGUAGE, MLAsrConstants.LAN_EN_US)
// Set whether to display the recognition result on the speech pickup UI.
.putExtra(MLAsrCaptureConstants.FEATURE, MLAsrCaptureConstants.FEATURE_WORDFLUX);
startActivityForResult(intentPlugin, "1");</p>
(3) Override the onActivityResult method to process the result returned by ASR.
Code:
<p style="line-height: 1.5em;">@Override
protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
super.onActivityResult(requestCode, resultCode, data);
String text = "";
if (null == data) {
addTagItem("Intent data is null.", true);
}
if (requestCode == "1") {
if (data == null) {
return;
}
Bundle bundle = data.getExtras();
if (bundle == null) {
return;
}
switch (resultCode) {
case MLAsrCaptureConstants.ASR_SUCCESS:
// Obtain the text information recognized from speech.
if (bundle.containsKey(MLAsrCaptureConstants.ASR_RESULT)) {
text = bundle.getString(MLAsrCaptureConstants.ASR_RESULT);
}
if (text == null || "".equals(text)) {
text = "Result is null.";
Log.e(TAG, text);
} else {
// Display the recognition result in the search box.
searchEdit.setText(text);
goSearch(text, true);
}
break;
// MLAsrCaptureConstants.ASR_FAILURE: Recognition fails.
case MLAsrCaptureConstants.ASR_FAILURE:
// Check whether an error code is contained.
if (bundle.containsKey(MLAsrCaptureConstants.ASR_ERROR_CODE)) {
text = text + bundle.getInt(MLAsrCaptureConstants.ASR_ERROR_CODE);
// Troubleshoot based on the error code.
}
// Check whether error information is contained.
if (bundle.containsKey(MLAsrCaptureConstants.ASR_ERROR_MESSAGE)) {
String errorMsg = bundle.getString(MLAsrCaptureConstants.ASR_ERROR_MESSAGE);
// Troubleshoot based on the error information.
if (errorMsg != null && !"".equals(errorMsg)) {
text = "[" + text + "]" + errorMsg;
}
}
// Check whether a sub-error code is contained.
if (bundle.containsKey(MLAsrCaptureConstants.ASR_SUB_ERROR_CODE)) {
int subErrorCode = bundle.getInt(MLAsrCaptureConstants.ASR_SUB_ERROR_CODE);
// Troubleshoot based on the sub-error code.
text = "[" + text + "]" + subErrorCode;
}
Log.e(TAG, text);
break;
default:
break;
}
}
}
3. Integrating the Audio File Transcription Service
(1) Dynamically apply for the permissions.
Code:
<p style="line-height: 1.5em;">private static final int REQUEST_EXTERNAL_STORAGE = 1;
private static final String[] PERMISSIONS_STORAGE = {
Manifest.permission.READ_EXTERNAL_STORAGE,
Manifest.permission.WRITE_EXTERNAL_STORAGE };
public static void verifyStoragePermissions(Activity activity) {
// Check if the write permission has been granted.
int permission = ActivityCompat.checkSelfPermission(activity,
Manifest.permission.WRITE_EXTERNAL_STORAGE);
if (permission != PackageManager.PERMISSION_GRANTED) {
// The permission has not been granted. Prompt the user to grant it.
ActivityCompat.requestPermissions(activity, PERMISSIONS_STORAGE,
REQUEST_EXTERNAL_STORAGE);
}
}
(2) Create and initialize an audio transcription engine, and create an audio file transcription configurator.
Code:
<p style="line-height: 1.5em;">// Set the API key.
MLApplication.getInstance().setApiKey(AGConnectServicesConfig.fromContext(getApplication()).getString("client/api_key"));
MLRemoteAftSetting setting = new MLRemoteAftSetting.Factory()
// Set the transcription language code, complying with the BCP 47 standard. Currently, Mandarin Chinese and English are supported.
.setLanguageCode("zh")
// Set whether to automatically add punctuations to the converted text. The default value is false.
.enablePunctuation(true)
// Set whether to generate the text transcription result of each audio segment and the corresponding audio time shift. The default value is false. (This parameter needs to be set only when the audio duration is less than 1 minute.)
.enableWordTimeOffset(true)
// Set whether to output the time shift of a sentence in the audio file. The default value is false.
.enableSentenceTimeOffset(true)
.create();
// Create an audio transcription engine.
MLRemoteAftEngine engine = MLRemoteAftEngine.getInstance();
engine.init(this);
// Pass the listener callback to the audio transcription engine created beforehand.
engine.setAftListener(aftListener);
(3) Create a listener callback to process the audio file transcription result.
- Transcription of short audio files with a duration of 1 minute or shorter:
Code:
<p style="line-height: 1.5em;">private MLRemoteAftListener aftListener = new MLRemoteAftListener() {
public void onResult(String taskId, MLRemoteAftResult result, Object ext) {
// Obtain the transcription result notification.
if (result.isComplete()) {
// Process the transcription result.
}
}
@Override
public void onError(String taskId, int errorCode, String message) {
// Callback upon a transcription error.
}
@Override
public void onInitComplete(String taskId, Object ext) {
// Reserved.
}
@Override
public void onUploadProgress(String taskId, double progress, Object ext) {
// Reserved.
}
@Override
public void onEvent(String taskId, int eventId, Object ext) {
// Reserved.
}
};
- Transcription of audio files with a duration longer than 1 minute:
Code:
<p style="line-height: 1.5em;">private MLRemoteAftListener asrListener = new MLRemoteAftListener() {
@Override
public void onInitComplete(String taskId, Object ext) {
Log.e(TAG, "MLAsrCallBack onInitComplete");
// The long audio file is initialized and the transcription starts.
start(taskId);
}
@Override
public void onUploadProgress(String taskId, double progress, Object ext) {
Log.e(TAG, " MLAsrCallBack onUploadProgress");
}
@Override
public void onEvent(String taskId, int eventId, Object ext) {
// Used for the long audio file.
Log.e(TAG, "MLAsrCallBack onEvent" + eventId);
if (MLAftEvents.UPLOADED_EVENT == eventId) { // The file is uploaded successfully.
// Obtain the transcription result.
startQueryResult(taskId);
}
}
@Override
public void onResult(String taskId, MLRemoteAftResult result, Object ext) {
Log.e(TAG, "MLAsrCallBack onResult taskId is :" + taskId + " ");
if (result != null) {
Log.e(TAG, "MLAsrCallBack onResult isComplete: " + result.isComplete());
if (result.isComplete()) {
TimerTask timerTask = timerTaskMap.get(taskId);
if (null != timerTask) {
timerTask.cancel();
timerTaskMap.remove(taskId);
}
if (result.getText() != null) {
Log.e(TAG, taskId + " MLAsrCallBack onResult result is : " + result.getText());
tvText.setText(result.getText());
}
List<MLRemoteAftResult.Segment> words = result.getWords();
if (words != null && words.size() != 0) {
for (MLRemoteAftResult.Segment word : words) {
Log.e(TAG, "MLAsrCallBack word text is : " + word.getText() + ", startTime is : " + word.getStartTime() + ". endTime is : " + word.getEndTime());
}
}
List<MLRemoteAftResult.Segment> sentences = result.getSentences();
if (sentences != null && sentences.size() != 0) {
for (MLRemoteAftResult.Segment sentence : sentences) {
Log.e(TAG, "MLAsrCallBack sentence text is : " + sentence.getText() + ", startTime is : " + sentence.getStartTime() + ". endTime is : " + sentence.getEndTime());
}
}
}
}
}
@Override
public void onError(String taskId, int errorCode, String message) {
Log.i(TAG, "MLAsrCallBack onError : " + message + "errorCode, " + errorCode);
switch (errorCode) {
case MLAftErrors.ERR_AUDIO_FILE_NOTSUPPORTED:
break;
}
}
};
// Upload a transcription task.
private void start(String taskId) {
Log.e(TAG, "start");
engine.setAftListener(asrListener);
engine.startTask(taskId);
}
// Obtain the transcription result.
private Map<String, TimerTask> timerTaskMap = new HashMap<>();
private void startQueryResult(final String taskId) {
Timer mTimer = new Timer();
TimerTask mTimerTask = new TimerTask() {
@Override
public void run() {
getResult(taskId);
}
};
// Periodically obtain the long audio file transcription result every 10s.
mTimer.schedule(mTimerTask, 5000, 10000);
// Clear timerTaskMap before destroying the UI.
timerTaskMap.put(taskId, mTimerTask);
}
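The timer above calls getResult(taskId), which the snippet does not define. A minimal sketch is shown below; it assumes the engine's long-audio query API is getLongAftResult (check the ML Kit audio file transcription reference), with the result delivered to the MLRemoteAftListener registered earlier.
Code:
// Query the long-audio transcription result; the outcome arrives in onResult().
private void getResult(String taskId) {
    engine.getLongAftResult(taskId);
}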
(4) Obtain an audio file and upload it to the audio transcription engine.
Code:
<p style="line-height: 1.5em;">// Obtain the URI of an audio file.
Uri uri = getFileUri();
// Obtain the audio duration.
Long audioTime = getAudioFileTimeFromUri(uri);
// Check whether the duration is longer than 60s.
if (audioTime < 60000) {
// uri indicates audio resources read from the local storage or recorder. Only local audio files with a duration not longer than 1 minute are supported.
this.taskId = this.engine.shortRecognize(uri, this.setting);
Log.i(TAG, "Short audio transcription.");
} else {
// longRecognize is an API used to convert audio files with a duration from 1 minute to 5 hours.
this.taskId = this.engine.longRecognize(uri, this.setting);
Log.i(TAG, "Long audio transcription.");
}
private Long getAudioFileTimeFromUri(Uri uri) {
    Long time = null;
    Cursor cursor = this.getContentResolver().query(uri, null, null, null, null);
    if (cursor != null) {
        cursor.moveToFirst();
        time = cursor.getLong(cursor.getColumnIndexOrThrow(MediaStore.Video.Media.DURATION));
        // Release the cursor once the duration has been read.
        cursor.close();
    } else {
        MediaPlayer mediaPlayer = new MediaPlayer();
        try {
            mediaPlayer.setDataSource(String.valueOf(uri));
            mediaPlayer.prepare();
        } catch (IOException e) {
            Log.e(TAG, "Failed to read the file duration.");
        }
        time = Long.valueOf(mediaPlayer.getDuration());
        // Release the player once the duration has been read.
        mediaPlayer.release();
    }
    return time;
}
For more details, you can go to:
- Reddit to join our developer discussion
- GitHub to download demos and sample code
- Stack Overflow to solve any integration problems

How To Convert Audio from 2D to 3D

Immersive audio is becoming an increasingly important factor for enhancing user experience in the music, gaming, and audio/video editing fields. The spatial audio function is ideal for meetings, sports rehabilitation, and particularly for exhibitions, as it helps deliver a more immersive experience. For users who suffer from visual impairments, the function can serve as a helpful guide.
In this article, I am going to reuse the sample code from this GitHub repo to implement the spatial audio function in my Android app and deliver 3D surround sound.
Development Practice
Preparations
Prepare the audio file for 2D-to-3D conversion; an MP3 file is recommended. If that is not possible, follow the instructions described later to convert the format to MP3 first. If the audio is part of a video file, just extract the audio first by referring to the instructions described later.
1. Configure the Maven repository address in the project-level build.gradle file.
Code:
buildscript {
repositories {
google()
jcenter()
// Configure the Maven repository address for the HMS Core SDK.
maven {url 'https://developer.huawei.com/repo/'}
}
dependencies {
...
// Add the AppGallery Connect plugin configuration.
classpath 'com.huawei.agconnect:agcp:1.4.2.300'
}
}
allprojects {
repositories {
google()
jcenter()
// Configure the Maven repository address for the HMS Core SDK.
maven {url 'https://developer.huawei.com/repo/'}
}
}
Add the following configuration under the declaration in the file header:
Code:
apply plugin: 'com.huawei.agconnect'
2. Add the build dependency on the Audio Editor SDK in the app-level build.gradle file.
Code:
dependencies{
implementation 'com.huawei.hms:audio-editor-ui:{version}'
}
3. Apply for the following permissions in the AndroidManifest.xml file:
Code:
<!-- Vibrate -->
<uses-permission android:name="android.permission.VIBRATE" />
<!-- Microphone -->
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<!-- Write into storage -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<!-- Read from storage -->
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<!-- Connect to the Internet -->
<uses-permission android:name="android.permission.INTERNET" />
<!-- Obtain the network status -->
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<!-- Obtain the changed network connectivity state -->
<uses-permission android:name="android.permission.CHANGE_NETWORK_STATE" />
Code Development
1. Create your app's custom activity for selecting one or more audio files, and return their paths to the SDK.
Code:
// Return the audio file paths to the audio editing screen.
private void sendAudioToSdk() {
// Set filePath to the obtained audio file path.
String filePath = "/sdcard/AudioEdit/audio/music.aac";
ArrayList<String> audioList = new ArrayList<>();
audioList.add(filePath);
// Return the path to the audio editing screen.
Intent intent = new Intent();
// Use HAEConstant.AUDIO_PATH_LIST provided by the SDK.
intent.putExtra(HAEConstant.AUDIO_PATH_LIST, audioList);
// Use HAEConstant.RESULT_CODE provided by the SDK as the result code.
this.setResult(HAEConstant.RESULT_CODE, intent);
finish();
}
2. Register the activity in the AndroidManifest.xml file as described in the following code. When you choose to import the selected audio files, the SDK will send an intent whose action value is com.huawei.hms.audioeditor.chooseaudio to jump to the activity.
Code:
<activity android:name="Activity ">
<intent-filter>
<action android:name="com.huawei.hms.audioeditor.chooseaudio"/>
<category android:name="android.intent.category.DEFAULT"/>
</intent-filter>
</activity>
3. Launch the audio editing screen. When you tap Add audio, the SDK will automatically call the activity defined earlier. Then operations like editing and adding special effects can be performed on the audio. After such operations are complete, the edited audio can be exported.
Code:
HAEUIManager.getInstance().launchEditorActivity(this);
4. (Optional) Convert the file format to MP3.
Call transformAudioUseDefaultPath to convert the format and save the converted audio to the default directory.
Code:
// Convert the audio format.
HAEAudioExpansion.getInstance().transformAudioUseDefaultPath(context,inAudioPath, audioFormat, new OnTransformCallBack() {
// Callback when the progress is received. The value ranges from 0 to 100.
@Override
public void onProgress(int progress) {
}
// Callback when the conversion fails.
@Override
public void onFail(int errorCode) {
}
// Callback when the conversion succeeds.
@Override
public void onSuccess(String outPutPath) {
}
// Callback when the conversion is canceled.
@Override
public void onCancel() {
}
});
// Cancel format conversion.
HAEAudioExpansion.getInstance().cancelTransformAudio();
Call transformAudio to convert audio and save the converted audio to a specified directory.
Code:
// Convert the audio format.
HAEAudioExpansion.getInstance().transformAudio(context,inAudioPath, outAudioPath, new OnTransformCallBack(){
// Callback when the progress is received. The value ranges from 0 to 100.
@Override
public void onProgress(int progress) {
}
// Callback when the conversion fails.
@Override
public void onFail(int errorCode) {
}
// Callback when the conversion succeeds.
@Override
public void onSuccess(String outPutPath) {
}
// Callback when the conversion is canceled.
@Override
public void onCancel() {
}
});
// Cancel format conversion.
HAEAudioExpansion.getInstance().cancelTransformAudio();
5. (Optional) Call extractAudio to extract audio from a video to a specified directory.
Code:
// outAudioDir (optional): directory path for storing extracted audio.
// outAudioName (optional): name of extracted audio, which does not contain the file name extension.
HAEAudioExpansion.getInstance().extractAudio(context,inVideoPath,outAudioDir, outAudioName,new AudioExtractCallBack() {
@Override
public void onSuccess(String audioPath) {
Log.d(TAG, "ExtractAudio onSuccess : " + audioPath);
}
@Override
public void onProgress(int progress) {
Log.d(TAG, "ExtractAudio onProgress : " + progress);
}
@Override
public void onFail(int errCode) {
Log.i(TAG, "ExtractAudio onFail : " + errCode);
}
@Override
public void onCancel() {
Log.d(TAG, "ExtractAudio onCancel.");
}
});
// Cancel audio extraction.
HAEAudioExpansion.getInstance().cancelExtractAudio();
6. Call getInstruments and startSeparationTasks for audio source separation.
Code:
// Obtain the accompaniment ID using getInstruments and pass the ID to startSeparationTasks.
HAEAudioSeparationFile haeAudioSeparationFile = new HAEAudioSeparationFile();
haeAudioSeparationFile.getInstruments(new SeparationCloudCallBack<List<SeparationBean>>() {
@Override
public void onFinish(List<SeparationBean> response) {
// Callback when the separation data is received. The data includes the accompaniment ID.
}
@Override
public void onError(int errorCode) {
// Callback when the separation fails.
}
});
// Set the parameters for accompaniment separation.
List<String> instruments = new ArrayList<>();
instruments.add("accompaniment ID");
haeAudioSeparationFile.setInstruments(instruments);
// Start separating.
haeAudioSeparationFile.startSeparationTasks(inAudioPath, outAudioDir, outAudioName, new AudioSeparationCallBack() {
    @Override
    public void onResult(SeparationBean separationBean) {
        // Called when the result of a single separation task is returned.
    }
    @Override
    public void onFinish(List<SeparationBean> separationBeans) {
        // Called when all separation tasks are complete.
    }
    @Override
    public void onFail(int errorCode) {
        // Called when separation fails.
    }
    @Override
    public void onCancel() {
        // Called when separation is canceled.
    }
});
// Cancel separating.
haeAudioSeparationFile.cancel();
7. Call applyAudioFile to apply the spatial audio effect.
Code:
// Apply spatial audio in one of the following three modes (create the renderer with the mode you need).
// Fixed position mode.
HAESpaceRenderFile haeSpaceRenderFile = new HAESpaceRenderFile(SpaceRenderMode.POSITION);
haeSpaceRenderFile.setSpacePositionParams(new SpaceRenderPositionParams(x, y, z));
// Dynamic rendering mode.
// HAESpaceRenderFile haeSpaceRenderFile = new HAESpaceRenderFile(SpaceRenderMode.ROTATION);
// haeSpaceRenderFile.setRotationParams(new SpaceRenderRotationParams(x, y, z, surroundTime, surroundDirection));
// Extension mode.
// HAESpaceRenderFile haeSpaceRenderFile = new HAESpaceRenderFile(SpaceRenderMode.EXTENSION);
// haeSpaceRenderFile.setExtensionParams(new SpaceRenderExtensionParams(radiusVal, angledVal));
// Call the API.
haeSpaceRenderFile.applyAudioFile(inAudioPath, outAudioDir, outAudioName, callBack);
// Cancel applying spatial audio.
haeSpaceRenderFile.cancel();
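The callBack argument passed to applyAudioFile is not defined in the snippet above. Here is a minimal sketch assuming the kit's ChangeSoundCallback interface; verify the interface name and method signatures against the Audio Editor Kit reference before use.
Code:
// Assumed callback shape for applyAudioFile; confirm against the SDK reference.
ChangeSoundCallback callBack = new ChangeSoundCallback() {
    @Override
    public void onSuccess(String outAudioPath) {
        Log.d(TAG, "SpaceRender onSuccess: " + outAudioPath);
    }
    @Override
    public void onProgress(int progress) {
        Log.d(TAG, "SpaceRender onProgress: " + progress);
    }
    @Override
    public void onFail(int errorCode) {
        Log.e(TAG, "SpaceRender onFail: " + errorCode);
    }
    @Override
    public void onCancel() {
        Log.d(TAG, "SpaceRender onCancel.");
    }
};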
After completing these steps, you can now implement the 2D-to-3D conversion effect for your app.
Utilize the function according to your needs. To find out more about it, check out:
Official website of Audio Editor Kit
Development guide to the kit
Can this conversion be done offline?
Does it support all audio formats?
Which audio APIs can we use for volume management?
muraliameakula said:
Can this conversion be done offline?
This conversion cannot be done offline. However, some functions can be used offline: AI dubbing, spatial rendering, separation, and material-related functions.
ProManojKumar said:
Does it support all audio formats?
It supports MP3, WAV, FLAC, AAC, and other common formats.
vivek_yadav said:
Which audio APIs can we use for volume management?
https://developer.huawei.com/consum...unctions-0000001224604517#section171179354277
