Sound Event Detection using ML Kit | JAVA - Huawei Developers

Introduction
The sound detection service can detect sound events. Automatic environmental sound classification is a growing area of research with real-world applications.
Steps
1. Create App in Android
2. Configure App in AGC
3. Integrate the SDK in our new Android project
4. Integrate the dependencies
5. Sync project
Use case
We use this kind of service in day-to-day life: it can detect different types of sounds such as a baby crying, laughter, snoring, running water, alarms, and doorbells. Currently the service detects only one sound at a time; detecting multiple simultaneous sounds is not supported. The default detection interval is at least two seconds per sound.
ML Kit Configuration.
1. Log in to AppGallery Connect and select MlKitSample in the My Projects list.
2. Enable ML Kit: choose My Projects > Project settings > Manage APIs.
Integration
Create Application in Android Studio.
App-level build.gradle plugins:
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Gradle dependencies
Code:
implementation 'com.huawei.hms:ml-speech-semantics-sounddect-sdk:2.0.3.300'
implementation 'com.huawei.hms:ml-speech-semantics-sounddect-model:2.0.3.300'
Root-level build.gradle configuration:
Code:
maven {url 'https://developer.huawei.com/repo/'}
classpath 'com.huawei.agconnect:agcp:1.3.1.300'
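The two lines above are fragments; for context, a minimal project-level build.gradle that places them might look like the sketch below (the Android Gradle plugin line is a placeholder, not from the original article):
Code:
buildscript {
    repositories {
        google()
        jcenter()
        // Huawei Maven repository
        maven {url 'https://developer.huawei.com/repo/'}
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:4.0.1' // placeholder AGP version
        classpath 'com.huawei.agconnect:agcp:1.3.1.300'
    }
}
allprojects {
    repositories {
        google()
        jcenter()
        maven {url 'https://developer.huawei.com/repo/'}
    }
}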
Add the below permissions in the AndroidManifest.xml file:
Code:
<manifest xmlns:android...>
...
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.FOREGROUND_SERVICE"/>
</manifest>
1. Create an instance of the sound detector in onCreate().
Code:
MLSoundDector soundDector = MLSoundDector.createSoundDector();
2. Check runtime permissions.
Code:
private void getRuntimePermissions() {
List<String> allNeededPermissions = new ArrayList<>();
for (String permission : getRequiredPermissions()) {
if (!isPermissionGranted(this, permission)) {
allNeededPermissions.add(permission);
}
}
if (!allNeededPermissions.isEmpty()) {
ActivityCompat.requestPermissions(
this, allNeededPermissions.toArray(new String[0]), PERMISSION_REQUESTS);
}
}
private boolean allPermissionsGranted() {
for (String permission : getRequiredPermissions()) {
if (!isPermissionGranted(this, permission)) {
return false;
}
}
return true;
}
private static boolean isPermissionGranted(Context context, String permission) {
if (ContextCompat.checkSelfPermission(context, permission)
== PackageManager.PERMISSION_GRANTED) {
Log.i(TAG, "Permission granted: " + permission);
return true;
}
Log.i(TAG, "Permission NOT granted: " + permission);
return false;
}
private String[] getRequiredPermissions() {
try {
PackageInfo info = this.getPackageManager().getPackageInfo(this.getPackageName(), PackageManager.GET_PERMISSIONS);
String[] ps = info.requestedPermissions;
if (ps != null && ps.length > 0) {
return ps;
} else {
return new String[0];
}
} catch (RuntimeException e) {
throw e;
} catch (Exception e) {
return new String[0];
}
}
@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults);
if (requestCode != PERMISSION_REQUESTS) {
return;
}
boolean isNeedShowDiag = false;
for (int i = 0; i < permissions.length; i++) {
if ((permissions[i].equals(Manifest.permission.READ_EXTERNAL_STORAGE)
&& grantResults[i] != PackageManager.PERMISSION_GRANTED)
|| ((permissions[i].equals(Manifest.permission.CAMERA)
|| permissions[i].equals(Manifest.permission.RECORD_AUDIO))
&& grantResults[i] != PackageManager.PERMISSION_GRANTED)) {
isNeedShowDiag = true;
}
}
if (isNeedShowDiag && !ActivityCompat.shouldShowRequestPermissionRationale(this, Manifest.permission.RECORD_AUDIO)) {
AlertDialog dialog = new AlertDialog.Builder(this)
.setMessage(getString(R.string.camera_permission_rationale))
.setPositiveButton(getString(R.string.settings), new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
Intent intent = new Intent(Settings.ACTION_APPLICATION_DETAILS_SETTINGS);
intent.setData(Uri.parse("package:" + getPackageName()));
startActivityForResult(intent, 200);
}
})
.setNegativeButton(getString(R.string.cancel), new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
finish();
}
}).create();
dialog.show();
}
}
3. Create the sound detection result callback; this listener receives the detection results.
Code:
MLSoundDectListener listener = new MLSoundDectListener() {
@Override
public void onSoundSuccessResult(Bundle result) {
int soundType = result.getInt(MLSoundDector.RESULTS_RECOGNIZED);
String soundName = hmap.get(soundType);
textView.setText("Successfully sound has been detected : " + soundName);
}
@Override
public void onSoundFailResult(int errCode) {
textView.setText("Failure" + errCode);
}
};
soundDector.setSoundDectListener(listener);
soundDector.start(this);
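The callback above looks up the detected sound type in hmap, which the article does not define. Below is a minimal sketch of such a map, assuming the sound-type constants exposed by MLSoundDectConstants (only a subset shown; verify the names against your SDK version, and note the display names are illustrative):
Code:
private Map<Integer, String> hmap = new HashMap<>();

private void initSoundTypeMap() {
    // Map the SDK's sound-type codes to readable names.
    hmap.put(MLSoundDectConstants.SOUND_EVENT_TYPE_BABY_CRY, "Baby crying");
    hmap.put(MLSoundDectConstants.SOUND_EVENT_TYPE_LAUGHTER, "Laughter");
    hmap.put(MLSoundDectConstants.SOUND_EVENT_TYPE_SNORING, "Snoring");
    hmap.put(MLSoundDectConstants.SOUND_EVENT_TYPE_WATER, "Running water");
    hmap.put(MLSoundDectConstants.SOUND_EVENT_TYPE_ALARM, "Alarm");
    hmap.put(MLSoundDectConstants.SOUND_EVENT_TYPE_DOOR_BELL, "Doorbell");
}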
4. Once a sound has been detected, start the notification service.
Code:
serviceIntent = new Intent(MainActivity.this, NotificationService.class);
serviceIntent.putExtra("response", soundName);
ContextCompat.startForegroundService(MainActivity.this, serviceIntent);
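NotificationService is this article's own class and its body is not shown; the following is a minimal sketch of such a foreground service (the channel ID, notification text, and icon are illustrative, not from the original):
Code:
public class NotificationService extends Service {
    private static final String CHANNEL_ID = "sound_channel"; // hypothetical channel ID

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        String soundName = intent.getStringExtra("response");
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
            NotificationChannel channel = new NotificationChannel(
                    CHANNEL_ID, "Sound alerts", NotificationManager.IMPORTANCE_DEFAULT);
            getSystemService(NotificationManager.class).createNotificationChannel(channel);
        }
        Notification notification = new NotificationCompat.Builder(this, CHANNEL_ID)
                .setContentTitle("Sound detected")
                .setContentText(soundName)
                .setSmallIcon(R.drawable.ic_launcher_foreground)
                .build();
        // A service started with startForegroundService must call startForeground promptly.
        startForeground(1, notification);
        return START_NOT_STICKY;
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }
}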
5. To stop sound detection, call stop(), for example from your activity's onStop().
Code:
soundDector.stop();
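For instance, a sketch of tying the detector to the activity lifecycle (the destroy() call releasing the detector is an assumption; check your SDK version):
Code:
@Override
protected void onStop() {
    super.onStop();
    if (soundDector != null) {
        soundDector.stop(); // stop detection when the activity leaves the foreground
    }
}

@Override
protected void onDestroy() {
    super.onDestroy();
    if (soundDector != null) {
        soundDector.destroy(); // release detector resources
    }
}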
6. Below are the sound type results.
Result
Conclusion
This article showed how to detect sounds from a real-time audio stream; the sound detection service can help you notify users about everyday sounds.
Thank you for reading. If you enjoyed this article, I suggest you implement it yourself and share your experience.
Reference
ML Kit – Sound Detection
Refer to the official documentation.

Related

Ultra-simple integration with the ML kit to implement word broadcasting

For more articles like this, you can visit the HUAWEI Developer Forum and Medium.
Background
I believe we all did dictation when we started learning a language. For primary school students, an important piece of after-school work is dictating the new words from the textbook, and many parents know this from experience. On the one hand, the pronunciation is relatively simple; on the other hand, parents' time is very precious. There are many dictation recordings on the market: broadcasters record the dictation words from the language textbooks for parents to download. However, this kind of recording is not flexible enough; if the teacher assigns a few extra words today that are not part of the after-school word list, the recordings won't meet the needs of parents and children. This document describes how to use the general text recognition and speech synthesis functions of ML Kit to implement an automatic voice broadcast app. You only need to take a photo of the dictation words or text, and the text in the photo is played automatically; the timbre and tone of the voice can be adjusted.
Development Preparations
Open the project-level build.gradle file.
Choose allprojects > repositories and configure the Maven repository address of the HMS SDK.
Code:
allprojects {
repositories {
google()
jcenter()
maven {url 'http://developer.huawei.com/repo/'}
}
}
Configure the Maven repository address of the HMS SDK in buildscript > repositories.
Code:
buildscript {
repositories {
google()
jcenter()
maven {url 'http://developer.huawei.com/repo/'}
}
}
Choose buildscript > dependencies and configure the AGC plug-in.
Code:
dependencies {
classpath 'com.huawei.agconnect:agcp:1.2.1.301'
}
Adding Compilation Dependencies
Open the application-level build.gradle file.
SDK integration
Code:
dependencies{
implementation 'com.huawei.hms:ml-computer-voice-tts:1.0.4.300'
implementation 'com.huawei.hms:ml-computer-vision-ocr:1.0.4.300'
implementation 'com.huawei.hms:ml-computer-vision-ocr-cn-model:1.0.4.300'
}
Add the AGC plug-in to the file header.
Code:
apply plugin: 'com.huawei.agconnect'
Specify permissions and features: Declare them in AndroidManifest.xml.
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
Key Development Steps
There are two main functions: one recognizes the homework text, and the other reads it aloud. The OCR + TTS combination implements the reading: after taking a photo, click the play button to hear the text read aloud.
1. Dynamic permission application
Code:
private static final int PERMISSION_REQUESTS = 1;
@Override
public void onCreate(Bundle savedInstanceState) {
// Checking camera permission
if (!allPermissionsGranted()) {
getRuntimePermissions();
}
}
2. Start the reading interface.
Code:
public void takePhoto(View view) {
Intent intent = new Intent(MainActivity.this, ReadPhotoActivity.class);
startActivity(intent);
}
3. Invoke createLocalTextAnalyzer() in the onCreate() method to create a device-side text recognizer.
Code:
private void createLocalTextAnalyzer() {
MLLocalTextSetting setting = new MLLocalTextSetting.Factory()
.setOCRMode(MLLocalTextSetting.OCR_DETECT_MODE)
.setLanguage("zh")
.create();
this.textAnalyzer = MLAnalyzerFactory.getInstance().getLocalTextAnalyzer(setting);
}
4. Invoke createTtsEngine() in the onCreate() method to create a text-to-speech (TTS) engine.
Code:
private void createTtsEngine() {
MLTtsConfig mlConfigs = new MLTtsConfig()
.setLanguage(MLTtsConstants.TTS_ZH_HANS)
.setPerson(MLTtsConstants.TTS_SPEAKER_FEMALE_ZH)
.setSpeed(0.2f)
.setVolume(1.0f);
this.mlTtsEngine = new MLTtsEngine(mlConfigs);
MLTtsCallback callback = new MLTtsCallback() {
@Override
public void onError(String taskId, MLTtsError err) {
}
@Override
public void onWarn(String taskId, MLTtsWarn warn) {
}
@Override
public void onRangeStart(String taskId, int start, int end) {
}
@Override
public void onEvent(String taskId, int eventName, Bundle bundle) {
if (eventName == MLTtsConstants.EVENT_PLAY_STOP) {
if (!bundle.getBoolean(MLTtsConstants.EVENT_PLAY_STOP_INTERRUPTED)) {
Toast.makeText(ReadPhotoActivity.this.getApplicationContext(), R.string.read_finish, Toast.LENGTH_SHORT).show();
}
}
}
};
mlTtsEngine.setTtsCallback(callback);
}
5. Set the buttons for loading photos, taking photos, and reading aloud.
Code:
this.relativeLayoutLoadPhoto.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
ReadPhotoActivity.this.selectLocalImage(ReadPhotoActivity.this.REQUEST_CHOOSE_ORIGINPIC);
}
});
this.relativeLayoutTakePhoto.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
ReadPhotoActivity.this.takePhoto(ReadPhotoActivity.this.REQUEST_TAKE_PHOTO);
}
});
6. Start the text analyzer in the callbacks for taking and loading photos.
Code:
private void startTextAnalyzer() {
if (this.isChosen(this.originBitmap)) {
MLFrame mlFrame = new MLFrame.Creator().setBitmap(this.originBitmap).create();
Task<MLText> task = this.textAnalyzer.asyncAnalyseFrame(mlFrame);
task.addOnSuccessListener(new OnSuccessListener<MLText>() {
@Override
public void onSuccess(MLText mlText) {
// Transacting logic for segment success.
if (mlText != null) {
ReadPhotoActivity.this.remoteDetectSuccess(mlText);
} else {
ReadPhotoActivity.this.displayFailure();
}
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
// Transacting logic for segment failure.
ReadPhotoActivity.this.displayFailure();
return;
}
});
} else {
Toast.makeText(this.getApplicationContext(), R.string.please_select_picture, Toast.LENGTH_SHORT).show();
return;
}
}
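remoteDetectSuccess(...) is the author's own helper and is not shown; below is a minimal sketch that concatenates the recognized text blocks into the sourceText field used by the play button in the next step (MLText.getBlocks() and getStringValue() follow the ML Kit text recognition API; treat the exact calls as an assumption):
Code:
private void remoteDetectSuccess(MLText mlText) {
    StringBuilder builder = new StringBuilder();
    // Each block is a recognized text paragraph; append its string value.
    for (MLText.Block block : mlText.getBlocks()) {
        builder.append(block.getStringValue()).append("\n");
    }
    this.sourceText = builder.toString();
}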
7. After the recognition succeeds, click the play button to start the playback.
Code:
this.relativeLayoutRead.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
if (ReadPhotoActivity.this.sourceText == null) {
Toast.makeText(ReadPhotoActivity.this.getApplicationContext(), R.string.please_select_picture, Toast.LENGTH_SHORT).show();
} else {
ReadPhotoActivity.this.mlTtsEngine.speak(sourceText, MLTtsEngine.QUEUE_APPEND);
Toast.makeText(ReadPhotoActivity.this.getApplicationContext(), R.string.read_start, Toast.LENGTH_SHORT).show();
}
}
});
Demo
If you have any questions about this process, you can visit the HUAWEI Developer Forum.
Seems quite simple and useful. I will try.
sanghati said:
Hi,
Nice article. Can you use ML Kit for scanning a product and finding that product online to buy?
Thanks
Hi, if you want to scan products that you want to buy, you can use Scan Kit. Refer to the document and get help from the HUAWEI Developer Forum.

Make your music player with HMS Audio Kit: Part 2

For more information like this, you can visit the HUAWEI Developer Forum.
Original article link: https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0202327658275700021&fid=0101187876626530001
If you followed part one here, you know where we left off. In this part of the tutorial, we will be implementing onCreate in detail, including button clicks and listeners, which are a crucial part of playback control.
Let's start by implementing add/remove listener methods to control listener attachment. Then we'll implement a small part of the onCreate method.
Code:
public void addListener(HwAudioStatusListener listener) {
if (mHwAudioManager != null) {
try {
mHwAudioManager.addPlayerStatusListener(listener);
} catch (RemoteException e) {
Log.e("TAG", "TAG", e);
}
} else {
mTempListeners.add(listener);
}
}
public void removeListener(HwAudioStatusListener listener) { //will be called in onDestroy() method
if (mHwAudioManager != null) {
try {
mHwAudioManager.removePlayerStatusListener(listener);
} catch (RemoteException e) {
Log.e("TAG", "TAG", e);
}
}
mTempListeners.remove(listener);
}
@Override
protected void onDestroy() {
if(mHwAudioPlayerManager!= null && isReallyPlaying){
isReallyPlaying = false;
mHwAudioPlayerManager.stop();
removeListener(mPlayListener);
}
super.onDestroy();
}
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
binding = ActivityMainBinding.inflate(getLayoutInflater());
View view = binding.getRoot();
setContentView(view);
//I set the MainActivity cover image as a placeholder image to cover for those audio files which do not have a cover image.
binding.albumPictureImageView.setImageDrawable(getDrawable(R.drawable.ic_launcher_foreground));
initializeManagerAndGetPlayList(this); //I call my method to set my playlist
addListener(mPlayListener); //I add my listeners
}
If you followed the first part closely, you already have the mTempListeners variable. So, in onCreate, we first initialize everything and then attach our listener to it.
Now that our playlist is programmatically ready, if you try to run your code, you will not be able to control what you want because we have not touched our views yet.
Let’s implement our basic buttons first.
Code:
final Drawable drawablePlay = getDrawable(R.drawable.btn_playback_play_normal);
final Drawable drawablePause = getDrawable(R.drawable.btn_playback_pause_normal);
binding.playButtonImageView.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
if(binding.playButtonImageView.getDrawable().getConstantState().equals(drawablePlay.getConstantState())){
if (mHwAudioPlayerManager != null){
mHwAudioPlayerManager.play();
binding.playButtonImageView.setImageDrawable(getDrawable(R.drawable.btn_playback_pause_normal));
isReallyPlaying = true;
}
}
else if(binding.playButtonImageView.getDrawable().getConstantState().equals(drawablePause.getConstantState())){
if (mHwAudioPlayerManager != null) {
mHwAudioPlayerManager.pause();
binding.playButtonImageView.setImageDrawable(getDrawable(R.drawable.btn_playback_play_normal));
isReallyPlaying = false;
}
}
}
});
binding.nextSongImageView.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
if (mHwAudioPlayerManager != null) {
mHwAudioPlayerManager.playNext();
isReallyPlaying = true;
}
}
});
binding.previousSongImageView.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
if (mHwAudioPlayerManager != null) {
mHwAudioPlayerManager.playPre();
isReallyPlaying = true;
}
}
});
Do these in your onCreate(…) method. isReallyPlaying is a global boolean variable that I created to keep track of playback. I update it every time the playback changes. You do not have to have it, as I also tested it with online playlists. I still keep it there in case you also want to test your app with online playlists, as in the sample apps provided by Huawei.
For button visual changes, I devised a method like the above, and I am not asserting that it is the best. You can use your own method, but the onClicks should roughly be like this.
Now we should implement our listener so that we can track changes in playback. It also updates the UI elements, so that the app tracks audio file changes and seekbar updates. Put this listener outside the onCreate(…) method; it will be called whenever it is required. What we should do is implement the necessary methods to respond to these calls.
Code:
HwAudioStatusListener mPlayListener = new HwAudioStatusListener() {
@Override
public void onSongChange(HwAudioPlayItem hwAudioPlayItem) {
setSongDetails(hwAudioPlayItem);
if(mHwAudioPlayerManager.getOffsetTime() != -1 && mHwAudioPlayerManager.getDuration() != -1)
updateSeekBar(mHwAudioPlayerManager.getOffsetTime(), mHwAudioPlayerManager.getDuration());
}
@Override
public void onQueueChanged(List list) {
if (mHwAudioPlayerManager != null && list.size() != 0 && !isReallyPlaying) {
mHwAudioPlayerManager.play();
isReallyPlaying = true;
binding.playButtonImageView.setImageDrawable(getDrawable(R.drawable.btn_playback_pause_normal));
}
}
@Override
public void onBufferProgress(int percent) {
}
@Override
public void onPlayProgress(final long currentPosition, long duration) {
updateSeekBar(currentPosition, duration);
}
@Override
public void onPlayCompleted(boolean isStopped) {
if (mHwAudioPlayerManager != null && isStopped) {
mHwAudioPlayerManager.playNext();
}
isReallyPlaying = !isStopped;
}
@Override
public void onPlayError(int errorCode, boolean isUserForcePlay) {
Toast.makeText(MainActivity.this, "We cannot play this!!", Toast.LENGTH_LONG).show();
}
@Override
public void onPlayStateChange(boolean isPlaying, boolean isBuffering) {
if(isPlaying || isBuffering){
binding.playButtonImageView.setImageDrawable(getDrawable(R.drawable.btn_playback_pause_normal));
isReallyPlaying = true;
}
else{
binding.playButtonImageView.setImageDrawable(getDrawable(R.drawable.btn_playback_play_normal));
isReallyPlaying = false;
}
}
};
Now let’s understand the snippet above. Status listener instance wants us to implement some methods shown. They are not required but they will make our lives easier. I will explain the methods inside of them later.
Understanding HwAudioStatusListener
onSongChange is called upon a song change in the queue, i.e. whenever you skip a song, for example. It is also called the first time you set the playlist, because technically your song changed: from nothing to the audio file at index 0.
onQueueChanged is called when the queue is altered. I implemented some code just in case, but since we will not be dealing with adding/removing playlist items or changing queues, it is not very important.
onBufferProgress returns the progress as percentage when you are using online audio.
onPlayProgress is one of the most important methods here: it is called every time the playback position changes. That is, it is called even as the audio file advances by one second (i.e. plays). So it is wise to trigger UI updates here, rather than dealing with heavy-load fragments, in my opinion.
onPlayCompleted lets you decide the behaviour when the playlist finishes playing. You can restart the playlist, just stop or do something else (like notifying the user that the playlist has ended etc.). It is totally up to you. I restart the playlist in the sample code.
onPlayError is called whenever there is an error occurred in the playback. It could, for example, be that the buffered song is not loaded correctly, format not supported, audio file is corrupted etc. You can handle the result here or you can just notify the user that the file cannot be played; just like I did.
And finally, onPlayStateChange is called whenever the music is paused/re-played. Thus, you can handle such changes here in general. I update the isReallyPlaying variable and the play/pause button views both inside and outside of this method just to be sure. You can update it just here if you want. It should work in theory.
Extra Methods
This listener contains two extra custom methods that I wrote to update the UI and set/render the details of the song on the screen. They are called setSongDetails(…) and updateSeekBar(…). Let’s see how they are implemented.
Code:
public void updateSeekBar(final long currentPosition, long duration){
//seekbar
binding.musicSeekBar.setMax((int) (duration / 1000));
if(mHwAudioPlayerManager != null){
int mCurrentPosition = (int) (currentPosition / 1000);
binding.musicSeekBar.setProgress(mCurrentPosition);
setProgressText(mCurrentPosition);
}
binding.musicSeekBar.setOnSeekBarChangeListener(new SeekBar.OnSeekBarChangeListener() {
@Override
public void onStopTrackingTouch(SeekBar seekBar) {
//Log.i("ONSTOPTRACK", "STOP TRACK TRIGGERED.");
}
@Override
public void onStartTrackingTouch(SeekBar seekBar) {
//Log.i("ONSTARTTRACK", "START TRACK TRIGGERED.");
}
@Override
public void onProgressChanged(SeekBar seekBar, int progress, boolean fromUser) {
if(mHwAudioPlayerManager != null && fromUser){
mHwAudioPlayerManager.seekTo(progress*1000);
}
if(!isReallyPlaying){
setProgressText(progress); //when the song is not playing and user updates the seekbar, seekbar still should be updated
}
}
});
}
public void setProgressText(int progress){
String progressText = String.format(Locale.US, "%02d:%02d",
TimeUnit.MILLISECONDS.toMinutes(progress*1000),
TimeUnit.MILLISECONDS.toSeconds(progress*1000) -
TimeUnit.MINUTES.toSeconds(TimeUnit.MILLISECONDS.toMinutes(progress*1000))
);
binding.progressTextView.setText(progressText);
}
public void setSongDetails(HwAudioPlayItem currentItem){
if(currentItem != null){
getBitmapOfCover(currentItem);
if(!currentItem.getAudioTitle().equals(""))
binding.songNameTextView.setText(currentItem.getAudioTitle());
else{
binding.songNameTextView.setText("Try choosing a song");
}
if(!currentItem.getSinger().equals(""))
binding.artistNameTextView.setText(currentItem.getSinger());
else
binding.artistNameTextView.setText("From the playlist");
binding.albumNameTextView.setText("Album Unknown"); //there is no field assigned to this in HwAudioPlayItem
binding.progressTextView.setText("00:00"); //initial progress of every song
long durationTotal = currentItem.getDuration();
String totalDurationText = String.format(Locale.US, "%02d:%02d",
TimeUnit.MILLISECONDS.toMinutes(durationTotal),
TimeUnit.MILLISECONDS.toSeconds(durationTotal) -
TimeUnit.MINUTES.toSeconds(TimeUnit.MILLISECONDS.toMinutes(durationTotal))
);
binding.totalDurationTextView.setText(totalDurationText);
}
else{
//This is a measure to prevent a bad first opening of the app. After the user
//selects a song from the playlist, this branch should never execute again.
binding.songNameTextView.setText("Try choosing a song");
binding.artistNameTextView.setText("From the playlist");
binding.albumNameTextView.setText("It is at right-top of the screen");
}
}
Remarks
I format the time every time I receive it because AudioKit currently sends it as milliseconds and the user needs to see it in 00:00 format.
I divide by 1000, and when seeking to a progress value, multiply by 1000 to keep the changes visible on screen. It is also the more convenient and customary way of implementing seekbars.
I take precautions with if checks in case values come back null. That should not happen except on the first opening, but still, as developers, we should be careful.
Please follow the comments in the code for additional information.
That’s it for this part. However, we are not done yet. As I mentioned earlier we should also implement the playlist to choose songs from and implement advanced playback controls "); background-size: 1px 1px; background-position: 0px calc(1em + 1px); box-sizing: inherit; font-family: medium-content-serif-font, Georgia, Cambria, "Times New Roman", Times, serif; font-weight: 700; font-size: 18px; text-decoration: underline;">in part 3. See you there!
sujith.e said:
Well explained. Do we need login?
Yeah, you need to log in

Developing Smile Photographing based on HUAWEI MLkit

For more information like this, you can visit the HUAWEI Developer Forum.
Original link: https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0201257887466590240&fid=0101187876626530001
Introduction
Richard Yu introduced Huawei HMS Core 4.0 at the launch event a while ago. Please check the launch event information:
What does the global release of HMS Core 4.0 mean?
Machine Learning Kit (ML Kit) is one of the most important services.
What can ML Kit do? What problems can it solve for you during application development?
Today, let's take face detection as an example to show you the powerful functions of ML Kit and the convenience it provides for developers.
1.1 Capabilities Provided by MLKIT Face Detection
First, let’s look at the face detection capability of Huawei Machine Learning Service (MLKIT).
As shown in the animation, face detection can recognize the face orientation, detect facial expressions (such as happy, disgusted, surprised, sad, and angry), detect facial attributes (such as gender, age, and wearables), and detect whether the eyes are open or closed. It also supports coordinate detection of features such as the face, nose, eyes, lips, and eyebrows, and multiple faces can be detected at the same time.
Tips: This function is free of charge and covers all Android models.
2 Development of the Multi-Face Smile Photographing Function
Today, I will use the multi-face recognition and expression detection capabilities of MLKIT to write a small smiling-snapshot demo and put them into practice.
To download the Github demo source code, click here (the project directory is Smile-Camera).
2.1 Development Preparations
The preparations for developing with any Huawei HMS kit are similar: the only differences are the Maven dependency and the SDK that is introduced.
1. Add the Huawei Maven repository to the project-level gradle.
Incrementally add the following Maven addresses:
Code:
buildscript {
repositories {
maven {url 'http://developer.huawei.com/repo/'}
}
}
allprojects {
repositories {
maven {url 'http://developer.huawei.com/repo/'}
}
}
2. Add the SDK dependency to the build.gradle file at the application level.
Introduce the facial recognition SDK and basic SDK.
Code:
dependencies{
// Introduce the basic SDK.
implementation 'com.huawei.hms:ml-computer-vision:1.0.2.300'
// Introduce the face detection capability package.
implementation 'com.huawei.hms:ml-computer-vision-face-recognition-model:1.0.2.300'
}
3. Incrementally add the model declaration to the AndroidManifest.xml file for automatic download.
This is mainly used to update the model: after the algorithm is optimized, the new model can be automatically downloaded to the phone.
Code:
<manifest
<application
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value= "face"/>
</application>
</manifest>
4. Apply for camera and storage permissions in the AndroidManifest.xml file.
Code:
<!-- Camera permission -->
<uses-permission android:name="android.permission.CAMERA" />
<!--Use the storage permission.-->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
2.2 Code development
1. Create a face analyzer and take a photo when a smile is detected.
The steps to take a photo upon detection:
1) Configure the analyzer parameters.
2) Pass the parameter settings to the analyzer.
3) In analyzer.setTransactor, override transactResult to process the facial recognition results. Recognition returns a confidence level (the smiling probability); you only need to compare it against a threshold.
Code:
private MLFaceAnalyzer analyzer;
private void createFaceAnalyzer() {
MLFaceAnalyzerSetting setting =
new MLFaceAnalyzerSetting.Factory()
.setFeatureType(MLFaceAnalyzerSetting.TYPE_FEATURES)
.setKeyPointType(MLFaceAnalyzerSetting.TYPE_UNSUPPORT_KEYPOINTS)
.setMinFaceProportion(0.1f)
.setTracingAllowed(true)
.create();
this.analyzer = MLAnalyzerFactory.getInstance().getFaceAnalyzer(setting);
this.analyzer.setTransactor(new MLAnalyzer.MLTransactor<MLFace>() {
@Override
public void destroy() {
}
@Override
public void transactResult(MLAnalyzer.Result<MLFace> result) {
SparseArray<MLFace> faceSparseArray = result.getAnalyseList();
int flag = 0;
for (int i = 0; i < faceSparseArray.size(); i++) {
MLFaceEmotion emotion = faceSparseArray.valueAt(i).getEmotions();
if (emotion.getSmilingProbability() > smilingPossibility) {
flag++;
}
}
if (flag > faceSparseArray.size() * smilingRate && safeToTakePicture) {
safeToTakePicture = false;
mHandler.sendEmptyMessage(TAKE_PHOTO);
}
}
});
}
Photographing and storage:
Code:
private void takePhoto() {
this.mLensEngine.photograph(null,
new LensEngine.PhotographListener() {
@Override
public void takenPhotograph(byte[] bytes) {
mHandler.sendEmptyMessage(STOP_PREVIEW);
Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
saveBitmapToDisk(bitmap);
}
});
}
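saveBitmapToDisk(...) is the demo's own helper and is not shown here; below is a minimal sketch that writes the capture to the app-specific pictures directory (file naming and location are illustrative) and re-arms the smile trigger:
Code:
private void saveBitmapToDisk(Bitmap bitmap) {
    File dir = getExternalFilesDir(Environment.DIRECTORY_PICTURES);
    File file = new File(dir, "smile_" + System.currentTimeMillis() + ".jpg");
    try (FileOutputStream out = new FileOutputStream(file)) {
        // Compress and write the captured frame as a JPEG.
        bitmap.compress(Bitmap.CompressFormat.JPEG, 100, out);
    } catch (IOException e) {
        Log.e("TAG", "Failed to save photo", e);
    }
    safeToTakePicture = true; // allow the next smiling capture
}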
2. Create a visual engine to capture dynamic video streams from cameras and send the streams to the analyzer.
Code:
private void createLensEngine() {
Context context = this.getApplicationContext();
// Create LensEngine
this.mLensEngine = new LensEngine.Creator(context, this.analyzer).setLensType(this.lensType)
.applyDisplayDimension(640, 480)
.applyFps(25.0f)
.enableAutomaticFocus(true)
.create();
}
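The lens engine still needs to be bound to the preview surface and started; here is a minimal sketch, assuming mPreview is the preview view from the demo layout and exposes a start(LensEngine) method as in Huawei's sample code:
Code:
private void startLensEngine() {
    if (this.mLensEngine != null) {
        try {
            // Bind the camera stream to the on-screen preview and begin analysis.
            this.mPreview.start(this.mLensEngine);
        } catch (IOException e) {
            Log.e("TAG", "Failed to start lens engine.", e);
            this.mLensEngine.release();
            this.mLensEngine = null;
        }
    }
}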
3. Apply for dynamic permissions, attaching the analyzer and lens engine creation code.
Code:
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
this.setContentView(R.layout.activity_live_face_analyse);
if (savedInstanceState != null) {
this.lensType = savedInstanceState.getInt("lensType");
}
this.mPreview = this.findViewById(R.id.preview);
this.createFaceAnalyzer();
this.findViewById(R.id.facingSwitch).setOnClickListener(this);
// Checking Camera Permissions
if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED) {
this.createLensEngine();
} else {
this.requestCameraPermission();
}
}
Code:
private void requestCameraPermission() {
final String[] permissions = new String[]{Manifest.permission.CAMERA, Manifest.permission.WRITE_EXTERNAL_STORAGE};
if (!ActivityCompat.shouldShowRequestPermissionRationale(this, Manifest.permission.CAMERA)) {
ActivityCompat.requestPermissions(this, permissions, LiveFaceAnalyseActivity.CAMERA_PERMISSION_CODE);
return;
}
}
Code:
@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
@NonNull int[] grantResults) {
if (requestCode != LiveFaceAnalyseActivity.CAMERA_PERMISSION_CODE) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults);
return;
}
if (grantResults.length != 0 && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
this.createLensEngine();
return;
}
}
3 Conclusion
Isn't the development process simple? A new feature can be developed in 30 minutes. Now, let's experience the effect of multi-face smile capture.
Multi-person smiling face snapshot:
Based on the face detection capability, what other functions can be built? Let your imagination run wild! Here are a few hints:
1. Add interesting decorative effects by identifying the locations of facial features such as ears, eyes, nose, mouth, and eyebrows.
2. Identify facial contours and stretch the contours to generate interesting portraits or develop facial beautification functions for contour areas.
3. Develop some parental control functions based on age identification and children’s infatuation with electronic products.
4. Develop the eye comfort feature by detecting the duration of eyes staring at the screen.
5. Implement liveness detection through random commands (such as shaking the head, blinking, and opening the mouth).
6. Recommend offerings to users based on their age and gender.
For details about the development guide, visit HUAWEI Developers.

Introduction to AI-Empowered Image Segmentation

Image segmentation technology is gathering steam thanks to developments across multiple fields. Take autonomous vehicles as an example: they have been developing rapidly since last year and have become a showpiece for both well-established companies and start-ups. Most of them use computer vision, which includes image segmentation, as the technical basis for self-driving cars, and it is image segmentation that allows a car to understand the situation on the road and to tell the road from the people.
Image segmentation is not only applied to autonomous vehicles, but is also used in a number of different fields, including:
Medical imaging, where it helps doctors make diagnosis and perform tests
Satellite image analysis, where it helps analyze tons of data
Media apps, where it cuts people out of videos so that bullet comments do not obstruct them.
It is a widespread application. I myself am also a fan of this technology. Recently, I've tried an image segmentation service from HMS Core ML Kit, which I found outstanding. This service has an original framework for semantic segmentation, which labels each and every pixel in an image, so the service can clearly, completely cut out something as delicate as a hair. The service also excels at processing images with different qualities and dimensions. It uses algorithms of structured learning to prevent white borders — which is a common headache of segmentation algorithms — so that the edges of the segmented image appear more natural.
I'm delighted to be able to share my experience of implementing this service here.
Preparations
First, configure the Maven repository and integrate the SDK of the service. I followed the instructions here to complete all these steps.
1. Configure the Maven repository address
Java:
buildscript {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
dependencies {
...
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
}
}
allprojects {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
}
2. Add build dependencies
Java:
dependencies {
// Import the base SDK.
implementation 'com.huawei.hms:ml-computer-vision-segmentation:2.1.0.301'
// Import the package of the human body segmentation model.
implementation 'com.huawei.hms:ml-computer-vision-image-segmentation-body-model:2.1.0.303'
}
3. Add the permission in the AndroidManifest.xml file.
Java:
// Permission to write to external storage.
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
Development Procedure
1. Dynamically request the necessary permissions
Java:
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
if (!allPermissionsGranted()) {
getRuntimePermissions();
}
}
private boolean allPermissionsGranted() {
for (String permission : getRequiredPermissions()) {
if (!isPermissionGranted(this, permission)) {
return false;
}
}
return true;
}
private void getRuntimePermissions() {
List<String> allNeededPermissions = new ArrayList<>();
for (String permission : getRequiredPermissions()) {
if (!isPermissionGranted(this, permission)) {
allNeededPermissions.add(permission);
}
}
if (!allNeededPermissions.isEmpty()) {
ActivityCompat.requestPermissions(
this, allNeededPermissions.toArray(new String[0]), PERMISSION_REQUESTS);
}
}
private static boolean isPermissionGranted(Context context, String permission) {
if (ContextCompat.checkSelfPermission(context, permission) == PackageManager.PERMISSION_GRANTED) {
return true;
}
return false;
}
private String[] getRequiredPermissions() {
try {
PackageInfo info =
this.getPackageManager()
.getPackageInfo(this.getPackageName(), PackageManager.GET_PERMISSIONS);
String[] ps = info.requestedPermissions;
if (ps != null && ps.length > 0) {
return ps;
} else {
return new String[0];
}
} catch (RuntimeException e) {
throw e;
} catch (Exception e) {
return new String[0];
}
}
2. Create an image segmentation analyzer
Java:
MLImageSegmentationSetting setting = new MLImageSegmentationSetting.Factory()
// Set the segmentation mode to human body segmentation.
.setAnalyzerType(MLImageSegmentationSetting.BODY_SEG)
.create();
this.analyzer = MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer(setting);
3. Use android.graphics.Bitmap to create an MLFrame object for the analyzer to detect images
Java:
MLFrame mlFrame = new MLFrame.Creator().setBitmap(this.originBitmap).create();
4. Call asyncAnalyseFrame for image segmentation
Java:
// Create a task to process the result returned by the analyzer.
Task<MLImageSegmentation> task = this.analyzer.asyncAnalyseFrame(mlFrame);
// Asynchronously process the result returned by the analyzer.
task.addOnSuccessListener(new OnSuccessListener<MLImageSegmentation>() {
@Override
public void onSuccess(MLImageSegmentation mlImageSegmentationResults) {
if (mlImageSegmentationResults != null) {
// Obtain the human body segment cut out from the image.
foreground = mlImageSegmentationResults.getForeground();
preview.setImageBitmap(MainActivity.this.foreground);
}
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
return;
}
});
5. Change the image background
Java:
// Obtain an image from the album.
backgroundBitmap = Utils.loadFromPath(this, id, targetedSize.first, targetedSize.second);
BitmapDrawable drawable = new BitmapDrawable(backgroundBitmap);
preview.setBackground(drawable);
preview.setImageBitmap(this.foreground);
MLFrame mlFrame = new MLFrame.Creator().setBitmap(this.originBitmap).create();
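When segmentation is no longer needed, the analyzer should be released; here is a minimal sketch (stop() throwing IOException mirrors other ML Kit analyzers, so treat the exact signature as an assumption):
Java:
@Override
protected void onDestroy() {
    super.onDestroy();
    if (this.analyzer != null) {
        try {
            // Release the analyzer's resources when the activity is destroyed.
            this.analyzer.stop();
        } catch (IOException e) {
            Log.e("TAG", "Failed to stop the analyzer", e);
        }
    }
}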
Result
To learn more, please visit:
>> HUAWEI Developers official website
>> Development Guide
>> Reddit to join developer discussions
>> GitHub to download the sample code
>> Stack Overflow to solve integration problems
Follow our official account for the latest HMS Core-related news and updates.

Client Server Messaging App Using Socket in Android With Huawei Account Kit for Easy Login

Introduction
In this article, we will learn how to integrate Huawei Account Kit into an Android application. Account Kit provides simple, secure, and quick sign-in and authorization functions. Instead of entering accounts and passwords and waiting for authentication, users can just tap a button to quickly and securely sign in to your app with their HUAWEI IDs. It gives your app seamless login functionality backed by a large user base.
Supported Devices
Development Overview
You need to install Android Studio IDE and I assume that you have prior knowledge of Android application development.
Hardware Requirements
A computer (desktop or laptop) running Windows 10.
Android phone (with the USB cable), which is used for debugging.
Software Requirements
Java JDK 1.8 or later.
Android Studio software installed.
HMS Core (APK) 4.X or later
Integration steps
Step 1. Create a Huawei developer account and complete identity verification on the Huawei developer website; refer to Register a HUAWEI ID.
Step 2. Create a project in AppGallery Connect.
Step 3. Add the HMS Core SDK.
Let's start coding
How do I call the sign-in method?
private void signInWithHuaweiID() {
AccountAuthParams authParams = new AccountAuthParamsHelper(AccountAuthParams.DEFAULT_AUTH_REQUEST_PARAM).setAuthorizationCode().createParams();
service = AccountAuthManager.getService(ClientActivity.this, authParams);
startActivityForResult(service.getSignInIntent(), 1212);
}
How do I get the sign-in result?
@Override
protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
// Process the authorization result to obtain the authorization code from AuthAccount.
super.onActivityResult(requestCode, resultCode, data);
if (requestCode == 1212) {
Task<AuthAccount> authAccountTask = AccountAuthManager.parseAuthResultFromIntent(data);
if (authAccountTask.isSuccessful()) {
// The sign-in is successful, and the user's ID information and authorization code are obtained.
AuthAccount authAccount = authAccountTask.getResult();
Log.i("TAG", "serverAuthCode:" + authAccount.getAuthorizationCode());
userName = authAccount.getDisplayName();
makeConnect();
} else {
// The sign-in failed.
Log.e("TAG", "sign in failed:" + ((ApiException) authAccountTask.getException()).getStatusCode());
}
}
}
How do I start the server?
wManager = (WifiManager) getSystemService(WIFI_SERVICE);
serverIP = Formatter.formatIpAddress(wManager.getConnectionInfo().getIpAddress());
ip_txt.setText(serverIP);
class ServerThread implements Runnable {
@Override
public void run() {
try {
// Create the server socket once; accept clients in a loop.
serverSocket = new ServerSocket(POST_NUMBER);
runOnUiThread(new Runnable() {
@Override
public void run() {
tv_status.setText("Waiting for conn at " + POST_NUMBER);
}
});
while (true) {
socket = serverSocket.accept();
output = new PrintWriter(socket.getOutputStream());
input = new BufferedReader(new InputStreamReader(socket.getInputStream()));
Log.d("TAG", " here ");
handler.post(new Runnable() {
@Override
public void run() {
tv_status.setText("Connected");
}
});
}
} catch (Exception e) {
e.printStackTrace();
}
}
}
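The sign-in callback above calls makeConnect(), which the article does not show; below is a minimal client-side sketch under the same field names (serverIP would be the server address entered on the client, POST_NUMBER matches the server code, and all names are assumptions in the article's style):

private void makeConnect() {
    new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                // Connect to the server and set up the streams used by SendMessage/ReadMessage.
                socket = new Socket(serverIP, POST_NUMBER);
                output = new PrintWriter(socket.getOutputStream());
                input = new BufferedReader(new InputStreamReader(socket.getInputStream()));
                new Thread(new ReadMessage()).start(); // start listening for incoming messages
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }).start();
}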
How do I send a message using the socket?
class SendMessage implements Runnable {
private String message;
SendMessage(String message) {
this.message = message;
}
@Override
public void run() {
output.write(message+"\r");
output.flush();
runOnUiThread(new Runnable() {
@Override
public void run() {
tv_chat.append("\n New Message: " + message);
ed_message.setText("");
}
});
Thread.interrupted();
}
}
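For completeness, the runnable above would be triggered from a send button roughly like this (btn_send is a hypothetical button; ed_message and userName are from the article's code):

btn_send.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        String message = userName + ": " + ed_message.getText().toString();
        // Socket I/O must stay off the main thread.
        new Thread(new SendMessage(message)).start();
    }
});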
How do I receive a message using the socket?
private class ReadMessage implements Runnable {
@Override
public void run() {
while (true) {
try {
// Log.d("TAG","Server: Listening for message");
if(input!=null){
final String message = input.readLine();
if (message != null) {
handler.post(new Runnable() {
@Override
public void run() {
tv_chat.append("\n" + message );
}
});
}
}
} catch (IOException e) {
// Log.e("TAG","Error while receiving message");
e.printStackTrace();
}
}
}
}
Close the socket and other connections
@Override
protected void onPause() {
super.onPause();
if (socket != null) {
try {
output.close();
input.close();
socket.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
How do I revoke the auth permission?
if(service!=null){
// service indicates the AccountAuthService instance generated using the getService method during the sign-in authorization.
service.cancelAuthorization().addOnCompleteListener(new OnCompleteListener<Void>() {
@Override
public void onComplete(Task<Void> task) {
if (task.isSuccessful()) {
// Processing after a successful authorization cancellation.
Log.i("TAG", "onSuccess: ");
} else {
// Handle the exception.
Exception exception = task.getException();
if (exception instanceof ApiException){
int statusCode = ((ApiException) exception).getStatusCode();
Log.i("TAG", "onFailure: " + statusCode);
}
}
}
});
}
Result
Tricks and Tips
Make sure the agconnect-services.json file is added.
Make sure the required dependencies are added.
Make sure the service is enabled in AGC.
Add the required permissions.
Conclusion
In this article, we have learnt how to integrate Huawei Account Kit into a client-server messaging app using sockets in Android. You can check the desired result in the Result section. I hope Huawei Account Kit's capabilities are helpful to you; as in this sample, you can make use of Huawei kits as per your requirements.
Thank you so much for reading. I hope this article helps you understand the integration of Huawei Account Kit in an Android application.
Reference
Huawei Account Kit – Training video
Check it out in the forum.

Categories

Resources