Quickly Recognize Fake Faces Using ML Kit's Liveness Detection Capability - Huawei Developers

For more information like this, visit the HUAWEI Developer Forum.
Introduction
Have you ever wondered whether unlocking your phone with facial recognition is really safe? What if someone masqueraded as you by using photos or videos of you, could your phone detect that it's not you in front of the camera? Well, thanks to ML Kit's liveness detection capability, it can! This feature accurately distinguishes between real faces and fake ones. Whether it’s a photo, video, or mask, liveness detection can immediately expose those fake faces!
Application Scenarios
Liveness detection is generally used to perform a face match. First, it will determine whether the person in front of the camera is a real person, instead of a person holding a photo or a mask. Then, face match will compare the current face to the one it has on record, to see if they are the same person. Liveness detection is useful in a huge range of situations. For example, it can prevent people from unlocking your phone and accessing your personal information.
It can be used for real-name authentication. It determines whether the person in front of the camera is a real person, and then compares their face with the photo on the ID card, to confirm that the person handling the service is the same person on the ID card.
And ML Kit's liveness detection also supports silent detection, where it can determine whether the user’s face is real without them having to do anything. Pretty convenient, right? Now, I'll show you how to quickly integrate liveness detection.
Liveness Detection Development
1. Preparations
You can find detailed information about the preparations you need to make on the HUAWEI Developers-Development Process.
Here, we'll just look at the most important procedures.
1.1 Configure the Maven Repository Address in the Project-Level build.gradle File
Code:
buildscript {
    repositories {
        ...
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        ...
        classpath 'com.huawei.agconnect:agcp:1.3.1.300'
    }
}
allprojects {
    repositories {
        ...
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
1.2 Configure SDK Dependencies in the App-Level build.gradle File
Code:
dependencies {
    // Import the combined liveness detection package.
    implementation 'com.huawei.hms:ml-computer-vision-livenessdetection:2.0.2.300'
}
1.3 Add Configurations to the File Header
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
1.4 Add these Statements to the AndroidManifest.xml File so the Machine Learning Model can Automatically Update
Code:
<meta-data
    android:name="com.huawei.hms.ml.DEPENDENCY"
    android:value="livenessdetection" />
1.5 Apply for Camera Permission
For detailed procedures about how to apply for camera permission, please refer to HUAWEI Developers-Assigning Permissions.
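As a quick reference, the snippet below shows the standard Android pattern: declare the permission in AndroidManifest.xml and request it at runtime. This is a minimal sketch; the request code and method name are placeholders for your own app.
Code:
// In AndroidManifest.xml:
// <uses-permission android:name="android.permission.CAMERA" />

// At runtime (API 23+), before starting detection:
private static final int PERMISSION_REQUEST_CAMERA = 1; // arbitrary request code

private void requestCameraPermissionIfNeeded() {
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
            != PackageManager.PERMISSION_GRANTED) {
        ActivityCompat.requestPermissions(this,
                new String[]{Manifest.permission.CAMERA}, PERMISSION_REQUEST_CAMERA);
    }
}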
2. Code Development
2.1 Create a Liveness Detection Result Callback to Obtain the Detection Results
Code:
private MLLivenessCapture.Callback callback = new MLLivenessCapture.Callback() {
    @Override
    public void onSuccess(MLLivenessCaptureResult result) {
        // Processing logic when the detection succeeds. The detection results indicate whether the face belongs to a real person.
    }
    @Override
    public void onFailure(int errorCode) {
        // Processing logic when the detection fails. For example, if the camera is abnormal (CAMERA_ERROR).
    }
};
2.2 Create a Liveness Detection Instance and Start the Detection
Code:
MLLivenessCapture capture = MLLivenessCapture.getInstance();
capture.startDetect(activity, callback);
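Putting 2.1 and 2.2 together, here is a minimal end-to-end sketch inside an activity. MainActivity is a placeholder name, and isLive() is an assumption based on the MLLivenessCaptureResult reference, so verify the accessor against your SDK version.
Code:
// A hedged sketch: start liveness detection and react to the result.
private final MLLivenessCapture.Callback callback = new MLLivenessCapture.Callback() {
    @Override
    public void onSuccess(MLLivenessCaptureResult result) {
        // isLive() is assumed from the MLLivenessCaptureResult reference.
        boolean isRealPerson = result.isLive();
        Toast.makeText(MainActivity.this,
                isRealPerson ? "Real face detected." : "Fake face detected.",
                Toast.LENGTH_SHORT).show();
    }
    @Override
    public void onFailure(int errorCode) {
        Log.e("Liveness", "Detection failed, errorCode=" + errorCode);
    }
};

private void startLivenessDetection() {
    MLLivenessCapture capture = MLLivenessCapture.getInstance();
    capture.startDetect(this, callback);
}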
Demo Effect
Below, you can see how the liveness detection capability differentiates between a real face and a face mask. Isn't it great?
GitHub source code
To find out more, take a look at our official website: HUAWEI ML Kit.

Related

How to Integrate HUAWEI ML Kit's Hand Keypoint Detection Capability

Introduction
In the previous post, we looked at how to use HUAWEI ML Kit's skeleton detection capability to detect points such as the head, neck, shoulders, knees and ankles. But as well as skeleton detection, ML Kit also provides a hand keypoint detection capability, which can locate 21 hand keypoints, such as fingertips, joints, and wrists.
Application Scenarios
Hand keypoint detection is useful in a huge range of situations. For example, short video apps can generate some cute and funny special effects based on hand keypoints, to add more fun to short videos.
Or, if smart home devices are integrated with hand keypoint detection, users could control them from a remote distance using customized gestures, so they could do things like activate a robot vacuum cleaner while they’re out.
Hand Keypoint Detection Development
Now, we’re going to see how to quickly integrate ML Kit's hand keypoint detection feature. Let’s take video stream detection as an example.
1. Preparations
You can find detailed information about the preparations you need to make on the HUAWEI Developers-Development Process.
Here, we'll just look at the most important procedures.
1.1 Configure the Maven Repository Address in the Project-Level build.gradle File
Code:
buildscript {
    repositories {
        ...
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        ...
        classpath 'com.huawei.agconnect:agcp:1.3.1.300'
    }
}
allprojects {
    repositories {
        ...
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
1.2 Add SDK Dependencies to the App-Level build.gradle File
Code:
dependencies {
    // Import the base SDK.
    implementation 'com.huawei.hms:ml-computer-vision-handkeypoint:2.0.2.300'
    // Import the hand keypoint detection model package.
    implementation 'com.huawei.hms:ml-computer-vision-handkeypoint-model:2.0.2.300'
}
1.3 Add Configurations to the File Header
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
1.4 Add these Statements to the AndroidManifest.xml File so the Machine Learning Model can Automatically Update
Code:
<meta-data
    android:name="com.huawei.hms.ml.DEPENDENCY"
    android:value="handkeypoint" />
1.5 Apply for Camera Permission and Local File Reading Permission
Code:
<!--Camera permission-->
<uses-permission android:name="android.permission.CAMERA" />
<!--Read permission-->
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
2. Code Development
2.1 Create a Hand Keypoint Analyzer
Code:
MLHandKeypointAnalyzerSetting setting = new MLHandKeypointAnalyzerSetting.Factory()
        // MLHandKeypointAnalyzerSetting.TYPE_ALL indicates that all results are returned.
        // MLHandKeypointAnalyzerSetting.TYPE_KEYPOINT_ONLY indicates that only hand keypoint information is returned.
        // MLHandKeypointAnalyzerSetting.TYPE_RECT_ONLY indicates that only palm information is returned.
        .setSceneType(MLHandKeypointAnalyzerSetting.TYPE_ALL)
        // Set the maximum number of hand regions that can be detected within an image. A maximum of 10 hand regions can be detected by default.
        .setMaxHandResults(1)
        .create();
MLHandKeypointAnalyzer analyzer = MLHandKeypointAnalyzerFactory.getInstance().getHandKeypointAnalyzer(setting);
2.2 Create the HandKeypointTransactor Class for Processing Detection Results
This class implements the MLAnalyzer.MLTransactor<T> API and uses its transactResult method to obtain the detection results and implement specific services. In addition to the coordinates of each hand keypoint, the detection results include confidence values for the palm and for each keypoint. Incorrectly detected palms and keypoints can be filtered out based on these values; set the threshold according to your tolerance for misrecognition (see the filtering sketch after the code below).
Code:
public class HandKeypointTransactor implements MLAnalyzer.MLTransactor<List<MLHandKeypoints>> {
    @Override
    public void transactResult(MLAnalyzer.Result<List<MLHandKeypoints>> result) {
        SparseArray<List<MLHandKeypoints>> analyseList = result.getAnalyseList();
        // Determine detection result processing as required. Note that only the detection results are processed.
        // Other detection-related APIs provided by ML Kit cannot be called.
    }
    @Override
    public void destroy() {
        // Callback method used to release resources when the detection ends.
    }
}
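As a minimal illustration of the confidence filtering mentioned above, the helper below discards palms whose overall confidence falls under a threshold. getScore() is an assumption based on the MLHandKeypoints reference, and the 0.8f threshold is an arbitrary example; tune it to your misrecognition tolerance.
Code:
// A hedged sketch: keep only hands whose palm confidence passes a threshold.
private static final float CONFIDENCE_THRESHOLD = 0.8f; // example value

private List<MLHandKeypoints> filterByConfidence(List<MLHandKeypoints> hands) {
    List<MLHandKeypoints> reliableHands = new ArrayList<>();
    for (MLHandKeypoints hand : hands) {
        // getScore() is assumed to return the palm confidence.
        if (hand.getScore() >= CONFIDENCE_THRESHOLD) {
            reliableHands.add(hand);
        }
    }
    return reliableHands;
}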
2.3 Set the Detection Result Processor to Bind the Analyzer to the Result Processor
Code:
analyzer.setTransactor(new HandKeypointTransactor());
2.4 Create an Instance of the LensEngine Class
The LensEngine class is provided by the HMS Core ML SDK to capture dynamic camera streams and pass them to the analyzer. Set the camera display size to a value between 320 x 320 px and 1920 x 1920 px.
Code:
LensEngine lensEngine = new LensEngine.Creator(getApplicationContext(), analyzer)
        .setLensType(LensEngine.BACK_LENS)
        .applyDisplayDimension(1280, 720)
        .applyFps(20.0f)
        .enableAutomaticFocus(true)
        .create();
2.5 Call the run Method to Start the Camera and Read Camera Streams for Detection
Code:
// Implement other logic of the SurfaceView control by yourself.
SurfaceView mSurfaceView = findViewById(R.id.surface_view);
try {
    lensEngine.run(mSurfaceView.getHolder());
} catch (IOException e) {
    // Exception handling logic.
}
2.6 Stop the Analyzer to Release Detection Resources Once the Detection is Complete
Code:
if (analyzer != null) {
    analyzer.stop();
}
if (lensEngine != null) {
    lensEngine.release();
}
Demo Effect
And that's it! We can now see hand keypoints appear when making different gestures. Remember that you can expand this capability if you need to.

Implement Eye-Enlarging and Face-Shaping Functions with ML Kit's Detection Capability

Introduction
Sometimes, we can't help taking photos to keep our unforgettable moments in life. But most of us are not professional photographers or models, so our photographs can end up falling short of our expectations. So, how can we produce more impressive snaps? If you have an image processing app on your phone, it can automatically detect faces in a photo, and you can then adjust the image until you're happy with it. So, after looking around online, I found HUAWEI ML Kit's face detection capability. By integrating this capability, you can add beautification functions to your apps. Have a try!
Application Scenarios
ML Kit's face detection capability detects up to 855 facial keypoints and returns the coordinates for the face's contour, eyebrows, eyes, nose, mouth, and ears, as well as the angle of the face. Once you've integrated this capability, you can quickly create beauty apps and enable users to add fun facial effects and features to their images.
Face detection also detects whether the subject's eyes are open, whether they're wearing glasses or a hat, whether they have a beard, and even their gender and age. This is useful if you want to add a parental control function to your app which prevents children from getting too close to their phone, or staring at the screen for too long.
In addition, face detection can detect up to seven facial expressions, including smiling, neutral, angry, disgusted, frightened, sad, and surprised faces. This is great if you want to create apps such as smile-cameras.
You can integrate any of these capabilities as needed. At the same time, face detection supports image and video stream detection, cross-frame face tracking, and multi-face detection. It really is powerful! Now, let's see how to integrate this capability.
Face Detection Development
1. Preparations
You can find detailed information about the preparations you need to make on the HUAWEI Developers-Development Process. Here, we'll just look at the most important procedures.
1.1 Configure the Maven Repository Address in the Project-Level build.gradle File
Code:
buildscript {
    repositories {
        ...
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        ...
        classpath 'com.huawei.agconnect:agcp:1.3.1.300'
    }
}
allprojects {
    repositories {
        ...
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
1.2 Add Configurations to the File Header
After integrating the SDK, add the following configuration to the file header:
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
1.3 Configure SDK Dependencies in the App-Level build.gradle File
Code:
dependencies {
    // Import the base SDK.
    implementation 'com.huawei.hms:ml-computer-vision-face:2.0.1.300'
    // Import the contour and keypoint detection model package.
    implementation 'com.huawei.hms:ml-computer-vision-face-shape-point-model:2.0.1.300'
    // Import the facial expression detection model package.
    implementation 'com.huawei.hms:ml-computer-vision-face-emotion-model:2.0.1.300'
    // Import the facial feature detection model package.
    implementation 'com.huawei.hms:ml-computer-vision-face-feature-model:2.0.1.300'
}
1.4 Add these Statements to the AndroidManifest.xml File so the Machine Learning Model can Update Automatically
Code:
<manifest
    ...
    <meta-data
        android:name="com.huawei.hms.ml.DEPENDENCY"
        android:value="face" />
    ...
</manifest>
1.5 Apply for Camera Permission
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" />
2. Code Development
2.1 Create a Face Analyzer by Using the Default Parameter Configurations
Code:
analyzer = MLAnalyzerFactory.getInstance().getFaceAnalyzer();
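The default configuration is enough for basic detection. If you want contours, keypoints, and facial features in one pass, you can build the analyzer from an MLFaceAnalyzerSetting instead. This is a hedged sketch: the setter names are assumptions based on the MLFaceAnalyzerSetting reference, so check them against your SDK version.
Code:
// A hedged sketch: create a face analyzer with explicit settings instead of the defaults.
MLFaceAnalyzerSetting setting = new MLFaceAnalyzerSetting.Factory()
        // Return facial features such as age, gender, and expressions.
        .setFeatureType(MLFaceAnalyzerSetting.TYPE_FEATURES)
        // Return contour and keypoint coordinates.
        .setShapeType(MLFaceAnalyzerSetting.TYPE_SHAPES)
        .setKeyPointType(MLFaceAnalyzerSetting.TYPE_KEYPOINTS)
        // Track faces across frames when analyzing video streams.
        .setTracingAllowed(true)
        .create();
MLFaceAnalyzer analyzer = MLAnalyzerFactory.getInstance().getFaceAnalyzer(setting);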
2.2 Create an MLFrame Object by Using the android.graphics.Bitmap for the Analyzer to Detect Images
Code:
MLFrame frame = MLFrame.fromBitmap(bitmap);
2.3 Call the asyncAnalyseFrame Method to Perform Face Detection
Code:
Task<List<MLFace>> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<List<MLFace>>() {
    @Override
    public void onSuccess(List<MLFace> faces) {
        // Detection success. Obtain the face keypoints.
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Detection failure.
    }
});
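Inside onSuccess you can walk the returned faces and read whatever the analyzer was configured to produce. A minimal sketch follows; the accessors (getFaceShapeList(), getEmotions(), getFeatures()) are assumptions based on the MLFace reference and may differ across SDK versions.
Code:
// A hedged sketch of consuming detection results inside onSuccess.
for (MLFace face : faces) {
    // Contour points, e.g., for drawing or for warping effects such as eye enlarging.
    List<MLFaceShape> shapes = face.getFaceShapeList();
    // Expression probabilities, e.g., for a smile-camera.
    float smiling = face.getEmotions().getSmilingProbability();
    // Facial features such as the estimated age.
    int age = face.getFeatures().getAge();
    Log.d(TAG, "shapes=" + shapes.size() + ", smiling=" + smiling + ", age=" + age);
}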
2.4 Use the Progress Bar to Process the Face in the Image
Call the magnifyEye and smallFaceMesh methods to implement the eye-enlarging algorithm and face-shaping algorithm.
Code:
private SeekBar.OnSeekBarChangeListener onSeekBarChangeListener = new SeekBar.OnSeekBarChangeListener() {
    @Override
    public void onProgressChanged(SeekBar seekBar, int progress, boolean fromUser) {
        switch (seekBar.getId()) {
            case R.id.seekbareye: // When the eye-enlarging progress bar changes, call magnifyEye here.
                break;
            case R.id.seekbarface: // When the face-shaping progress bar changes, call smallFaceMesh here.
                break;
        }
    }
    @Override
    public void onStartTrackingTouch(SeekBar seekBar) {}
    @Override
    public void onStopTrackingTouch(SeekBar seekBar) {}
};
2.5 Release the Analyzer After the Detection is Complete
Code:
try {
    if (analyzer != null) {
        analyzer.stop();
    }
} catch (IOException e) {
    Log.e(TAG, "e=" + e.getMessage());
}
Demo
Now, let's see what it can do. Pretty cool, right?

Benefit from a Brand-New and Easy-to-Use Security Paradigm

Overview
Recently, an AI model developed at MIT has proven capable of distinguishing COVID-19 infected individuals from non-infected individuals, based solely on cough recording analysis.
Indeed, AI technology can seem so remarkable at times that it appears to have magical powers.
These same high-level AI sound detection capabilities have recently been applied to bolster security systems.
Let’s look at how Huawei’s sound detection service takes security to the next level.
Service Introduction
Huawei’s sound detection service detects sound events in an environment by recording online in real time, and uses the data from detected events to perform subsequent operations. For example, the service notifies users of ongoing events in mobile apps, and reminds them to respond in a timely manner.
Currently, the service can detect events corresponding to the following 13 sound types:
- Laughter
- Baby crying
- Snoring
- Sneezing
- Yelling
- Cat meowing
- Dog barking
- Sound of water (such as faucet water, streams, and waves)
- Car horn honking
- Doorbell ringing
- Door knocks
- Fire alarms (sound emitted by fire or smoke alarms)
- Other alarm sounds (sound emitted by fire trucks, ambulances, police cars, and civil defense sirens)
Real-world Applications
Huawei’s sound detection service has a wide range of applications, including assisting the hearing impaired, collecting health-related data, and caring for infants, offering users newfound functionality and peace of mind.
For example, a hearing impaired user can use the service to learn more about their surroundings, and react appropriately in real time in response to emergencies, such as fires, alarms, screams, or floods.
Parents can keep close tabs on their babies by using the sound detection service. After receiving a notification of baby crying from a mobile app, parents can then care for their child immediately.
This service can also detect and record a user’s snores and sneezes in real time, and provide the user with high-level health data.
Huawei's sound detection service includes APIs and SDK packages, which means you can simply call the APIs or integrate the SDK to proceed with development.
Development Procedure
1 Configuring App Information in AppGallery Connect
Before you start developing an app, configure app information in AppGallery Connect.
For details, please visit the following link:
https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides-V5/config-agc-0000001050990353-V5
2 Configuring the Maven Repository Address of the HMS Core SDK and Integrating the Service SDK
2.1 Opening the build.gradle File in the Root Directory of Your Android Studio Project
2.2 Adding the AppGallery Connect Plug-in and the Maven Repository
Go to allprojects > repositories and configure the Maven repository address for the HMS Core SDK.
Go to buildscript > repositories and configure the Maven repository address for the HMS Core SDK.
If the agconnect-services.json file has been added to the app, go to buildscript > dependencies and add the AppGallery Connect plug-in configuration.
Code:
buildscript {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        ...
        classpath 'com.huawei.agconnect:agcp:1.4.1.300'
    }
}
allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
3 Creating a Sound Detection Instance
Code:
MLSoundDector soundDector = MLSoundDector.createSoundDector();
4 Creating a Sound Detection Result Callback to Obtain the Detection Result and Passing the Callback to the Sound Detection Instance
Code:
private MLSoundDectListener listener = new MLSoundDectListener() {
    @Override
    public void onSoundSuccessResult(Bundle result) {
        // Processing logic after successful detection. The detection result is an integer from 0 to 12,
        // corresponding to the 13 sound types whose names start with SOUND_EVENT_TYPE in MLSoundDectConstants.java.
        int soundType = result.getInt(MLSoundDector.RESULTS_RECOGNIZED);
    }
    @Override
    public void onSoundFailResult(int errCode) {
        // Processing logic after failed detection. For example, the microphone permission (Manifest.permission.RECORD_AUDIO) may not have been granted.
    }
};
soundDector.setSoundDectListener(listener);
5 Starting Sound Detection
Code:
boolean isStarted = soundDector.start(context);
// If isStarted is true, the detection has started successfully; if it is false, the detection failed to start.
// (A possible cause is that the microphone is occupied by the system or another app.)
6 Stopping Sound Detection
Code:
soundDector.stop();
7 Releasing Resources After Detection
Code:
soundDector.destroy();
Demo
GitHub Demo
Reference
Official website of Huawei Developers
Development Guide
HMS Core official community on Reddit
Demo and sample code
Discussions on Stack Overflow

ML Kit's Scene Detection Service Brings Enhanced Effects to Images with Newfound Ease

1. Overview
Camera functions on today's phones are so powerful that they now involve advanced image processing algorithms, as well as the camera hardware itself. But even so, users often need to continually adjust the image parameters, a process that can be extremely frustrating, even when it results in an ideal shot. For most of us who are not professional photographers, images often end up falling woefully short of expectations. Given these universal challenges, there's incredible demand for apps that can enhance image effects automatically, by accounting for the user's surroundings. HUAWEI ML Kit's scene detection service helps bring such apps to life, by detecting 102 visual features, including beaches, sky scenes, food, night scenes, plants, and common buildings. The service can automatically adjust the parameters corresponding to the image matrix, helping build proactive, intelligent, and efficient apps. Let's take a look at how it does this.
2. Effects
For a city nightscape, the service will accurately detect the night scene, and enhance the brightness in bright areas, as well as the darkness in dark areas, rendering a highly pleasing contrast.
For a sky image, the service will brighten the sky with an enhanced matrix.
The effects are similar for photos of flowers and plants.
The effects tend to vary, depending on the specific image. Users are free to apply their favorite effects, or mix and match them with filters.
Now, let's take a look at how the service can be integrated.
3. Development Process
3.1 Create a scene detection analyzer instance.
Code:
MLSceneDetectionAnalyzer analyzer = MLSceneDetectionAnalyzerFactory.getInstance().getSceneDetectionAnalyzer();
3.2 Create an MLFrame object by using android.graphics.Bitmap. JPG, JPEG, PNG, and BMP images are currently supported.
Code:
MLFrame frame = new MLFrame.Creator().setBitmap(bitmap).create();
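If you are testing with a local image, the bitmap can come from BitmapFactory. A minimal sketch; R.drawable.scenery is a placeholder resource name.
Code:
// A hedged sketch: decode a test image from app resources and wrap it in an MLFrame.
Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.scenery);
MLFrame frame = new MLFrame.Creator().setBitmap(bitmap).create();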
3.3 Perform scene detection.
Code:
Task<List<MLSceneDetection>> task = this.analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<List<MLSceneDetection>>() {
    @Override
    public void onSuccess(List<MLSceneDetection> sceneInfos) {
        // Processing logic for scene detection success.
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Processing logic for scene detection failure.
        if (e instanceof MLException) {
            MLException exception = (MLException) e;
            // Obtain the result code.
            int errCode = exception.getErrCode();
            // Obtain the error information.
            String message = exception.getMessage();
        } else {
            // Other errors.
        }
    }
});
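Each MLSceneDetection entry carries the recognized scene and a confidence value. A minimal sketch of consuming them inside onSuccess; getResult() and getConfidence() are assumptions based on the MLSceneDetection reference.
Code:
// A hedged sketch: log each detected scene with its confidence.
for (MLSceneDetection sceneInfo : sceneInfos) {
    Log.d(TAG, "scene=" + sceneInfo.getResult()
            + ", confidence=" + sceneInfo.getConfidence());
}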
3.4 Stop the analyzer to release detection resources when the detection ends.
Code:
if (analyzer != null) {
    analyzer.stop();
}
Configure the Maven repository address.
Code:
buildscript {
    repositories {
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
allprojects {
    repositories {
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
Import the SDK.
Code:
dependencies {
    // Scene detection SDK.
    implementation 'com.huawei.hms:ml-computer-vision-scenedetection:2.0.3.300'
    // Scene detection model.
    implementation 'com.huawei.hms:ml-computer-vision-scenedetection-model:2.0.3.300'
}
Manifest file
Code:
<manifest
    ...
    <meta-data
        android:name="com.huawei.hms.ml.DEPENDENCY"
        android:value="scenedetection" />
    ...
</manifest>
Permissions
Code:
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.autofocus" />
Submit a dynamic permission application.
Code:
if (!(ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED)) {
    requestCameraPermission();
}
4. Learn More
ML Kit's scene detection service also comes equipped with intelligent features, such as smart album management and image searching, which allow users to find and sort images in a refreshingly natural and intuitive way.
Reference
Official website of Huawei Developers
Development Guide
HMS Core official community on Reddit
Demo and sample code
Discussions on Stack Overflow

A Quick Introduction about How to Implement Sound Detection

For some apps, it's necessary to have a function called sound detection that can recognize sounds like knocks on the door, rings of the doorbell, and car horns. Developing such a function can be costly for small- and medium-sized developers, so what should they do in this situation?
There's no need to worry if you use the sound detection service in HUAWEI ML Kit. Integrating its SDK into your app is simple, and it equips your app with a sound detection function that works well even when the device is not connected to the network.
Introduction to Sound Detection in HUAWEI ML Kit
This service can detect sound events online by real-time recording. The detected sound events can help you perform subsequent actions. Currently, the following types of sound events are supported: laughter, child crying, snoring, sneezing, shouting, cat meowing, dog barking, running water (such as from taps, streams, and ocean waves), car horns, doorbells, knocking on doors, fire alarms (including smoke alarms), and sirens (such as those from fire trucks, ambulances, police cars, and air defenses).
Preparations
Configuring the Development Environment
Create an app in AppGallery Connect.
For details, see Getting Started with Android.
Enable ML Kit.
Click here to get more information.
After the app is created, an agconnect-services.json file will be automatically generated. Download it and copy it to the root directory of your project.
Configure the Huawei Maven repository address.
To learn more, click here.
Integrate the sound detection SDK.
It is recommended to integrate the SDK in full SDK mode. Add build dependencies for the SDK in the app-level build.gradle file.
Code:
// Import the sound detection package.
implementation 'com.huawei.hms:ml-speech-semantics-sounddect-sdk:2.1.0.300'
implementation 'com.huawei.hms:ml-speech-semantics-sounddect-model:2.1.0.300'
Add the AppGallery Connect plugin configuration as needed using either of the following methods:
Method 1: Add the following information under the declaration in the file header:
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Method 2: Add the plugin configuration in the plugins block:
Code:
plugins {
    id 'com.android.application'
    id 'com.huawei.agconnect'
}
Automatically update the machine learning model.
Add the following statements to the AndroidManifest.xml file. After a user installs your app from HUAWEI AppGallery, the machine learning model is automatically updated to the user's device.
Code:
<meta-data
    android:name="com.huawei.hms.ml.DEPENDENCY"
    android:value="sounddect" />
For details, go to Integrating the Sound Detection SDK.
Development Procedure
Obtain the microphone permission. If the app does not have this permission, error 12203 will be reported.
(Mandatory) Apply for the static permission.
Code:
<uses-permission android:name="android.permission.RECORD_AUDIO" />
(Mandatory) Apply for the dynamic permission.
Code:
ActivityCompat.requestPermissions(
        this, new String[]{Manifest.permission.RECORD_AUDIO}, 1);
Create an MLSoundDector object.
Code:
private static final String TAG = "MLSoundDectorDemo";
// Object of sound detection.
private MLSoundDector mlSoundDector;

// Create an MLSoundDector object and configure the callback.
private void initMLSoundDector() {
    mlSoundDector = MLSoundDector.createSoundDector();
    mlSoundDector.setSoundDectListener(listener);
}
Create a sound detection result callback to obtain the detection result and pass the callback to the sound detection instance.
Code:
private MLSoundDectListener listener = new MLSoundDectListener() {
    @Override
    public void onSoundSuccessResult(Bundle result) {
        // Processing logic when the detection is successful. The detection result is an integer from 0 to 12,
        // corresponding to the 13 sound types whose names start with SOUND_EVENT_TYPE, defined in MLSoundDectConstants.java.
        int soundType = result.getInt(MLSoundDector.RESULTS_RECOGNIZED);
        Log.d(TAG, "Detection success: " + soundType);
    }
    @Override
    public void onSoundFailResult(int errCode) {
        // Processing logic when the detection fails. A possible cause is that your app does not have
        // the microphone permission (Manifest.permission.RECORD_AUDIO).
        Log.d(TAG, "Detection failure: " + errCode);
    }
};
Note: The code above prints the type of the detected sound as an integer. In a real app, you can map the integer to a label that users understand, as in the sketch after the list below.
Definition of the detected sound types:
Code:
<string-array name="sound_dect_voice_type">
    <item>laughter</item>
    <item>baby crying sound</item>
    <item>snore</item>
    <item>sneeze</item>
    <item>shout</item>
    <item>cat's meow</item>
    <item>dog's bark</item>
    <item>running water</item>
    <item>car horn sound</item>
    <item>doorbell sound</item>
    <item>knock</item>
    <item>fire alarm sound</item>
    <item>alarm sound</item>
</string-array>
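A minimal sketch of that mapping, assuming the string array above lives in res/values/strings.xml and that its item order matches the SOUND_EVENT_TYPE constants:
Code:
// A hedged sketch: map the detected sound type (0-12) to a human-readable label.
private String soundTypeToLabel(int soundType) {
    String[] labels = getResources().getStringArray(R.array.sound_dect_voice_type);
    if (soundType >= 0 && soundType < labels.length) {
        return labels[soundType];
    }
    return "unknown";
}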
Start and stop sound detection.
Code:
@Override
public void onClick(View v) {
    switch (v.getId()) {
        case R.id.btn_start_detect:
            if (mlSoundDector != null) {
                boolean isStarted = mlSoundDector.start(this); // Pass a Context.
                // If isStarted is true, the detection has started successfully; if it is false, the detection
                // failed to start. (A possible cause is that the microphone is occupied by the system or another app.)
                if (isStarted) {
                    Toast.makeText(this, "The detection is successfully started.", Toast.LENGTH_SHORT).show();
                }
            }
            break;
        case R.id.btn_stop_detect:
            if (mlSoundDector != null) {
                mlSoundDector.stop();
            }
            break;
    }
}
Call destroy() to release resources when the sound detection page is closed.
Code:
@Override
protected void onDestroy() {
    super.onDestroy();
    if (mlSoundDector != null) {
        mlSoundDector.destroy();
    }
}
Testing the App
Using a knock on the door as an example, the expected output of sound detection is 10.
Tap Start detecting and simulate a knock on the door. If logs like the following appear in Android Studio's console, the sound detection SDK has been integrated successfully.
More Information
Sound detection is one of the six capability categories of ML Kit, which cover text, language/voice, image, face/body, natural language processing, and custom models. Sound detection belongs to the language/voice category.
Interested in the other categories? Feel free to have a look at the HUAWEI ML Kit documentation.
To learn more, please visit:
HUAWEI Developers official website
Development Guide
Reddit to join developer discussions
GitHub or Gitee to download the demo and sample code
Stack Overflow to solve integration problems
Follow our official account for the latest HMS Core-related news and updates.
Original Source
