How to Develop a QR Code Scanner for Paying Parking

Background
One afternoon many weeks ago, when I tried to exit a parking lot, I was — once again — battling with technology as I tried to pay the parking fee. I opened an app and used it to scan the QR payment code on the wall, but the app just wouldn't recognize the code because it was too far away. Thankfully, a parking lot attendant came out to help me complete the payment, sparing me the embarrassment of the cars behind me beeping their horns in frustration. This made me want to create a QR code scanning app that could save me from such pain in the future.
The first demo app I created was, truth be told, a failure. First, the distance between my phone and a QR code had to be within 30 cm; otherwise, the app would fail to recognize the code. In most cases, however, such a short distance is impractical in a parking lot.
Another problem was that the app could not recognize a hard-to-read QR code. As no one in a parking lot is responsible for managing QR codes, the codes will gradually wear out and become damaged. Moreover, poor lighting also affects the camera's ability to recognize the QR code.
Third, the app could not recognize the correct QR code when it was displayed alongside other codes. Although this situation is rare in a parking lot, I still didn't want to take the risk.
And lastly, the app could not recognize a tilted or distorted QR code. Scanning a code face-on has a high accuracy rate, but we cannot expect this to be possible every time we exit a parking lot. On top of that, even when we can scan a code face-on, something may be obstructing the view, such as a pillar. In this case, the code appears distorted and therefore cannot be recognized by the app.
Solution I Found
Now that I had identified the challenges, I had to find a QR code scanning solution. Luckily, I came across Scan Kit from HMS Core, which addressed every problem my first demo app encountered.
Specifically, the kit has a pre-identification function in its scanning process, which allows it to automatically zoom in on a code from far away. The kit adopts multiple computer vision technologies so that it can recognize a QR code that is unclear or incomplete. For scenarios with multiple codes, the kit offers a mode that can simultaneously recognize five codes of varying formats. On top of this, the kit can automatically detect and adjust a QR code that is tilted or distorted, so that it can be recognized more quickly.
Demo Illustration
Using this kit, I created the demo app I wanted, as shown below.
[Demo: the app automatically zooms in on and scans a QR code from far away]
See it? The app automatically and swiftly zooms in on and recognizes a QR code from 2 meters away. Now let's see how this useful gadget is developed.
Development Procedure
Preparations
1. Download and install Android Studio.
2. Add a Maven repository to the project-level build.gradle file.
Add the following Maven repository addresses:
Code:
buildscript {
    repositories {
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
allprojects {
    repositories {
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
3. Add build dependencies on the Scan SDK in the app-level build.gradle file.
The Scan SDK comes in two versions: Scan SDK-Plus and Scan SDK. The former performs better but is a little larger (about 3.1 MB, versus about 1.1 MB for the Scan SDK). For my demo app, I chose the Plus version:
Code:
dependencies {
    implementation 'com.huawei.hms:scanplus:1.1.1.301'
}
Note: Use the version number of the latest SDK.
4. Configure obfuscation scripts.
Open the obfuscation configuration file (proguard-rules.pro) in the app directory, and add configurations to exclude the HMS Core SDK from obfuscation.
Code:
-ignorewarnings
-keepattributes *Annotation*
-keepattributes Exceptions
-keepattributes InnerClasses
-keepattributes Signature
-keepattributes SourceFile,LineNumberTable
-keep class com.hianalytics.android.**{*;}
-keep class com.huawei.**{*;}
5. Declare necessary permissions.
Open the AndroidManifest.xml file. Apply for static permissions and features.
Code:
<!-- Camera permission -->
<uses-permission android:name="android.permission.CAMERA" />
<!-- File read permission -->
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<!-- Feature -->
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
Add the declaration of the scanning activity to the application tag.
Code:
<!-- Declaration of the scanning activity -->
<activity android:name="com.huawei.hms.hmsscankit.ScanKitActivity" />
Code Development
1. Apply for dynamic permissions when the scanning activity is started.
Code:
public void loadScanKitBtnClick(View view) {
    requestPermission(CAMERA_REQ_CODE, DECODE);
}

private void requestPermission(int requestCode, int mode) {
    // Request the camera and storage permissions at runtime.
    ActivityCompat.requestPermissions(
            this,
            new String[]{Manifest.permission.CAMERA, Manifest.permission.READ_EXTERNAL_STORAGE},
            requestCode);
}
2. Start the scanning activity in the permission application callback.
In the code below, setHmsScanTypes specifies QR code as the code format. If you need your app to support other formats, you can use this method to specify them; see the sketch after this snippet.
Code:
@Override
public void onRequestPermissionsResult(int requestCode, String[] permissions, int[] grantResults) {
    if (permissions == null || grantResults == null) {
        return;
    }
    if (grantResults.length < 2 || grantResults[0] != PackageManager.PERMISSION_GRANTED || grantResults[1] != PackageManager.PERMISSION_GRANTED) {
        return;
    }
    if (requestCode == CAMERA_REQ_CODE) {
        ScanUtil.startScan(this, REQUEST_CODE_SCAN_ONE, new HmsScanAnalyzerOptions.Creator().setHmsScanTypes(HmsScan.QRCODE_SCAN_TYPE).create());
    }
}
3. Obtain the code scanning result in the activity callback.
Code:
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (resultCode != RESULT_OK || data == null) {
        return;
    }
    if (requestCode == REQUEST_CODE_SCAN_ONE) {
        HmsScan obj = data.getParcelableExtra(ScanUtil.RESULT);
        if (obj != null) {
            this.textView.setText(obj.originalValue);
        }
    }
}
And just like that, the demo is created. Scan Kit actually offers four modes: Default View mode, Customized View mode, Bitmap mode, and MultiProcessor mode, of which the first two are very similar: in both, Scan Kit controls the camera to implement capabilities such as zoom control and autofocus, and the only difference is that Customized View supports customization of the scanning UI. For those who want to customize the scanning process and control the camera themselves, Bitmap mode is a better choice (a minimal sketch of it follows). MultiProcessor mode, in turn, lets your app scan multiple codes simultaneously. One of these modes should meet your requirements for developing a code scanner.
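For reference, here is a minimal sketch of Bitmap mode, assuming you already hold a Bitmap (for example, decoded from a gallery image). decodeWithBitmap and setPhotoMode are the documented entry points, but treat the exact flags as assumptions to verify against the current Scan Kit reference:
Code:
// Sketch: decode codes from an existing Bitmap instead of the camera stream.
HmsScanAnalyzerOptions options = new HmsScanAnalyzerOptions.Creator()
        .setHmsScanTypes(HmsScan.QRCODE_SCAN_TYPE)
        .setPhotoMode(true) // true when the bitmap comes from an image file rather than a camera frame.
        .create();
HmsScan[] results = ScanUtil.decodeWithBitmap(context, bitmap, options);
if (results != null && results.length > 0 && results[0] != null) {
    String value = results[0].getOriginalValue();
    // Handle the decoded value.
}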
Takeaway
Scan-to-pay is a convenient function in parking lots, but it may fail when, for example, the distance between the code and the phone is too great, the QR code is blurred or incomplete, or the code is scanned at an angle.
HMS Core Scan Kit is a great tool for alleviating these issues. What's more, to cater to different scanning requirements, the kit offers four modes for calling its services (Default View, Customized View, Bitmap, and MultiProcessor) as well as two SDK versions (Scan SDK-Plus and Scan SDK). All of them can be integrated with just a few lines of code, making the kit ideal for developing a code scanner that delivers an outstanding and personalized user experience.


Develop an ID Photo DIY Applet with HMS MLKit SDK in 30 Min

For more information like this, visit the HUAWEI Developer Forum.
Original link: https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0201257812100840239&fid=0101187876626530001
This is application-level development, so we won't go into the image segmentation algorithm itself. Huawei ML Kit provides the image segmentation capability, and this article shows how to quickly develop an ID photo DIY applet with that SDK.
Background
You may have had this experience: all of a sudden, your school or company asks everyone to provide a one-inch or two-inch head photo, for example to apply for a passport or student card, and there are requirements for the photo's background color. However, many people don't have time to visit a photo studio, or the photos they already have don't meet the background color requirements. I had a similar experience: my school asked for a passport photo while the school photo studio was closed, so I hurriedly took a photo with my phone, using my bedspread as the background. As a result, I was scolded by the teacher.
Years later, ML Kit's machine learning services gained the image segmentation function. Using this SDK to develop an ID photo DIY applet perfectly resolves that old embarrassment.
Here is a demo of the result.
[Demo: ID photo applet changing the photo background]
Looks great, right? And all it takes is a small applet.
Core tip: this SDK is free and covers all Android models!
ID Photo Development in Practice
1. Preparation
1.1 Add the Huawei Maven Repository in the Project-Level build.gradle
Open the project-level build.gradle file in Android Studio.
Add the following Maven addresses:
Code:
buildscript {
    repositories {
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
allprojects {
    repositories {
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
1.2 Add SDK Dependency in Application Level build.gradle
Introduce the base SDK and the image segmentation model package:
Code:
dependencies {
    implementation 'com.huawei.hms:ml-computer-vision:1.0.2.300'
    implementation 'com.huawei.hms:ml-computer-vision-image-segmentation-body-model:1.0.2.301'
}
1.3 Add the Model in the AndroidManifest.xml File
To enable the application to automatically update the latest machine learning model on the user's device after your application is installed from HUAWEI AppGallery, add the following statement to the AndroidManifest.xml file:
Code:
<manifest
    ...
    <application
        ...
        <meta-data
            android:name="com.huawei.hms.ml.DEPENDENCY"
            android:value="imgseg" />
        ...
    </application>
</manifest>
1.4 Apply for Camera and Storage Permissions in the AndroidManifest.xml File
Code:
<!-- Camera permission -->
<uses-permission android:name="android.permission.CAMERA" />
<!-- Storage permission -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
2. Two Key Steps of Code Development
2.1 Dynamic Permission Application
Code:
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    if (!allPermissionsGranted()) {
        getRuntimePermissions();
    }
}

@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
        @NonNull int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    if (requestCode != PERMISSION_REQUESTS) {
        return;
    }
    boolean isNeedShowDiag = false;
    for (int i = 0; i < permissions.length; i++) {
        if (permissions[i].equals(Manifest.permission.READ_EXTERNAL_STORAGE)
                && grantResults[i] != PackageManager.PERMISSION_GRANTED) {
            isNeedShowDiag = true;
        }
    }
    if (isNeedShowDiag && !ActivityCompat.shouldShowRequestPermissionRationale(this, Manifest.permission.READ_EXTERNAL_STORAGE)) {
        AlertDialog dialog = new AlertDialog.Builder(this)
                .setMessage(getString(R.string.camera_permission_rationale))
                .setPositiveButton(getString(R.string.settings), new DialogInterface.OnClickListener() {
                    @Override
                    public void onClick(DialogInterface dialog, int which) {
                        // Open the app's details page in system settings based on the package name.
                        Intent intent = new Intent(Settings.ACTION_APPLICATION_DETAILS_SETTINGS);
                        intent.setData(Uri.parse("package:" + getPackageName()));
                        startActivityForResult(intent, 200);
                    }
                })
                .setNegativeButton(getString(R.string.cancel), new DialogInterface.OnClickListener() {
                    @Override
                    public void onClick(DialogInterface dialog, int which) {
                        finish();
                    }
                }).create();
        dialog.show();
    }
}
2.2 Creating an Image Segmentation Detector
The image segmentation detector is created through the image segmentation detection configurator, MLImageSegmentationSetting.
Code:
MLImageSegmentationSetting setting = new MLImageSegmentationSetting.Factory()
        .setAnalyzerType(MLImageSegmentationSetting.BODY_SEG)
        .setExact(true)
        .create();
this.analyzer = MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer(setting);
2.3 Create an MLFrame Object Through android.graphics.Bitmap for the Analyzer to Detect Pictures
Create an MLFrame object from the bitmap to be detected:
Code:
MLFrame mlFrame = new MLFrame.Creator().setBitmap(this.originBitmap).create();
2.4 Call the asyncAnalyseFrame Method for Image Segmentation
Code:
// Create a task to asynchronously process the result returned by the image segmentation detector.
Task<MLImageSegmentation> task = this.analyzer.asyncAnalyseFrame(mlFrame);
task.addOnSuccessListener(new OnSuccessListener<MLImageSegmentation>() {
    @Override
    public void onSuccess(MLImageSegmentation mlImageSegmentationResults) {
        // Processing logic for segmentation success.
        if (mlImageSegmentationResults != null) {
            StillCutPhotoActivity.this.foreground = mlImageSegmentationResults.getForeground();
            StillCutPhotoActivity.this.preview.setImageBitmap(StillCutPhotoActivity.this.foreground);
            StillCutPhotoActivity.this.processedImage = ((BitmapDrawable) ((ImageView) StillCutPhotoActivity.this.preview).getDrawable()).getBitmap();
            StillCutPhotoActivity.this.changeBackground();
        } else {
            StillCutPhotoActivity.this.displayFailure();
        }
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Processing logic for segmentation failure.
        StillCutPhotoActivity.this.displayFailure();
    }
});
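One thing the snippets above leave out is releasing the analyzer once segmentation is done. Mirroring the release pattern used elsewhere in ML Kit (stop() throwing IOException is an assumption to verify against the reference), a minimal sketch might be:
Code:
@Override
protected void onDestroy() {
    super.onDestroy();
    if (this.analyzer != null) {
        try {
            // Release the image segmentation analyzer when it is no longer needed.
            this.analyzer.stop();
        } catch (IOException e) {
            Log.e(TAG, "Failed to stop the analyzer: " + e.getMessage());
        }
    }
}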
2.5 Change the Picture Background
Code:
this.backgroundBitmap = BitmapUtils.loadFromPath(StillCutPhotoActivity.this, id, targetedSize.first, targetedSize.second);
BitmapDrawable drawable = new BitmapDrawable(backgroundBitmap);
this.preview.setDrawingCacheEnabled(true);
this.preview.setBackground(drawable);
this.preview.setImageBitmap(this.foreground);
this.processedImage = Bitmap.createBitmap(this.preview.getDrawingCache());
this.preview.setDrawingCacheEnabled(false);
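As a side note, the drawing cache APIs used above are deprecated on newer Android versions. A minimal sketch of composing the result directly with a Canvas instead (assuming the foreground and backgroundBitmap variables from above):
Code:
// Sketch: compose the cut-out foreground over the new background without the drawing cache.
Bitmap output = Bitmap.createBitmap(
        backgroundBitmap.getWidth(), backgroundBitmap.getHeight(), Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(output);
canvas.drawBitmap(backgroundBitmap, 0, 0, null);
// Scale the foreground to the background size before drawing it on top.
canvas.drawBitmap(Bitmap.createScaledBitmap(
        foreground, output.getWidth(), output.getHeight(), true), 0, 0, null);
this.processedImage = output;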
Conclusion
And with that, the ID photo DIY applet is complete. Let's see the demo.
If you like hands-on work, you can extend it further, for example by adding or changing outfits. The source code has been uploaded to GitHub (the project directory is id-photo-diy), so you can also improve this function there:
https://github.com/HMS-MLKit/HUAWEI-HMS-MLKit-Sample
Based on the image segmentation capability, you can not only build an ID photo DIY applet but also implement the following related functions:
Cut out people's portraits from everyday photos, make fun photos by changing the background, or blur the background for more beautiful and artistic shots.
Identify the sky, plants, food, cats and dogs, flowers, water, sand, buildings, mountains, and other elements in an image, and apply targeted beautification, such as making the sky bluer and the water clearer.
Identify objects in a video stream, apply special effects to the stream, and change the background.
For other functions, feel free to brainstorm!
For a more detailed development guide, refer to the official website of the HUAWEI Developer Alliance:
https://developer.huawei.com/consumer/en/doc/development/HMS-Guides/ml-introduction-4
Previous links:
NO. 1: One article to understand Huawei HMS ML Kit text recognition, bank card recognition, and general card identification
NO. 2: Integrating ML Kit and publishing your app on Huawei AppGallery
NO. 3: Comparison Between Zxing and Huawei HMS Scan Kit
NO. 4: How to use Huawei HMS ML Kit service to quickly develop a photo translation app

Implement Eye-Enlarging and Face-Shaping Functions with ML Kit's Detection Capability

Introduction
Sometimes, we can't help taking photos to keep our unforgettable moments in life. But most of us are not professional photographers or models, so our photographs can end up falling short of our expectations. So, how can we produce more impressive snaps? If you have an image processing app on your phone, it can automatically detect faces in a photo, and you can then adjust the image until you're happy with it. So, after looking around online, I found HUAWEI ML Kit's face detection capability. By integrating this capability, you can add beautification functions to your apps. Have a try!
Application Scenarios
ML Kit's face detection capability detects up to 855 facial keypoints and returns the coordinates for the face's contour, eyebrows, eyes, nose, mouth, and ears, as well as the angle of the face. Once you've integrated this capability, you can quickly create beauty apps and enable users to add fun facial effects and features to their images.
Face detection also detects whether the subject's eyes are open, whether they're wearing glasses or a hat, whether they have a beard, and even their gender and age. This is useful if you want to add a parental control function to your app which prevents children from getting too close to their phone, or staring at the screen for too long.
In addition, face detection can detect up to seven facial expressions, including smiling, neutral, angry, disgusted, frightened, sad, and surprised faces. This is great if you want to create apps such as smile-cameras.
You can integrate any of these capabilities as needed. At the same time, face detection supports image and video stream detection, cross-frame face tracking, and multi-face detection. It really is powerful! Now, let's see how to integrate this capability.
Face Detection Development
1. Preparations
You can find detailed information about the preparations you need to make on the HUAWEI Developers-Development Process. Here, we'll just look at the most important procedures.
1.1 Configure the Maven Repository Address in the Project-Level build.gradle File
Code:
buildscript {
    repositories {
        ...
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        ...
        classpath 'com.huawei.agconnect:agcp:1.3.1.300'
    }
}
allprojects {
    repositories {
        ...
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
1.2 Add Configurations to the File Header
After integrating the SDK, add the following configurations to the header of the app-level build.gradle file:
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
1.3 Configure SDK Dependencies in the App-Level build.gradle File
Code:
dependencies {
    // Import the base SDK.
    implementation 'com.huawei.hms:ml-computer-vision-face:2.0.1.300'
    // Import the contour and keypoint detection model package.
    implementation 'com.huawei.hms:ml-computer-vision-face-shape-point-model:2.0.1.300'
    // Import the facial expression detection model package.
    implementation 'com.huawei.hms:ml-computer-vision-face-emotion-model:2.0.1.300'
    // Import the facial feature detection model package.
    implementation 'com.huawei.hms:ml-computer-vision-face-feature-model:2.0.1.300'
}
1.4 Add these Statements to the AndroidManifest.xml File so the Machine Learning Model can Update Automatically
Code:
<manifest
    ...
    <meta-data
        android:name="com.huawei.hms.ml.DEPENDENCY"
        android:value="face" />
    ...
</manifest>
1.5 Apply for Camera Permission
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" />
2. Code Development
2.1 Create a Face Analyzer by Using the Default Parameter Configurations
Code:
analyzer = MLAnalyzerFactory.getInstance().getFaceAnalyzer();
2.2 Create an MLFrame Object by Using the android.graphics.Bitmap for the Analyzer to Detect Images
Code:
MLFrame frame = MLFrame.fromBitmap(bitmap);
2.3 Call the asyncAnalyseFrame Method to Perform Face Detection
Code:
Task<List<MLFace>> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<List<MLFace>>() {
    @Override
    public void onSuccess(List<MLFace> faces) {
        // Detection success. Obtain the face keypoints.
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Detection failure.
    }
});
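To give an idea of what the success callback can do, here is a hedged sketch that reads the contour points of the first detected face. The MLFaceShape accessor names follow the ML Kit face detection reference, but verify them against the version you integrate:
Code:
// Sketch: read the contour keypoints of the first detected face.
if (faces != null && !faces.isEmpty()) {
    MLFace face = faces.get(0);
    for (MLFaceShape shape : face.getFaceShapeList()) {
        if (shape.getFaceShapeType() == MLFaceShape.TYPE_FACE) {
            List<MLPosition> contour = shape.getPoints();
            Log.d(TAG, "Face contour points: " + contour.size());
        }
    }
}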
2.4 Use the Progress Bar to Process the Face in the Image
Call the magnifyEye and smallFaceMesh methods to implement the eye-enlarging algorithm and face-shaping algorithm.
Code:
private SeekBar.OnSeekBarChangeListener onSeekBarChangeListener = new SeekBar.OnSeekBarChangeListener() {
    @Override
    public void onProgressChanged(SeekBar seekBar, int progress, boolean fromUser) {
        switch (seekBar.getId()) {
            case R.id.seekbareye:
                // When the eye-enlarging progress bar changes, call magnifyEye.
                break;
            case R.id.seekbarface:
                // When the face-shaping progress bar changes, call smallFaceMesh.
                break;
        }
    }
    // Override onStartTrackingTouch and onStopTrackingTouch as needed.
};
2.5 Release the Analyzer After the Detection is Complete
Code:
try {
    if (analyzer != null) {
        analyzer.stop();
    }
} catch (IOException e) {
    Log.e(TAG, "e=" + e.getMessage());
}
Demo
Now, let's see what it can do. Pretty cool, right?
[Demo: eye-enlarging and face-shaping effects]

Intermediate: How to integrate Huawei Awareness kit Barrier API with Local Notification into fitness app (Flutter)

Introduction
In this article, we will learn how to implement Huawei Awareness Kit features with the Local Notification service to send a notification when a certain condition is met. We can define our own conditions to observe, and notify the user even when the application is not running.
The Awareness Kit is quite a comprehensive detection and event SDK. If you're developing a context-aware app for Huawei devices, this is definitely the library for you.
        
What can we do using Awareness kit?
With HUAWEI Awareness Kit, you can obtain a wide range of contextual information about the user's location, behavior, the weather, the current time, device status, ambient light, and audio device status, which makes it easier to provide a more refined user experience.
Basic Usage
There are quite a few awareness "modules" in this SDK: Time Awareness, Location Awareness, Behavior Awareness, Beacon Awareness, Audio Device Status Awareness, Ambient Light Awareness, and Weather Awareness. Read on to find out how and when to use them.
Each of these modules has two modes: capture, which is an on-demand information retrieval; and barrier, which triggers an action when a specified condition is met.
Use case
The Barrier API allows us to set barriers for specific conditions in our app; when these conditions are satisfied, our app is notified, so we can take action. In this sample, when the user starts an activity, the app reminds them to connect a headset to listen to music during the activity.
Table of Contents
1. Project setup
2. Headset capture API
3. Headset Barrier API
4. Flutter Local notification plugin
Advantages
1. Converged: Multi-dimensional and evolvable awareness capabilities can be called in a unified manner.
2. Accurate: The synergy of hardware and software makes data acquisition more accurate and efficient.
3. Fast: On-chip processing of local service requests and nearby access of cloud services promise a faster service response.
4. Economical: Sharing awareness capabilities avoids separate interactions between apps and the device, reducing system resource consumption. Collaborating with the EMUI (or Magic UI) and Kirin chip, Awareness Kit can even achieve optimal performance with the lowest power consumption.
Requirements
1. Any operating system (such as macOS, Linux, or Windows)
2. Any IDE with the Flutter SDK installed (such as IntelliJ, Android Studio, or VS Code)
3. A little knowledge of Dart and Flutter
4. A brain to think with
5. Minimum API level 24
6. Devices running EMUI 5.0 or later
Setting up the Awareness kit
1. First, create a developer account in AppGallery Connect. After creating your developer account, you can create a new project and a new app. For more information, check this link.
2. Generate a signing certificate fingerprint with the command below.
Code:
keytool -genkey -keystore <application_project_dir>\android\app\<signing_certificate_fingerprint_filename>.jks
-storepass <store_password> -alias <alias> -keypass <key_password> -keysize 2048 -keyalg RSA -validity 36500
3. The above command creates the keystore file in appdir/android/app.
4. Now we need to obtain the SHA-256 key; run the following command.
Code:
keytool -list -v -keystore <application_project_dir>\android\app\<signing_certificate_fingerprint_filename>.jks
5. Enable the Awareness kit in the Manage API section and add the plugin.
6. After configuring the project, download the agconnect-services.json file and add it to your project.
7. After that, follow the URL for the cross-platform plugins and download the required plugins.
  
8. The following dependencies for HMS usage need to be added to build.gradle file under the android directory.
Code:
buildscript {
    ext.kotlin_version = '1.3.50'
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:3.5.0'
        classpath 'com.huawei.agconnect:agcp:1.4.1.300'
        classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
    }
}
allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
9. Then add the following line of code to the build.gradle file under the android/app directory.
Code:
apply plugin: 'com.huawei.agconnect'
10. Add the required permissions to the AndroidManifest.xml file under app>src>main folder.
Code:
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_BACKGROUND_LOCATION" />
<uses-permission android:name="android.permission.ACTIVITY_RECOGNITION" />
<uses-permission android:name="android.permission.BLUETOOTH" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
<uses-permission android:name="android.permission.INTERNET" />
11. After completing all the above steps, you need to add the required kits’ Flutter plugins as dependencies to pubspec.yaml file. You can find all the plugins in pub.dev with the latest versions.
Code:
huawei_awareness:
  path: ../huawei_awareness/
12. To display local notification, we need to add flutter local notification plugin.
Code:
flutter_local_notifications: ^3.0.1+6
After adding them, run the flutter pub get command. Now all the plugins are ready to use.
Note: Set multiDexEnabled to true in the app-level build.gradle file under the android/app directory so that the app will not crash.
Code Implementation
Use the Capture and Barrier APIs to Get the Headset Status
This service helps your application remind the user to connect a headset for music playback before starting an activity.
We need to request the runtime permissions before calling any service.
Code:
@override
void initState() {
  checkPermissions();
  requestPermissions();
  super.initState();
}

void checkPermissions() async {
  locationPermission = await AwarenessUtilsClient.hasLocationPermission();
  backgroundLocationPermission =
      await AwarenessUtilsClient.hasBackgroundLocationPermission();
  activityRecognitionPermission =
      await AwarenessUtilsClient.hasActivityRecognitionPermission();
  if (locationPermission &&
      backgroundLocationPermission &&
      activityRecognitionPermission) {
    setState(() {
      permissions = true;
      notifyAwareness();
    });
  }
}

void requestPermissions() async {
  if (locationPermission == false) {
    bool status = await AwarenessUtilsClient.requestLocationPermission();
    setState(() {
      locationPermission = status;
    });
  }
  if (backgroundLocationPermission == false) {
    bool status =
        await AwarenessUtilsClient.requestBackgroundLocationPermission();
    setState(() {
      backgroundLocationPermission = status;
    });
  }
  if (activityRecognitionPermission == false) {
    bool status =
        await AwarenessUtilsClient.requestActivityRecognitionPermission();
    setState(() {
      activityRecognitionPermission = status;
    });
    checkPermissions();
  }
}
Capture API: Now that all the permissions are granted, we can check the headset status once the app is launched by calling the Capture API's getHeadsetStatus() method. This service is called only once.
Code:
checkHeadsetStatus() async {
  HeadsetResponse response = await AwarenessCaptureClient.getHeadsetStatus();
  setState(() {
    switch (response.headsetStatus) {
      case HeadsetStatus.Disconnected:
        _showNotification(
            "Music", "Please connect a headset before starting the activity");
        break;
    }
  });
  log(response.toJson(), name: "captureHeadset");
}
Barrier API: To set conditions in your app, use the Barrier API. This service keeps listening for events; once the conditions are satisfied, it automatically notifies the user. For example, we set a condition for playing music once the activity starts: when the user connects the headset, the app automatically notifies the user that the headset is connected and music is playing.
First we need to set the barrier condition; the barrier will be triggered when the condition is satisfied.
Code:
String headSetBarrier = "HeadSet";
AwarenessBarrier headsetBarrier = HeadsetBarrier.keeping(
    barrierLabel: headSetBarrier, headsetStatus: HeadsetStatus.Connected);
Add the barrier using updateBarriers(); this method returns whether the barrier was added.
Code:
bool status = await AwarenessBarrierClient.updateBarriers(barrier: headsetBarrier);
If status returns true, the barrier was successfully added. Now we declare a StreamSubscription to listen for events; it keeps updating the data and triggers whenever the condition is satisfied.
Code:
if (status) {
  log("Headset Barrier added.");
  StreamSubscription<dynamic> subscription;
  subscription = AwarenessBarrierClient.onBarrierStatusStream.listen((event) {
    if (mounted) {
      setState(() {
        switch (event.presentStatus) {
          case HeadsetStatus.Connected:
            _showNotification("Cool HeadSet", "Headset connected. Want to listen to some music?");
            isPlaying = true;
            print("Headset Status: Connected");
            break;
          case HeadsetStatus.Disconnected:
            _showNotification("HeadSet", "Headset disconnected, your music stopped");
            print("Headset Status: Disconnected");
            isPlaying = false;
            break;
          case HeadsetStatus.Unknown:
            _showNotification("HeadSet", "Headset status unknown");
            print("Headset Status: Unknown");
            isPlaying = false;
            break;
        }
      });
    }
  }, onError: (error) {
    log(error.toString());
  });
} else {
  log("Headset Barrier not added.");
}
Why do we need local push notifications?
1. We can schedule notifications.
2. No web request is required to display a local notification.
3. There is no limit on notifications per user.
4. They originate from the same device and are displayed on the same device.
5. They alert the user or remind the user to perform some task.
The flutter_local_notifications package provides the required functionality. Using this package, we can integrate local notifications into both the Android and iOS versions of our app.
Add the following permissions to give your app the ability to schedule notifications.
Code:
<uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED" />
<uses-permission android:name="android.permission.VIBRATE" />
Add the following receivers inside the application tag.
Code:
<receiver android:name="com.dexterous.flutterlocalnotifications.ScheduledNotificationBootReceiver">
    <intent-filter>
        <action android:name="android.intent.action.BOOT_COMPLETED"/>
    </intent-filter>
</receiver>
<receiver android:name="com.dexterous.flutterlocalnotifications.ScheduledNotificationReceiver" />
Let's create the Flutter local notification object.
Code:
FlutterLocalNotificationsPlugin localNotification = FlutterLocalNotificationsPlugin();
@override
void initState() {
  super.initState();
  var initializationSettingsAndroid =
      AndroidInitializationSettings('ic_launcher');
  var initializationSettingsIOs = IOSInitializationSettings();
  var initSettings = InitializationSettings(
      android: initializationSettingsAndroid, iOS: initializationSettingsIOs);
  localNotification.initialize(initSettings);
}
Now create the logic for displaying a simple notification.
Code:
Future _showNotification(String title, String description) async {
  var androidDetails = AndroidNotificationDetails(
      "channelId", "channelName", "content",
      importance: Importance.high);
  var iosDetails = IOSNotificationDetails();
  var generateNotification =
      NotificationDetails(android: androidDetails, iOS: iosDetails);
  await localNotification.show(0, title, description, generateNotification);
}
Demo
            
Tips and Tricks
1. Download the latest HMS Flutter plugin.
2. Set the minimum SDK version to 24 or later.
3. HMS Core APK 4.0.2.300 or later is required.
4. Currently, this plugin does not support background tasks.
5. Do not forget to run flutter pub get after adding dependencies.
Conclusion
In this article, we have learned how to use the Barrier API of Huawei Awareness Kit with a local notification to observe changes in environmental factors even when the application is not running.
As you may notice, the permanent notification indicating that the application is running in the background is not dismissible by the user, which can be annoying.
Depending on your requirements, you can utilize different APIs; Huawei Awareness Kit has many other great features that can be used with foreground services in your applications.
Thanks for reading! If you enjoyed this story, please click the Like button and Follow. Feel free to leave a Comment below.
Reference
Awareness Kit URL.
Awareness Capture API Article URL.
Original Source

A Quick Introduction about How to Implement Sound Detection

For some apps, it's necessary to have a sound detection function that can recognize sounds like knocks on the door, rings of the doorbell, and car horns. Developing such a function can be costly for small and medium-sized developers, so what should they do?
There's no need to worry if you have the sound detection service in HUAWEI ML Kit. Integrating its SDK into your app is simple, and you can equip your app with a sound detection function that works well even when the device is not connected to the network.
Introduction to Sound Detection in HUAWEI ML Kit
This service can detect sound events online by real-time recording. The detected sound events can help you perform subsequent actions. Currently, the following types of sound events are supported: laughter, child crying, snoring, sneezing, shouting, cat meowing, dog barking, running water (such as from taps, streams, and ocean waves), car horns, doorbells, knocking on doors, fire alarms (including smoke alarms), and sirens (such as those from fire trucks, ambulances, police cars, and air defenses).
Preparations
Configuring the Development Environment
Create an app in AppGallery Connect.
[Screenshot: creating an app in AppGallery Connect]
For details, see Getting Started with Android.
Enable ML Kit.
Click here to get more information.
After the app is created, an agconnect-services.json file will be automatically generated. Download it and copy it to the root directory of your project.
Configure the Huawei Maven repository address.
To learn more, click here.
Integrate the sound detection SDK.
It is recommended to integrate the SDK in full SDK mode. Add build dependencies for the SDK in the app-level build.gradle file.
Code:
// Import the sound detection package.
implementation 'com.huawei.hms:ml-speech-semantics-sounddect-sdk:2.1.0.300'
implementation 'com.huawei.hms:ml-speech-semantics-sounddect-model:2.1.0.300'
Add the AppGallery Connect plugin configuration as needed using either of the following methods:
Method 1: Add the following information under the declaration in the file header:
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Method 2: Add the plugin configuration in the plugins block:
Code:
plugins {
    id 'com.android.application'
    id 'com.huawei.agconnect'
}
Automatically update the machine learning model.
Add the following statements to the AndroidManifest.xml file. After a user installs your app from HUAWEI AppGallery, the machine learning model is automatically updated to the user's device.
Code:
<meta-data
    android:name="com.huawei.hms.ml.DEPENDENCY"
    android:value="sounddect" />
For details, go to Integrating the Sound Detection SDK.
Development Procedure​
Obtain the microphone permission. If the app does not have this permission, error 12203 will be reported.
(Mandatory) Apply for the static permission.
Code:
<uses-permission android:name="android.permission.RECORD_AUDIO" />
(Mandatory) Apply for the dynamic permission.
Code:
ActivityCompat.requestPermissions(
        this, new String[]{Manifest.permission.RECORD_AUDIO}, 1);
Create an MLSoundDector object.
Code:
private static final String TAG = "MLSoundDectorDemo";
// Object of sound detection.
private MLSoundDector mlSoundDector;
// Create an MLSoundDector object and configure the callback.
private void initMLSoundDector() {
    mlSoundDector = MLSoundDector.createSoundDector();
    mlSoundDector.setSoundDectListener(listener);
}
Create a sound detection result callback to obtain the detection result and pass the callback to the sound detection instance.
Code:
private MLSoundDectListener listener = new MLSoundDectListener() {
    @Override
    public void onSoundSuccessResult(Bundle result) {
        // Processing logic when the detection is successful. The detection result ranges from 0 to 12,
        // corresponding to the 13 sound types whose names start with SOUND_EVENT_TYPE, defined in MLSoundDectConstants.java.
        int soundType = result.getInt(MLSoundDector.RESULTS_RECOGNIZED);
        Log.d(TAG, "Detection success:" + soundType);
    }
    @Override
    public void onSoundFailResult(int errCode) {
        // Processing logic for detection failure. The possible cause is that your app does not have
        // the microphone permission (Manifest.permission.RECORD_AUDIO).
        Log.d(TAG, "Detection failure:" + errCode);
    }
};
Note: The code above prints the type of the detected sound as an integer. In practice, you can convert the integer into a label that users can understand; a sketch of that conversion follows the type list below.
Definition for the types of detected sounds:
Code:
<string-array name="sound_dect_voice_type">
    <item>laughter</item>
    <item>baby crying sound</item>
    <item>snore</item>
    <item>sneeze</item>
    <item>shout</item>
    <item>cat's meow</item>
    <item>dog's bark</item>
    <item>running water</item>
    <item>car horn sound</item>
    <item>doorbell sound</item>
    <item>knock</item>
    <item>fire alarm sound</item>
    <item>alarm sound</item>
</string-array>
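A minimal sketch of that conversion, assuming soundType comes from the callback above and the array order matches the detection result:
Code:
// Sketch: map the detected sound type (0-12) to a human-readable label.
String[] soundTypes = getResources().getStringArray(R.array.sound_dect_voice_type);
if (soundType >= 0 && soundType < soundTypes.length) {
    Log.d(TAG, "Detected sound: " + soundTypes[soundType]);
}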
Start and stop sound detection.
Code:
@Override
public void onClick(View v) {
    switch (v.getId()) {
        case R.id.btn_start_detect:
            if (mlSoundDector != null) {
                boolean isStarted = mlSoundDector.start(this); // context: Context.
                // If isStarted is true, the detection has started successfully; if false, it failed to start
                // (possibly because the microphone is occupied by the system or another app).
                if (isStarted) {
                    Toast.makeText(this, "The detection is successfully started.", Toast.LENGTH_SHORT).show();
                }
            }
            break;
        case R.id.btn_stop_detect:
            if (mlSoundDector != null) {
                mlSoundDector.stop();
            }
            break;
    }
}
Call destroy() to release resources when the sound detection page is closed.
Code:
@Override
protected void onDestroy() {
    super.onDestroy();
    if (mlSoundDector != null) {
        mlSoundDector.destroy();
    }
}
Testing the App​
Using a knock as an example, the expected output of sound detection is 10.
Tap Start detecting and simulate a knock on the door. If you see a log like "Detection success:10" in Android Studio's console, the sound detection SDK has been integrated successfully.
More Information​
Sound detection is one of the six capability categories of ML Kit, which cover text, language/voice, image, face/body, natural language processing, and custom models.
Sound detection is a service in the language/voice-related category.
Interested in other categories? Feel free to have a look at the HUAWEI ML Kit document.
To learn more, please visit:
HUAWEI Developers official website
Development Guide
Reddit to join developer discussions
GitHub or Gitee to download the demo and sample code
Stack Overflow to solve integration problems
Follow our official account for the latest HMS Core-related news and updates.
Original Source

AR Engine and Scene Kit Deliver Virtual Try-On of Glasses

Background
The ubiquity of the Internet and smart devices has made e-commerce the preferred choice for countless consumers. However, many longtime users have grown weary of the stagnant shopping model, and thus enhancing user experience is critical to stimulating further growth in e-commerce and attracting a broader user base. HMS Core offers intelligent graphics processing capabilities to identify a user's facial and physical features, which, when combined with a new display paradigm, enables users to try on products virtually through their mobile phones, for a groundbreaking digital shopping experience.
Scenarios
AR Engine and Scene Kit allow users to virtually try on products found on shopping apps and shopping list sharing apps, which in turn will lead to greater customer satisfaction and fewer returns and replacements.
Effects
A user opens a shopping app, taps a product's picture to view the 3D model of the product, and can then rotate, enlarge, and shrink the model for interactive viewing.
[Demo: rotating, enlarging, and shrinking the 3D product model]
Getting Started
Configuring the Maven Repository Address for the HMS Core SDK
Open the project-level build.gradle file in your Android Studio project. Go to buildscript > repositories and allprojects > repositories to configure the Maven repository address for the HMS Core SDK.
Code:
buildscript {
    repositories {
        ...
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
allprojects {
    repositories {
        ...
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
Adding Build Dependencies for the HMS Core SDK
Open the app-level build.gradle file of your project. Add build dependencies in the dependencies block and use the Full-SDK of Scene Kit and AR Engine SDK.
Code:
dependencies {
    ...
    implementation 'com.huawei.scenekit:full-sdk:5.0.2.302'
    implementation 'com.huawei.hms:arenginesdk:2.13.0.4'
}
For details about the preceding steps, please refer to the development guide for Scene Kit on HUAWEI Developers.
Adding Permissions in the AndroidManifest.xml File
Open the AndroidManifest.xml file in the main directory and add the camera permission above the <application> line.
Code:
<!--Camera permission-->
<uses-permission android:name="android.permission.CAMERA" />
Development Procedure
Configuring MainActivity
Add two buttons to the layout configuration file of MainActivity. Set the background of the onBtnShowProduct button to the preview image of the product, and add the text Try it on! to the onBtnTryProductOn button to guide the user to the feature.
Code:
<Button
    android:layout_width="260dp"
    android:layout_height="160dp"
    android:background="@drawable/sunglasses"
    android:onClick="onBtnShowProduct" />
<Button
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="Try it on!"
    android:textAllCaps="false"
    android:textSize="24sp"
    android:onClick="onBtnTryProductOn" />
If the user taps the onBtnShowProduct button, the 3D model of the product will be loaded. After tapping the onBtnTryProductOn button, the user will enter the AR fitting screen.
Configuring the 3D Model Display for a Product
1. Create a SceneSampleView inherited from SceneView.
Code:
public class SceneSampleView extends SceneView {
    public SceneSampleView(Context context) {
        super(context);
    }
    public SceneSampleView(Context context, AttributeSet attributeSet) {
        super(context, attributeSet);
    }
}
Override the surfaceCreated method to create and initialize SceneView. Then call loadScene to load the materials, which should be in the glTF or GLB format, to have them rendered and displayed. Call loadSkyBox to load skybox materials, loadSpecularEnvTexture to load specular maps, and loadDiffuseEnvTexture to load diffuse maps. These files should be in the DDS (cubemap) format.
All loaded materials are stored in the src > main > assets > SceneView folder.
Code:
@Override
public void surfaceCreated(SurfaceHolder holder) {
    super.surfaceCreated(holder);
    // Load the materials to be rendered.
    loadScene("SceneView/sunglasses.glb");
    // Call loadSkyBox to load skybox texture materials.
    loadSkyBox("SceneView/skyboxTexture.dds");
    // Call loadSpecularEnvTexture to load specular texture materials.
    loadSpecularEnvTexture("SceneView/specularEnvTexture.dds");
    // Call loadDiffuseEnvTexture to load diffuse texture materials.
    loadDiffuseEnvTexture("SceneView/diffuseEnvTexture.dds");
}
2. Create a SceneViewActivity inherited from Activity.
Call setContentView in the onCreate method, passing in the SceneSampleView that you created with an XML tag in the layout file.
Code:
public class SceneViewActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_sample);
    }
}
Create SceneSampleView in the layout file as follows:
Code:
<com.huawei.scene.demo.sceneview.SceneSampleView
    android:layout_width="match_parent"
    android:layout_height="match_parent" />
3. Create an onBtnShowProduct method in MainActivity.
When the user taps the onBtnShowProduct button, SceneViewActivity is called to load, render, and finally display the 3D model of the product.
Code:
public void onBtnShowProduct(View view) {
    startActivity(new Intent(this, SceneViewActivity.class));
}
Configuring AR Fitting for a Product
Product virtual try-on is easily accessible, thanks to the facial recognition, graphics rendering, and AR display capabilities offered by HMS Core.
1. Create a FaceViewActivity inherited from Activity, and create the corresponding layout file.
Create face_view in the layout file to display the try-on effect.
Code:
<com.huawei.hms.scene.sdk.FaceView
    android:id="@+id/face_view"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:sdk_type="AR_ENGINE"></com.huawei.hms.scene.sdk.FaceView>
Create a switch. When the user taps it, they can check the difference between the appearances with and without the virtual glasses.
Code:
<Switch
    android:id="@+id/switch_view"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_alignParentTop="true"
    android:layout_marginTop="15dp"
    android:layout_alignParentEnd="true"
    android:layout_marginEnd="15dp"
    android:text="Try it on"
    android:theme="@style/AppTheme"
    tools:ignore="RelativeOverlap" />
2. Override the onCreate method in FaceViewActivity to obtain FaceView.
Code:
public class FaceViewActivity extends Activity {
    private FaceView mFaceView;
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_face_view);
        mFaceView = findViewById(R.id.face_view);
    }
}
3. Create a listener method for the switch. When the switch is enabled, the loadAsset method is called to load the 3D model of the product. Set the position for facial recognition in LandmarkType.
Code:
mSwitch.setOnCheckedChangeListener(new CompoundButton.OnCheckedChangeListener() {
    @Override
    public void onCheckedChanged(CompoundButton buttonView, boolean isChecked) {
        mFaceView.clearResource();
        if (isChecked) {
            // Load materials.
            int index = mFaceView.loadAsset("FaceView/sunglasses.glb", LandmarkType.TIP_OF_NOSE);
        }
    }
});
Use setInitialPose to adjust the size and position of the model. Create the position, rotation, and scale arrays and pass values to them.
Code:
final float[] position = { 0.0f, 0.0f, -0.15f };
final float[] rotation = { 0.0f, 0.0f, 0.0f, 0.0f };
final float[] scale = { 2.0f, 2.0f, 0.3f };
Put the following code below the loadAsset line:
Code:
mFaceView.setInitialPose(index, position, scale, rotation);
4. Create an onBtnTryProductOn method in MainActivity. When the user taps the onBtnTryProductOn button, FaceViewActivity is launched, enabling the user to view the try-on effect.
Code:
public void onBtnTryProductOn(View view) {
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
            != PackageManager.PERMISSION_GRANTED) {
        ActivityCompat.requestPermissions(
                this, new String[]{Manifest.permission.CAMERA}, FACE_VIEW_REQUEST_CODE);
    } else {
        startActivity(new Intent(this, FaceViewActivity.class));
    }
}
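The article does not show the matching permission callback; a minimal sketch that opens FaceViewActivity once the user grants the camera permission might look like this:
Code:
@Override
public void onRequestPermissionsResult(int requestCode, String[] permissions, int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    // Sketch: start the try-on screen once the camera permission is granted.
    if (requestCode == FACE_VIEW_REQUEST_CODE
            && grantResults.length > 0
            && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
        startActivity(new Intent(this, FaceViewActivity.class));
    }
}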
References
For more details, you can go to:
AR Engine official website; Scene Kit official website
Reddit to join our developer discussion
GitHub to download sample codes
Stack Overflow to solve any integration problems
Does it support Quick App?
Hello sir,
From where can I get the assets folder for the above given code (virtual try-on glasses)?
Mamta23 said:
Hello sir,
From where can I get the assets folder for the above given code (virtual try-on glasses)?
Hi,
You can get the demo here: https://github.com/HMS-Core/hms-AREngine-demo
Hope this helps!
