Creating Lock Screen Images with the Smart Layout Service - Huawei Developers

After taking a stunning picture, a user may want to set it as their lock screen image and, even better, add text art to give the lock screen a flawless magazine-cover effect.
Fortunately, Image Kit makes this easy by endowing your app with the smart layout service, which empowers users to create magazine-style lock screen images on a whim.
2. Scenario
The smart layout service provides apps with nine different text layout styles, which correspond to the text layouts commonly seen in images, and help users add typeset text to images. By arranging images and text in such a wide range of different styles, the service enables users to relive memorable life experiences in exhilarating new ways.
3. Key Steps and Code for Integration
Preparations
1. Configure the Huawei Maven repository address.
Open the build.gradle file in your Android Studio project.
Go to buildscript > repositories and allprojects > repositories, and add the Maven repository address for the HMS Core SDK.
Code:
buildscript {
    repositories {
        ...
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
allprojects {
    repositories {
        ...
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
2. Add the build dependencies for the HMS Core SDK.
Open the build.gradle file in the app directory of your project.
Add the build dependencies for the Image Vision SDK in the dependencies section.
Code:
dependencies {
    ...
    implementation "com.huawei.hms:image-vision:1.0.3.301"
    implementation "com.huawei.hms:image-vision-fallback:1.0.3.301"
}
3. Add permissions in the AndroidManifest.xml file.
Open the AndroidManifest.xml file in the main directory, and add the relevant permissions above the <application> element.
Code:
<!-- Permissions to access the Internet, and read data from and write data to external storage -->
<uses-permission android:name="android.permission.INTERNET"/>
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
Development Procedure
1. Layout
The following parameters need to be configured to facilitate smart layout display (a request sketch follows this list):
1. title: copywriting title. Mandatory; can contain a maximum of 10 characters. If the title exceeds 10 characters, the extra characters are forcibly truncated.
2. description: copywriting content, which can contain a maximum of 66 characters. If the content exceeds 66 characters, the extra characters are truncated and replaced with an ellipsis (...).
3. copyRight: name of the copyright holder, which can contain a maximum of 10 characters. If the name exceeds 10 characters, the extra characters are truncated and replaced with an ellipsis (...).
4. anchor: text of the link that redirects users to the details, which can contain a maximum of 6 characters. If the text exceeds 6 characters, the extra characters are truncated and replaced with an ellipsis (...).
5. info: copywriting layout style, selected from info1 to info9. info8 represents a vertical layout, which currently applies only to Chinese text; info3 is used by default if the user selects info8 but the input title and text are not in Chinese.
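To make these parameters concrete, here is a minimal sketch (in Java, using org.json) of assembling them into a request object. The field names follow the list above; the exact nesting the SDK expects may differ by version, so treat this as illustrative rather than canonical.
Code:
// Sketch: build the copywriting fields documented above.
// JSONException handling omitted for brevity.
JSONObject taskJson = new JSONObject();
taskJson.put("title", "Sunset Bay");   // Max 10 characters; extra characters are truncated.
taskJson.put("description", "A quiet evening by the sea, captured on the last day of summer."); // Max 66 characters.
taskJson.put("copyRight", "Jane Doe"); // Max 10 characters.
taskJson.put("anchor", "More");        // Max 6 characters.
taskJson.put("info", "info1");         // Layout style: one of info1 to info9.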
Functions of certain buttons:
INIT_SERVICE: Initializes the service.
STOP_SERVICE: Stops the service.
IMAGE: Opens an image on the mobile phone.
GET_RESULT: Obtains the typeset image to be displayed (see the sketch after this list).
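For GET_RESULT, the typeset image information is obtained by passing the request JSON and the source bitmap to the layout analysis method. A minimal sketch, assuming the analyzeImageLayout method and ImageLayoutInfo result type described in the Image Kit smart layout documentation; verify the exact signature against your SDK version.
Code:
// Sketch: obtain the typeset layout for a bitmap. analyzeImageLayout and
// ImageLayoutInfo are taken from the Image Kit smart layout docs.
ImageLayoutInfo layoutInfo = imageVisionLayoutAPI.analyzeImageLayout(requestJson, sourceBitmap);
// Use layoutInfo (text position, style, and related data) to draw the
// typeset text onto the image before setting it as the lock screen.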
2. INIT_SERVICE: Initializing the Service
To call setVisionCallBack during service initialization, your app must implement the ImageVision.VisionCallBack API and override its onSuccess(int successCode) and onFailure(int errorCode) methods. onSuccess is called after the framework initializes successfully; within it, the smart layout service itself is initialized.
Code:
private void initTag(final Context context) {
// Obtain an ImageVisionImpl object.
imageVisionTagAPI = ImageVision.getInstance(this);
imageVisionTagAPI.setVisionCallBack(new ImageVision.VisionCallBack() {
@Override
public void onSuccess(int successCode) {
int initCode = imageVisionTagAPI.init(context, authJson);
}
@Override
public void onFailure(int errorCode) {
LogsUtil.e(TAG, "getImageVisionAPI failure, errorCode = " + errorCode);
}
});
}
For more details about the format of authJson, please visit Filter Service.
The token is mandatory. For more details about how to obtain a token, please visit How to Obtain a Token.
Code:
/**
* Gets token.
*
* @param context the context
* @param client_id the client id
* @param client_secret the client secret
* @return the token
*/
public static String getToken(Context context, String client_id, String client_secret) {
String token = "";
try {
String body = "grant_type=client_credentials&client_id=" + client_id + "&client_secret=" + client_secret;
token = commonHttpsRequest(context, body, context.getResources().getString(R.string.urlToken));
if (token != null && token.contains(" ")) {
token = token.replaceAll(" ", "+");
token = URLEncoder.encode(token, "UTF-8");
}
} catch (UnsupportedEncodingException e) {
LogsUtil.e(TAG, e.getMessage());
}
return token;
}
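As a usage sketch, the token returned by getToken can then be placed into authJson before init is called. Only the token field is confirmed as mandatory by the text above; any other fields in authJson should be taken from the Filter Service documentation linked earlier.
Code:
// Hypothetical helper: builds authJson for ImageVision.init().
// clientId and clientSecret come from your AppGallery Connect project.
private JSONObject buildAuthJson(Context context, String clientId, String clientSecret) throws JSONException {
    String token = getToken(context, clientId, clientSecret);
    JSONObject authJson = new JSONObject();
    authJson.put("token", token); // Mandatory.
    return authJson;
}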
For more information, see https://forums.developer.huawei.com/forumPortal/en/topic/0201465106068040043

Related

Implement the Automatic Bill Number Input Function Using ML Kit’s Text Recognition

For more information like this, visit the HUAWEI Developer Forum.
Introduction
In the previous post, we looked at how to use HUAWEI ML Kit’s card recognition function to implement the card binding function. With this function, users only need to provide a photo of their card, and your app will automatically recognize all of the key information. That makes entering card information much easier, but can we do the same thing for bills or discount coupons? Of course we can! In this post, I will show you how to implement automatic input of bill numbers and discount codes using HUAWEI ML Kit’s text recognition function.
Application
Text recognition is useful in a huge range of situations. For example, if you scan the following bill, specify that the bill service number follows the label "NO.DE SERVICIO", and limit the length to 12 characters, you can quickly extract the bill service number "123456789123" using text recognition.
Similarly, if you scan the discount coupon below, specify that the discount code follows the prefix "FAVE-", and limit the length to 4 characters, you'll get the discount code "8329" and can then complete the payment.
Pretty useful, right? You can also customize the information you want your app to recognize.
Integrating Text Recognition
So, let’s look at how to process bill numbers and discount codes.
1. Preparations
You can find detailed information about the preparations you need to make on the HUAWEI Developers website.
Here, we’ll just look at the most important procedures.
1.1 Configure the Maven Repository Address in the Project-Level build.gradle File
Code:
buildscript {
repositories {
...
maven {url 'https://developer.huawei.com/repo/'}
}
}
dependencies {
...
classpath 'com.huawei.agconnect:agcp:1.3.1.300'
}
allprojects {
repositories {
...
maven {url 'https://developer.huawei.com/repo/'}
}
}
1.2 Add Configurations to the File Header
Once you’ve integrated the SDK, add the following configuration to the file header:
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
1.3 Configure SDK Dependencies in the App-Level build.gradle File
Code:
dependencies {
// Import the base SDK.
implementation 'com.huawei.hms:ml-computer-vision-ocr:2.0.1.300'
// Import the Latin character recognition model package.
implementation 'com.huawei.hms:ml-computer-vision-ocr-latin-model:2.0.1.300'
// Import the Japanese and Korean character recognition model package.
implementation 'com.huawei.hms:ml-computer-vision-ocr-jk-model:2.0.1.300'
// Import the Chinese and English character recognition model package.
implementation 'com.huawei.hms:ml-computer-vision-ocr-cn-model:2.0.1.300'
}
1.4 Add these Statements to the AndroidManifest.xml File so the Machine Learning Model can Automatically Update
Code:
<manifest>
...
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value="ocr" />
...
</manifest>
1.5 Apply for the Camera Permission
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
2. Code Development
2.1 Create an Analyzer
Code:
MLTextAnalyzer analyzer = new MLTextAnalyzer.Factory(context).setLanguage(type).create();
2.2 Set the Recognition Result Processor to Bind with the Analyzer
Code:
analyzer.setTransactor(new OcrDetectorProcessor());
2.3 Call the Synchronous API
Use the built-in LensEngine of the SDK to create an object, register the analyzer, and initialize camera parameters.
Code:
lensEngine = new LensEngine.Creator(context, analyzer)
.setLensType(LensEngine.BACK_LENS)
.applyDisplayDimension(width, height)
.applyFps(30.0f)
.enableAutomaticFocus(true)
.create();
2.4 Call the run Method to Start the Camera and Read the Camera Streams for the Recognition
Code:
try {
lensEngine.run(holder);
} catch (IOException e) {
// Exception handling logic.
Log.e("TAG", "e=" + e.getMessage());
}
2.5 Process the Recognition Result As Required
Code:
public class OcrDetectorProcessor implements MLAnalyzer.MLTransactor<MLText.Block> {
@Override
public void transactResult(MLAnalyzer.Result<MLText.Block> results) {
SparseArray<MLText.Block> items = results.getAnalyseList();
// Process the recognition result as required. Only the detection results are processed.
// Other detection-related APIs provided by ML Kit cannot be called.
…
}
@Override
public void destroy() {
// Callback method used to release resources when the detection ends.
}
}
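Tying this back to the bill example above: inside transactResult, you can post-process the recognized blocks yourself, for instance by locating the "NO.DE SERVICIO" label and keeping the 12 characters that follow it. A minimal sketch, assuming each block's text is read with getStringValue():
Code:
// Sketch: extract a 12-character bill service number that follows the
// label "NO.DE SERVICIO" in the recognized text.
private String extractServiceNumber(SparseArray<MLText.Block> items) {
    final String label = "NO.DE SERVICIO";
    final int numberLength = 12;
    for (int i = 0; i < items.size(); i++) {
        String text = items.valueAt(i).getStringValue();
        int idx = text.indexOf(label);
        if (idx >= 0) {
            // Keep only digits after the label, up to the expected length.
            String tail = text.substring(idx + label.length()).replaceAll("[^0-9]", "");
            if (tail.length() >= numberLength) {
                return tail.substring(0, numberLength);
            }
        }
    }
    return null; // Label not found in this frame.
}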
2.6 Stop the Analyzer and Release the Detection Resources When the Detection Ends
Code:
if (analyzer != null) {
try {
analyzer.stop();
} catch (IOException e) {
// Exception handling.
}
}
if (lensEngine != null) {
lensEngine.release();
}
Demo Effect
And that’s it! Remember that you can expand this capability if you need to. Now, let’s look at how to scan travel bills.
And here’s how to scan discount codes to quickly obtain online discounts and complete payments.
Github Source Code
https://github.com/HMS-Core/hms-ml-demo/tree/master/Receipt-Text-Recognition
Awesome work!!

Integrate ML Kit’s Document Skew Correction Capability

Introduction
In the previous post, we looked at how to use HUAWEI ML Kit’s text recognition capability. With this capability, users just need to provide an image, and your app will automatically recognize all of the key information. But what if the image is skewed? Can we still get all the same key information? Yes, we can! ML Kit’s document skew correction capability automatically recognizes the position of the document, corrects the shooting angle, and lets users customize the edge points. So even if the document is skewed, your users can still get all of the information they need.
Application Scenarios
Document skew correction is useful in a huge range of situations. For example, if your camera is at an angle when you shoot a paper document, the resulting image may be difficult to read. You can use document skew correction to adjust the document to the correct angle, which makes it easier to read.
Document skew correction also works for cards.
And if you’re traveling, you can use it to straighten out pictures of road signs that you’ve taken from an angle.
Isn’t it convenient? Now, I will show you how to quickly integrate this capability.
Document Correction Development
1. Preparations
You can find detailed information about the preparations you need to make in the Development Process section on HUAWEI Developers.
Here, we’ll just look at the most important procedures.
1.1 Configure the Maven Repository Address in the Project-Level build.gradle File
Code:
buildscript {
repositories {
...
maven {url 'https://developer.huawei.com/repo/'}
}
}
dependencies {
...
classpath 'com.huawei.agconnect:agcp:1.3.1.300'
}
allprojects {
repositories {
...
maven {url 'https://developer.huawei.com/repo/'}
}
}
1.2 Configure SDK Dependencies in the App-Level build.gradle File
Code:
dependencies{
// Import the base SDK.
implementation 'com.huawei.hms:ml-computer-vision-documentskew:2.0.2.300'
// Import the document detection/correction model package.
implementation 'com.huawei.hms:ml-computer-vision-documentskew-model:2.0.2.300'
}
1.3 Add Configurations to the File Header
Code:
apply plugin: 'com.huawei.agconnect'
apply plugin: 'com.android.application'
1.4 Add these Statements to the AndroidManifest.xml File so the Machine Learning Model can Update Automatically
Code:
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value= "dsc"/>
1.5 Apply for Camera Permission and Local Image Reading Permission
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
2. Code Development
2.1 Create a Text Box Detection/Correction Analyzer
Code:
MLDocumentSkewCorrectionAnalyzerSetting setting = new MLDocumentSkewCorrectionAnalyzerSetting.Factory().create();
MLDocumentSkewCorrectionAnalyzer analyzer = MLDocumentSkewCorrectionAnalyzerFactory.getInstance().getDocumentSkewCorrectionAnalyzer(setting);
2.2 Create an MLFrame Object by Using android.graphics.Bitmap for the Analyzer to Detect Images
JPG, JPEG, and PNG images are supported. It is recommended that you limit the image size to between 320 x 320 px and 1920 x 1920 px.
Code:
MLFrame frame = MLFrame.fromBitmap(bitmap);
2.3 Call the asyncDocumentSkewDetect Asynchronous Method or analyseFrame Synchronous Method to Detect the Text Box
When the return code is MLDocumentSkewCorrectionConstant.SUCCESS, the coordinates of the four vertices of the text box are returned. These coordinates are relative to the input image. If they are inconsistent with the coordinates of the device display, you need to convert them; otherwise, the returned data will be meaningless.
Code:
// Call the asyncDocumentSkewDetect asynchronous method.
Task<MLDocumentSkewDetectResult> detectTask = analyzer.asyncDocumentSkewDetect(mlFrame);
detectTask.addOnSuccessListener(new OnSuccessListener<MLDocumentSkewDetectResult>() {
@Override
public void onSuccess(MLDocumentSkewDetectResult detectResult) {
// Detection success.
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
// Detection failure.
}
});
// Call the analyseFrame synchronous method.
SparseArray<MLDocumentSkewDetectResult> detect = analyzer.analyseFrame(mlFrame);
if (detect != null && detect.get(0).getResultCode() == MLDocumentSkewCorrectionConstant.SUCCESS) {
// Detection success.
} else {
// Detection failure.
}
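If the preview shown to the user and the input image have different dimensions, the conversion mentioned above is a simple linear scaling. A minimal sketch (plain arithmetic, no SDK calls):
Code:
// Sketch: map a point from view (display) coordinates to image coordinates.
private Point viewToImage(Point p, int viewWidth, int viewHeight, int imageWidth, int imageHeight) {
    float scaleX = (float) imageWidth / viewWidth;
    float scaleY = (float) imageHeight / viewHeight;
    return new Point(Math.round(p.x * scaleX), Math.round(p.y * scaleY));
}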
2.4 Obtain the Coordinate Data of the Four Vertices of the Text Box Once the Detection is Successful
Use the upper left vertex as the starting point, and add the upper left vertex, upper right vertex, lower right vertex, and lower left vertex to the list (List<Point>). Finally, create an MLDocumentSkewCorrectionCoordinateInput object.
If the synchronous method analyseFrame is called, the detection results will be obtained first. (If the asynchronous method asyncDocumentSkewDetect is called, skip this step.)
Code:
MLDocumentSkewDetectResult detectResult = detect.get(0);
Obtain the coordinate data for the four vertices of the text box and create an MLDocumentSkewCorrectionCoordinateInput object.
Code:
Point leftTop = detectResult.getLeftTopPosition();
Point rightTop = detectResult.getRightTopPosition();
Point leftBottom = detectResult.getLeftBottomPosition();
Point rightBottom = detectResult.getRightBottomPosition();
List<Point> coordinates = new ArrayList<>();
coordinates.add(leftTop);
coordinates.add(rightTop);
coordinates.add(rightBottom);
coordinates.add(leftBottom);
MLDocumentSkewCorrectionCoordinateInput coordinateData = new MLDocumentSkewCorrectionCoordinateInput(coordinates);
2.5 Call the asyncDocumentSkewCorrect Asynchronous Method or syncDocumentSkewCorrect Synchronous Method to Correct the Text Box
Code:
// Call the asyncDocumentSkewCorrect asynchronous method.
Task<MLDocumentSkewCorrectionResult> correctionTask = analyzer.asyncDocumentSkewCorrect(mlFrame, coordinateData);
correctionTask.addOnSuccessListener(new OnSuccessListener<MLDocumentSkewCorrectionResult>() {
@Override
public void onSuccess(MLDocumentSkewCorrectionResult refineResult) {
// Detection success.
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
// Detection failure.
}
});
// Call the syncDocumentSkewCorrect synchronous method.
SparseArray<MLDocumentSkewCorrectionResult> correct= analyzer.syncDocumentSkewCorrect(mlFrame, coordinateData);
if (correct != null && correct.get(0).getResultCode() == MLDocumentSkewCorrectionConstant.SUCCESS) {
// Correction success.
} else {
// Correction failure.
}
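Once correction succeeds, the corrected image can be rendered. The sketch below assumes that MLDocumentSkewCorrectionResult exposes the corrected image through getCorrected(), which returns an MLFrame; verify this against the API reference for your SDK version.
Code:
// Sketch: display the corrected document.
MLDocumentSkewCorrectionResult result = correct.get(0);
Bitmap correctedBitmap = result.getCorrected().readBitmap();
imageView.setImageBitmap(correctedBitmap); // imageView: your preview ImageView.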
2.6 Stop the Analyzer to Release Detection Resources Once the Detection is Complete
Code:
if (analyzer != null) {
analyzer.stop();
}
Can we use ML Kit to translate the document into other languages?

Integrate HUAWEI ML Kit with Ease, for High-Level Business Card Recognition

Introduction
In an age of digital living, maintaining interpersonal connections has never been more important. It's a common ritual for professionals to exchange business cards, but this can come with certain hassles – having to input the contact information for each new contact, including names, companies, and addresses. That's where business card recognition comes into the picture.
How It Works
The business card recognition service utilizes the optical character recognition (OCR) technology to digitize text on a card, converting it into an editable format, and classifying the recognized information by category. When you have integrated the service into your app, your users will be able to collect information on a business card simply by snapping a picture, scanning the QR code on the card, or importing the card image.
{
"lightbox_close": "Close",
"lightbox_next": "Next",
"lightbox_previous": "Previous",
"lightbox_error": "The requested content cannot be loaded. Please try again later.",
"lightbox_start_slideshow": "Start slideshow",
"lightbox_stop_slideshow": "Stop slideshow",
"lightbox_full_screen": "Full screen",
"lightbox_thumbnails": "Thumbnails",
"lightbox_download": "Download",
"lightbox_share": "Share",
"lightbox_zoom": "Zoom",
"lightbox_new_window": "New window",
"lightbox_toggle_sidebar": "Toggle sidebar"
}
Preparations
Before API development, there are a few things that you'll need to do in preparation, for example, configuring the Maven repository address of the HMS Core SDK in your project, and integrating the SDK for the business card recognition service.
Install Android Studio
Official download website
Installation guide
Add Huawei Maven Repository to the Project-Level build.gradle File
Open the build.gradle File in the Root Directory of Your Android Studio Project
Add the Maven Repository Address
Go to buildscript > repositories and configure the Maven repository address for the HMS Core SDK.
Code:
buildscript {
    repositories {
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
Go to allprojects > repositories and configure the Maven repository address for the HMS Core SDK.
Code:
allprojects {
    repositories {
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
Import the SDK
Code:
dependencies {
// Text recognition SDK.
implementation 'com.huawei.hms:ml-computer-vision-ocr:2.0.1.300'
// Text recognition model.
implementation 'com.huawei.hms:ml-computer-vision-ocr-cn-model:2.0.1.300'
implementation 'com.huawei.hms:ml-computer-vision-ocr-jk-model:2.0.1.300'
implementation 'com.huawei.hms:ml-computer-vision-ocr-latin-model:2.0.1.300'
}
Add the Manifest File
Code:
<manifest
...
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value="ocr" />
...
</manifest>
Add Permissions
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
Submit a Dynamic Permission Application
Code:
if (!(ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED)) {
requestCameraPermission();
}
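The requestCameraPermission() helper referenced above is not shown in the original post; a minimal sketch using the standard Android permission API could look like this:
Code:
private static final int CAMERA_PERMISSION_CODE = 1; // Arbitrary request code.

private void requestCameraPermission() {
    // The result is delivered to onRequestPermissionsResult with this request code.
    ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.CAMERA}, CAMERA_PERMISSION_CODE);
}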
Development
1. Create the text analyzer MLTextAnalyzer to recognize text within images. Use the custom parameter MLLocalTextSetting to configure the on-device text analyzer.
Code:
MLLocalTextSetting setting = new MLLocalTextSetting.Factory()
.setOCRMode(MLLocalTextSetting.OCR_DETECT_MODE)
.setLanguage("zh")
.create();
MLTextAnalyzer analyzer = MLAnalyzerFactory.getInstance()
.getLocalTextAnalyzer(setting);
2. Use android.graphics.Bitmap to create an MLFrame. Supported image formats include JPG, JPEG, PNG, and BMP. It is recommended that the aspect ratio for landscape business cards be 2:1 and that for portrait business cards be 1:2.
Code:
MLFrame frame = MLFrame.fromBitmap(bitmap);
3. Pass the MLFrame object to the asyncAnalyseFrame method for text recognition.
Code:
Task<MLText> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<MLText>() {
@Override
public void onSuccess(MLText text) {
// Recognition success.
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
// Recognition failure.
}
});
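The onSuccess callback only returns the raw recognized text; classifying it by category, as described in "How It Works", is then up to your app. A minimal sketch, using simple regular expressions (java.util.regex) as stand-ins for a real classifier:
Code:
// Sketch: pull obvious contact fields out of the recognized text.
// The regular expressions are illustrative, not production-grade.
private void classify(MLText text) {
    Pattern email = Pattern.compile("[\\w.+-]+@[\\w-]+\\.[\\w.]+");
    Pattern phone = Pattern.compile("\\+?[0-9][0-9 ()-]{6,}[0-9]");
    for (MLText.Block block : text.getBlocks()) {
        String line = block.getStringValue();
        if (email.matcher(line).find()) {
            // Treat this line as an email address field.
        } else if (phone.matcher(line).find()) {
            // Treat this line as a phone number field.
        }
        // Remaining lines may hold the name, company, or address.
    }
}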
4. After recognition is complete, stop the analyzer to release recognition resources.
Code:
try {
if (analyzer != null) {
analyzer.stop();
}
} catch (IOException e) {
// IOException
} catch (Exception e) {
// Exception
}
More Information
We have developed a demo app that demonstrates the effects of the business card recognition service.
Reference
Official website of Huawei Developers
Development Guide
HMS Core official community on Reddit
Demo and sample code
Discussions on Stack Overflow

How to Integrate HUAWEI ML Kit's Face Detection Capability

We all like to take selfies from time to time, but because most of us are not professional photographers, our photographs can fall short of our expectations. Wouldn't it be great if there was an app that beautified users' photos, and even better, gave the option to add various stickers?
Or, wouldn't it be convenient if there was a parental control function in your app, which prevented children from getting too close to their phone, or staring at the screen for too long? Well, with ML Kit's face detection capability, you can do all of this!
ML Kit's face detection capability detects up to 855 facial keypoints and returns the coordinates for the eyebrows, eyes, nose, mouth, and ears, as well as the contour and angle of the face. Once you've integrated this capability, you can quickly create beauty apps which enable users to add fun facial effects and stickers to their images. Face detection also detects whether the subject's eyes are open, whether they are wearing glasses or a hat, whether they have a beard, and even their gender and age. In addition, it can detect up to seven facial expressions: smiling, neutral, angry, disgusted, frightened, sad, and surprised. This is great if you want to create apps such as smile cameras.
Eye-Enlarging and Face-Shaping Functions: Development Practice
1. Preparations
To find detailed information about the preparations you need to make, please refer to Development Process. Here, we'll just look at the most important procedures.
1.1 Configure the Maven repository address in the project-level build.gradle file.
Code:
buildscript {
repositories {
...
maven {url 'https://developer.huawei.com/repo/'}
}
}
dependencies {
...
classpath 'com.huawei.agconnect:agcp:1.3.1.300'
}
allprojects {
repositories {
...
maven {url 'https://developer.huawei.com/repo/'}
}
}
1.2 Add configurations to the file header.
After integrating the SDK, add the following configuration to the file header:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
1.3 Configure SDK dependencies in the app-level build.gradle file.
Code:
dependencies{
// Import the base SDK.
implementation 'com.huawei.hms:ml-computer-vision-face:2.0.1.300'
// Import the contour and keypoint detection model package.
implementation 'com.huawei.hms:ml-computer-vision-face-shape-point-model:2.0.1.300'
// Import the facial expression detection model package.
implementation 'com.huawei.hms:ml-computer-vision-face-emotion-model:2.0.1.300'
// Import the facial feature detection model package.
implementation 'com.huawei.hms:ml-computer-vision-face-feature-model:2.0.1.300'
}
1.4 Add these statements to the AndroidManifest.xml file so the machine learning model can update automatically.
Code:
<manifest
...
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value= "face"/>
...
</manifest>
1.5 Obtain camera permissions
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" />
2. Code Development
2.1 Create a face analyzer by using the default parameter configurations.
Code:
analyzer = MLAnalyzerFactory.getInstance().getFaceAnalyzer();
2.2 Create an MLFrame object by using the android.graphics.Bitmap so the analyzer can detect images.
MLFrame frame = MLFrame.fromBitmap(bitmap);
2.3 Call the asyncAnalyseFrame method to perform face detection.
Code:
Task<List<MLFace>> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<List<MLFace>>() {
@Override
public void onSuccess(List<MLFace> faces) {
// Detection success. Obtain the face keypoints.
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
// Detection failure.
}
});
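Inside onSuccess, you can read the keypoints that drive effects such as eye enlarging and face shaping. A minimal sketch, assuming the contour points are exposed through getFaceShapeList() and getPoints() as in the face detection API:
Code:
// Sketch: iterate the contour keypoints of each detected face.
for (MLFace face : faces) {
    for (MLFaceShape shape : face.getFaceShapeList()) {
        for (MLPosition point : shape.getPoints()) {
            float x = point.getX();
            float y = point.getY();
            // Feed (x, y) into your eye-enlarging or face-shaping algorithm.
        }
    }
}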
2.4 Use a progress bar to process the face in the image. Call the magnifyEye and smallFaceMesh methods to implement the eye-enlarging and face-shaping algorithms.
Code:
private SeekBar.OnSeekBarChangeListener listener = new SeekBar.OnSeekBarChangeListener() {
    @Override
    public void onProgressChanged(SeekBar seekBar, int progress, boolean fromUser) {
        switch (seekBar.getId()) {
            case R.id.seekbareye: // When the eye-enlarging progress bar changes, call magnifyEye.
                break;
            case R.id.seekbarface: // When the face-shaping progress bar changes, call smallFaceMesh.
                break;
        }
    }
    @Override
    public void onStartTrackingTouch(SeekBar seekBar) {}
    @Override
    public void onStopTrackingTouch(SeekBar seekBar) {}
};
2.5 Release the analyzer when the detection ends.
Code:
try {
if (analyzer != null) {
analyzer.stop();
}
} catch (IOException e) {
Log.e(TAG, "e=" + e.getMessage());
}
Demo Effects
Face Stickers: Development Practice
1. Preparations
1.1 Add the Maven repository to the project-level build.gradle file.
Open the build.gradle file in the root directory of your Android Studio project.
Add the following Maven repository address:
Code:
buildscript {
    repositories {
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
allprojects {
    repositories {
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
1.2 Add SDK dependencies to the app-level build.gradle file.
Code:
// Face detection SDK.
implementation 'com.huawei.hms:ml-computer-vision-face:2.0.1.300'
// Face detection model.
implementation 'com.huawei.hms:ml-computer-vision-face-shape-point-model:2.0.1.300'
1.3 Apply for the camera, network access, and storage permissions in the AndroidManifest.xml file.
Code:
<!-- Camera permission -->
<uses-feature android:name="android.hardware.camera" />
<uses-permission android:name="android.permission.CAMERA" />
<!-- Write permission -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<!-- Read permission -->
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
2. Code Development
2.1 Set the face analyzer.
Code:
MLFaceAnalyzerSetting detectorOptions;
detectorOptions = new MLFaceAnalyzerSetting.Factory()
.setFeatureType(MLFaceAnalyzerSetting.TYPE_UNSUPPORT_FEATURES)
.setShapeType(MLFaceAnalyzerSetting.TYPE_SHAPES)
.allowTracing(MLFaceAnalyzerSetting.MODE_TRACING_FAST)
.create();
detector = MLAnalyzerFactory.getInstance().getFaceAnalyzer(detectorOptions);
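With tracking enabled, the detector is typically fed live camera frames through a LensEngine, in the same way as the text recognition example earlier in this thread. A minimal sketch (surfaceHolder is the SurfaceHolder of your preview):
Code:
// Sketch: drive the face detector with live camera frames so stickers can follow the face.
LensEngine lensEngine = new LensEngine.Creator(context, detector)
        .setLensType(LensEngine.FRONT_LENS) // Front camera for selfie stickers.
        .applyDisplayDimension(640, 480)
        .applyFps(25.0f)
        .enableAutomaticFocus(true)
        .create();
try {
    lensEngine.run(surfaceHolder);
} catch (IOException e) {
    lensEngine.release();
}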
For more details, visit https://forums.developer.huawei.com/forumPortal/en/topic/0203423486077750006

Quickly Integrate HUAWEI ML Kit's Form Recognition Service

Intro
Questionnaires are useful when you want to collect specific information for the purposes of market research. But how can you convert the large amounts of data you collect from questionnaires into electronic documents? One very effective tool is ML Kit's form recognition service. This guide will show you how to integrate this service, so you can easily input and convert data from forms.
Applicable Scenarios
ML Kit's form recognition service uses AI to recognize the images you input and return information about a form’s structure (including rows, columns, and coordinates of cells) and form text in both Chinese and English (including punctuation). This service can be widely applied in everyday work scenarios. For example, if you’ve collected a lot of paper questionnaires, you can quickly convert them into electronic documents. This is cheaper and requires less time and effort than typing them up manually.
Precautions
· Forms such as questionnaires can be recognized.
· Images containing more than one form cannot be recognized, and the form header and footer information cannot be obtained.
· For the best results, adhere to the shooting conditions described in the official documentation.
Development Procedure
1. Preparations
To find detailed information about the preparations you need to make, please refer to Development Process.
Here, we'll just look at the most important steps.
1.1 Configure the Maven repository address in the project-level build.gradle file.
Code:
buildscript {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
dependencies {
...
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
}
}
allprojects {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
}
1.2 Add configurations to the file header.
Once you’ve integrated the SDK, add the following configuration to the file header:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
1.3 Configure SDK dependencies in the app-level build.gradle file.
Code:
dependencies{
// Import the base SDK.
implementation 'com.huawei.hms:ml-computer-vision-formrecognition:2.0.4.300'
// Import the form recognition model package.
implementation 'com.huawei.hms:ml-computer-vision-formrecognition-model:2.0.4.300'
}
1.4 Add the following statements to the AndroidManifest.xml file so the machine learning model can update automatically:
Code:
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value= "fr"/>
1.5 Apply for camera permissions.
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" />
2. Code Development
2.1 Create a form recognition analyzer.
Code:
MLFormRecognitionAnalyzerSetting setting = new MLFormRecognitionAnalyzerSetting.Factory().create();
MLFormRecognitionAnalyzer analyzer = MLFormRecognitionAnalyzerFactory.getInstance().getFormRecognitionAnalyzer(setting);
2.2 Create an MLFrame object by using android.graphics.Bitmap which will enable the analyzer to recognize forms. Only JPG, JPEG, and PNG images are supported. We recommend that the image size be within a range of 960 x 960 px to 1920 x 1920 px.
Code:
MLFrame mlFrame = MLFrame.fromBitmap(bitmap);
2.3 Call the asynchronous method asyncAnalyseFrame or the synchronous method analyseFrame to start the form recognition. (For details about the data structure definition of JsonObject, please refer to JsonObject Data Structure Definition.)
Code:
// Call the asynchronous method asyncAnalyseFrame.
Task<JsonObject> recognizeTask = analyzer.asyncAnalyseFrame(mlFrame);
recognizeTask.addOnSuccessListener(new OnSuccessListener<JsonObject>() {
@Override
public void onSuccess(JsonObject recognizeResult) {
// Recognition success.
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
// Recognition failure.
}
});
// Call the synchronous method analyseFrame.
SparseArray<JsonObject> recognizeResult = analyzer.analyseFrame(mlFrame);
if (recognizeResult != null && recognizeResult.get(0).get("retCode").getAsInt() == MLFormRecognitionConstant.SUCCESS) {
// Recognition success.
} else {
// Recognition failure.
}
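The returned JsonObject carries the table structure (rows, columns, cell coordinates, and text). The exact key names are defined in the JsonObject Data Structure Definition referenced above; the sketch below uses hypothetical keys purely to show the Gson traversal pattern.
Code:
// Sketch of walking the recognition result with Gson. The key names
// ("tables", "tableRows", "cells", "textInfo") are hypothetical here;
// substitute the names from the JsonObject Data Structure Definition.
private void traverse(JsonObject recognizeResult) {
    JsonArray tables = recognizeResult.getAsJsonArray("tables");
    for (JsonElement tableEl : tables) {
        JsonArray rows = tableEl.getAsJsonObject().getAsJsonArray("tableRows");
        for (JsonElement rowEl : rows) {
            JsonArray cells = rowEl.getAsJsonObject().getAsJsonArray("cells");
            for (JsonElement cellEl : cells) {
                String cellText = cellEl.getAsJsonObject().get("textInfo").getAsString();
                // Write cellText into your electronic document here.
            }
        }
    }
}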
2.4 Stop the analyzer and release the recognition resources when the recognition finishes.
Code:
if (analyzer != null) {
analyzer.stop();
}
Summary
ML Kit's form recognition service enables you to recognize forms in images. It’s particularly useful for tasks like collecting questionnaire data because it is quicker, cheaper, and requires less effort than typing up questionnaires manually.
Learn More
For more information, please visit HUAWEI Developers.
For detailed instructions, please visit Development Guide.
You can join the HMS Core developer discussion on Reddit.
You can download the demo and sample code from GitHub.
To solve integration problems, please go to Stack Overflow.
Form recognition seems to be a new function of ML Kit. Can you share more information about it? Thanks~
Kylie Harris said:
Form recognition seems to be a new function of ML Kit. Can you share more information about it? Thanks~
For more details, you can refer to the official document: https://developer.huawei.com/consum...uides-V5/form-recognition-0000001058920154-V5
Does recognition happen locally on device or server?
