HMS Image Super Resolution Application - Huawei Developers

For more information like this, you can visit the HUAWEI Developer Forum.
Introduction:
HMS ML Kit features the image super-resolution service, which provides the 1x super-resolution capability. This feature removes compression noise from images to produce clear images.
Precautions:
1. Prior to using the image super-resolution service, you must convert the input image into a bitmap in ARGB format. After processing, the service also outputs a bitmap in ARGB format.
2. The maximum size of an input image is 1024 x 768 px or 768 x 1024 px; the minimum size is 64 x 64 px. A validation sketch follows this list.
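Before handing an image to the service, it may help to validate it against these constraints. A minimal sketch, assuming the checks above (prepareBitmap is a hypothetical helper, not part of the SDK):
Code:
private Bitmap prepareBitmap(Bitmap src) {
    // Convert to ARGB_8888 if the bitmap uses another configuration.
    if (src.getConfig() != Bitmap.Config.ARGB_8888) {
        src = src.copy(Bitmap.Config.ARGB_8888, true);
    }
    int w = src.getWidth();
    int h = src.getHeight();
    // Enforce the documented bounds: at least 64 x 64 px,
    // at most 1024 x 768 px (or 768 x 1024 px in portrait orientation).
    if (w < 64 || h < 64 || Math.max(w, h) > 1024 || Math.min(w, h) > 768) {
        throw new IllegalArgumentException("Unsupported image size: " + w + " x " + h);
    }
    return src;
}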
Integration:
1. Create a project in Android Studio and in Huawei AppGallery Connect (AGC).
2. Provide the SHA-256 key in the App Information section.
3. Download the agconnect-services.json file from AGC and save it to the app directory.
4. In root build.gradle
Navigate to allprojects > repositories and buildscript > repositories, and add the line below.
Code:
maven { url 'https://developer.huawei.com/repo/' }
5. In app build.gradle
Configure the Maven dependency
Code:
implementation 'com.huawei.hms:ml-computer-vision-imageSuperResolution:2.0.2.300'
implementation 'com.huawei.hms:ml-computer-vision-imageSuperResolution-model:2.0.2.300'
Apply plugin
Code:
apply plugin: 'com.huawei.agconnect'
6. Permissions in Manifest
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
Code Implementation:
Here is an image converter application that uses the HMS image super-resolution service. The application can select an image from the gallery or capture one with the camera. Follow these steps:
1. Create an image super-resolution analyzer.
Code:
private void createAnalyzer() {
    MLImageSuperResolutionAnalyzerSetting settings = new MLImageSuperResolutionAnalyzerSetting.Factory()
            // Set the scale of image super-resolution to 1x.
            .setScale(MLImageSuperResolutionAnalyzerSetting.ISR_SCALE_1X)
            .create();
    analyzer = MLImageSuperResolutionAnalyzerFactory.getInstance().getImageSuperResolutionAnalyzer(settings);
}
2. Create an MLFrame object by using android.graphics.Bitmap.
Code:
MLFrame mlFrame = new MLFrame.Creator().setBitmap(srcBitmap).create();
3. Perform super-resolution processing on the image.
Code:
Task<MLImageSuperResolutionResult> task = analyzer.asyncAnalyseFrame(mlFrame);
task.addOnSuccessListener(new OnSuccessListener<MLImageSuperResolutionResult>() {
    @Override
    public void onSuccess(MLImageSuperResolutionResult result) {
        // Recognition success.
        Toast.makeText(getApplicationContext(), "Success", Toast.LENGTH_SHORT).show();
        setImage(result.getBitmap());
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Recognition failure.
        Toast.makeText(getApplicationContext(), "Failed: " + e.getMessage(), Toast.LENGTH_SHORT).show();
    }
});
4. After the recognition is complete, stop the analyzer to release recognition resources.
Code:
private void release() {
    if (analyzer == null) {
        return;
    }
    analyzer.stop();
}
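Typically, you would call release() once the activity no longer needs the analyzer, for example:
Code:
@Override
protected void onDestroy() {
    super.onDestroy();
    release();
}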
5. Capturing a picture with the camera.
Code:
private void capturePictureFromCamera() {
    if (checkSelfPermission(Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED) {
        requestPermissions(new String[]{Manifest.permission.CAMERA}, MY_CAMERA_PERMISSION_CODE);
    } else {
        Intent cameraIntent = new Intent(android.provider.MediaStore.ACTION_IMAGE_CAPTURE);
        startActivityForResult(cameraIntent, CAMERA_REQUEST);
    }
}
6. Accessing an image from the gallery.
Code:
private void getImageFromGallery() {
    Intent intent = new Intent();
    intent.setAction(Intent.ACTION_GET_CONTENT);
    intent.setType("image/*");
    startActivityForResult(intent, GALLERY_REQUEST);
}
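Both flows deliver their result to onActivityResult. A minimal sketch of turning that result into a bitmap and running the analyzer (analyzeImage is a hypothetical method that builds the MLFrame and calls asyncAnalyseFrame as shown in steps 2 and 3):
Code:
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (resultCode != RESULT_OK || data == null) {
        return;
    }
    try {
        Bitmap srcBitmap = null;
        if (requestCode == CAMERA_REQUEST) {
            // ACTION_IMAGE_CAPTURE returns a thumbnail in the "data" extra.
            srcBitmap = (Bitmap) data.getExtras().get("data");
        } else if (requestCode == GALLERY_REQUEST) {
            // Decode the selected image from its content URI.
            srcBitmap = MediaStore.Images.Media.getBitmap(getContentResolver(), data.getData());
        }
        if (srcBitmap != null) {
            analyzeImage(srcBitmap);
        }
    } catch (IOException e) {
        Toast.makeText(this, "Failed to load image", Toast.LENGTH_SHORT).show();
    }
}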
Screenshots:
Conclusion:
The image super-resolution service is widely used in daily life and supports common scenarios such as improving low-quality images on the network, obtaining clear images while reading news, and improving the clarity of identification card photos. The service intelligently reduces image noise and provides a clear image without changing the resolution.
Reference:
HMSCore-Guides

Are there any image size limitations?


cool thing, I'll definitely have to try it, especially with my old photos.

Does this work with every image? I've some old black and white photos

JohnMes said:
cool thing, I'll definitely have to try it, especially with my old photos.
It will be a good choice. :highfive:

Rushikesh787 said:
Does this work with every image? I've some old black and white photos
I think it's worth a try. Maybe you'll be pleasantly surprised.

Will it convert images of all formats?

I would definitely want to try this out

Awesome
I would definitely try this app on my girlfriend's photo

Related

Android: How to Develop an ID Photo DIY Applet in 30 Min

The source code will be shared, and it's quite friendly to users who are new to Android. The whole process will take only 30 minutes.
This is application-level development, so we won't go through the image segmentation algorithm itself. I used Huawei ML Kit, which provides the image segmentation capability, to develop this app. Developers will learn how to quickly build an ID photo DIY applet using this SDK.
Background
I don't know if you have had such an experience: all of a sudden, your school or company asks for one-inch or two-inch head photos, for example for a passport or student card, and these have requirements for the photo's background color. However, many people don't have time to go to a photo studio, or the photos they already have don't meet the background color requirements. I had a similar experience. At the time, the school asked for passport photos while the school photo studio was closed, so I hurriedly took photos with my phone, using a bedspread as the background. As a result, I was scolded by the teacher.
Many years later, ML Kit's machine learning offers an image segmentation function. Using this SDK to develop an ID photo DIY applet would have perfectly solved that embarrassment.
Here is the demo for the result.
How effective is it? Pretty impressive, and you only need to write a small program to achieve it!
Core Tip: This SDK is free, and all Android models are covered!
ID Photo Development in Practice
1. Preparation
1.1 Add the Huawei Maven Repository in the Project-Level build.gradle
Open the Android studio project level build.gradle file.
Add the following Maven addresses:
Code:
buildscript {
    repositories {
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
allprojects {
    repositories {
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
1.2 Add SDK Dependency in Application Level build.gradle
Introduce the SDK and the image segmentation model package:
Code:
dependencies {
    implementation 'com.huawei.hms:ml-computer-vision:1.0.2.300'
    implementation 'com.huawei.hms:ml-computer-vision-image-segmentation-body-model:1.0.2.301'
}
1.3 Add the Model in the AndroidManifest.xml File
To enable the application to automatically update the latest machine learning model on the user's device after your application is installed from HUAWEI AppGallery, add the following statement to the AndroidManifest.xml file:
Code:
<manifest ...>
    <application ...>
        <meta-data
            android:name="com.huawei.hms.ml.DEPENDENCY"
            android:value="imgseg" />
    </application>
</manifest>
1.4 Apply for Camera and Storage Permission in Android manifest.xml File
Code:
<!-- Storage permission -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<!-- Camera permission -->
<uses-permission android:name="android.permission.CAMERA" />
2. Two Key Steps of Code Development
2.1 Dynamic Authority Application
Code:
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    if (!allPermissionsGranted()) {
        getRuntimePermissions();
    }
}

@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
        @NonNull int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    if (requestCode != PERMISSION_REQUESTS) {
        return;
    }
    boolean isNeedShowDiag = false;
    for (int i = 0; i < permissions.length; i++) {
        if (permissions[i].equals(Manifest.permission.READ_EXTERNAL_STORAGE)
                && grantResults[i] != PackageManager.PERMISSION_GRANTED) {
            isNeedShowDiag = true;
        }
    }
    if (isNeedShowDiag && !ActivityCompat.shouldShowRequestPermissionRationale(this, Manifest.permission.CALL_PHONE)) {
        AlertDialog dialog = new AlertDialog.Builder(this)
                .setMessage(getString(R.string.camera_permission_rationale))
                .setPositiveButton(getString(R.string.settings), new DialogInterface.OnClickListener() {
                    @Override
                    public void onClick(DialogInterface dialog, int which) {
                        // Open the settings page for this app based on its package name.
                        Intent intent = new Intent(Settings.ACTION_APPLICATION_DETAILS_SETTINGS);
                        intent.setData(Uri.parse("package:" + getPackageName()));
                        startActivityForResult(intent, 200);
                    }
                })
                .setNegativeButton(getString(R.string.cancel), new DialogInterface.OnClickListener() {
                    @Override
                    public void onClick(DialogInterface dialog, int which) {
                        finish();
                    }
                }).create();
        dialog.show();
    }
}
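The allPermissionsGranted() and getRuntimePermissions() helpers are referenced above but not shown. A minimal sketch of what they might look like (REQUIRED_PERMISSIONS is a hypothetical field; PERMISSION_REQUESTS is the app's own request code):
Code:
private static final String[] REQUIRED_PERMISSIONS = {
        Manifest.permission.CAMERA,
        Manifest.permission.READ_EXTERNAL_STORAGE,
        Manifest.permission.WRITE_EXTERNAL_STORAGE
};

private boolean allPermissionsGranted() {
    for (String permission : REQUIRED_PERMISSIONS) {
        if (ContextCompat.checkSelfPermission(this, permission) != PackageManager.PERMISSION_GRANTED) {
            return false;
        }
    }
    return true;
}

private void getRuntimePermissions() {
    ActivityCompat.requestPermissions(this, REQUIRED_PERMISSIONS, PERMISSION_REQUESTS);
}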
2.2 Creating an Image Segmentation Detector
The image segmentation detector can be created using the image segmentation detection configurator MLImageSegmentationSetting.
Code:
MLImageSegmentationSetting setting = new MLImageSegmentationSetting.Factory()
        .setAnalyzerType(MLImageSegmentationSetting.BODY_SEG)
        .setExact(true)
        .create();
this.analyzer = MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer(setting);
2.3 Create an MLFrame Object Using android.graphics.Bitmap for the Analyzer to Detect Pictures
Code:
MLFrame mlFrame = new MLFrame.Creator().setBitmap(this.originBitmap).create();
2.4 Call the asyncAnalyseFrame Method for Image Segmentation
Code:
// Create a task to process the result returned by the image segmentation detector asynchronously.
Task<MLImageSegmentation> task = this.analyzer.asyncAnalyseFrame(mlFrame);
task.addOnSuccessListener(new OnSuccessListener<MLImageSegmentation>() {
    @Override
    public void onSuccess(MLImageSegmentation mlImageSegmentationResults) {
        // Processing logic for segmentation success.
        if (mlImageSegmentationResults != null) {
            StillCutPhotoActivity.this.foreground = mlImageSegmentationResults.getForeground();
            StillCutPhotoActivity.this.preview.setImageBitmap(StillCutPhotoActivity.this.foreground);
            StillCutPhotoActivity.this.processedImage = ((BitmapDrawable) ((ImageView) StillCutPhotoActivity.this.preview).getDrawable()).getBitmap();
            StillCutPhotoActivity.this.changeBackground();
        } else {
            StillCutPhotoActivity.this.displayFailure();
        }
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Processing logic for segmentation failure.
        StillCutPhotoActivity.this.displayFailure();
    }
});
2.5 Change the Picture Background
Code:
this.backgroundBitmap = BitmapUtils.loadFromPath(StillCutPhotoActivity.this, id, targetedSize.first, targetedSize.second);
BitmapDrawable drawable = new BitmapDrawable(backgroundBitmap);
this.preview.setDrawingCacheEnabled(true);
this.preview.setBackground(drawable);
this.preview.setImageBitmap(this.foreground);
this.processedImage = Bitmap.createBitmap(this.preview.getDrawingCache());
this.preview.setDrawingCacheEnabled(false);
Conclusion
In this way, a small ID photo DIY program has been made. Let's see the demo.
If you have strong hands-on skills, you can also add features such as changing outfits. The source code has been uploaded to GitHub, and you are welcome to improve it there:
https://github.com/HMS-Core/hms-ml-demo/tree/master/ID-Photo-DIY
(The project directory is ID-Photo-DIY.)
Based on the image segmentation capability, you can build not only an ID photo DIY applet but also the following related functions:
1. Cut out people's portraits from daily-life photos and make interesting pictures by changing the background, or blur the background to get more beautiful and artistic photos.
2. Identify the sky, plants, food, cats and dogs, flowers, water, sand, buildings, mountains, and other elements in an image, and apply targeted beautification, such as making the sky bluer and the water clearer.
3. Identify objects in a video stream, apply special effects to it, and change the background.
Feel free to brainstorm other functions!
For a more detailed development guide, please refer to the official website of Huawei developer Alliance:
https://developer.huawei.com/consumer/en/doc/development/HMS-Guides/ml-introduction-4

Integrate ML Kit’s Document Skew Correction Capability

Introduction
In the previous post, we looked at how to use HUAWEI ML Kit's text recognition capability. With this capability, users just need to provide an image, and your app will automatically recognize all of the key information. But what if the image is skewed? Can we still get all the same key information? Yes, we can! ML Kit's document skew correction capability automatically recognizes the position of the document, corrects the shooting angle, and lets users customize the edge points. So even if the document is skewed, your users can still get all of the information they need.
Application Scenarios
Document skew correction is useful in a huge range of situations. For example, if your camera is at an angle when you shoot a paper document, the resulting image may be difficult to read. You can use document skew correction to adjust the document to the correct angle, which makes it easier to read.
Document skew correction also works for cards.
And if you’re traveling, you can use it to straighten out pictures of road signs that you’ve taken from an angle.
Isn’t it convenient? Now, I will show you how to quickly integrate this capability.
Document Correction Development
1. Preparations
You can find detailed information about the preparations you need to make on the HUAWEI Developers-Development Process.
Here, we’ll just look at the most important procedures.
1.1 Configure the Maven Repository Address in the Project-Level build.gradle File
Code:
buildscript {
    repositories {
        ...
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        ...
        classpath 'com.huawei.agconnect:agcp:1.3.1.300'
    }
}
allprojects {
    repositories {
        ...
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
1.2 Configure SDK Dependencies in the App-Level build.gradle File
Code:
dependencies {
    // Import the base SDK.
    implementation 'com.huawei.hms:ml-computer-vision-documentskew:2.0.2.300'
    // Import the document detection/correction model package.
    implementation 'com.huawei.hms:ml-computer-vision-documentskew-model:2.0.2.300'
}
1.3 Add Configurations to the File Header
Code:
apply plugin: 'com.huawei.agconnect'
apply plugin: 'com.android.application'
1.4 Add these Statements to the AndroidManifest.xml File so the Machine Learning Model can Update Automatically
Code:
<meta-data
    android:name="com.huawei.hms.ml.DEPENDENCY"
    android:value="dsc" />
1.5 Apply for Camera Permission and Local Image Reading Permission
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
2. Code Development
2.1 Create a Text Box Detection/Correction Analyzer
Code:
MLDocumentSkewCorrectionAnalyzerSetting setting = new MLDocumentSkewCorrectionAnalyzerSetting.Factory().create();
MLDocumentSkewCorrectionAnalyzer analyzer = MLDocumentSkewCorrectionAnalyzerFactory.getInstance().getDocumentSkewCorrectionAnalyzer(setting);
2.2 Create an MLFrame Object by Using android.graphics.Bitmap for the Analyzer to Detect Images
JPG, JPEG, and PNG images are supported. It is recommended that you limit the image size to between 320 x 320 px and 1920 x 1920 px.
Code:
MLFrame frame = MLFrame.fromBitmap(bitmap);
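If an input image may fall outside that range, a simple downscaling sketch (limitSize is a hypothetical helper; the 1920 bound comes from the recommendation above):
Code:
// Scale the bitmap down so that neither side exceeds 1920 px, preserving the aspect ratio.
private Bitmap limitSize(Bitmap src) {
    int maxSide = Math.max(src.getWidth(), src.getHeight());
    if (maxSide <= 1920) {
        return src;
    }
    float scale = 1920f / maxSide;
    return Bitmap.createScaledBitmap(src,
            Math.round(src.getWidth() * scale),
            Math.round(src.getHeight() * scale), true);
}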
2.3 Call the asyncDocumentSkewDetect Asynchronous Method or the analyseFrame Synchronous Method to Detect the Text Box
When the return code is MLDocumentSkewCorrectionConstant.SUCCESS, the coordinates of the four vertices of the text box are returned. These coordinates are relative to the input image. If they are inconsistent with the device's coordinates, you need to convert them; otherwise, the returned data will be meaningless.
Code:
// Call the asyncDocumentSkewDetect asynchronous method.
Task<MLDocumentSkewDetectResult> detectTask = analyzer.asyncDocumentSkewDetect(mlFrame);
detectTask.addOnSuccessListener(new OnSuccessListener<MLDocumentSkewDetectResult>() {
    @Override
    public void onSuccess(MLDocumentSkewDetectResult detectResult) {
        // Detection success.
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Detection failure.
    }
});

// Call the analyseFrame synchronous method.
SparseArray<MLDocumentSkewDetectResult> detect = analyzer.analyseFrame(mlFrame);
if (detect != null && detect.get(0).getResultCode() == MLDocumentSkewCorrectionConstant.SUCCESS) {
    // Detection success.
} else {
    // Detection failure.
}
2.4 Obtain the Coordinate Data of the Four Vertices of the Text Box Once Detection is Successful
Use the upper left vertex as the starting point, and add the upper left, upper right, lower right, and lower left vertices to a list (List<Point>). Then create an MLDocumentSkewCorrectionCoordinateInput object.
If the synchronous method analyseFrame is called, obtain the detection result first, as shown below. (If the asynchronous method asyncDocumentSkewDetect is called, skip this step.)
Code:
MLDocumentSkewDetectResult detectResult = detect.get(0);
Obtain the coordinate data for the four vertices of the text box and create an MLDocumentSkewCorrectionCoordinateInput object.
Code:
Point leftTop = detectResult.getLeftTopPosition();
Point rightTop = detectResult.getRightTopPosition();
Point leftBottom = detectResult.getLeftBottomPosition();
Point rightBottom = detectResult.getRightBottomPosition();
List<Point> coordinates = new ArrayList<>();
coordinates.add(leftTop);
coordinates.add(rightTop);
coordinates.add(rightBottom);
coordinates.add(leftBottom);
MLDocumentSkewCorrectionCoordinateInput coordinateData = new MLDocumentSkewCorrectionCoordinateInput(coordinates);
2.5 Call the asyncDocumentSkewCorrect Asynchronous Method or syncDocumentSkewCorrect Synchronous Method to Correct the Text Box
Code:
// Call the asyncDocumentSkewCorrect asynchronous method.
Task<MLDocumentSkewCorrectionResult> correctionTask = analyzer.asyncDocumentSkewCorrect(mlFrame, coordinateData);
correctionTask.addOnSuccessListener(new OnSuccessListener<MLDocumentSkewCorrectionResult>() {
@Override
public void onSuccess(MLDocumentSkewCorrectionResult refineResult) {
// Detection success.
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
// Detection failure.
}
});
// Call the syncDocumentSkewCorrect synchronous method.
SparseArray<MLDocumentSkewCorrectionResult> correct= analyzer.syncDocumentSkewCorrect(mlFrame, coordinateData);
if (correct != null && correct.get(0).getResultCode() == MLDocumentSkewCorrectionConstant.SUCCESS) {
// Correction success.
} else {
// Correction failure.
}
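On success, the corrected image can be read from the result. A minimal sketch, assuming the getCorrected() accessor used in Huawei's sample code (verify the method name against your SDK version; imageView is your own view):
Code:
MLDocumentSkewCorrectionResult refineResult = correct.get(0);
// getCorrected() returns the corrected bitmap in Huawei's sample code.
Bitmap correctedBitmap = refineResult.getCorrected();
imageView.setImageBitmap(correctedBitmap);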
2.6 Stop the Analyzer to Release Detection Resources Once the Detection is Complete
Code:
if (analyzer != null) {
    analyzer.stop();
}
Can we use ML Kit to translate the document into other languages?

How to Improve Text Resolution Using HUAWEI ML Kit's Text Super-Resolution Capability

Introduction
Often, when you take screenshots of text and images you find online and share them with your friends, the apps you use to send them will automatically compress the screenshots. This means the people receiving them get low definition images which are hard to read.
Is there any way to solve this problem? Of course! ML Kit's text super-resolution capability improves text resolution in images. It enables users to magnify images containing text until they're 9 times bigger while greatly enhancing the text’s definition, making it easier to read.
Application Scenarios
The text super-resolution capability is useful in a huge range of situations. For example, when screenshots are compressed, the text super-resolution capability will restore them to their original definition.
When you take a photograph of a document from far away, or can't properly adjust the focus, the text may not be clear. In this situation, you can use the text super-resolution capability to improve the definition and readability of the document.
Pretty useful, right? Below, I'll show you how to integrate this capability.
Text Super-Resolution Development
1. Configure the Maven Repository Address
1.1 Open the build.gradle File in the Root Directory of Your Android Studio Project
1.2 Add the AppGallery Connect Plug-in and the Maven Repository
Go to allprojects > repositories and configure the Maven repository address for the HMS Core SDK.
Code:
allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
Go to buildscript > repositories and configure the Maven repository address for the HMS Core SDK.
Code:
buildscript {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
2. Integrate the Text Image Super-Resolution SDK
2.1 Full SDK Integration (Recommended)
Code:
dependencies {
    // Import the base SDK.
    implementation 'com.huawei.hms:ml-computer-vision-textimagesuperresolution:2.0.3.300'
    // Import the text image super-resolution model package.
    implementation 'com.huawei.hms:ml-computer-vision-textimagesuperresolution-model:2.0.3.300'
}
2.2 Add Configurations to the File Header
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
2.3 Update the Machine Learning Model
Code:
<meta-data
    android:name="com.huawei.hms.ml.DEPENDENCY"
    android:value="tisr" />
3. Code Development
3.1 Create a Text Image Super-Resolution Analyzer
Code:
MLTextImageSuperResolutionAnalyzer analyzer = MLTextImageSuperResolutionAnalyzerFactory.getInstance().getTextImageSuperResolutionAnalyzer();
3.2 Create an MLFrame Object by Using android.graphics.Bitmap
The bitmap type must be ARGB_8888. If it isn't, you'll need to convert it.
Code:
// Create an MLFrame object using the bitmap. The bitmap parameter indicates the input image.
MLFrame frame = new MLFrame.Creator().setBitmap(bitmap).create();
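If your source bitmap uses a different configuration, a quick conversion sketch using the standard Android API:
Code:
// Copy the bitmap into ARGB_8888 if it uses another configuration.
if (bitmap.getConfig() != Bitmap.Config.ARGB_8888) {
    bitmap = bitmap.copy(Bitmap.Config.ARGB_8888, true);
}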
3.3 Process Super-Resolution for Images That Contain Text
Code:
Task<MLTextImageSuperResolutionResult> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<MLTextImageSuperResolutionResult>() {
    @Override
    public void onSuccess(MLTextImageSuperResolutionResult result) {
        // Processing logic for super-resolution success.
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Processing logic for super-resolution failure.
        if (e instanceof MLException) {
            MLException mlException = (MLException) e;
            // Obtain the result code. You can process the result code and customize the messages displayed to users.
            int errorCode = mlException.getErrCode();
            // Obtain the error information. You can quickly locate the fault based on the result code.
            String errorMessage = mlException.getMessage();
        } else {
            // Other errors.
        }
    }
});
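Inside onSuccess, the enhanced image can be read from the result, for example (getBitmap() per the super-resolution result API; imageView is your own view):
Code:
// Inside onSuccess(MLTextImageSuperResolutionResult result):
imageView.setImageBitmap(result.getBitmap());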
3.4 Stop the Analyzer to Release Detection Resources Once the Super-Resolution is Complete
Code:
if (analyzer != null) {
    analyzer.stop();
}
GitHub Source Code
How far away can it detect text using the camera?

Integrate HUAWEI ML Kit with Ease, for High-Level Business Card Recognition

Introduction
In an age of digital living, maintaining interpersonal connections has never been more important. It's a common ritual for professionals to exchange business cards, but this can come with certain hassles – having to input the contact information for each new contact, including names, companies, and addresses. That's where business card recognition comes into the picture.
How It Works
The business card recognition service utilizes optical character recognition (OCR) technology to digitize the text on a card, converting it into an editable format and classifying the recognized information by category. Once you have integrated the service into your app, your users will be able to collect the information on a business card simply by snapping a picture, scanning the QR code on the card, or importing the card image.
Preparations
Before API development, there are a few things that you'll need to do in preparation, for example, configuring the Maven repository address of the HMS Core SDK in your project, and integrating the SDK for the business card recognition service.
Install Android Studio
Official download website
Installation guide
Add Huawei Maven Repository to the Project-Level build.gradle File
Open the build.gradle File in the Root Directory of Your Android Studio Project
Add the Maven Repository Address
Go to buildscript > repositories and configure the Maven repository address for the HMS Core SDK.
Code:
buildscript {
    repositories {
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
Go to allprojects > repositories and configure the Maven repository address for the HMS Core SDK.
Code:
allprojects {
    repositories {
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
Import the SDK
Code:
dependencies {
    // Text recognition SDK.
    implementation 'com.huawei.hms:ml-computer-vision-ocr:2.0.1.300'
    // Text recognition models.
    implementation 'com.huawei.hms:ml-computer-vision-ocr-cn-model:2.0.1.300'
    implementation 'com.huawei.hms:ml-computer-vision-ocr-jk-model:2.0.1.300'
    implementation 'com.huawei.hms:ml-computer-vision-ocr-latin-model:2.0.1.300'
}
Add the Manifest File
Code:
<manifest ...>
    ...
    <meta-data
        android:name="com.huawei.hms.ml.DEPENDENCY"
        android:value="ocr" />
    ...
</manifest>
Add Permissions
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.hardware.camera.autofocus" />
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.autofocus" />
Submit a Dynamic Permission Application
Code:
if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED) {
    requestCameraPermission();
}
Development
1. Create the text analyzer MLTextAnalyzer to recognize text within images. Use the custom parameter MLLocalTextSetting to configure the on-device text analyzer.
Code:
MLLocalTextSetting setting = new MLLocalTextSetting.Factory()
        .setOCRMode(MLLocalTextSetting.OCR_DETECT_MODE)
        .setLanguage("zh")
        .create();
MLTextAnalyzer analyzer = MLAnalyzerFactory.getInstance().getLocalTextAnalyzer(setting);
2. Use android.graphics.Bitmap to create an MLFrame. Supported image formats include JPG, JPEG, PNG, and BMP. It is recommended that the aspect ratio for landscape business cards be 2:1 and that for portrait business cards be 1:2.
Code:
MLFrame frame = MLFrame.fromBitmap(bitmap);
3. Pass the MLFrame object to the asyncAnalyseFrame method for text recognition.
Code:
Task<MLText> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<MLText>() {
    @Override
    public void onSuccess(MLText text) {
        // Recognition success.
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Recognition failure.
    }
});
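Inside onSuccess, the recognized content can be read from the MLText object, which exposes the full string and its text blocks. A minimal sketch (mapping blocks to contact fields such as name or phone number is left to the app):
Code:
// Inside onSuccess(MLText text):
String fullText = text.getStringValue();
for (MLText.Block block : text.getBlocks()) {
    // Each block is one text region on the card, e.g. a name, company, or phone line.
    Log.d("BusinessCard", block.getStringValue());
}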
4. After recognition is complete, stop the analyzer to release recognition resources.
Code:
try {
    if (analyzer != null) {
        analyzer.stop();
    }
} catch (IOException e) {
    // IOException handling.
} catch (Exception e) {
    // Exception handling.
}
More Information
We have developed a demo app that demonstrates the effects of the business card recognition service.
Reference
Official website of Huawei Developers
Development Guide
HMS Core official community on Reddit
Demo and sample code
Discussions on Stack Overflow

Solution to Creating an Image Classifier

I don't know if it's the same for you, but I always get frustrated when sorting through my phone's album. It seems to take forever before I can find the image that I want to use. As a coder, I can't help but wonder if there's a solution for this. Is there a way to organize an entire album? Well, let's take a look at how to develop an image classifier using a service called image classification.
Development Preparations
1. Configure the Maven repository address for the SDK to be used.
Java:
repositories {
    maven { url 'https://developer.huawei.com/repo/' }
}
2. Integrate the image classification SDK.
Java:
dependencies {
    // Import the base SDK.
    implementation 'com.huawei.hms:ml-computer-vision-classification:3.3.0.300'
    // Import the image classification model package.
    implementation 'com.huawei.hms:ml-computer-vision-image-classification-model:3.3.0.300'
}
Project Configuration
1. Set the authentication information for the app.
This information can be set through an API key or access token.
Use the setAccessToken method to set an access token during app initialization. This needs to be set only once.
Java:
MLApplication.getInstance().setAccessToken("your access token");
Or, use setApiKey to set an API key during app initialization. This needs to be set only once.
Java:
MLApplication.getInstance().setApiKey("your ApiKey");
2. Create an image classification analyzer in on-device static image detection mode.
Java:
// Method 1: Use customized parameter settings for on-device recognition.
MLLocalClassificationAnalyzerSetting setting =
        new MLLocalClassificationAnalyzerSetting.Factory()
                .setMinAcceptablePossibility(0.8f)
                .create();
MLImageClassificationAnalyzer analyzer = MLAnalyzerFactory.getInstance().getLocalImageClassificationAnalyzer(setting);

// Method 2: Use default parameter settings for on-device recognition.
MLImageClassificationAnalyzer analyzer = MLAnalyzerFactory.getInstance().getLocalImageClassificationAnalyzer();
3. Create an MLFrame object.
Java:
// Create an MLFrame object using the bitmap which is the image data in bitmap format. JPG, JPEG, PNG, and BMP images are supported. It is recommended that the image dimensions be greater than or equal to 112 x 112 px.
MLFrame frame = MLFrame.fromBitmap(bitmap);
4. Call asyncAnalyseFrame to classify images.
Java:
Task<List<MLImageClassification>> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<List<MLImageClassification>>() {
    @Override
    public void onSuccess(List<MLImageClassification> classifications) {
        // Recognition success.
        // Callback when the MLImageClassification list is returned, to obtain information like image categories.
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Recognition failure.
        try {
            MLException mlException = (MLException) e;
            // Obtain the result code. You can process the result code and customize relevant messages displayed to users.
            int errorCode = mlException.getErrCode();
            // Obtain the error message. You can quickly locate the fault based on the result code.
            String errorMessage = mlException.getMessage();
        } catch (Exception error) {
            // Handle the conversion error.
        }
    }
});
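Inside onSuccess, you can iterate the returned list to read each category and its confidence, using getName() and getPossibility() from the classification result:
Java:
// Inside onSuccess(List<MLImageClassification> classifications):
for (MLImageClassification classification : classifications) {
    String category = classification.getName();          // category label, e.g. "Food"
    float possibility = classification.getPossibility(); // confidence between 0.0 and 1.0
    Log.d("Classifier", category + ": " + possibility);
}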
5. Stop the analyzer after recognition is complete.
Java:
try {
    if (analyzer != null) {
        analyzer.stop();
    }
} catch (IOException e) {
    // Exception handling.
}
Demo
Remarks
The image classification capability supports the on-device static image detection mode, on-cloud static image detection mode, and camera stream detection mode. The demo here illustrates only the first mode.
I came up with a bunch of application scenarios for image classification. For example:
1. Education apps: image classification enables users to categorize images taken over a period into different albums.
2. Travel apps: image classification allows such apps to classify images according to where they were taken or by the objects in them.
3. File sharing apps: image classification allows users to upload and share images by image category.
References
>> Image Classification Development Guide
>> Reddit to join developer discussions
>> GitHub to download the sample code
>> Stack Overflow to solve integration problems
