Hi all,
In the era of powerful mobile devices we store thousands of photos, have video calls, shop, and manage our bank accounts, all in the palm of our hands. The cameras integrated into our phones let us take photos and videos and make video calls. But it would be a waste to use these cameras, which we carry with us all day, only for raw photos and videos. They can do more, much more.
Face detection services are used by many applications in different industries, mainly for security and entertainment purposes. For example, a taxi app can use it to identify a customer, a smart home app can use it to recognize guests' faces and announce to the host who is ringing the doorbell, an entertainment app can use it to draw a moustache on faces, and a driver-safety app can use it to detect whether a driver's eyes are open and warn the driver when they close.
Despite how many different areas face detection can be used in and how important the tasks it performs are, it is really easy to implement a face detection app with the help of Huawei ML Kit. As it is a device-side capability, it works on all Android devices with ARM architecture, is completely free, and is faster and more secure than cloud-based services. The face detection service can detect the shapes and features of your user's face, including their facial expression, age, gender, and what they are wearing.
With the face detection service you can detect up to 855 face contour points to locate face coordinates, including the face contour, eyebrows, eyes, nose, mouth, and ears, and identify the pitch, yaw, and roll angles of a face. You can detect seven facial features: the possibility of the left eye being open, the possibility of the right eye being open, the possibility of wearing glasses, the gender possibility, the possibility of wearing a hat, the possibility of having a beard, and age. In addition to these, you can also detect facial expressions, namely smiling, neutral, anger, disgust, fear, sadness, and surprise.
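To make these results concrete, here is a minimal sketch of reading some of those probabilities from a detected MLFace (property names follow the HMS ML Kit face SDK; treat this as an illustration and verify the accessors against the SDK version you integrate).
Code:
// A sketch of reading feature and emotion probabilities from a detected MLFace.
private fun logFaceInfo(face: MLFace) {
    // Facial features: age, accessories, etc.
    val features = face.features
    Log.d("FaceApp", "Age: ${features.age}")
    Log.d("FaceApp", "Hat probability: ${features.hatProbability}")
    Log.d("FaceApp", "Glasses probability: ${features.sunGlassProbability}")
    Log.d("FaceApp", "Moustache probability: ${features.moustacheProbability}")
    // Facial expressions: smiling, neutral, anger, disgust, fear, sadness, surprise.
    val emotions = face.emotions
    Log.d("FaceApp", "Smiling probability: ${emotions.smilingProbability}")
}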
Let’s start to build our demo application step by step from scratch!
1. Firstly, let's create our project on Android Studio. We can create a project by selecting the Empty Activity option and then follow the steps described in this post to create and sign our project in AppGallery Connect. You can follow this guide.
2. Secondly, in HUAWEI Developer AppGallery Connect, go to Develop > Manage APIs and make sure ML Kit is activated.
3. Now we have integrated Huawei Mobile Services (HMS) into our project. Let's follow the documentation on developer.huawei.com and find the packages to add to our project. On the website click Developer > HMS Core > AI > ML Kit. Here you will find introductory information about the services, references, SDKs to download, and more. Under the ML Kit tab follow Android > Getting Started > Integrating HMS Core SDK > Adding Build Dependencies > Integrating the Face Detection SDK. We can follow the guide here to add the face detection capability to our project. To explore this service further later on, I added all three of the model packages shown there; you can add only the base SDK or select packages according to your needs. After the integration your app-level build.gradle file will look like this.
Code:
apply plugin: 'com.android.application'
apply plugin: 'kotlin-android'
apply plugin: 'kotlin-android-extensions'
apply plugin: 'com.huawei.agconnect'

android {
    compileSdkVersion 29
    buildToolsVersion "30.0.1"

    defaultConfig {
        applicationId "com.demo.faceapp"
        minSdkVersion 21
        targetSdkVersion 29
        versionCode 1
        versionName "1.0"
        testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
    }

    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
        }
    }
}

dependencies {
    implementation fileTree(dir: "libs", include: ["*.jar"])
    implementation "org.jetbrains.kotlin:kotlin-stdlib:$kotlin_version"
    implementation 'androidx.core:core-ktx:1.3.1'
    implementation 'androidx.appcompat:appcompat:1.2.0'
    implementation 'androidx.constraintlayout:constraintlayout:1.1.3'
    testImplementation 'junit:junit:4.12'
    androidTestImplementation 'androidx.test.ext:junit:1.1.1'
    androidTestImplementation 'androidx.test.espresso:espresso-core:3.2.0'
    // Import the base SDK.
    implementation 'com.huawei.hms:ml-computer-vision-face:2.0.1.300'
    // Import the contour and key point detection model package.
    implementation 'com.huawei.hms:ml-computer-vision-face-shape-point-model:2.0.1.300'
    // Import the facial expression detection model package.
    implementation 'com.huawei.hms:ml-computer-vision-face-emotion-model:2.0.1.300'
    // Import the facial feature detection model package.
    implementation 'com.huawei.hms:ml-computer-vision-face-feature-model:2.0.1.300'
}
And your project-level build.gradle file will look like this.
Code:
buildscript {
    ext.kotlin_version = "1.3.72"
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        classpath "com.android.tools.build:gradle:4.0.1"
        classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
        classpath 'com.huawei.agconnect:agcp:1.3.1.300'
    }
}

allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}

task clean(type: Delete) {
    delete rootProject.buildDir
}
Don't forget to add the following meta-data tag inside the application element of your AndroidManifest.xml. It enables automatic update of the machine learning model.
Code:
<?xml version="1.0" encoding="utf-8"?>
<manifest ...>
    <application ...>
        ...
        <meta-data
            android:name="com.huawei.hms.ml.DEPENDENCY"
            android:value="face" />
    </application>
</manifest>
4. Now we can choose between detecting faces on a static image or on a camera stream. Let's choose detecting faces on a camera stream for this example. First, let's create our analyzer. Its type is MLFaceAnalyzer and it is responsible for analyzing the detected faces. Here is a sample implementation; we could also use MLFaceAnalyzer with its default settings to keep it simple.
Code:
private lateinit var mAnalyzer: MLFaceAnalyzer

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_main)
    mAnalyzer = createAnalyzer()
}

private fun createAnalyzer(): MLFaceAnalyzer {
    val settings = MLFaceAnalyzerSetting.Factory()
        .allowTracing()
        .setFeatureType(MLFaceAnalyzerSetting.TYPE_FEATURES)
        .setShapeType(MLFaceAnalyzerSetting.TYPE_SHAPES)
        .setMinFaceProportion(.5F)
        .setKeyPointType(MLFaceAnalyzerSetting.TYPE_KEYPOINTS)
        .create()
    return MLAnalyzerFactory.getInstance().getFaceAnalyzer(settings)
}
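If the defaults are enough for your use case, the settings factory can be skipped entirely; MLAnalyzerFactory also exposes a no-argument variant (a minimal sketch, assuming the default-settings overload of the face SDK):
Code:
// Create an analyzer with default settings instead of a custom MLFaceAnalyzerSetting.
private fun createDefaultAnalyzer(): MLFaceAnalyzer =
    MLAnalyzerFactory.getInstance().faceAnalyzer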
5. Create a simple layout. Use two surfaceViews, one on top of the other: one for the camera frames and one for our overlay, because later we will draw some shapes on the overlay view. Here is a sample layout.
Code:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <SurfaceView
        android:id="@+id/surfaceViewCamera"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

    <SurfaceView
        android:id="@+id/surfaceViewOverlay"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

</androidx.constraintlayout.widget.ConstraintLayout>
6. We should prepare our views. We need two surfaceHolders in our application. We are going to make surfaceHolderOverlay transparent, because we want to see the camera frames underneath it. Later we are going to add a callback to surfaceHolderCamera to know when it is created, changed, and destroyed. Let's create them.
Code:
private lateinit var mAnalyzer: MLFaceAnalyzer
private var surfaceHolderCamera: SurfaceHolder? = null
private var surfaceHolderOverlay: SurfaceHolder? = null

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_main)
    mAnalyzer = createAnalyzer()
    prepareViews()
}

private fun prepareViews() {
    surfaceHolderCamera = surfaceViewCamera.holder
    surfaceHolderOverlay = surfaceViewOverlay.holder
    surfaceHolderOverlay?.setFormat(PixelFormat.TRANSPARENT)
    surfaceHolderCamera?.addCallback(surfaceHolderCallback)
}

private val surfaceHolderCallback = object : SurfaceHolder.Callback {
    override fun surfaceChanged(holder: SurfaceHolder?, format: Int, width: Int, height: Int) {
    }

    override fun surfaceDestroyed(holder: SurfaceHolder?) {
    }

    override fun surfaceCreated(holder: SurfaceHolder?) {
    }
}
7. Now we can create our LensEngine. It is a magic class that handles camera frames for us, and you can apply different settings to it. As you can see in the example, the order of the width and height passed to LensEngine changes according to the orientation. We create our LensEngine inside the surfaceChanged method of surfaceHolderCallback and release it inside surfaceDestroyed. LensEngine needs a surfaceHolder or a surfaceTexture to run on. Here is an example of creating and running a LensEngine.
Code:
private lateinit var mAnalyzer: MLFaceAnalyzer
private lateinit var mLensEngine: LensEngine
private var surfaceHolderCamera: SurfaceHolder? = null
private var surfaceHolderOverlay: SurfaceHolder? = null

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_main)
    mAnalyzer = createAnalyzer()
    prepareViews()
}

private fun prepareViews() {
    surfaceHolderCamera = surfaceViewCamera.holder
    surfaceHolderOverlay = surfaceViewOverlay.holder
    surfaceHolderOverlay?.setFormat(PixelFormat.TRANSPARENT)
    surfaceHolderCamera?.addCallback(surfaceHolderCallback)
}

private val surfaceHolderCallback = object : SurfaceHolder.Callback {
    override fun surfaceChanged(holder: SurfaceHolder?, format: Int, width: Int, height: Int) {
        mLensEngine = createLensEngine(width, height)
        mLensEngine.run(holder)
    }

    override fun surfaceDestroyed(holder: SurfaceHolder?) {
        mLensEngine.release()
    }

    override fun surfaceCreated(holder: SurfaceHolder?) {
    }
}

private fun createLensEngine(width: Int, height: Int): LensEngine {
    val lensEngineCreator = LensEngine.Creator(this, mAnalyzer)
        .applyFps(20F)
        .setLensType(LensEngine.FRONT_LENS)
        .enableAutomaticFocus(true)
    // In portrait orientation the camera's width and height are swapped.
    return if (resources.configuration.orientation == Configuration.ORIENTATION_PORTRAIT) {
        lensEngineCreator.applyDisplayDimension(height, width).create()
    } else {
        lensEngineCreator.applyDisplayDimension(width, height).create()
    }
}
8. We also need somewhere to receive the detected results and interact with them. For this purpose we create a FaceAnalyzerTransactor class; you can name it as you wish. It should implement the MLAnalyzer.MLTransactor<MLFace> interface. We are going to set an overlay of type SurfaceHolder, get our canvas from this overlay, and draw some shapes on the canvas. The required data about the detected face arrives in the transactResult method. Here is a sample implementation of the complete FaceAnalyzerTransactor class.
Code:
class FaceAnalyzerTransactor : MLAnalyzer.MLTransactor<MLFace> {

    private var mOverlay: SurfaceHolder? = null

    fun setOverlay(surfaceHolder: SurfaceHolder) {
        mOverlay = surfaceHolder
    }

    override fun transactResult(result: MLAnalyzer.Result<MLFace>?) {
        draw(result?.analyseList)
    }

    private fun draw(faces: SparseArray<MLFace>?) {
        val canvas = mOverlay?.lockCanvas()
        if (canvas != null && faces != null) {
            // Clear the canvas.
            canvas.drawColor(0, PorterDuff.Mode.CLEAR)
            for (face in faces.valueIterator()) {
                // Draw all 855 points of the face. If the front lens is selected, mirror the x coordinates.
                for (point in face.allPoints) {
                    val x = mOverlay?.surfaceFrame?.right?.minus(point.x)
                    if (x != null) {
                        Paint().also {
                            it.color = Color.YELLOW
                            it.style = Paint.Style.FILL
                            it.strokeWidth = 16F
                            canvas.drawPoint(x, point.y, it)
                        }
                    }
                }
                // Prepare a string showing whether the user smiles or not and draw it on the canvas.
                val smilingString = if (face.emotions.smilingProbability > 0.5) "SMILING" else "NOT SMILING"
                Paint().also {
                    it.color = Color.RED
                    it.textSize = 60F
                    it.textAlign = Paint.Align.CENTER
                    canvas.drawText(smilingString, face.border.exactCenterX(), face.border.exactCenterY(), it)
                }
            }
            mOverlay?.unlockCanvasAndPost(canvas)
        }
    }

    override fun destroy() {
    }
}
9. Create a FaceAnalyzerTransactor instance in MainActivity and use it as shown below.
Code:
private lateinit var mAnalyzer: MLFaceAnalyzer
private lateinit var mLensEngine: LensEngine
private lateinit var mFaceAnalyzerTransactor: FaceAnalyzerTransactor
private var surfaceHolderCamera: SurfaceHolder? = null
private var surfaceHolderOverlay: SurfaceHolder? = null

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_main)
    init()
}

private fun init() {
    mAnalyzer = createAnalyzer()
    mFaceAnalyzerTransactor = FaceAnalyzerTransactor()
    mAnalyzer.setTransactor(mFaceAnalyzerTransactor)
    prepareViews()
}
Also, don't forget to set the overlay of our transactor. We can do this inside the surfaceChanged method like this.
Code:
private val surfaceHolderCallback = object : SurfaceHolder.Callback {
    override fun surfaceChanged(holder: SurfaceHolder?, format: Int, width: Int, height: Int) {
        mLensEngine = createLensEngine(width, height)
        surfaceHolderOverlay?.let { mFaceAnalyzerTransactor.setOverlay(it) }
        mLensEngine.run(holder)
    }

    override fun surfaceDestroyed(holder: SurfaceHolder?) {
        mLensEngine.release()
    }

    override fun surfaceCreated(holder: SurfaceHolder?) {
    }
}
10. We are almost done! Don't forget to ask our users for permissions. We need the CAMERA and WRITE_EXTERNAL_STORAGE permissions; the WRITE_EXTERNAL_STORAGE permission is needed for automatically updating the machine learning model. Add these permissions to your AndroidManifest.xml and ask the user to grant them at runtime. Here is a simple example.
Code:
private val requiredPermissions = arrayOf(Manifest.permission.CAMERA, Manifest.permission.WRITE_EXTERNAL_STORAGE)

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_main)
    if (hasPermissions(requiredPermissions))
        init()
    else
        ActivityCompat.requestPermissions(this, requiredPermissions, 0)
}

private fun hasPermissions(permissions: Array<String>) = permissions.all {
    ContextCompat.checkSelfPermission(this, it) == PackageManager.PERMISSION_GRANTED
}

override fun onRequestPermissionsResult(requestCode: Int, permissions: Array<out String>, grantResults: IntArray) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults)
    if (requestCode == 0 && grantResults.isNotEmpty() && hasPermissions(requiredPermissions))
        init()
}
11. Well done! We have finished all the steps and created our project. Now we can test it. Here are some examples.
12. We have created a simple FaceApp which detects faces, facial features, and emotions. You can produce countless types of face detection apps; it is up to your imagination. ML Kit empowers your apps with the power of AI. If you have any questions, please ask through the link below. You can also find this project on GitHub.
Related Links
Thanks to Oğuzhan Demirci for this article.
Original post: https://medium.com/huawei-developers/build-a-face-detection-app-with-huawei-ml-kit-32caec06484
For more articles like this, you can visit the HUAWEI Developer Forum.
In this article I will tell you about Object Detection with HMS ML Kit first, and then we are going to build an Android application which uses HMS ML Kit to detect and track objects in a camera stream. If you haven't read my last article on detecting objects statically yet, here it is. You can also find introductory information about Artificial Intelligence, Machine Learning, and Huawei ML Kit's capabilities in that article.
The object detection and tracking service can detect and track multiple objects in an image. The detected objects can be located and classified in real time. It is also an ideal choice for filtering out unwanted objects in an image. By the way, Huawei ML Kit provides on-device object detection capabilities, hence it is completely free.
Let's not waste our precious time and start our sample project step by step!
1. If you haven't registered as a Huawei Developer yet, here is the link.
2. Create a project on AppGallery Connect. You can follow the steps shown here.
3. In HUAWEI Developer AppGallery Connect, go to Develop > Manage APIs. Make sure ML Kit is activated.
4. Integrate the ML Kit SDK into your project. Your app-level build.gradle will look like this:
Code:
apply plugin: 'com.android.application'
apply plugin: 'kotlin-android'
apply plugin: 'kotlin-android-extensions'

android {
    compileSdkVersion 29
    buildToolsVersion "29.0.3"

    defaultConfig {
        applicationId "com.demo.objectdetection"
        minSdkVersion 21
        targetSdkVersion 29
        versionCode 1
        versionName "1.0"
        testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
    }

    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
        }
    }

    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }

    kotlinOptions { jvmTarget = "1.8" }
}

dependencies {
    implementation fileTree(dir: 'libs', include: ['*.jar'])
    implementation "org.jetbrains.kotlin:kotlin-stdlib-jdk7:$kotlin_version"
    implementation 'androidx.appcompat:appcompat:1.1.0'
    implementation 'androidx.core:core-ktx:1.3.0'
    implementation 'androidx.constraintlayout:constraintlayout:1.1.3'
    testImplementation 'junit:junit:4.12'
    androidTestImplementation 'androidx.test.ext:junit:1.1.1'
    androidTestImplementation 'androidx.test.espresso:espresso-core:3.2.0'
    // HMS ML Kit
    implementation 'com.huawei.hms:ml-computer-vision:1.0.2.300'
}
and your project-level build.gradle is like this:
Code:
buildscript {
    ext.kotlin_version = '1.3.72'
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:3.6.3'
        classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
        classpath 'com.huawei.agconnect:agcp:1.3.1.300'
    }
}

allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}

task clean(type: Delete) {
    delete rootProject.buildDir
}
5. Create the layout first. There will be two surfaceViews: the first displays our camera stream, and the second is for drawing our canvas. We will draw rectangles around detected objects, write their respective types on the canvas, and show this canvas on the second surfaceView. Here is a sample:
Code:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <SurfaceView
        android:id="@+id/surface_view_camera"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

    <SurfaceView
        android:id="@+id/surface_view_overlay"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

</androidx.constraintlayout.widget.ConstraintLayout>
6. By the way, make sure you set your activity's theme to "Theme.AppCompat.Light.NoActionBar" or similar in res/styles.xml to hide the action bar.
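For reference, a minimal entry in res/values/styles.xml could look like the sketch below (the theme and color names follow the standard Android Studio template; adjust them to your project):
Code:
<!-- res/values/styles.xml: an AppCompat theme without an action bar -->
<style name="AppTheme" parent="Theme.AppCompat.Light.NoActionBar">
    <item name="colorPrimary">@color/colorPrimary</item>
    <item name="colorPrimaryDark">@color/colorPrimaryDark</item>
    <item name="colorAccent">@color/colorAccent</item>
</style>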
7.1. There are two important classes that help us detect objects with HMS ML Kit: MLObjectAnalyzer and LensEngine. MLObjectAnalyzer detects object information (MLObject) in an image, and we can customize it using MLObjectAnalyzerSetting. Here is our createAnalyzer method:
Code:
private fun createAnalyzer(): MLObjectAnalyzer {
    val analyzerSetting = MLObjectAnalyzerSetting.Factory()
        .setAnalyzerType(MLObjectAnalyzerSetting.TYPE_VIDEO)
        .allowMultiResults()
        .allowClassification()
        .create()
    return MLAnalyzerFactory.getInstance().getLocalObjectAnalyzer(analyzerSetting)
}
7.2. The other important class we are using today is LensEngine. LensEngine is responsible for camera initialization, frame obtaining, and logic control functions. Here is our createLensEngine method:
Code:
private fun createLensEngine(orientation: Int): LensEngine {
    val lensEngineCreator = LensEngine.Creator(applicationContext, mAnalyzer)
        .setLensType(LensEngine.BACK_LENS)
        .applyFps(10F)
        .enableAutomaticFocus(true)
    // In portrait orientation the display's width and height are swapped.
    return when (orientation) {
        Configuration.ORIENTATION_PORTRAIT ->
            lensEngineCreator.applyDisplayDimension(getDisplayMetrics().heightPixels, getDisplayMetrics().widthPixels).create()
        else ->
            lensEngineCreator.applyDisplayDimension(getDisplayMetrics().widthPixels, getDisplayMetrics().heightPixels).create()
    }
}
8. Well, LensEngine handles the camera frames and MLObjectAnalyzer detects MLObjects in those frames. Now we need to create our ObjectAnalyzerTransactor class, which implements the MLAnalyzer.MLTransactor<MLObject> interface. The detected MLObjects are delivered to the transactResult method of this class. I will share our ObjectAnalyzerTransactor class here with an additional draw method that draws rectangles and some text around the detected objects.
Code:
package com.demo.objectdetection

import android.graphics.Color
import android.graphics.Paint
import android.graphics.PorterDuff
import android.util.Log
import android.util.SparseArray
import android.view.SurfaceHolder
import androidx.core.util.forEach
import androidx.core.util.valueIterator
import com.huawei.hms.mlsdk.common.MLAnalyzer
import com.huawei.hms.mlsdk.objects.MLObject

class ObjectAnalyzerTransactor : MLAnalyzer.MLTransactor<MLObject> {

    companion object {
        private const val TAG = "ML_ObAnalyzerTransactor"
    }

    private var mSurfaceHolderOverlay: SurfaceHolder? = null

    fun setSurfaceHolderOverlay(surfaceHolder: SurfaceHolder) {
        mSurfaceHolderOverlay = surfaceHolder
    }

    override fun transactResult(results: MLAnalyzer.Result<MLObject>?) {
        val items = results?.analyseList
        items?.forEach { key, value ->
            Log.d(TAG, "transactResult -> " +
                    "Border: ${value.border} " +                    // Rectangle around this object
                    "Type Possibility: ${value.typePossibility} " + // Possibility between 0-1
                    "Tracing Identity: ${value.tracingIdentity} " + // Tracing number of this object
                    "Type Identity: ${value.typeIdentity}")         // Furniture, Plant, Food etc.
        }
        items?.also {
            draw(it)
        }
    }

    private fun draw(items: SparseArray<MLObject>) {
        val canvas = mSurfaceHolderOverlay?.lockCanvas()
        if (canvas != null) {
            // Clear the canvas first.
            canvas.drawColor(0, PorterDuff.Mode.CLEAR)
            for (item in items.valueIterator()) {
                val type = getItemType(item)
                // Draw a rectangle around the detected object.
                val rectangle = item.border
                Paint().also {
                    it.color = Color.YELLOW
                    it.style = Paint.Style.STROKE
                    it.strokeWidth = 8F
                    canvas.drawRect(rectangle, it)
                }
                // Draw the object's type on the upper left corner of the detected object.
                Paint().also {
                    it.color = Color.BLACK
                    it.style = Paint.Style.FILL
                    it.textSize = 24F
                    canvas.drawText(type, rectangle.left.toFloat(), rectangle.top.toFloat(), it)
                }
            }
            // Post the canvas only when it was successfully locked.
            mSurfaceHolderOverlay?.unlockCanvasAndPost(canvas)
        }
    }

    private fun getItemType(item: MLObject) = when (item.typeIdentity) {
        MLObject.TYPE_OTHER -> "Other"
        MLObject.TYPE_FACE -> "Face"
        MLObject.TYPE_FOOD -> "Food"
        MLObject.TYPE_FURNITURE -> "Furniture"
        MLObject.TYPE_PLACE -> "Place"
        MLObject.TYPE_PLANT -> "Plant"
        MLObject.TYPE_GOODS -> "Goods"
        else -> "No match"
    }

    override fun destroy() {
        Log.d(TAG, "destroy")
    }
}
9. Our lensEngine needs a surfaceHolder to run on, therefore we will start it when our surfaceHolder is ready. Here is our callback:
Code:
private val surfaceHolderCallback = object : SurfaceHolder.Callback {
    override fun surfaceChanged(holder: SurfaceHolder?, format: Int, width: Int, height: Int) {
        mLensEngine.close()
        init()
        mLensEngine.run(holder)
    }

    override fun surfaceDestroyed(holder: SurfaceHolder?) {
        mLensEngine.release()
    }

    override fun surfaceCreated(holder: SurfaceHolder?) {
        mLensEngine.run(holder)
    }
}
10. We require the CAMERA and WRITE_EXTERNAL_STORAGE permissions. Make sure you add them to your AndroidManifest.xml file and ask the user for them at runtime. For the sake of simplicity we do it as shown below:
Code:
class MainActivity : AppCompatActivity() {

    companion object {
        private const val PERMISSION_REQUEST_CODE = 8
        private val requiredPermissions = arrayOf(Manifest.permission.CAMERA, Manifest.permission.WRITE_EXTERNAL_STORAGE)
    }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState) // Don't forget the super call and the layout.
        setContentView(R.layout.activity_main)
        if (hasPermissions(requiredPermissions))
            init()
        else
            ActivityCompat.requestPermissions(this, requiredPermissions, PERMISSION_REQUEST_CODE)
    }

    private fun hasPermissions(permissions: Array<String>) = permissions.all {
        ContextCompat.checkSelfPermission(this, it) == PackageManager.PERMISSION_GRANTED
    }

    override fun onRequestPermissionsResult(requestCode: Int, permissions: Array<out String>, grantResults: IntArray) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults)
        if (requestCode == PERMISSION_REQUEST_CODE && hasPermissions(requiredPermissions))
            init()
    }
}
11. Let's bring all the pieces together. Here is our MainActivity.
Code:
package com.demo.objectdetection

import android.Manifest
import android.content.Context
import android.content.pm.PackageManager
import android.content.res.Configuration
import android.graphics.PixelFormat
import android.os.Bundle
import android.util.DisplayMetrics
import android.view.SurfaceHolder
import android.view.WindowManager
import androidx.appcompat.app.AppCompatActivity
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat
import com.huawei.hms.mlsdk.MLAnalyzerFactory
import com.huawei.hms.mlsdk.common.LensEngine
import com.huawei.hms.mlsdk.objects.MLObjectAnalyzer
import com.huawei.hms.mlsdk.objects.MLObjectAnalyzerSetting
import kotlinx.android.synthetic.main.activity_main.*

class MainActivity : AppCompatActivity() {

    companion object {
        private const val TAG = "ML_MainActivity"
        private const val PERMISSION_REQUEST_CODE = 8
        private val requiredPermissions = arrayOf(Manifest.permission.CAMERA, Manifest.permission.WRITE_EXTERNAL_STORAGE)
    }

    private lateinit var mAnalyzer: MLObjectAnalyzer
    private lateinit var mLensEngine: LensEngine
    private lateinit var mSurfaceHolderCamera: SurfaceHolder
    private lateinit var mSurfaceHolderOverlay: SurfaceHolder
    private lateinit var mObjectAnalyzerTransactor: ObjectAnalyzerTransactor

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        if (hasPermissions(requiredPermissions))
            init()
        else
            ActivityCompat.requestPermissions(this, requiredPermissions, PERMISSION_REQUEST_CODE)
    }

    private fun init() {
        mAnalyzer = createAnalyzer()
        mLensEngine = createLensEngine(resources.configuration.orientation)
        mSurfaceHolderCamera = surface_view_camera.holder
        mSurfaceHolderOverlay = surface_view_overlay.holder
        mSurfaceHolderOverlay.setFormat(PixelFormat.TRANSPARENT)
        mSurfaceHolderCamera.addCallback(surfaceHolderCallback)
        mObjectAnalyzerTransactor = ObjectAnalyzerTransactor()
        mObjectAnalyzerTransactor.setSurfaceHolderOverlay(mSurfaceHolderOverlay)
        mAnalyzer.setTransactor(mObjectAnalyzerTransactor)
    }

    private fun createAnalyzer(): MLObjectAnalyzer {
        val analyzerSetting = MLObjectAnalyzerSetting.Factory()
            .setAnalyzerType(MLObjectAnalyzerSetting.TYPE_VIDEO)
            .allowMultiResults()
            .allowClassification()
            .create()
        return MLAnalyzerFactory.getInstance().getLocalObjectAnalyzer(analyzerSetting)
    }

    private fun createLensEngine(orientation: Int): LensEngine {
        val lensEngineCreator = LensEngine.Creator(applicationContext, mAnalyzer)
            .setLensType(LensEngine.BACK_LENS)
            .applyFps(10F)
            .enableAutomaticFocus(true)
        return when (orientation) {
            Configuration.ORIENTATION_PORTRAIT ->
                lensEngineCreator.applyDisplayDimension(getDisplayMetrics().heightPixels, getDisplayMetrics().widthPixels).create()
            else ->
                lensEngineCreator.applyDisplayDimension(getDisplayMetrics().widthPixels, getDisplayMetrics().heightPixels).create()
        }
    }

    private val surfaceHolderCallback = object : SurfaceHolder.Callback {
        override fun surfaceChanged(holder: SurfaceHolder?, format: Int, width: Int, height: Int) {
            mLensEngine.close()
            init()
            mLensEngine.run(holder)
        }

        override fun surfaceDestroyed(holder: SurfaceHolder?) {
            mLensEngine.release()
        }

        override fun surfaceCreated(holder: SurfaceHolder?) {
            mLensEngine.run(holder)
        }
    }

    override fun onDestroy() {
        super.onDestroy()
        // Release resources.
        mAnalyzer.stop()
        mLensEngine.release()
    }

    private fun getDisplayMetrics() = DisplayMetrics().let {
        (getSystemService(Context.WINDOW_SERVICE) as WindowManager).defaultDisplay.getMetrics(it)
        it
    }

    private fun hasPermissions(permissions: Array<String>) = permissions.all {
        ContextCompat.checkSelfPermission(this, it) == PackageManager.PERMISSION_GRANTED
    }

    override fun onRequestPermissionsResult(requestCode: Int, permissions: Array<out String>, grantResults: IntArray) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults)
        if (requestCode == PERMISSION_REQUEST_CODE && hasPermissions(requiredPermissions))
            init()
    }
}
12. In summary, we used LensEngine to handle the camera frames for us and displayed the frames on our first surfaceView. MLObjectAnalyzer then analyzed those frames, and the detected objects arrived in transactResult of our ObjectAnalyzerTransactor class, where we iterated through all detected objects and drew them on our second surfaceView, which we used as an overlay. Here is the output:
Image segmentation is a widely used term in the image processing and computer vision world. It is the process of partitioning a digital image into multiple segments: each pixel is labeled as belonging to one of a set of predefined subgroups. By applying image segmentation we get a more meaningful image, in which each pixel is represented by a category such as human body, sky, plant, or food.
Image segmentation can be used in many different scenarios. It can be used in photography apps to change the background or apply an effect only on plants or the human body. It can also be used to find cancerous cells in a microscope image, to extract land-usage information from a satellite image, or to determine the amount of herbicide that needs to be sprayed in a field according to crop density.
Huawei ML Kit's image segmentation service segments elements of the same type (such as human body, plant, and sky) in an image. The supported elements include human body, sky, plant, food, cat, dog, flower, water, sand, building, mountain, and others. By the way, Huawei ML Kit works on all Android phones with ARM architecture, and as it is a device-side capability it is free.
ML Kit image segmentation offers developers two types of segmentation: human body and multiclass. With the human body type we can apply segmentation to both static images and video streams, but multiclass segmentation only supports static images.
In multiclass image segmentation the return value of the process is the coordinate array of each element. For instance, if an image consists of human body, sky, plant, and cat, the return value is the coordinate array of the four elements. After this point our app can apply different effects to the elements; for example, we can change a blue sky to a red one.
The return values of human body segmentation include the coordinate array of the human body, human body image with a transparent background, and gray-scale image with a white human body and black background. Our app can further process the elements based on the return values, such as replacing the video background and cutting out the human body from an image.
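To make those return values concrete, here is a minimal sketch of consuming a human body segmentation result (the original and foreground properties also appear in the code later in this article; grayscale is assumed from the description above, so verify it against your SDK version).
Code:
// A sketch of consuming a human-body segmentation result (MLImageSegmentation).
private fun handleResult(segmentation: MLImageSegmentation) {
    val original: Bitmap = segmentation.original // the input image
    val cutOut: Bitmap = segmentation.foreground // human body with a transparent background
    val mask: Bitmap = segmentation.grayscale    // white human body on a black background
    // We can now draw `cutOut` over any background of our choice.
}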
Today we are going to build a Background Eraser app in Kotlin on Android Studio, using the human body segmentation function of ML Kit. In this example we will first learn how to integrate HMS ML Kit into our project and then see how to apply segmentation to images simply.
In the application we will have three imageViews: one for the background image, one for the selected image (supposed to contain a human body), and one for the processed image. We will extract the human body (or bodies) from the selected image and replace its background with the selected background. It will be really simple, just to show how to apply this function in an application; I won't bother you much with details. After we develop this simple application, you can go ahead and add various capabilities to your own app.
Let’s start building our demo application step by step from scratch!
1. Firstly, let's create our project on Android Studio. I named my project Background Eraser; it's totally up to you. We can create a project by selecting the Empty Activity option and then follow the steps described on this page to create and sign our project in AppGallery Connect.
2. Secondly, in HUAWEI Developer AppGallery Connect, go to Develop > Manage APIs and make sure ML Kit is activated.
3. Now we have integrated Huawei Mobile Services (HMS) into our project. Let's follow the documentation on developer.huawei.com and find the packages to add to our project. On the website click Developer > HMS Core > AI > ML Kit. Here you will find introductory information about the services, references, SDKs to download, and more. Under the ML Kit tab follow Android > Getting Started > Integrating HMS Core SDK > Adding Build Dependencies > Integrating the Image Segmentation SDK. We can follow the guide here to add the image segmentation capability to our project. As we are not going to use multiclass segmentation in this project, we only add the base SDK and the human body segmentation package. We also have one meta-data tag to add to our AndroidManifest.xml file. After the integration your app-level build.gradle file will look like this.
Code:
apply plugin: 'com.android.application'
apply plugin: 'kotlin-android'
apply plugin: 'kotlin-android-extensions'
apply plugin: 'com.huawei.agconnect'

android {
    compileSdkVersion 30
    buildToolsVersion "30.0.1"

    defaultConfig {
        applicationId "com.demo.backgrounderaser"
        minSdkVersion 23
        targetSdkVersion 30
        versionCode 1
        versionName "1.0"
        testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
    }

    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
        }
    }
}

dependencies {
    implementation fileTree(dir: "libs", include: ["*.jar"])
    implementation "org.jetbrains.kotlin:kotlin-stdlib:$kotlin_version"
    implementation 'androidx.core:core-ktx:1.3.1'
    implementation 'androidx.appcompat:appcompat:1.2.0'
    implementation 'androidx.constraintlayout:constraintlayout:1.1.3'
    testImplementation 'junit:junit:4.12'
    androidTestImplementation 'androidx.test.ext:junit:1.1.1'
    androidTestImplementation 'androidx.test.espresso:espresso-core:3.2.0'
    // AGC Core
    implementation 'com.huawei.agconnect:agconnect-core:1.3.1.300'
    // Image Segmentation base SDK
    implementation 'com.huawei.hms:ml-computer-vision-segmentation:2.0.2.300'
    // Image Segmentation human body model
    implementation 'com.huawei.hms:ml-computer-vision-image-segmentation-body-model:2.0.2.300'
}
And your project-level build.gradle file will look like this.
Code:
buildscript {
    ext.kotlin_version = "1.4.0"
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        classpath "com.android.tools.build:gradle:4.0.1"
        classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
        classpath 'com.huawei.agconnect:agcp:1.3.1.300'
    }
}

allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}

task clean(type: Delete) {
    delete rootProject.buildDir
}
Don't forget to add the following meta-data tag in your AndroidManifest.xml. It enables automatic update of the machine learning model.
Code:
<manifest ... >
    <application ... >
        <meta-data
            android:name="com.huawei.hms.ml.DEPENDENCY"
            android:value="imgseg" />
    </application>
</manifest>
4. We can apply human body segmentation to static images or video streams; in this project we are going to see how to do it on static images. To apply segmentation to images we first need to create our analyzer. The setExact() method determines whether we apply fine segmentation; I chose true here. As the analyzer type we chose BODY_SEG, the other option being IMAGE_SEG for multiclass segmentation. The setScene() method determines the result type. Here we chose FOREGROUND_ONLY, meaning a human body image with a transparent background will be returned together with the original image used for segmentation. Here is how to create it.
Code:
private lateinit var mAnalyzer: MLImageSegmentationAnalyzer

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_main)
    createAnalyzer()
}

private fun createAnalyzer(): MLImageSegmentationAnalyzer {
    val analyzerSetting = MLImageSegmentationSetting.Factory()
        .setExact(true)
        .setAnalyzerType(MLImageSegmentationSetting.BODY_SEG)
        .setScene(MLImageSegmentationScene.FOREGROUND_ONLY)
        .create()
    return MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer(analyzerSetting).also {
        mAnalyzer = it
    }
}
5. Create a simple layout. It should contain three imageViews and two buttons: one imageView for the background image, one for the image with a human body to apply segmentation to, and one for displaying the processed image; one button for selecting the background image and the other for selecting the image to be processed. Here is my example; you can reach the same result with a different design, too.
Code:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <androidx.constraintlayout.widget.Guideline
        android:id="@+id/guidelineTopHorizontal"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        app:layout_constraintGuide_percent="0.3"
        android:orientation="horizontal" />

    <androidx.constraintlayout.widget.Guideline
        android:id="@+id/guidelineBottomHorizontal"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        app:layout_constraintGuide_percent="0.9"
        android:orientation="horizontal" />

    <androidx.constraintlayout.widget.Guideline
        android:id="@+id/guidelineCenterVertical"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        app:layout_constraintGuide_percent="0.5"
        android:orientation="vertical" />

    <ImageView
        android:id="@+id/ivBackgroundFill"
        android:layout_width="0dp"
        android:layout_height="0dp"
        android:padding="@dimen/margin_m"
        android:contentDescription="@string/image_to_fill_background"
        app:layout_constraintBottom_toBottomOf="@id/guidelineTopHorizontal"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintEnd_toEndOf="@id/guidelineCenterVertical"
        app:layout_constraintTop_toTopOf="parent" />

    <ImageView
        android:id="@+id/ivSelectedBitmap"
        android:layout_width="0dp"
        android:layout_height="0dp"
        android:padding="@dimen/margin_m"
        android:contentDescription="@string/image_to_fill_background"
        app:layout_constraintBottom_toBottomOf="@id/guidelineTopHorizontal"
        app:layout_constraintStart_toStartOf="@id/guidelineCenterVertical"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintTop_toTopOf="parent" />

    <ImageView
        android:id="@+id/ivProcessedBitmap"
        android:layout_width="0dp"
        android:layout_height="0dp"
        android:padding="@dimen/margin_m"
        android:contentDescription="@string/image_to_fill_background"
        app:layout_constraintBottom_toBottomOf="@id/guidelineBottomHorizontal"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintTop_toTopOf="@id/guidelineTopHorizontal" />

    <Button
        android:id="@+id/buttonSelectBackground"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="@string/select_background"
        android:contentDescription="@string/image_to_fill_background"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintEnd_toEndOf="@id/guidelineCenterVertical"
        app:layout_constraintTop_toTopOf="@id/guidelineBottomHorizontal" />

    <Button
        android:id="@+id/buttonPickImage"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="@string/pick_image"
        android:contentDescription="@string/image_to_fill_background"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintStart_toStartOf="@id/guidelineCenterVertical"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintTop_toTopOf="@id/guidelineBottomHorizontal" />

</androidx.constraintlayout.widget.ConstraintLayout>
6. In order to get our background image and source image we can use intents. The OnClickListeners of our buttons will get images via intents from device storage, as shown below. In onActivityResult we assign the bitmaps we get to the corresponding variables: mBackgroundFill and mSelectedBitmap. Here the WRITE_EXTERNAL_STORAGE permission is requested to automatically update Huawei ML Kit's image segmentation model.
Code:
class MainActivity : AppCompatActivity() {

    companion object {
        private const val IMAGE_REQUEST_CODE = 58
        private const val BACKGROUND_REQUEST_CODE = 32
    }

    private lateinit var mAnalyzer: MLImageSegmentationAnalyzer
    private var mBackgroundFill: Bitmap? = null
    private var mSelectedBitmap: Bitmap? = null
    private var mProcessedBitmap: Bitmap? = null

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        if (ContextCompat.checkSelfPermission(this, Manifest.permission.WRITE_EXTERNAL_STORAGE) == PackageManager.PERMISSION_GRANTED)
            init()
        else
            ActivityCompat.requestPermissions(this, arrayOf(Manifest.permission.WRITE_EXTERNAL_STORAGE), 0)
    }

    override fun onRequestPermissionsResult(requestCode: Int, permissions: Array<out String>, grantResults: IntArray) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults)
        if (requestCode == 0 && grantResults.isNotEmpty() && grantResults[0] == PackageManager.PERMISSION_GRANTED)
            init()
    }

    private fun init() {
        initView()
        createAnalyzer()
    }

    private fun initView() {
        buttonPickImage.setOnClickListener { getImage(IMAGE_REQUEST_CODE) }
        buttonSelectBackground.setOnClickListener { getImage(BACKGROUND_REQUEST_CODE) }
    }

    private fun getImage(requestCode: Int) {
        Intent(Intent.ACTION_GET_CONTENT).also {
            it.type = "image/*"
            startActivityForResult(it, requestCode)
        }
    }

    override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
        super.onActivityResult(requestCode, resultCode, data)
        if (resultCode == Activity.RESULT_OK) {
            data?.data?.also {
                when (requestCode) {
                    IMAGE_REQUEST_CODE -> {
                        mSelectedBitmap = MediaStore.Images.Media.getBitmap(contentResolver, it)
                        if (mSelectedBitmap != null) {
                            ivSelectedBitmap.setImageBitmap(mSelectedBitmap)
                        }
                    }
                    BACKGROUND_REQUEST_CODE -> {
                        mBackgroundFill = MediaStore.Images.Media.getBitmap(contentResolver, it)
                        if (mBackgroundFill != null) {
                            ivBackgroundFill.setImageBitmap(mBackgroundFill)
                        }
                    }
                }
            }
        }
    }
}
7. Now let's create our method for analyzing images. It is very simple: it takes a bitmap as a parameter, creates an MLFrame object out of it, and asynchronously analyzes this MLFrame, with OnSuccess and OnFailure callbacks. We will try to add the selected background if the analysis succeeds. Please remember that we chose FOREGROUND_ONLY as our analyzer's result type, so expect it to return the original image together with a human body image with a transparent background.
Code:
private fun analyse(bitmap: Bitmap) {
    val mlFrame = MLFrame.fromBitmap(bitmap)
    mAnalyzer.asyncAnalyseFrame(mlFrame)
        .addOnSuccessListener {
            addSelectedBackground(it)
        }
        .addOnFailureListener {
            Log.e(TAG, "analyse -> asyncAnalyseFrame: ", it)
        }
}
8. To draw the human body part onto our background image we should: make sure we have a background image first; obtain a mutable bitmap of the background to work on; resize this mutable bitmap according to the selected bitmap to make the result more realistic; create a canvas from the mutable bitmap; draw the human body part on the canvas; and, lastly, use the processed image. Here is our addSelectedBackground() method.
Code:
private fun addSelectedBackground(mlImageSegmentation: MLImageSegmentation) {
    if (mBackgroundFill == null) {
        Toast.makeText(applicationContext, "Please select a background image!", Toast.LENGTH_SHORT).show()
    } else {
        var mutableBitmap = if (mBackgroundFill!!.isMutable) {
            mBackgroundFill
        } else {
            mBackgroundFill!!.copy(Bitmap.Config.ARGB_8888, true)
        }
        if (mutableBitmap != null) {
            /*
             * If the background image size differs from the selected image,
             * resize the background image to match the selected image.
             */
            if (mutableBitmap.width != mlImageSegmentation.original.width ||
                mutableBitmap.height != mlImageSegmentation.original.height) {
                mutableBitmap = Bitmap.createScaledBitmap(
                    mutableBitmap,
                    mlImageSegmentation.original.width,
                    mlImageSegmentation.original.height,
                    false)
            }
            val canvas = mutableBitmap?.let { Canvas(it) }
            canvas?.drawBitmap(mlImageSegmentation.foreground, 0F, 0F, null)
            mProcessedBitmap = mutableBitmap
            ivProcessedBitmap.setImageBitmap(mProcessedBitmap)
        }
    }
}
9. Here is the complete MainActivity. You can find this project on GitHub, too.
Code:
class MainActivity : AppCompatActivity() {

    companion object {
        private const val TAG = "BE_MainActivity"
        private const val IMAGE_REQUEST_CODE = 58
        private const val BACKGROUND_REQUEST_CODE = 32
    }

    private lateinit var mAnalyzer: MLImageSegmentationAnalyzer
    private var mBackgroundFill: Bitmap? = null
    private var mSelectedBitmap: Bitmap? = null
    private var mProcessedBitmap: Bitmap? = null

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        if (ContextCompat.checkSelfPermission(this, Manifest.permission.WRITE_EXTERNAL_STORAGE) == PackageManager.PERMISSION_GRANTED)
            init()
        else
            ActivityCompat.requestPermissions(this, arrayOf(Manifest.permission.WRITE_EXTERNAL_STORAGE), 0)
    }

    override fun onRequestPermissionsResult(requestCode: Int, permissions: Array<out String>, grantResults: IntArray) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults)
        if (requestCode == 0 && grantResults.isNotEmpty() && grantResults[0] == PackageManager.PERMISSION_GRANTED)
            init()
    }

    private fun init() {
        initView()
        createAnalyzer()
    }

    private fun initView() {
        buttonPickImage.setOnClickListener { getImage(IMAGE_REQUEST_CODE) }
        buttonSelectBackground.setOnClickListener { getImage(BACKGROUND_REQUEST_CODE) }
    }

    private fun createAnalyzer(): MLImageSegmentationAnalyzer {
        val analyzerSetting = MLImageSegmentationSetting.Factory()
            .setExact(true)
            .setAnalyzerType(MLImageSegmentationSetting.BODY_SEG)
            .setScene(MLImageSegmentationScene.FOREGROUND_ONLY)
            .create()
        return MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer(analyzerSetting).also {
            mAnalyzer = it
        }
    }

    private fun analyse(bitmap: Bitmap) {
        val mlFrame = MLFrame.fromBitmap(bitmap)
        mAnalyzer.asyncAnalyseFrame(mlFrame)
            .addOnSuccessListener {
                addSelectedBackground(it)
            }
            .addOnFailureListener {
                Log.e(TAG, "analyse -> asyncAnalyseFrame: ", it)
            }
    }

    private fun addSelectedBackground(mlImageSegmentation: MLImageSegmentation) {
        if (mBackgroundFill == null) {
            Toast.makeText(applicationContext, "Please select a background image!", Toast.LENGTH_SHORT).show()
        } else {
            var mutableBitmap = if (mBackgroundFill!!.isMutable) {
                mBackgroundFill
            } else {
                mBackgroundFill!!.copy(Bitmap.Config.ARGB_8888, true)
            }
            if (mutableBitmap != null) {
                /*
                 * If the background image size differs from the selected image,
                 * resize the background image to match the selected image.
                 */
                if (mutableBitmap.width != mlImageSegmentation.original.width ||
                    mutableBitmap.height != mlImageSegmentation.original.height) {
                    mutableBitmap = Bitmap.createScaledBitmap(
                        mutableBitmap,
                        mlImageSegmentation.original.width,
                        mlImageSegmentation.original.height,
                        false)
                }
                val canvas = mutableBitmap?.let { Canvas(it) }
                canvas?.drawBitmap(mlImageSegmentation.foreground, 0F, 0F, null)
                mProcessedBitmap = mutableBitmap
                ivProcessedBitmap.setImageBitmap(mProcessedBitmap)
            }
        }
    }

    private fun getImage(requestCode: Int) {
        Intent(Intent.ACTION_GET_CONTENT).also {
            it.type = "image/*"
            startActivityForResult(it, requestCode)
        }
    }

    override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
        super.onActivityResult(requestCode, resultCode, data)
        if (resultCode == Activity.RESULT_OK) {
            data?.data?.also {
                when (requestCode) {
                    IMAGE_REQUEST_CODE -> {
                        mSelectedBitmap = MediaStore.Images.Media.getBitmap(contentResolver, it)
                        if (mSelectedBitmap != null) {
                            ivSelectedBitmap.setImageBitmap(mSelectedBitmap)
                            analyse(mSelectedBitmap!!)
                        }
                    }
                    BACKGROUND_REQUEST_CODE -> {
                        mBackgroundFill = MediaStore.Images.Media.getBitmap(contentResolver, it)
                        if (mBackgroundFill != null) {
                            ivBackgroundFill.setImageBitmap(mBackgroundFill)
                            mSelectedBitmap?.let { analyse(it) }
                        }
                    }
                }
            }
        }
    }
}
10. Well done! We have finished all the steps and created our project.
11. We have created a simple Background Eraser app. We can get human body related pixels out of our image and apply different backgrounds to it. This project is to show you the basics. You can go further, create much better projects and come up with various ideas to apply image segmentation to. I hope you enjoyed this article and created your project easily. If you have any questions please ask.
For more information like this, you can visit the HUAWEI Developer Forum.
Introduction
The Huawei Cab Application explores HMS Kits in a real-time scenario. We can use this app as a reference for CPs (partner developers) during HMS integration, so they can easily understand how HMS works in a real-world app; refer to the previous article.
Dashboard Module
The Dashboard page allows a user to book a cab with the help of the Huawei Location, Map, and Site Kits, as follows:
Location Kit provides the current location and location updates, offering flexible location-based services to users globally.
Huawei Map Kit displays maps and covers map data of more than 200 countries and regions, letting users search for any location address.
Site Kit provides users with convenient and secure access to diverse, place-related services.
Kits covered in Dashboard Module:
1. Location kit
2. Map kit
3. Site kit
4. Ads kit
Third-Party APIs:
1. TomTom for Directions API
App Screenshots:
Auto-fetch the current location, with a customized current-location button, on the Huawei map.
Pick the destination you want to travel to with a marker on the Huawei map view.
Pick the destination using HMS Site Kit.
Draw the route between the start and end locations.
Huawei Location Kit
HUAWEI Location Kit combines GPS, Wi-Fi, and base station location functionalities in your app to build global positioning capabilities, allowing you to provide flexible location-based services targeted at users around the globe. Currently it provides three main capabilities: fused location, activity identification, and geofencing. You can call one or more of these capabilities as needed.
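As a quick illustration of the fused location capability, requesting the last known location takes only a few lines (a minimal sketch; the classes come from com.huawei.hms.location, and TAG is a placeholder):
Code:
// Obtain the fused location provider client and ask for the last known location.
val fusedLocationClient = LocationServices.getFusedLocationProviderClient(this)
fusedLocationClient.lastLocation
    .addOnSuccessListener { location ->
        Log.d(TAG, "Last location: ${location?.latitude}, ${location?.longitude}")
    }
    .addOnFailureListener { e ->
        Log.e(TAG, "Failed to get last location", e)
    }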
Huawei Map Kit
The HMS Core Map SDK is a set of APIs for map development in Android. The map data covers most countries outside China and supports multiple languages. The Map SDK uses the WGS 84 GPS coordinate system, which can meet most requirements of map development outside China. You can easily add map-related functions in your Android app, including:
Map display: Displays buildings, roads, water systems, and Points of Interest (POIs).
Map interaction: Controls the interaction gestures and buttons on the map.
Map drawing: Adds location markers, map layers, overlays, and various shapes (a small marker sketch follows this list).
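For example, dropping a marker on the map once it is ready takes only a few lines (a minimal sketch using com.huawei.hms.maps.model classes; the coordinates and title are placeholders):
Code:
// Add a simple marker to the HuaweiMap, e.g. inside onMapReady().
val position = LatLng(51.0, 10.0)
hMap?.addMarker(
    MarkerOptions()
        .position(position)
        .title("Pickup point")
)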
Site Kit
HUAWEI Site Kit provides users with convenient and secure access to diverse, place-related services.
Ads Kit
Huawei Ads provides developers extensive data capabilities to deliver high quality ad content to users. By integrating HMS Ads Kit we can start earning right away. It is particularly useful when we are publishing a free app and want to earn some money from it.
Integrating HMS Ads Kit does not take more than 10 minutes. It currently offers five ad formats: Banner, Native, Rewarded, Interstitial, and Splash ads.
In this article we will also see a banner ad.
Banner Ad: Banner ads are rectangular images that occupy a spot at the top, middle, or bottom of an app's layout. They refresh automatically at regular intervals, and when a user taps a banner ad, the user is in most cases redirected to the advertiser's page.
TomTom Directions API
TomTom Technology for a moving world. Meet the leading independent location, navigation and map technology specialist.
Follow these steps for the TomTom Directions API:
Step 1: Visit the TomTom Developer Portal: https://developer.tomtom.com/
Step 2: Log in or sign up.
Step 3: Click PRODUCTS and select Directions API.
Step 4: Open the Routing API documentation: https://developer.tomtom.com/routing-api/routing-api-documentation-routing/calculate-route
Step 5: Click MY DASHBOARD, copy your key, and use it in your application, as sketched below.
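To give a feel for the request, the Calculate Route endpoint takes a colon-separated start and end coordinate pair plus the key from your dashboard (a minimal sketch; the helper name and coordinates are ours, and the URL shape follows the routing documentation linked above):
Code:
// Hypothetical helper that builds a TomTom Calculate Route request URL.
// The JSON response contains route legs and points that can be drawn as a polyline.
private fun buildRouteUrl(
    startLat: Double, startLng: Double,
    endLat: Double, endLng: Double,
    apiKey: String
): String =
    "https://api.tomtom.com/routing/1/calculateRoute/" +
            "$startLat,$startLng:$endLat,$endLng/json?key=$apiKey"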
Integration Preparations
To integrate HUAWEI Map, Site, Location and Ads Kit, you must complete the following preparations:
Create an app in AppGallery Connect.
Create a project in Android Studio.
Generate a signing certificate.
Generate a signing certificate fingerprint.
Configure the signing certificate fingerprint.
Add the app package name and save the configuration file.
Add the AppGallery Connect plug-in and the Maven repository in the project-level build.gradle file.
Configure the signature file in Android Studio.
Configuring the Development Environment
Enabling HUAWEI Map, Site, Location and Ads Kit.
1. Sign in to AppGallery Connect, select My apps, click an app, and navigate to Develop > Overview > Manage APIs.
2. Click agconnect-services.json to download the configuration file.
3. Copy the agconnect-services.json file to the app's root directory.
4. Open the build.gradle file in the root directory of your Android Studio project.
5. Configure the following information in the build.gradle file.
Code:
buildscript {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        classpath 'com.huawei.agconnect:agcp:1.3.2.301'
    }
}

allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
6. Open the build.gradle file in the app directory.
7. Add the dependencies to the app-level build.gradle.
Code:
apply plugin: 'com.huawei.agconnect'

dependencies {
    implementation 'com.huawei.hms:maps:5.0.1.300'
    implementation 'com.huawei.hms:location:5.0.0.302'
    implementation 'com.huawei.hms:site:5.0.0.300'
    implementation 'com.huawei.hms:ads-lite:13.4.30.307'
}
8. Declare the relevant permissions at the same level as the application element in the AndroidManifest.xml file.
Code:
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION"/>
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>
<uses-permission android:name="android.permission.INTERNET"/>
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>
9. Add the Huawei MapView and a banner ad to the layout file activity_mapview.xml.
Code:
<!--MapView (note the map namespace declaration, which the original snippet omitted)-->
<com.huawei.hms.maps.MapView
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:map="http://schemas.android.com/apk/res-auto"
    android:id="@+id/mapview"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    map:cameraTargetLat="51"
    map:cameraTargetLng="10"
    map:cameraZoom="8.5"
    map:mapType="normal"
    map:uiCompass="true"
    map:uiZoomControls="true" />
<!--BannerView-->
<com.huawei.hms.ads.banner.BannerView
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:hwads="http://schemas.android.com/apk/res-auto"
    android:id="@+id/hw_banner_view"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:layout_alignParentTop="true"
    hwads:adId="@string/banner_ad_id"
    hwads:bannerSize="BANNER_SIZE_360_57" />
10. Add the configuration for calling the MapView to the activity.
Code:
class DashboardFragment : Fragment(), OnMapReadyCallback, ReverseGeoCodeListener {

    private var hMap: HuaweiMap? = null
    private var mMapView: MapView? = null
    private var pickupLat: Double = 0.0
    private var pickupLng: Double = 0.0
    private var dropLat: Double = 0.0
    private var dropLng: Double = 0.0
    private var mPolyline: Polyline? = null
    private var mMarkerDestination: Marker? = null

    override fun onActivityCreated(savedInstanceState: Bundle?) {
        super.onActivityCreated(savedInstanceState)
        // Create the fused location provider client (declared elsewhere in the fragment).
        mFusedLocationProviderClient = LocationServices.getFusedLocationProviderClient(activity)
        mMapView?.onCreate(savedInstanceState)
        mMapView?.onResume()
        mMapView?.getMapAsync(this)
    }
fun initBannerAds() {
// Obtain BannerView based on the configuration in layout.
val adParam: AdParam = AdParam.Builder().build()
hw_banner_view.loadAd(adParam)
}
override fun onMapReady(map: HuaweiMap) {
Log.d(TAG, "onMapReady: ")
        hMap = map
        hMap?.isMyLocationEnabled = true
hMap?.uiSettings?.isMyLocationButtonEnabled = false
getLastLocation()
hMap?.setOnCameraMoveStartedListener {
when (it) {
HuaweiMap.OnCameraMoveStartedListener.REASON_GESTURE -> {
Log.d(TAG, "The user gestured on the map.")
val midLatLng: LatLng = hMap?.cameraPosition!!.target
Log.d("Moving_LatLng ", "" + midLatLng)
dropLat = midLatLng.latitude
dropLng = midLatLng.longitude
val task = MyAsyncTask(this, false)
                    task.execute(midLatLng.latitude, midLatLng.longitude)
                }
                HuaweiMap.OnCameraMoveStartedListener.REASON_API_ANIMATION -> {
Log.d(TAG, "The user tapped something on the map.")
}
                HuaweiMap.OnCameraMoveStartedListener.REASON_DEVELOPER_ANIMATION -> {
Log.d(TAG, "The app moved the camera.")
}
}
}
}
11. Add the lifecycle methods of the MapView.
Code:
override fun onStart() {
super.onStart()
mMapView?.onStart()
}
override fun onStop() {
super.onStop()
mMapView?.onStop()
}
override fun onDestroy() {
super.onDestroy()
mMapView?.onDestroy()
}
override fun onPause() {
mMapView?.onPause()
super.onPause()
}
override fun onResume() {
super.onResume()
mMapView?.onResume()
}
override fun onLowMemory() {
super.onLowMemory()
mMapView?.onLowMemory()
}
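The sample also defines a MAPVIEW_BUNDLE_KEY constant (see the companion object in step 19) that is never used in this excerpt. A typical use is persisting the MapView state across recreation; this is a sketch assuming the HMS MapView's onSaveInstanceState follows the usual MapView contract.
Code:
// Sketch: save the MapView state so it survives configuration changes.
// MAPVIEW_BUNDLE_KEY is the constant from the companion object in step 19.
override fun onSaveInstanceState(outState: Bundle) {
    super.onSaveInstanceState(outState)
    var mapViewBundle = outState.getBundle(MAPVIEW_BUNDLE_KEY)
    if (mapViewBundle == null) {
        mapViewBundle = Bundle()
        outState.putBundle(MAPVIEW_BUNDLE_KEY, mapViewBundle)
    }
    mMapView?.onSaveInstanceState(mapViewBundle)
}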
12. Check permissions so that the user can allow the app to access their location (runtime permissions are required since Android 6.0).
Code:
if (Build.VERSION.SDK_INT <= Build.VERSION_CODES.P) {
Log.i(TAG, "sdk < 28 Q")
if (checkSelfPermission(
this,
ACCESS_FINE_LOCATION
) != PERMISSION_GRANTED && checkSelfPermission(
this,
ACCESS_COARSE_LOCATION
) != PERMISSION_GRANTED
) {
val strings = arrayOf(ACCESS_FINE_LOCATION, ACCESS_COARSE_LOCATION)
requestPermissions(this, strings, 1)
}
} else {
if (checkSelfPermission(
this@RequestLocationUpdatesWithCallbackActivity,
ACCESS_FINE_LOCATION
) != PERMISSION_GRANTED && checkSelfPermission(
this@RequestLocationUpdatesWithCallbackActivity, ACCESS_COARSE_LOCATION
) != PERMISSION_GRANTED && checkSelfPermission(
this@RequestLocationUpdatesWithCallbackActivity,
"android.permission.ACCESS_BACKGROUND_LOCATION"
) != PERMISSION_GRANTED
) {
val strings = arrayOf(
ACCESS_FINE_LOCATION,
ACCESS_COARSE_LOCATION,
"android.permission.ACCESS_BACKGROUND_LOCATION"
)
requestPermissions(this, strings, 2)
}
}
13. In your activity, handle the permission response as follows.
Code:
override fun onRequestPermissionsResult(requestCode: Int, permissions: Array<String>, grantResults: IntArray) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults)
    if (requestCode == 1) {
        if (grantResults.size > 1 && grantResults[0] == PERMISSION_GRANTED && grantResults[1] == PERMISSION_GRANTED) {
            Log.i(TAG, "onRequestPermissionsResult: apply LOCATION PERMISSION successful")
        } else {
            Log.i(TAG, "onRequestPermissionsResult: apply LOCATION PERMISSION failed")
        }
    }
    if (requestCode == 2) {
        if (grantResults.size > 2 && grantResults[2] == PERMISSION_GRANTED && grantResults[0] == PERMISSION_GRANTED && grantResults[1] == PERMISSION_GRANTED) {
            Log.i(TAG, "onRequestPermissionsResult: apply ACCESS_BACKGROUND_LOCATION successful")
        } else {
            Log.i(TAG, "onRequestPermissionsResult: apply ACCESS_BACKGROUND_LOCATION failed")
        }
    }
}
You can perform subsequent steps by referring to the RequestLocationUpdatesWithCallbackActivity.kt file.
14. Create a location provider client and device setting client.
Code:
// create fusedLocationProviderClient
fusedLocationProviderClient=LocationServices.getFusedLocationProviderClient(this)
// create settingsClient
settingsClient = LocationServices.getSettingsClient(this)
15. Create a location request.
Code:
mLocationRequest = LocationRequest().apply {
// set the interval for location updates, in milliseconds
interval = 1000
needAddress = true
// set the priority of the request
priority = LocationRequest.PRIORITY_HIGH_ACCURACY
}
16. Create a result callback.
Code:
mLocationCallback = object : LocationCallback() {
override fun onLocationResult(locationResult: LocationResult?) {
if (locationResult != null) {
val locations: List<Location> =
locationResult.locations
if (locations.isNotEmpty()) {
for (location in locations) {
LocationLog.i(TAG,"onLocationResult location[Longitude,Latitude,Accuracy]:${location.longitude} , ${location.latitude} , ${location.accuracy}")
}
}
}
}
override fun onLocationAvailability(locationAvailability: LocationAvailability?) {
locationAvailability?.let {
val flag: Boolean = locationAvailability.isLocationAvailable
LocationLog.i(TAG, "onLocationAvailability isLocationAvailable:$flag")
}
}
}
17. Request location updates.
Code:
private fun requestLocationUpdatesWithCallback() {
try {
val builder = LocationSettingsRequest.Builder()
builder.addLocationRequest(mLocationRequest)
val locationSettingsRequest = builder.build()
            // Before requesting location updates, invoke checkLocationSettings to check device settings.
val locationSettingsResponseTask: Task<LocationSettingsResponse> = settingsClient.checkLocationSettings(locationSettingsRequest)
locationSettingsResponseTask.addOnSuccessListener { locationSettingsResponse: LocationSettingsResponse? ->
Log.i(TAG, "check location settings success {$locationSettingsResponse}")
// request location updates
fusedLocationProviderClient.requestLocationUpdates(mLocationRequest, mLocationCallback, Looper.getMainLooper())
.addOnSuccessListener {
LocationLog.i(TAG, "requestLocationUpdatesWithCallback onSuccess")
}
.addOnFailureListener { e ->
LocationLog.e(TAG, "requestLocationUpdatesWithCallback onFailure:${e.message}")
}
}
.addOnFailureListener { e: Exception ->
LocationLog.e(TAG, "checkLocationSetting onFailure:${e.message}")
when ((e as ApiException).statusCode) {
LocationSettingsStatusCodes.RESOLUTION_REQUIRED -> try {
val rae = e as ResolvableApiException
rae.startResolutionForResult(
this@RequestLocationUpdatesWithCallbackActivity, 0
)
} catch (sie: SendIntentException) {
Log.e(TAG, "PendingIntent unable to execute request.")
}
}
}
} catch (e: Exception) {
LocationLog.e(TAG, "requestLocationUpdatesWithCallback exception:${e.message}")
}
}
18. Remove location updates.
Code:
private fun removeLocationUpdatesWithCallback() {
try {
fusedLocationProviderClient.removeLocationUpdates(mLocationCallback)
.addOnSuccessListener {
LocationLog.i(
TAG,
"removeLocationUpdatesWithCallback onSuccess"
)
}
.addOnFailureListener { e ->
LocationLog.e(
TAG,
"removeLocationUpdatesWithCallback onFailure:${e.message}"
)
}
} catch (e: Exception) {
LocationLog.e(
TAG,
"removeLocationUpdatesWithCallback exception:${e.message}"
)
}
}
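As a usage sketch, removal typically happens when the screen is torn down; the override below assumes this code lives in the same activity as the functions above.
Code:
override fun onDestroy() {
    // Stop receiving location updates once the activity is destroyed.
    removeLocationUpdatesWithCallback()
    super.onDestroy()
}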
19. Reverse Geocoding
Code:
companion object {
private const val TAG = "MapViewDemoActivity"
private const val MAPVIEW_BUNDLE_KEY = "MapViewBundleKey"
class MyAsyncTask internal constructor(
private val context: DashboardFragment,
private val isStartPos: Boolean
) : AsyncTask<Double, Double, String?>() {
private var resp: String? = null
lateinit var geocoding: Geocoding
override fun onPreExecute() {
}
override fun doInBackground(vararg params: Double?): String? {
try {
geocoding = Geocoding()
geocoding.reverseGeoCodeListener = context
geocoding.reverseGeocoding(
"reverseGeocode",
"YOUR_API_KEY",
params[0],
params[1],
isStartPos
)
} catch (e: InterruptedException) {
e.printStackTrace()
} catch (e: Exception) {
e.printStackTrace()
}
return resp
        }
        override fun onPostExecute(result: String?) {
}
override fun onProgressUpdate(vararg values: Double?) {
}
}
}
20. The Geocoding class converts a LatLng into an address.
Code:
class Geocoding {
var reverseGeoCodeListener: ReverseGeoCodeListener? = null
val ROOT_URL = "https://siteapi.cloud.huawei.com/mapApi/v1/siteService/"
    val connection = "?key="
val JSON = MediaType.parse("application/json; charset=utf-8")
    fun reverseGeocoding(serviceName: String, apiKey: String?, lat: Double?, lng: Double?, isStartPos: Boolean) {
var sites: String = ""
val json = JSONObject()
val location = JSONObject()
try {
location.put("lng", lng)
location.put("lat", lat)
json.put("location", location)
json.put("language", "en")
json.put("politicalView", "CN")
json.put("returnPoi", true)
Log.d("MapViewDemoActivity",json.toString())
} catch (e: JSONException) {
Log.e("error", e.message)
}
val body : RequestBody = RequestBody.create(JSON, json.toString())
val client = OkHttpClient()
val request = Request.Builder()
            .url(ROOT_URL + serviceName + connection + URLEncoder.encode(apiKey, "UTF-8"))
.post(body)
.build()
client.newCall(request).enqueue(object : Callback {
@Throws(IOException::class)
override fun onResponse(call: Call, response: Response) {
                val strResponse = response.body()!!.string()
                val jsonResponse = JSONObject(strResponse)
                val responseCode: String = jsonResponse.getString("returnCode")
                if (responseCode.contentEquals("0")) {
                    Log.d("ReverseGeocoding", strResponse)
                    val jsonArraySites: JSONArray = jsonResponse.getJSONArray("sites")
                    val jsonObjectDetail: JSONObject = jsonArraySites.getJSONObject(0)
                    sites = jsonObjectDetail.getString("formatAddress")
                    Log.d("formatAddress", sites)
                    reverseGeoCodeListener?.getAddress(sites, isStartPos)
                } else {
                    sites = "No Result"
                    Log.d("formatAddress", "")
                    reverseGeoCodeListener?.onAddressError(sites)
}
}
override fun onFailure(call: Call, e: IOException) {
sites = "No Result"
Log.e("ReverseGeocoding", e.toString())
                reverseGeoCodeListener?.onAddressError(sites)
}
})
}
}
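The ReverseGeoCodeListener interface used above is not shown in the original post; a minimal definition consistent with the calls made in this class could look like the following sketch.
Code:
// Sketch: callback contract matching the calls in Geocoding above.
interface ReverseGeoCodeListener {
    // Delivers the formatted address; isStartPos distinguishes pickup vs. drop.
    fun getAddress(address: String, isStartPos: Boolean)
    // Called when no address could be resolved.
    fun onAddressError(error: String)
}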
This is not the end. For full content, you can visit https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0201352135916100198&fid=0101187876626530001
When I integrate Site Kit, I get error code 6. Can you explain why?
sujith.e said:
When I integrate Site Kit, I get error code 6. Can you explain why?
Hi, Sujith.e. The possible cause is that the API key contains special characters. You need to encode these special characters, for example with encodeURI (in JavaScript) or URLEncoder (on Android).
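For instance, on Android you can percent-encode the key before appending it to the query string, which is what the Geocoding class above already does. A minimal sketch using only the standard library:
Code:
import java.net.URLEncoder

// Sketch: percent-encode an API key that may contain special characters
// such as '+' or '/' before placing it in a URL query string.
fun encodeApiKey(rawKey: String): String = URLEncoder.encode(rawKey, "UTF-8")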
Hello everyone. In this article, I will try to talk about the uses of Huawei Search Kit, Huawei ML Kit and Huawei Network Kit. I have developed a demo app using these 3 kits to make it clearer.
What is Search Kit?
HUAWEI Search Kit fully opens Petal Search capabilities through the device-side SDK and cloud-side APIs, enabling ecosystem partners to quickly provide the optimal mobile app search experience.
What is Network Kit?
Network Kit is a basic network service suite. It incorporates Huawei’s experience in far-field network communications, and utilizes scenario-based RESTful APIs as well as file upload and download APIs. Therefore, Network Kit can provide you with easy-to-use device-cloud transmission channels featuring low latency, high throughput, and high security.
What is ML Kit — ASR?
Automatic speech recognition (ASR) can recognize speech not longer than 60s and convert the input speech into text in real time. This service uses industry-leading deep learning technologies to achieve a recognition accuracy of over 95%.
Development Steps
1. Integration
First of all, we need to create an app on AppGallery Connect and add related details about HMS Core to our project. You can access the article about that steps from the link below.
Android | Integrating Your Apps With Huawei HMS Core
2. Adding Dependencies
After HMS Core is integrated into the project and Search Kit and ML Kit are activated through the console, the required libraries should be added to the build.gradle file in the app directory as follows. Note that the project's minSdkVersion must be 24, so update that value in the same file if necessary.
Code:
...
defaultConfig {
applicationId "com.myapps.searchappwithml"
minSdkVersion 24
targetSdkVersion 30
versionCode 1
versionName "1.0"
testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
}
...
dependencies {
...
implementation 'com.huawei.agconnect:agconnect-core:1.5.0.300'
implementation 'com.huawei.hms:network-embedded:5.0.1.301'
implementation 'com.huawei.hms:searchkit:5.0.4.303'
implementation 'com.huawei.hms:ml-computer-voice-asr-plugin:2.2.0.300'
...
}
3. Adding Permissions
XML:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
4. Application Class
When the application starts, we need to initialize the kits in the Application class. We then need to register this class in the manifest file via the android:name attribute; a sample manifest entry is shown after the class below.
Kotlin:
@HiltAndroidApp
class SearchApplication : Application(){
override fun onCreate() {
super.onCreate()
initNetworkKit()
initSearchKit()
initMLKit()
}
private fun initNetworkKit(){
NetworkKit.init(
applicationContext,
object : NetworkKit.Callback() {
override fun onResult(result: Boolean) {
if (result) {
Log.i(NETWORK_KIT_TAG, "init success")
} else {
Log.i(NETWORK_KIT_TAG, "init failed")
}
}
})
}
private fun initSearchKit(){
SearchKitInstance.init(this, APP_ID)
CoroutineScope(Dispatchers.IO).launch {
SearchKitInstance.instance.refreshToken()
}
}
private fun initMLKit() {
MLApplication.getInstance().apiKey = API_KEY
}
}
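For reference, the manifest entry for the Application class might look like this; the relative class name assumes SearchApplication sits in the root package of the applicationId configured earlier.
XML:
<application
    android:name=".SearchApplication"
    ... >
    ...
</application>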
5. Getting Access Token
We need an access token to send requests to Search Kit, and I used Network Kit to request it. Its usage is very similar to other networking libraries: as elsewhere, there are annotations such as POST, FormUrlEncoded, Headers, and Field.
Kotlin:
interface AccessTokenService {
@POST("oauth2/v3/token")
@FormUrlEncoded
@Headers("Content-Type:application/x-www-form-urlencoded", "charset:UTF-8")
fun createAccessToken(
@Field("grant_type") grant_type: String,
@Field("client_secret") client_secret: String,
@Field("client_id") client_id: String
) : Submit<String>
}
We need to create our request structure using the RestClient class.
Kotlin:
@Module
@InstallIn(ApplicationComponent::class)
class ApplicationModule {
companion object{
private const val TIMEOUT: Int = 500000
private var restClient: RestClient? = null
fun getClient() : RestClient {
val httpClient = HttpClient.Builder()
.connectTimeout(TIMEOUT)
.writeTimeout(TIMEOUT)
.readTimeout(TIMEOUT)
.build()
if (restClient == null) {
restClient = RestClient.Builder()
.baseUrl("https://oauth-login.cloud.huawei.com/")
.httpClient(httpClient)
.build()
}
return restClient!!
}
}
}
Finally, by sending the request, we reach the AccessToken.
Kotlin:
data class AccessTokenModel (
var access_token : String,
var expires_in : Int,
var token_type : String
)
...
fun SearchKitInstance.refreshToken() {
ApplicationModule.getClient().create(AccessTokenService::class.java)
.createAccessToken(
GRANT_TYPE,
CLIENT_SECRET,
CLIENT_ID
)
.enqueue(object : Callback<String>() {
override fun onFailure(call: Submit<String>, t: Throwable) {
Log.d(ACCESS_TOKEN_TAG, "getAccessTokenErr " + t.message)
}
override fun onResponse(
call: Submit<String>,
response: Response<String>
) {
val convertedResponse =
Gson().fromJson(response.body, AccessTokenModel::class.java)
setInstanceCredential(convertedResponse.access_token)
}
})
}
6. ML Kit (ASR) — Search Kit
Since we are using ML Kit (ASR), we first need to get microphone permission from the user. Then we start ML Kit (ASR) with a button and obtain the recognized text. By sending this text to the function we created for Search Kit, we get the data to show on the screen.
Here I used Search Kit's web search feature. Of course, the news, image, and video search features can be used as needed.
Kotlin:
@AndroidEntryPoint
class MainActivity : AppCompatActivity() {
private lateinit var binding: MainBinding
private val adapter: ResultAdapter = ResultAdapter()
private var isPermissionGranted: Boolean = false
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
binding = MainBinding.inflate(layoutInflater)
setContentView(binding.root)
binding.button.setOnClickListener {
if (isPermissionGranted) {
startASR()
}
}
binding.recycler.adapter = adapter
val permission = arrayOf(Manifest.permission.INTERNET, Manifest.permission.RECORD_AUDIO)
ActivityCompat.requestPermissions(this, permission,MIC_PERMISSION)
}
override fun onRequestPermissionsResult(
requestCode: Int,
permissions: Array<out String>,
grantResults: IntArray
) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults)
when (requestCode) {
MIC_PERMISSION -> {
// If request is cancelled, the result arrays are empty.
if (grantResults.isNotEmpty()
&& grantResults[0] == PackageManager.PERMISSION_GRANTED
&& grantResults[1] == PackageManager.PERMISSION_GRANTED) {
// permission was granted
Toast.makeText(this, "Permission granted", Toast.LENGTH_SHORT).show()
isPermissionGranted = true
} else {
// permission denied,
Toast.makeText(this, "Permission denied", Toast.LENGTH_SHORT).show()
}
return
}
}
}
private fun startASR() {
val intent = Intent(this, MLAsrCaptureActivity::class.java)
.putExtra(MLAsrCaptureConstants.LANGUAGE, "en-US")
.putExtra(MLAsrCaptureConstants.FEATURE, MLAsrCaptureConstants.FEATURE_WORDFLUX)
startActivityForResult(intent, ASR_REQUEST_CODE)
}
override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
super.onActivityResult(requestCode, resultCode, data)
if (requestCode == ASR_REQUEST_CODE) {
when (resultCode) {
MLAsrCaptureConstants.ASR_SUCCESS -> if (data != null) {
val bundle = data.extras
if (bundle != null && bundle.containsKey(MLAsrCaptureConstants.ASR_RESULT)) {
val text = bundle.getString(MLAsrCaptureConstants.ASR_RESULT).toString()
performSearch(text)
}
}
MLAsrCaptureConstants.ASR_FAILURE -> if (data != null) {
val bundle = data.extras
if (bundle != null && bundle.containsKey(MLAsrCaptureConstants.ASR_ERROR_CODE)) {
val errorCode = bundle.getInt(MLAsrCaptureConstants.ASR_ERROR_CODE)
Toast.makeText(this, "Error Code $errorCode", Toast.LENGTH_LONG).show()
}
if (bundle != null && bundle.containsKey(MLAsrCaptureConstants.ASR_ERROR_MESSAGE)) {
val errorMsg = bundle.getString(MLAsrCaptureConstants.ASR_ERROR_MESSAGE)
Toast.makeText(this, "Error Code $errorMsg", Toast.LENGTH_LONG).show()
}
}
else -> {
Toast.makeText(this, "Failed to get data", Toast.LENGTH_LONG).show()
}
}
}
}
private fun performSearch(query: String) {
CoroutineScope(Dispatchers.IO).launch {
val searchKitInstance = SearchKitInstance.instance
val webSearchRequest = WebSearchRequest().apply {
setQ(query)
setLang(loadLang())
setSregion(loadRegion())
setPs(5)
setPn(1)
}
val response = searchKitInstance.webSearcher.search(webSearchRequest)
displayResults(response.data)
}
}
private fun displayResults(data: List<WebItem>) {
runOnUiThread {
adapter.items.apply {
clear()
addAll(data)
}
adapter.notifyDataSetChanged()
}
}
}
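The loadLang() and loadRegion() helpers used in performSearch() are not shown in this excerpt. A minimal stand-in could return fixed Search Kit values; these are hypothetical helpers, and if your SDK version expects different types for setLang()/setSregion(), adjust accordingly.
Kotlin:
// Hypothetical helpers: a real app would read saved user preferences.
// Language and Region are Search Kit enums (com.huawei.hms.searchkit.bean).
private fun loadLang(): Language = Language.ENGLISH
private fun loadRegion(): Region = Region.WHOLEWORLD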
Output
Conclusion
By using these three kits, you can easily improve the quality of your application in a short time. I hope this article was useful to you. See you in other articles.
References
Network Kit: https://developer.huawei.com/consum...s-V5/network-introduction-0000001050440045-V5
ML Kit: https://developer.huawei.com/consum...s-V5/service-introduction-0000001050040017-V5
Search Kit: https://developer.huawei.com/consum...re-Guides-V5/introduction-0000001055591730-V5
Introduction
In this article, we will learn how to correct a document's position using the Document Skew Correction feature of Huawei ML Kit. This service automatically identifies the location of a document in an image and adjusts the shooting angle so that it faces the document, even if the document is tilted. The service is widely applicable in daily life: for example, if you capture a document, bank card, or driving license with the phone camera at a skewed angle, this feature corrects the document's angle and returns a properly aligned image.
So, I will provide a series of articles on this Patient Tracking App, in upcoming articles I will integrate other Huawei Kits.
If you are new to this application, follow my previous articles.
https://forums.developer.huawei.com/forumPortal/en/topic/0201902220661040078
https://forums.developer.huawei.com/forumPortal/en/topic/0201908355251870119
https://forums.developer.huawei.com/forumPortal/en/topic/0202914346246890032
Precautions
Ensure that the camera faces the document, that the document occupies most of the image, and that the boundaries of the document are within the viewfinder.
The best shooting angle is within 30 degrees. If the shooting angle exceeds 30 degrees, the document boundaries must be clear enough to ensure good results.
Requirements
1. Any operating system (macOS, Linux, or Windows).
2. A Huawei phone with HMS 4.0.0.300 or later.
3. A laptop or desktop with Android Studio installed, including JDK 1.8, SDK Platform 26, and Gradle 4.6 or later.
4. Minimum API level 21 is required.
5. Requires devices running EMUI 9.0.0 or later.
How to integrate HMS Dependencies
1. First, register as a Huawei developer and complete identity verification on the Huawei Developers website; refer to Register a Huawei ID.
2. Create a project in Android Studio; refer to Creating an Android Studio Project.
3. Generate a SHA-256 certificate fingerprint.
4. To generate the SHA-256 certificate fingerprint, click Gradle in the upper-right corner of the Android Studio window, choose Project Name > Tasks > android, and then click signingReport, as follows.
Note: Project Name depends on the user created name.
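Alternatively, you can usually obtain the same fingerprint from the command line with keytool; the keystore path, alias, and passwords below are the Android debug defaults, so adjust them for your own keystore.
Code:
keytool -list -v -keystore ~/.android/debug.keystore -alias androiddebugkey -storepass android -keypass android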
5. Create an App in AppGallery Connect.
6. Download the agconnect-services.json file from App information, copy and paste in android Project under app directory, as follows.
7. Enter SHA-256 certificate fingerprint and click Save button, as follows.
Note: Above steps from Step 1 to 7 is common for all Huawei Kits.
8. Click Manage APIs tab and enable ML Kit.
9. Add the below maven URL in build.gradle(Project) file under the repositories of buildscript, dependencies and allprojects, refer Add Configuration.
Code:
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
10. Add the below plugin and dependencies in build.gradle(Module) file.
Code:
apply plugin: 'com.huawei.agconnect'
// Huawei AGC
implementation 'com.huawei.agconnect:agconnect-core:1.5.0.300'
// Import the base SDK.
implementation 'com.huawei.hms:ml-computer-vision-documentskew:2.1.0.300'
// Import the document detection/correction model package.
implementation 'com.huawei.hms:ml-computer-vision-documentskew-model:2.1.0.300'
11. Now sync the Gradle files.
12. Add the required permission to the AndroidManifest.xml file.
XML:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
Let us move to development
I have created a project in Android Studio with an empty activity; let us start coding.
In the DocumentCaptureActivity.kt we can find the business logic.
Kotlin:
class DocumentCaptureActivity : AppCompatActivity(), View.OnClickListener {
private val TAG: String = DocumentCaptureActivity::class.java.getSimpleName()
private var analyzer: MLDocumentSkewCorrectionAnalyzer? = null
private var mImageView: ImageView? = null
private var bitmap: Bitmap? = null
private var input: MLDocumentSkewCorrectionCoordinateInput? = null
private var mlFrame: MLFrame? = null
var imageUri: Uri? = null
var FlagCameraClickDone = false
var fabc: ImageView? = null
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_document_capture)
findViewById<View>(R.id.btn_click).setOnClickListener(this)
mImageView = findViewById(R.id.image_result)
// Create the setting.
val setting = MLDocumentSkewCorrectionAnalyzerSetting.Factory()
.create()
// Get the analyzer.
analyzer = MLDocumentSkewCorrectionAnalyzerFactory.getInstance()
.getDocumentSkewCorrectionAnalyzer(setting)
fabc = findViewById(R.id.fab)
fabc!!.setOnClickListener(View.OnClickListener {
FlagCameraClickDone = false
val gallery = Intent(Intent.ACTION_PICK, MediaStore.Images.Media.EXTERNAL_CONTENT_URI)
startActivityForResult(gallery, 1)
})
}
override fun onClick(v: View?) {
this.analyzer()
}
private fun analyzer() {
// Call document skew detect interface to get coordinate data
val detectTask = analyzer!!.asyncDocumentSkewDetect(mlFrame)
detectTask.addOnSuccessListener { detectResult ->
if (detectResult != null) {
val resultCode = detectResult.getResultCode()
// Detect success.
if (resultCode == MLDocumentSkewCorrectionConstant.SUCCESS) {
val leftTop = detectResult.leftTopPosition
val rightTop = detectResult.rightTopPosition
val leftBottom = detectResult.leftBottomPosition
val rightBottom = detectResult.rightBottomPosition
val coordinates: MutableList<Point> = ArrayList()
coordinates.add(leftTop)
coordinates.add(rightTop)
coordinates.add(rightBottom)
coordinates.add(leftBottom)
                    this@DocumentCaptureActivity.setDetectData(MLDocumentSkewCorrectionCoordinateInput(coordinates))
                    this@DocumentCaptureActivity.refineImg()
                } else if (resultCode == MLDocumentSkewCorrectionConstant.IMAGE_DATA_ERROR) {
                    // Parameters error.
                    Log.e(TAG, "Parameters error!")
                    this@DocumentCaptureActivity.displayFailure()
                } else if (resultCode == MLDocumentSkewCorrectionConstant.DETECT_FAILD) {
                    // Detect failure.
                    Log.e(TAG, "Detect failed!")
                    this@DocumentCaptureActivity.displayFailure()
                }
            } else {
                // Detect exception.
                Log.e(TAG, "Detect exception!")
                this@DocumentCaptureActivity.displayFailure()
            }
        }.addOnFailureListener { e ->
            // Processing logic for detect failure.
            Log.e(TAG, e.message + "")
            this@DocumentCaptureActivity.displayFailure()
        }
}
// Show result
private fun displaySuccess(refineResult: MLDocumentSkewCorrectionResult) {
if (bitmap == null) {
this.displayFailure()
return
}
// Draw the portrait with a transparent background.
val corrected = refineResult.getCorrected()
if (corrected != null) {
mImageView!!.setImageBitmap(corrected)
} else {
this.displayFailure()
}
}
private fun displayFailure() {
Toast.makeText(this.applicationContext, "Fail", Toast.LENGTH_LONG).show()
}
private fun setDetectData(input: MLDocumentSkewCorrectionCoordinateInput) {
this.input = input
}
// Refine image
private fun refineImg() {
// Call refine image interface
val correctionTask = analyzer!!.asyncDocumentSkewCorrect(mlFrame, input)
correctionTask.addOnSuccessListener { refineResult ->
if (refineResult != null) {
val resultCode = refineResult.getResultCode()
if (resultCode == MLDocumentSkewCorrectionConstant.SUCCESS) {
this.displaySuccess(refineResult)
} else if (resultCode == MLDocumentSkewCorrectionConstant.IMAGE_DATA_ERROR) {
// Parameters error.
Log.e(TAG, "Parameters error!")
                    this@DocumentCaptureActivity.displayFailure()
} else if (resultCode == MLDocumentSkewCorrectionConstant.CORRECTION_FAILD) {
// Correct failure.
Log.e(TAG, "Correct failed!")
                    this@DocumentCaptureActivity.displayFailure()
}
} else {
// Correct exception.
Log.e(TAG, "Correct exception!")
                this@DocumentCaptureActivity.displayFailure()
}
}.addOnFailureListener { // Processing logic for refine failure.
            this@DocumentCaptureActivity.displayFailure()
}
}
override fun onDestroy() {
super.onDestroy()
if (analyzer != null) {
try {
analyzer!!.stop()
} catch (e: IOException) {
Log.e(TAG, "Stop failed: " + e.message)
}
}
}
override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
super.onActivityResult(requestCode, resultCode, data)
if (resultCode == RESULT_OK && requestCode == 1) {
imageUri = data!!.data
try {
bitmap = MediaStore.Images.Media.getBitmap(this.contentResolver, imageUri)
// Create a MLFrame by using the bitmap.
mlFrame = MLFrame.Creator().setBitmap(bitmap).create()
} catch (e: IOException) {
e.printStackTrace()
}
// BitmapFactory.decodeResource(getResources(), R.drawable.new1);
FlagCameraClickDone = true
findViewById<View>(R.id.btn_click).visibility = View.VISIBLE
mImageView!!.setImageURI(imageUri)
}
}
}
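Besides the asynchronous calls used above, the analyzer also documents synchronous variants. The following is only a sketch; verify the exact signatures of syncDocumentSkewDetect and syncDocumentSkewCorrect against your SDK version's API reference, and be sure to call this off the main thread.
Kotlin:
// Sketch (run on a worker thread): detect and correct in one pass using the
// synchronous APIs. Signatures assumed from the ML Kit API reference.
private fun detectAndCorrectSync(frame: MLFrame): Bitmap? {
    val detectResults = analyzer!!.syncDocumentSkewDetect(frame)
    val detect = detectResults.get(0) ?: return null
    if (detect.resultCode != MLDocumentSkewCorrectionConstant.SUCCESS) return null
    val coordinates = listOf(
        detect.leftTopPosition, detect.rightTopPosition,
        detect.rightBottomPosition, detect.leftBottomPosition
    )
    val correctResults = analyzer!!.syncDocumentSkewCorrect(
        frame, MLDocumentSkewCorrectionCoordinateInput(coordinates)
    )
    return correctResults.get(0)?.corrected
}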
In the activity_document_capture.xml we can create the UI screen.
XML:
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:orientation="vertical"
tools:context=".mlkit.DocumentCaptureActivity">
<ImageView
android:id="@+id/image_result"
android:layout_width="400dp"
android:layout_height="520dp"
android:paddingLeft="5dp"
android:paddingTop="5dp"
android:src="@drawable/slip"
android:paddingStart="5dp"
android:paddingBottom="5dp"/>
<LinearLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:orientation="horizontal"
android:weightSum="4"
android:layout_alignParentBottom="true"
android:gravity="center_horizontal" >
<ImageView
android:id="@+id/cam"
android:layout_width="0dp"
android:layout_height="41dp"
android:layout_margin="4dp"
android:layout_weight="1"
android:text="sample"
app:srcCompat="@android:drawable/ic_menu_gallery" />
<Button
android:id="@+id/btn_click"
android:layout_width="10dp"
android:layout_height="wrap_content"
android:layout_margin="4dp"
android:textSize="19sp"
android:layout_weight="2"
android:textAllCaps="false"
android:text="Capture" />
<ImageView
android:id="@+id/fab"
android:layout_width="0dp"
android:layout_height="42dp"
android:layout_margin="4dp"
android:layout_weight="1"
android:text="sample"
app:srcCompat="@android:drawable/ic_menu_camera" />
</LinearLayout>
</RelativeLayout>
Demo
Tips and Tricks
1. Make sure you are already registered as Huawei developer.
2. Set minSdkVersion to 21 or later; otherwise, you will get an AndroidManifest merge issue.
3. Make sure you have added the agconnect-services.json file to app folder.
4. Make sure you have added SHA-256 fingerprint without fail.
5. Make sure all the dependencies are added properly.
Conclusion
In this article, we have learned how to correct a document's position using the Document Skew Correction feature of Huawei ML Kit. This service automatically identifies the location of a document in an image and adjusts the shooting angle so that it faces the document, even if the document is tilted.
I hope you found this article helpful. If so, please leave likes and comments.
Reference
ML Kit - Document Skew Correction
ML Kit – Training Video