3D technology is developing fast, and more and more museums are relying on it to hold online exhibitions that open up history to more people. Visitors can immerse themselves in themed virtual environments with striking lighting effects, and enlarge or shrink exhibit models to view every detail. On top of this, such exhibitions also feature background music (BGM) and audio guides that provide background on each exhibit.
Demo
Here is a virtual exhibit that showcases a high level of realism.
To create something like this, we just need an Android Studio project in Kotlin that implements the following functions: 3D scene creation, model display, and audio playback. Let's have a look at it.
1. Preparing a 3D Model
This can be effortlessly done with 3D Modeling Kit, a service that was recently added to HMS Core. This kit automatically generates a textured 3D model using images shot from different angles with a common mobile phone camera. The kit equips an app with the ability to build and preview 3D models. For more details, please refer to How to Build a 3D Product Model Within Just 5 Minutes.
2. Creating a 3D Scene View
Next, use Scene Kit to create an interactive 3D view for the model just created. For example:
Integrate Scene Kit.
Software requirements:
JDK version: 1.8 (recommended)
minSdkVersion: 19 or later
targetSdkVersion: 30 (recommended)
compileSdkVersion: 30 (recommended)
Gradle version: 5.4.1 or later (recommended)
Configure the following information in the project-level build.gradle file:
Code:
buildscript {
repositories {
...
maven { url 'https://developer.huawei.com/repo/' }
}
...
}
allprojects {
repositories {
...
maven { url 'https://developer.huawei.com/repo/' }
}
}
Configure the following information in the app-level build.gradle file:
Code:
dependencies {
...
implementation 'com.huawei.scenekit:full-sdk:5.1.0.300'
}
To enable the view binding feature, add the following code in the app-level build.gradle file:
Code:
android {
...
buildFeatures {
viewBinding true
}
...
}
After syncing the build.gradle files, we can use Scene Kit in the project.
This article only describes how to use the kit to display the 3D model of an exhibit and how to interact with the model. To try other functions of Scene Kit, please refer to its official documentation.
Create a 3D scene view.
A custom SceneView is created to ensure that the first model can be automatically loaded after view initialization.
Code:
import android.content.Context
import android.util.AttributeSet
import android.view.SurfaceHolder
import com.huawei.hms.scene.sdk.SceneView
class CustomSceneView : SceneView {
constructor(context: Context?) : super(context)
constructor(
context: Context?,
attributeSet: AttributeSet?
) : super(context, attributeSet)
override fun surfaceCreated(holder: SurfaceHolder) {
super.surfaceCreated(holder)
loadScene("qinghuaci/scene.gltf")
loadSpecularEnvTexture("qinghuaci/specularEnvTexture.dds")
loadDiffuseEnvTexture("qinghuaci/diffuseEnvTexture.dds")
}
}
To implement model display, add the model files of the object to a folder under src > main > assets. For example:
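Judging from the paths used in this project, the assets folder contains one subfolder per exhibit, each holding the glTF scene and its two environment textures:
src/main/assets/
    qinghuaci/
        scene.gltf
        specularEnvTexture.dds
        diffuseEnvTexture.dds
    tangyong/
        scene.gltf
        specularEnvTexture.dds
        diffuseEnvTexture.dds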
The loadScene(), loadSpecularEnvTexture(), and loadDiffuseEnvTexture() calls in surfaceCreated() are used to load the object model. After the surface is created, the first object model will be loaded onto it.
Next, open the XML file (activity_main.xml in this project) that is used to display the 3D scene view, and add the CustomSceneView we just created. The following code also adds the arrow images for switching between object models.
Code:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<com.example.sceneaudiodemo.CustomSceneView
android:id="@+id/csv_main"
android:layout_width="match_parent"
android:layout_height="match_parent"/>
<ImageView
android:id="@+id/iv_rightArrow"
android:layout_width="32dp"
android:layout_height="32dp"
android:layout_margin="12dp"
android:src="@drawable/ic_arrow"
android:tint="@color/white"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintTop_toTopOf="parent" />
<ImageView
android:id="@+id/iv_leftArrow"
android:layout_width="32dp"
android:layout_height="32dp"
android:layout_margin="12dp"
android:rotation="180"
android:src="@drawable/ic_arrow"
android:tint="@color/white"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toTopOf="parent" />
</androidx.constraintlayout.widget.ConstraintLayout>
We can now open the app to check the first exhibit: a blue and white porcelain vase.
Add the model switching function.
This function allows users to switch between different exhibit models.
Configure the following information in MainActivity:
Code:
private lateinit var binding: ActivityMainBinding
private var selectedId = 0
private val modelSceneList = arrayListOf(
"qinghuaci/scene.gltf",
"tangyong/scene.gltf",
)
private val modelSpecularList = arrayListOf(
"qinghuaci/specularEnvTexture.dds",
"tangyong/specularEnvTexture.dds",
)
private val modelDiffList = arrayListOf(
"qinghuaci/diffuseEnvTexture.dds",
"tangyong/diffuseEnvTexture.dds",
)
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
binding = ActivityMainBinding.inflate(layoutInflater)
val view = binding.root
setContentView(view)
binding.ivRightArrow.setOnClickListener {
if (modelSceneList.size == 0) return@setOnClickListener
selectedId = (selectedId + 1) % modelSceneList.size // Ensure the exhibit model ID is within the range of the model list.
loadImage()
}
binding.ivLeftArrow.setOnClickListener {
if (modelSceneList.size == 0) return@setOnClickListener
if (selectedId == 0) selectedId = modelSceneList.size - 1 // Ensure the exhibit model ID is within the range of the model list.
else selectedId -= 1
loadImage()
}
}
private fun loadImage() {
binding.csvMain.loadScene(modelSceneList[selectedId])
binding.csvMain.loadSpecularEnvTexture(modelSpecularList[selectedId])
binding.csvMain.loadDiffuseEnvTexture(modelDiffList[selectedId])
}
Simple logic in onCreate() switches to the next or previous model. The paths of the object models are saved as hard-coded strings in each model list, but this logic can be modified to enable dynamic model display, as shown in the sketch below. selectedId indicates the index of the object model currently being displayed.
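For instance, instead of hard-coding the lists, the model paths could be discovered at runtime from the assets folder. Here is a minimal sketch of that idea (a hypothetical helper inside MainActivity, assuming every exhibit lives in its own assets subfolder containing scene.gltf and the two .dds textures):
Code:
// Hypothetical helper: rebuild the three path lists by scanning the assets folder.
private fun loadModelListsFromAssets() {
    // Keep only top-level asset folders that actually contain a glTF scene.
    val exhibitFolders = assets.list("")?.filter { folder ->
        assets.list(folder)?.contains("scene.gltf") == true
    } ?: return
    modelSceneList.clear()
    modelSpecularList.clear()
    modelDiffList.clear()
    for (folder in exhibitFolders) {
        modelSceneList.add("$folder/scene.gltf")
        modelSpecularList.add("$folder/specularEnvTexture.dds")
        modelDiffList.add("$folder/diffuseEnvTexture.dds")
    }
}
Calling such a helper in onCreate() before the click listeners are set would let the app pick up any new exhibit folder dropped into assets without further code changes.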
Now we've successfully implemented 3D model display via SceneView. The images below illustrate the effect.
3. Adding Audio Guides for Exhibits
To help users gain a deeper understanding of the exhibits, Audio Kit can play voice recordings while the models are displayed, introducing their history and background.
Integrate Audio Kit.
Software requirements:
JDK version: 1.8.211 or later
minSdkVersion: 21 or later
targetSdkVersion: 30 (recommended)
compileSdkVersion: 30 (recommended)
Gradle version: 4.6 or later (recommended)
Audio Kit has stricter software requirements than Scene Kit, so ensure that your project meets them.
Add configurations for Audio Kit to the app-level build.gradle file:
Code:
dependencies {
...
implementation 'com.huawei.hms:audiokit-player:1.1.0.300'
...
}
Do not change the project-level build.gradle file, because the libraries needed for Audio Kit have been added during the configuration for Scene Kit.
Add a play button to the activity_main.xml file:
Code:
<Button
android:id="@+id/btn_playSound"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Play"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent" />
This button is used to play audio for the object being displayed.
Add the following configurations to MainActivity:
Code:
private var mHwAudioManager: HwAudioManager? = null
private var mHwAudioPlayerManager: HwAudioPlayerManager? = null
override fun onCreate(savedInstanceState: Bundle?) {
...
initPlayer(this)
binding.btnPlaySound.setOnClickListener {
mHwAudioPlayerManager?.play(selectedId) // Play the playlist item at index selectedId, which matches the model being displayed.
}
...
}
private fun initPlayer(context: Context) {
val hwAudioPlayerConfig = HwAudioPlayerConfig(context)
HwAudioManagerFactory.createHwAudioManager(hwAudioPlayerConfig,
object : HwAudioConfigCallBack {
override fun onSuccess(hwAudioManager: HwAudioManager?) {
try {
mHwAudioManager = hwAudioManager
mHwAudioPlayerManager = hwAudioManager?.playerManager
mHwAudioPlayerManager?.playList(getPlaylist(), 0, 0)
} catch (ex: Exception) {
ex.printStackTrace()
}
}
override fun onError(p0: Int) {
Log.e("init:onError: ","$p0")
}
})
}
fun getPlaylist(): List<HwAudioPlayItem>? {
val playItemList: MutableList<HwAudioPlayItem> = ArrayList()
val audioPlayItem1 = HwAudioPlayItem()
val sound = Uri.parse("android.resource://yourpackagename/raw/soundfilename").toString() // soundfilename: audio file name that does not contain the extension.
audioPlayItem1.audioId = "1000"
audioPlayItem1.singer = "Taoge"
audioPlayItem1.onlinePath =
"https://lfmusicservice.hwcloudtest.cn:18084/HMS/audio/Taoge-chengshilvren.mp3" // The sample code uses a song as an example.
audioPlayItem1.setOnline(1)
audioPlayItem1.audioTitle = "chengshilvren"
playItemList.add(audioPlayItem1)
val audioPlayItem2 = HwAudioPlayItem()
audioPlayItem2.audioId = "1001"
audioPlayItem2.singer = "Taoge"
audioPlayItem2.onlinePath =
"https://lfmusicservice.hwcloudtest.cn:18084/HMS/audio/Taoge-dayu.mp3"// The sample code uses a song as an example.
audioPlayItem2.setOnline(1)
audioPlayItem2.audioTitle = "dayu"
playItemList.add(audioPlayItem2)
return playItemList
}
Once the configurations above are added, the app can begin to play audio guides for exhibits.
Note that the audio files added in the sample project are online files. If you want to add local audio files instead, please refer to the API reference of Audio Kit; a rough sketch is shown below.
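As an unofficial illustration based on the item fields used in getPlaylist(), and assuming that local items use filePath together with setOnline(0) in place of onlinePath and setOnline(1), a recording placed in res/raw could be appended to the playlist in getPlaylist() roughly like this:
Code:
// Sketch of a local audio guide entry; yourpackagename and soundfilename are placeholders.
val localItem = HwAudioPlayItem()
val localPath = Uri.parse("android.resource://yourpackagename/raw/soundfilename").toString() // soundfilename: audio file name without the extension.
localItem.audioId = "2000"
localItem.singer = "Audio guide"
localItem.filePath = localPath   // Use filePath instead of onlinePath for local audio.
localItem.setOnline(0)           // 0 indicates a local file.
localItem.audioTitle = "Exhibit audio guide"
playItemList.add(localItem)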
By utilizing HMS Core services, we've created exhibit models that can be rotated 360°, zoomed in and out, and accompanied by audio guides.
These services can be used in industries other than displaying cultural relics, for example:
In social media, to generate 3D Qmojis, video memes, and virtual video backgrounds for users.
In e-commerce, for 3D product display, indoor scene rendering for furniture layout preview, and AR try-on.
In audio and video, for 3D lock screen/theme generation, 3D special effect rendering, and generation of special effects for live streaming.
In education, for creating 3D teaching demonstration/3D books and implementing VR distance learning.
Sounds interesting, right? To learn more about these kits, check out:
3D Modeling Kit and its sample code
Scene Kit and its sample code
Audio Kit and its sample code
Hand keypoint detection is the process of finding fingertips, knuckles, and wrists in an image. Hand keypoint detection and hand gesture recognition are still challenging problems in the computer vision domain. It is tough work to build your own model for hand keypoint detection, as it is hard to collect a large enough hand dataset and doing so requires expertise in this domain.
Hand keypoint detection can be used in a variety of scenarios. For example, it can be used during artistic creation: users can convert the detected hand keypoints into a 2D model and synchronize it with a character's model to produce a vivid 2D animation. You could create a puppet animation game using this idea. Another example might be a rock-paper-scissors game. Or, taking it further, you could create a sign-language-to-text conversion application. As you can see, the possible usage scenarios are abundant and there is no limit to ideas.
The hand keypoint detection service is a brand-new feature added to the Huawei Machine Learning Kit family. It has recently been released and is making developers and computer vision geeks really excited! It detects 21 points of a hand and can detect up to ten hands in an image, in either a static image or a camera stream. Currently, it does not support scenarios where your hand is blocked by more than 50% or you are wearing gloves. You don't need an internet connection, as this is an on-device capability, and what is more: it is completely free!
It wouldn't be good practice to only read the related documents and forget about them after a few days. So I created a simple demo application that counts fingers and tells us the number we show with our hands. I strongly advise you to develop your own hand keypoint detection application alongside me. I developed the application in Android Studio, in Kotlin. Now, I am going to explain to you how to build this application step by step. Don't hesitate to ask questions in the comments if you face any issues.
1. First, let's create our project in Android Studio. I named my project HandKeyPointDetectionDemo; I am sure you can find a better name for your application. We can create our project by selecting the Empty Activity option and then following the steps described in this post to create and sign our project in AppGallery Connect.
2. In HUAWEI Developer AppGallery Connect, go to Develop > Manage APIs. Make sure ML Kit is activated.
3. Now that we have integrated Huawei Mobile Services (HMS) into our project, let's follow the documentation on developer.huawei.com and find the packages to add to our project. On the website, click Developer / HMS Core / AI / ML Kit. Here you will find introductory information about the services, references, SDKs to download, and more. Under the ML Kit tab, follow Android / Getting Started / Integrating HMS Core SDK / Adding Build Dependencies / Integrating the Hand Keypoint Detection SDK. We can follow the guide there to add the hand detection capability to our project. We also have one meta-data tag to add to our AndroidManifest.xml file. After the integration, your app-level build.gradle file will look like this.
Code:
apply plugin: 'com.android.application'
apply plugin: 'kotlin-android'
apply plugin: 'kotlin-android-extensions'
apply plugin: 'com.huawei.agconnect'
android {
compileSdkVersion 30
buildToolsVersion "30.0.2"
defaultConfig {
applicationId "com.demo.handkeypointdetection"
minSdkVersion 21
targetSdkVersion 30
versionCode 1
versionName "1.0"
testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
}
}
}
dependencies {
implementation fileTree(dir: "libs", include: ["*.jar"])
implementation "org.jetbrains.kotlin:kotlin-stdlib:$kotlin_version"
implementation 'androidx.core:core-ktx:1.3.1'
implementation 'androidx.appcompat:appcompat:1.2.0'
implementation 'androidx.constraintlayout:constraintlayout:2.0.1'
testImplementation 'junit:junit:4.12'
androidTestImplementation 'androidx.test.ext:junit:1.1.2'
androidTestImplementation 'androidx.test.espresso:espresso-core:3.3.0'
//AppGalleryConnect Core
implementation 'com.huawei.agconnect:agconnect-core:1.3.1.300'
// Import the base SDK.
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint:2.0.2.300'
// Import the hand keypoint detection model package.
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint-model:2.0.2.300'
}
Our project-level build.gradle file:
Code:
buildscript {
ext.kotlin_version = "1.4.0"
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
dependencies {
classpath "com.android.tools.build:gradle:4.0.1"
classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
classpath 'com.huawei.agconnect:agcp:1.3.1.300'
}
}
allprojects {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
}
task clean(type: Delete) {
delete rootProject.buildDir
}
And don’t forget to add related meta-data tags into your AndroidManifest.xml.
Code:
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.demo.handkeypointdetection">
<uses-permission android:name="android.permission.CAMERA" />
<application
...
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value= "handkeypoint"/>
</application>
</manifest>
4. I created a class named HandKeyPointDetector. This class will be called from our activity or fragment. Its init method takes two parameters, a context and a ViewGroup; we will add our views to rootLayout.
Code:
fun init(context: Context, rootLayout: ViewGroup) {
mContext = context
mRootLayout = rootLayout
addSurfaceViews()
}
5. We are going to detect hand keypoints in a camera stream, so we create one SurfaceView for the camera preview and another SurfaceView to draw on. The SurfaceView that is going to be used as the overlay should be transparent. Then we add our views to the rootLayout passed as a parameter from our activity. Lastly, we add a SurfaceHolder.Callback to our surfaceHolder so we know when it is ready.
Code:
private fun addSurfaceViews() {
val surfaceViewCamera = SurfaceView(mContext).also {
it.layoutParams = LinearLayout.LayoutParams(LinearLayout.LayoutParams.MATCH_PARENT, LinearLayout.LayoutParams.MATCH_PARENT)
mSurfaceHolderCamera = it.holder
}
val surfaceViewOverlay = SurfaceView(mContext).also {
it.layoutParams = LinearLayout.LayoutParams(LinearLayout.LayoutParams.MATCH_PARENT, LinearLayout.LayoutParams.MATCH_PARENT)
mSurfaceHolderOverlay = it.holder
mSurfaceHolderOverlay.setFormat(PixelFormat.TRANSPARENT)
mHandKeyPointTransactor.setOverlay(mSurfaceHolderOverlay)
}
mRootLayout.addView(surfaceViewCamera)
mRootLayout.addView(surfaceViewOverlay)
mSurfaceHolderCamera.addCallback(surfaceHolderCallback)
}
6. Inside our surfaceHolderCallback we override three methods: surfaceCreated, surfaceChanged, and surfaceDestroyed.
Code:
private val surfaceHolderCallback = object : SurfaceHolder.Callback {
override fun surfaceCreated(holder: SurfaceHolder) {
createAnalyzer()
}
override fun surfaceChanged(holder: SurfaceHolder, format: Int, width: Int, height: Int) {
prepareLensEngine(width, height)
mLensEngine.run(holder)
}
override fun surfaceDestroyed(holder: SurfaceHolder) {
mLensEngine.release()
}
}
7. The createAnalyzer method creates an MLHandKeypointAnalyzer with settings. If you want, you can also use the default settings. The scene type can be keypoints only, the rectangle around hands, or TYPE_ALL for both. The maximum number of hand results can be up to MLHandKeypointAnalyzerSetting.MAX_HANDS_NUM, which is currently 10. As we will count the fingers of 2 hands, I set it to 2.
Code:
private fun createAnalyzer() {
val settings = MLHandKeypointAnalyzerSetting.Factory()
.setSceneType(MLHandKeypointAnalyzerSetting.TYPE_ALL)
.setMaxHandResults(2)
.create()
mAnalyzer = MLHandKeypointAnalyzerFactory.getInstance().getHandKeypointAnalyzer(settings)
mAnalyzer.setTransactor(mHandKeyPointTransactor)
}
8. LensEngine is responsible for handling camera frames for us. All we need to do is prepare it with the right dimensions according to the orientation, choose the camera we want to work with, apply the fps, and so on.
Code:
private fun prepareLensEngine(width: Int, height: Int) {
val dimen1: Int
val dimen2: Int
if (mContext.resources.configuration.orientation == Configuration.ORIENTATION_LANDSCAPE) {
dimen1 = width
dimen2 = height
} else {
dimen1 = height
dimen2 = width
}
mLensEngine = LensEngine.Creator(mContext, mAnalyzer)
.setLensType(LensEngine.BACK_LENS)
.applyDisplayDimension(dimen1, dimen2)
.applyFps(5F)
.enableAutomaticFocus(true)
.create()
}
9. When you no longer need the analyzer, stop it and release its resources.
Code:
fun stopAnalyzer() {
mAnalyzer.stop()
}
10. As you can see in step 7, we used mHandKeyPointTransactor. It is a custom class that we created, named HandKeyPointTransactor, which implements MLAnalyzer.MLTransactor<MLHandKeypoints>. It has two overridden methods: transactResult and destroy. Detected results arrive in the transactResult method, and there we will try to find the number.
Code:
override fun transactResult(result: MLAnalyzer.Result<MLHandKeypoints>?) {
if (result == null)
return
val canvas = mOverlay?.lockCanvas() ?: return
//Clear canvas.
canvas.drawColor(0, PorterDuff.Mode.CLEAR)
//Find the number shown by our hands.
val numberString = analyzeHandsAndGetNumber(result)
//Find the middle of the canvas
val centerX = canvas.width / 2F
val centerY = canvas.height / 2F
//Draw a text that writes the number we found.
canvas.drawText(numberString, centerX, centerY, Paint().also {
it.style = Paint.Style.FILL
it.textSize = 100F
it.color = Color.GREEN
})
mOverlay?.unlockCanvasAndPost(canvas)
}
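For completeness, the surrounding class can be sketched as below. The mOverlay field name and the empty destroy() body are assumptions; the two overridden methods and the setOverlay() call used in step 5 come from the steps above.
Code:
import android.view.SurfaceHolder
import com.huawei.hms.mlsdk.common.MLAnalyzer
import com.huawei.hms.mlsdk.handkeypoint.MLHandKeypoints

class HandKeyPointTransactor : MLAnalyzer.MLTransactor<MLHandKeypoints> {

    // Transparent overlay surface set from addSurfaceViews(); drawing happens on its canvas.
    private var mOverlay: SurfaceHolder? = null

    fun setOverlay(surfaceHolder: SurfaceHolder) {
        mOverlay = surfaceHolder
    }

    override fun transactResult(result: MLAnalyzer.Result<MLHandKeypoints>?) {
        // See the implementation in step 10 above.
    }

    override fun destroy() {
        // Nothing to release in this demo.
    }
}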
11. We will go hand by hand, and then finger by finger, to find which fingers are up and thus determine the number.
Code:
private fun analyzeHandsAndGetNumber(result: MLAnalyzer.Result<MLHandKeypoints>): String {
val hands = ArrayList<Hand>()
var number = 0
for (key in result.analyseList.keyIterator()) {
hands.add(Hand())
for (value in result.analyseList.valueIterator()) {
number += hands.last().createHand(value.handKeypoints).getNumber()
}
}
return number.toString()
}
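The Hand helper class itself is not shown above. The following is only a rough sketch that matches the createHand()/getNumber() calls; it assumes the standard 21-keypoint ordering (wrist at index 0, fingertips at indices 4, 8, 12, 16, and 20) and the pointX/pointY accessors of MLHandKeypoint, so verify both against the Hand Keypoint Detection API reference before relying on it.
Code:
import com.huawei.hms.mlsdk.handkeypoint.MLHandKeypoint
import kotlin.math.sqrt

// Hypothetical sketch of the Hand helper; keypoint indices and accessors are assumptions.
class Hand {
    private var keypoints: List<MLHandKeypoint> = emptyList()

    fun createHand(keypoints: List<MLHandKeypoint>): Hand {
        this.keypoints = keypoints
        return this
    }

    // A finger counts as "up" if its tip is farther from the wrist than its middle joint.
    fun getNumber(): Int {
        if (keypoints.size < 21) return 0
        val wrist = keypoints[0]
        var count = 0
        for (tipIndex in intArrayOf(4, 8, 12, 16, 20)) {
            val tip = keypoints[tipIndex]
            val middleJoint = keypoints[tipIndex - 2]
            if (distance(tip, wrist) > distance(middleJoint, wrist)) count++
        }
        return count
    }

    private fun distance(a: MLHandKeypoint, b: MLHandKeypoint): Float {
        val dx = a.pointX - b.pointX
        val dy = a.pointY - b.pointY
        return sqrt(dx * dx + dy * dy)
    }
}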
For more information, you can visit https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0202369245767250343&fid=0101187876626530001
Image classification uses a transfer learning algorithm to perform minute-level training on hundreds of images in specific fields (such as vehicles and animals), based on a base classification model with good generalization capabilities, and can automatically generate a model for image classification. The generated model can automatically identify the category to which an image belongs. That is an automatically generated model. But what if we want to create our own image classification model?
With Huawei ML Kit, it is possible. The AI Create function in HiAI Foundation provides the transfer learning capability for image classification. With in-depth machine learning and model training, AI Create can help users accurately identify images. In this article we will create our own image classification model and develop an Android application using this model. Let's start.
First of all, we need to meet some requirements to create our model:
You need a Huawei account to create a custom model. For more details, click here.
You will need HMS Toolkit. In the Android Studio plugins marketplace, find HMS Toolkit and add the plugin to your Android Studio.
You will need Python on your computer. Install Python 3.7.5; MindSpore does not work with other versions.
The last requirement is a dataset for the model. You can use any dataset you want; I will use a flower dataset. You can find my dataset here.
Model Creation
Create a new project in Android Studio. Then click HMS at the top of the Android Studio screen and open Coding Assistant.
1- In the Coding Assistant screen, go to AI and then click AI Create. Set the following parameters, then click Confirm.
Operation type : Select New Model
Model Deployment Location : Select Deployment Cloud.
After you click Confirm, a browser will open for you to log in to your Huawei account. After you log in, a window will open as shown below.
2- Drag or add the image classification folders to the Please select train image folder area, then set the output model file path and the training parameters. If you have extensive experience in deep learning development, you can modify the parameter settings to improve the accuracy of the image recognition model. After preparation, click Create Model to start training and generate an image classification model.
3- Training will then start. You can follow the process on the log screen:
4- After training completes successfully, you will see a screen like the one below:
On this screen you can see the training result, training parameters, and training dataset information of your model. You can provide some test data to check your model's accuracy if you want. Here are the sample test results:
5- After confirming that the trained model is available, you can choose to generate a demo project.
Generate Demo: HMS Toolkit automatically generates a demo project, which automatically integrates the trained model. You can directly run and build the demo project to generate an APK file, and run the file on the simulator or real device to check the image classification performance.
Using Model Without Generated Demo Project
If you want to use the model in your own project, you can follow these steps:
1- In your project, create an assets folder:
2- Then navigate to the output folder path you chose during model creation and find your model; its file extension will be ".ms". Copy your model into the assets folder. We also need one more file: create a txt file containing your model tags, and copy that file into the assets folder as well.
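The tag file is a plain text file, typically with one class name per line, and the names must match the classes your model was trained on. For the flower example it might look like this (the names below are only placeholders; use the classes from your own dataset):
Code:
daisy
dandelion
rose
sunflower
tulip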
3- Download the CustomModelHelper.kt file and add it to your project. You can find the repository here:
https://github.com/iebayirli/AICreateCustomModel
Don't forget to change the package name of the CustomModelHelper class. Its errors will be resolved once the ML Kit SDK is added.
4- After completing these steps, we need to add the Maven repository to the project-level build.gradle file to get the ML Kit SDKs. Your Gradle file should look like this:
Code:
// Top-level build file where you can add configuration options common to all sub-projects/modules.
buildscript {
ext.kotlin_version = "1.3.72"
repositories {
google()
jcenter()
maven { url "https://developer.huawei.com/repo/" }
}
dependencies {
classpath "com.android.tools.build:gradle:4.0.1"
classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
// NOTE: Do not place your application dependencies here; they belong
// in the individual module build.gradle files
}
}
allprojects {
repositories {
google()
jcenter()
maven { url "https://developer.huawei.com/repo/" }
}
}
task clean(type: Delete) {
delete rootProject.buildDir
}
5- Next, we add the ML Kit SDKs to our app-level build.gradle. Don't forget to add the aaptOptions block so the .ms model file is not compressed. Your app-level build.gradle file should look like this:
Code:
apply plugin: 'com.android.application'
apply plugin: 'kotlin-android'
apply plugin: 'kotlin-android-extensions'
android {
compileSdkVersion 30
buildToolsVersion "30.0.2"
defaultConfig {
applicationId "com.iebayirli.aicreatecustommodel"
minSdkVersion 26
targetSdkVersion 30
versionCode 1
versionName "1.0"
testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
}
}
kotlinOptions{
jvmTarget= "1.8"
}
aaptOptions {
noCompress "ms"
}
}
dependencies {
implementation fileTree(dir: "libs", include: ["*.jar"])
implementation "org.jetbrains.kotlin:kotlin-stdlib:$kotlin_version"
implementation 'androidx.core:core-ktx:1.3.2'
implementation 'androidx.appcompat:appcompat:1.2.0'
implementation 'androidx.constraintlayout:constraintlayout:2.0.2'
testImplementation 'junit:junit:4.12'
androidTestImplementation 'androidx.test.ext:junit:1.1.2'
androidTestImplementation 'androidx.test.espresso:espresso-core:3.3.0'
implementation 'com.huawei.hms:ml-computer-model-executor:2.0.3.301'
implementation 'mindspore:mindspore-lite:0.0.7.000'
def activity_version = "1.2.0-alpha04"
// Kotlin
implementation "androidx.activity:activity-ktx:$activity_version"
}
6- Let’s create the layout first:
Code:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<androidx.constraintlayout.widget.Guideline
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:id="@+id/guideline1"
android:orientation="horizontal"
app:layout_constraintGuide_percent=".65"/>
<ImageView
android:id="@+id/ivImage"
android:layout_width="0dp"
android:layout_height="0dp"
app:layout_constraintTop_toTopOf="parent"
app:layout_constraintDimensionRatio="3:4"
android:layout_margin="16dp"
android:scaleType="fitXY"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintBottom_toBottomOf="@+id/guideline1"/>
<TextView
android:id="@+id/tvResult"
android:layout_width="0dp"
android:layout_height="0dp"
android:layout_margin="16dp"
android:autoSizeTextType="uniform"
android:background="@android:color/white"
android:autoSizeMinTextSize="12sp"
android:autoSizeMaxTextSize="36sp"
android:autoSizeStepGranularity="2sp"
android:gravity="center_horizontal|center_vertical"
app:layout_constraintTop_toTopOf="@+id/guideline1"
app:layout_constraintBottom_toTopOf="@+id/btnRunModel"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintEnd_toEndOf="parent"/>
<Button
android:id="@+id/btnRunModel"
android:layout_width="0dp"
android:layout_height="wrap_content"
android:text="Pick Image and Run"
android:textAllCaps="false"
android:background="#ffd9b3"
android:layout_marginBottom="8dp"
app:layout_constraintWidth_percent=".75"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintEnd_toEndOf="parent"/>
</androidx.constraintlayout.widget.ConstraintLayout>
7- Then let's create const values in our activity. We are creating four values: the first is for the permission, and the others relate to our model. Your code should look like this:
Code:
companion object {
const val readExternalPermission = android.Manifest.permission.READ_EXTERNAL_STORAGE
const val modelName = "flowers"
const val modelFullName = "flowers" + ".ms"
const val labelName = "labels.txt"
}
8- Then we create the CustomModelHelper instance. We specify our model's information and where we want to load the model from:
Code:
private val customModelHelper by lazy {
CustomModelHelper(
this,
modelName,
modelFullName,
labelName,
LoadModelFrom.ASSETS_PATH
)
}
9- Next, we create two ActivityResultLauncher instances, one for the gallery permission and one for image picking, using the Activity Result API:
Code:
private val galleryPermission =
registerForActivityResult(ActivityResultContracts.RequestPermission()) {
if (!it)
finish()
}
private val getContent =
registerForActivityResult(ActivityResultContracts.GetContent()) {
val inputBitmap = MediaStore.Images.Media.getBitmap(
contentResolver,
it
)
ivImage.setImageBitmap(inputBitmap)
customModelHelper.exec(inputBitmap, onSuccess = { str ->
tvResult.text = str
})
}
In the getContent launcher, we convert the selected URI to a bitmap and call the CustomModelHelper exec() method. If the process finishes successfully, we update the TextView.
10- After creating the instances, the only thing left to do is launch the ActivityResultLauncher instances in onCreate():
Code:
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
galleryPermission.launch(readExternalPermission)
btnRunModel.setOnClickListener {
getContent.launch(
"image/*"
)
}
}
11- Let's bring all the pieces together. Here is our MainActivity:
Code:
package com.iebayirli.aicreatecustommodel
import android.os.Bundle
import android.provider.MediaStore
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity
import kotlinx.android.synthetic.main.activity_main.*
class MainActivity : AppCompatActivity() {
private val customModelHelper by lazy {
CustomModelHelper(
this,
modelName,
modelFullName,
labelName,
LoadModelFrom.ASSETS_PATH
)
}
private val galleryPermission =
registerForActivityResult(ActivityResultContracts.RequestPermission()) {
if (!it)
finish()
}
private val getContent =
registerForActivityResult(ActivityResultContracts.GetContent()) {
val inputBitmap = MediaStore.Images.Media.getBitmap(
contentResolver,
it
)
ivImage.setImageBitmap(inputBitmap)
customModelHelper.exec(inputBitmap, onSuccess = { str ->
tvResult.text = str
})
}
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
galleryPermission.launch(readExternalPermission)
btnRunModel.setOnClickListener {
getContent.launch(
"image/*"
)
}
}
companion object {
const val readExternalPermission = android.Manifest.permission.READ_EXTERNAL_STORAGE
const val modelName = "flowers"
const val modelFullName = "flowers" + ".ms"
const val labelName = "labels.txt"
}
}
Summary
In summary, we learned how to create a custom image classification model. We used HMS Toolkit for model training, and after training and creating the model, we learned how to use it in our application. If you want more information about Huawei ML Kit, you can find it here.
Here is the output:
https://github.com/iebayirli/AICreateCustomModel
Hi everyone!
Today I will briefly walk through how to implement a 3D scene to display objects and play sounds in your Android Kotlin projects.
All we need is Android Studio version 3.5 or higher and a smartphone running Android 4.4 or later. The kits we need require these specifications at minimum:
For Scene Kit alone:
JDK version: 1.7 or later
minSdkVersion: 19 or later
targetSdkVersion: 19 or later
compileSdkVersion: 19 or later
Gradle version: 3.5 or later
For Audio Kit alone:
JDK version: 1.8.211 or later
minSdkVersion: 21
targetSdkVersion: 29
compileSdkVersion: 29
Gradle version: 4.6 or later
That means Audio Kit's minimum requirements are the ones to follow, since Scene Kit's are lower, so we should keep them in mind while configuring our project. Let's begin with implementing Scene Kit.
First of all, our aim with this Scene Kit implementation is to achieve a view of a 3D object that we can interact with, like this:
We will also add multiple objects and be able to cycle through them. To use Scene Kit in your project, start by adding these configurations to your build.gradle files.
Code:
buildscript {
repositories {
...
maven { url 'https://developer.huawei.com/repo/' }
}
...
}
allprojects {
repositories {
...
maven { url 'https://developer.huawei.com/repo/' }
}
}
Code:
dependencies {
...
implementation 'com.huawei.scenekit:full-sdk:5.1.0.300'
}
Note that in this project I have used the view binding feature in order to skip boilerplate view initialization code. If you want to use view binding, add this little snippet to your app-level build.gradle.
Code:
android {
...
buildFeatures {
viewBinding true
}
...
}
After syncing the Gradle files, we are ready to use Scene Kit in our project. Keep in mind that our purpose here is solely to display 3D objects that the user can interact with, but Scene Kit provides much deeper capabilities. If you are looking for something different or want to discover all of its abilities, follow the link below. Otherwise, let's continue with a custom scene view.
Scene Kit - HMS Core - HUAWEI Developer
The simple purpose of this custom view is to automatically load our first object into the view once it finishes initializing. Of course, you can skip this part if you don't need that behavior; in that case, use the default SceneView and load the objects manually instead. You can still find the code for loading objects in this snippet.
Code:
import android.content.Context
import android.util.AttributeSet
import android.view.SurfaceHolder
import com.huawei.hms.scene.sdk.SceneView
class CustomSceneView : SceneView {
constructor(context: Context?) : super(context)
constructor(
context: Context?,
attributeSet: AttributeSet?
) : super(context, attributeSet)
override fun surfaceCreated(holder: SurfaceHolder) {
super.surfaceCreated(holder)
loadScene("car1/scene.gltf")
loadSpecularEnvTexture("car1/specularEnvTexture.dds")
loadDiffuseEnvTexture("car1/diffuseEnvTexture.dds")
}
}
Well, we cannot actually display anything before adding our object files to the project. You will need to obtain object files elsewhere, as the object models I use are not my own creation; you can find public objects with 'gltf object' queries in search engines. Once you have your object, head to your project files, create an 'assets' folder under '../src/main/', and place your object files there. In my case:
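Judging from the paths used later in this guide, the assets folder ends up with one subfolder per object:
src/main/assets/
    car1/
        scene.gltf
        specularEnvTexture.dds
        diffuseEnvTexture.dds
    car2/
        ... (same files)
    car3/
        ... (same files)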
In the surfaceCreated method, the loadScene(), loadSpecularEnvTexture(), and loadDiffuseEnvTexture() methods are used to load our object. Once the view surface is created, our first object will be loaded into it. Now head to the XML layout where your 3D objects will be displayed (in this guide, activity_main.xml) and add the view we just created. I have also added simple arrows for navigating between models.
Code:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<com.example.sceneaudiodemo.CustomSceneView
android:id="@+id/csv_main"
android:layout_width="match_parent"
android:layout_height="match_parent"/>
<ImageView
android:id="@+id/iv_rightArrow"
android:layout_width="32dp"
android:layout_height="32dp"
android:layout_margin="12dp"
android:src="@drawable/ic_arrow"
android:tint="@color/white"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintTop_toTopOf="parent" />
<ImageView
android:id="@+id/iv_leftArrow"
android:layout_width="32dp"
android:layout_height="32dp"
android:layout_margin="12dp"
android:rotation="180"
android:src="@drawable/ic_arrow"
android:tint="@color/white"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toTopOf="parent" />
</androidx.constraintlayout.widget.ConstraintLayout>
Now we are all set for our object to be displayed once the app is launched. Let's add a few other objects and navigate between them. In MainActivity:
Code:
private lateinit var binding: ActivityMainBinding
private var selectedId = 0
private val modelSceneList = arrayListOf(
"car1/scene.gltf",
"car2/scene.gltf",
"car3/scene.gltf"
)
private val modelSpecularList = arrayListOf(
"car1/specularEnvTexture.dds",
"car2/specularEnvTexture.dds",
"car3/specularEnvTexture.dds"
)
private val modelDiffList = arrayListOf(
"car1/diffuseEnvTexture.dds",
"car2/diffuseEnvTexture.dds",
"car3/diffuseEnvTexture.dds"
)
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
binding = ActivityMainBinding.inflate(layoutInflater)
val view = binding.root
setContentView(view)
binding.ivRightArrow.setOnClickListener {
if (modelSceneList.size == 0) return@setOnClickListener
selectedId = (selectedId + 1) % modelSceneList.size // To keep our id in the range of our model list
loadImage()
}
binding.ivLeftArrow.setOnClickListener {
if (modelSceneList.size == 0) return@setOnClickListener
if (selectedId == 0) selectedId = modelSceneList.size - 1 // To keep our id in the range of our model list
else selectedId -= 1
loadImage()
}
}
private fun loadImage() {
binding.csvMain.loadScene(modelSceneList[selectedId])
binding.csvMain.loadSpecularEnvTexture(modelSpecularList[selectedId])
binding.csvMain.loadDiffuseEnvTexture(modelDiffList[selectedId])
}
In onCreate(), we set up simple next/previous logic to change our objects, and we store our objects' file paths as hard-coded strings in separate lists. You may want to tinker with this to make it dynamic, but I wanted to keep it simple for the guide. We use 'selectedId' to keep track of the current object being displayed.
And that’s all for SceneView implementation for 3D object views!
Now no time to waste, let’s continue with adding Audio Kit.
Head back to the app-level build.gradle and add Audio Kit implementation.
Code:
dependencies {
...
implementation 'com.huawei.hms:audiokit-player:1.1.0.300'
...
}
As we already added the necessary repository while implementing Scene Kit, we don't need to make any changes to the project-level build.gradle. So let's go on and complete the Audio Kit setup.
I added a simple play button to activity_main.xml.
Code:
<Button
android:id="@+id/btn_playSound"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Play"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent" />
I will use this button to play the sound for the current object. Afterwards, all we need to do is make these changes in our MainActivity.
Code:
private var mHwAudioManager: HwAudioManager? = null
private var mHwAudioPlayerManager: HwAudioPlayerManager? = null
override fun onCreate(savedInstanceState: Bundle?) {
...
initPlayer(this)
binding.btnPlaySound.setOnClickListener {
mHwAudioPlayerManager?.play(selectedId) // Requires playlist to play, selectedId works for index to play.
}
...
}
private fun initPlayer(context: Context) {
val hwAudioPlayerConfig = HwAudioPlayerConfig(context)
HwAudioManagerFactory.createHwAudioManager(hwAudioPlayerConfig,
object : HwAudioConfigCallBack {
override fun onSuccess(hwAudioManager: HwAudioManager?) {
try {
mHwAudioManager = hwAudioManager
mHwAudioPlayerManager = hwAudioManager?.playerManager
mHwAudioPlayerManager?.playList(getPlaylist(), 0, 0)
} catch (ex: Exception) {
ex.printStackTrace()
}
}
override fun onError(p0: Int) {
Log.e("init:onError: ","$p0")
}
})
}
fun getPlaylist(): List<HwAudioPlayItem>? {
val playItemList: MutableList<HwAudioPlayItem> = ArrayList()
val audioPlayItem1 = HwAudioPlayItem()
val sound = Uri.parse("android.resource://yourpackagename/raw/soundfilename").toString() // soundfilename should not include file extension.
audioPlayItem1.audioId = "1000"
audioPlayItem1.singer = "Taoge"
audioPlayItem1.onlinePath =
"https://lfmusicservice.hwcloudtest.cn:18084/HMS/audio/Taoge-chengshilvren.mp3"
audioPlayItem1.setOnline(1)
audioPlayItem1.audioTitle = "chengshilvren"
playItemList.add(audioPlayItem1)
val audioPlayItem2 = HwAudioPlayItem()
audioPlayItem2.audioId = "1001"
audioPlayItem2.singer = "Taoge"
audioPlayItem2.onlinePath =
"https://lfmusicservice.hwcloudtest.cn:18084/HMS/audio/Taoge-dayu.mp3"
audioPlayItem2.setOnline(1)
audioPlayItem2.audioTitle = "dayu"
playItemList.add(audioPlayItem2)
val audioPlayItem3 = HwAudioPlayItem()
audioPlayItem3.audioId = "1002"
audioPlayItem3.singer = "Taoge"
audioPlayItem3.onlinePath =
"https://lfmusicservice.hwcloudtest.cn:18084/HMS/audio/Taoge-wangge.mp3"
audioPlayItem3.setOnline(1)
audioPlayItem3.audioTitle = "wangge"
playItemList.add(audioPlayItem3)
return playItemList
}
After making these changes, we will be able to play sounds in our project. I used online sounds here; if you want to use sounds bundled in your project, use the 'sound' variable I have given an example of, change 'audioPlayItem.setOnline(1)' to 'audioPlayItem.setOnline(0)', and change 'audioPlayItem.onlinePath' to 'audioPlayItem.filePath'. Then you should be able to play imported sound files too. By the way, that's all for Audio Kit as well! We didn't need to implement any play/pause or seek bar features, as we just want to hear the sound and be done with it.
So we have completed our guide on implementing a Scene Kit 3D scene view and Audio Kit to play sounds in a Kotlin project. If you have any questions or suggestions, feel free to contact me. Thanks for reading this far, and I hope it was useful for you!
References
Scene Kit - HMS Core - HUAWEI Developer
Audio Kit - Audio Development Component - HUAWEI Developer
Can we install in non-huawei devices will it support?
sujith.e said:
Can we install in non-huawei devices will it support?
For Scene Kit compatibility, our options are as follows:
As for the Audio Kit, only Huawei devices are supported, referenced from: https://developer.huawei.com/consum.../HMSCore-Guides/introduction-0000001050749665
Introduction
In this article, we will learn how to save hospital details to your contacts directory by scanning a barcode, using Huawei Scan Kit. On busy days filled with travel, office work, and personal tasks, users may not have time to save such details manually. This app helps you save hospital information, such as the hospital name, contact number, email address, and website, with just one scan of a barcode from your phone.
I will provide a series of articles on this Patient Tracking App; in upcoming articles, I will integrate other Huawei kits.
If you are new to this application, follow my previous articles.
https://forums.developer.huawei.com/forumPortal/en/topic/0201902220661040078
https://forums.developer.huawei.com/forumPortal/en/topic/0201908355251870119
https://forums.developer.huawei.com/forumPortal/en/topic/0202914346246890032
https://forums.developer.huawei.com/forumPortal/en/topic/0202920411340450018
https://forums.developer.huawei.com/forumPortal/en/topic/0202926518891830059
What is Scan Kit?
HUAWEI Scan Kit scans and parses all major 1D and 2D barcodes and generates QR codes, helping you quickly build barcode scanning functions into your apps.
HUAWEI Scan Kit automatically detects, magnifies, and identifies barcodes from a distance, and it can also scan very small barcodes in the same way. It supports 13 barcode formats, as follows.
1D barcodes: EAN-8, EAN-13, UPC-A, UPC-E, Codabar, Code 39, Code 93, Code 128 and ITF
2D barcodes: QR Code, Data Matrix, PDF 417 and Aztec
Requirements
1. Any operating system (MacOS, Linux and Windows).
2. Must have a Huawei phone with HMS 4.0.0.300 or later.
3. Must have a laptop or desktop with Android Studio, Jdk 1.8, SDK platform 26 and Gradle 4.6 installed.
4. Minimum API Level 19 is required.
5. Required EMUI 9.0.0 and later version devices.
How to integrate HMS Dependencies
1. First, register as a Huawei developer and complete identity verification on the Huawei Developers website; refer to Registering a Huawei ID.
2. Create a project in android studio, refer Creating an Android Studio Project.
3. Generate a SHA-256 certificate fingerprint.
4. To generate the SHA-256 certificate fingerprint, in the upper-right corner of the Android project click Gradle, choose Project Name > Tasks > android, and then click signingReport, as follows.
Note: Project Name depends on the user created name.
5. Create an App in AppGallery Connect.
6. Download the agconnect-services.json file from App information, then copy and paste it into the Android project under the app directory, as follows.
7. Enter SHA-256 certificate fingerprint and click tick icon, as follows.
Note: Steps 1 to 7 above are common to all Huawei kits.
8. Add the below Maven URL in the build.gradle (project) file under the repositories of buildscript and allprojects, and the classpath under buildscript dependencies; refer to Add Configuration.
Code:
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
9. Add the below plugin and dependencies in build.gradle(Module) file.
Code:
apply plugin: 'com.huawei.agconnect'
// Huawei AGC
implementation 'com.huawei.agconnect:agconnect-core:1.5.0.300'
// Scan Kit
implementation 'com.huawei.hms:scan:1.2.5.300'
10. Now Sync the gradle.
11. Add the required permission to the AndroidManifest.xml file.
XML:
<!-- Camera permission -->
<uses-permission android:name="android.permission.CAMERA" />
<!-- File read permission -->
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
Let us move to development
I have created a project in Android Studio with an empty activity; let's start coding.
In ScanActivity.kt, we can find the button click handling.
Kotlin:
class ScanActivity : AppCompatActivity() {
companion object{
private val CUSTOMIZED_VIEW_SCAN_CODE = 102
}
private var resultText: TextView? = null
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_scan)
resultText = findViewById<View>(R.id.result) as TextView
requestPermission()
}
fun onCustomizedViewClick(view: View?) {
resultText!!.text = ""
this.startActivityForResult(Intent(this, BarcodeScanActivity::class.java), CUSTOMIZED_VIEW_SCAN_CODE)
}
override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
super.onActivityResult(requestCode, resultCode, data)
if (resultCode != RESULT_OK || data == null) {
return
}
// Get return value of HmsScan from the value returned by the onActivityResult method by ScanUtil.RESULT as key value.
val obj: HmsScan? = data.getParcelableExtra(ScanUtil.RESULT)
try {
val json = JSONObject(obj!!.originalValue)
// Log.e("Scan","Result "+json.toString())
val name = json.getString("hospital name")
val phone = json.getString("phone")
val mail = json.getString("email")
val web = json.getString("site")
val i = Intent(Intent.ACTION_INSERT_OR_EDIT)
i.type = ContactsContract.Contacts.CONTENT_ITEM_TYPE
i.putExtra(ContactsContract.Intents.Insert.NAME, name)
i.putExtra(ContactsContract.Intents.Insert.PHONE, phone)
i.putExtra(ContactsContract.Intents.Insert.EMAIL, mail)
i.putExtra(ContactsContract.Intents.Insert.COMPANY, web)
startActivity(i)
} catch (e: JSONException) {
e.printStackTrace()
Toast.makeText(this, "JSON exception", Toast.LENGTH_SHORT).show()
} catch (e: Exception) {
e.printStackTrace()
Toast.makeText(this, "Exception", Toast.LENGTH_SHORT).show()
}
}
private fun requestPermission() {
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
requestPermissions(arrayOf(android.Manifest.permission.CAMERA, READ_EXTERNAL_STORAGE),1001)
}
}
@SuppressLint("MissingSuperCall")
override fun onRequestPermissionsResult(requestCode: Int, permissions: Array<String?>, grantResults: IntArray) {
if (permissions == null || grantResults == null || grantResults.size < 2 || grantResults[0] != PackageManager.PERMISSION_GRANTED || grantResults[1] != PackageManager.PERMISSION_GRANTED) {
requestPermission()
}
}
}
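The barcode scanned here is expected to carry a JSON payload whose keys match the ones parsed in onActivityResult(). For testing, you could generate a QR code from content like the following (all values are placeholders); Scan Kit's QR code generation capability, mentioned above, can turn such a string into a test barcode.
Kotlin:
// Example payload (placeholder values) matching the keys parsed in onActivityResult().
val sampleBarcodeContent = """
    {
      "hospital name": "City Care Hospital",
      "phone": "+911234567890",
      "email": "contact@citycare.example",
      "site": "www.citycare.example"
    }
""".trimIndent()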
In BarcodeScanActivity.kt, we can find the code to scan the barcode.
Kotlin:
class BarcodeScanActivity : AppCompatActivity() {
companion object {
private var remoteView: RemoteView? = null
//val SCAN_RESULT = "scanResult"
var mScreenWidth = 0
var mScreenHeight = 0
//scan view finder width and height is 300dp
val SCAN_FRAME_SIZE = 300
}
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_barcode_scan)
// 1. get screen density to calculate viewfinder's rect
val dm = resources.displayMetrics
val density = dm.density
// 2. get screen size
mScreenWidth = resources.displayMetrics.widthPixels
mScreenHeight = resources.displayMetrics.heightPixels
val scanFrameSize = (SCAN_FRAME_SIZE * density).toInt()
// 3. Calculate viewfinder's rect, it is in the middle of the layout.
// set scanning area(Optional, rect can be null. If not configure, default is in the center of layout).
val rect = Rect()
rect.left = mScreenWidth / 2 - scanFrameSize / 2
rect.right = mScreenWidth / 2 + scanFrameSize / 2
rect.top = mScreenHeight / 2 - scanFrameSize / 2
rect.bottom = mScreenHeight / 2 + scanFrameSize / 2
// Initialize RemoteView instance and set calling back for scanning result.
remoteView = RemoteView.Builder().setContext(this).setBoundingBox(rect).setFormat(HmsScan.ALL_SCAN_TYPE).build()
remoteView?.onCreate(savedInstanceState)
remoteView?.setOnResultCallback(OnResultCallback { result -> //judge the result is effective
if (result != null && result.size > 0 && result[0] != null && !TextUtils.isEmpty(result[0].getOriginalValue())) {
val intent = Intent()
intent.putExtra(ScanUtil.RESULT, result[0])
setResult(RESULT_OK, intent)
this.finish()
}else{
Log.e("Barcode","Barcode: No barcode recognized ")
}
})
// Add the defined RemoteView to page layout.
val params = FrameLayout.LayoutParams(LinearLayout.LayoutParams.MATCH_PARENT, LinearLayout.LayoutParams.MATCH_PARENT)
val frameLayout = findViewById<FrameLayout>(R.id.rim1)
frameLayout.addView(remoteView, params)
}
// Manage remoteView lifecycle
override fun onStart() {
super.onStart()
remoteView?.onStart()
}
override fun onResume() {
super.onResume()
remoteView?.onResume()
}
override fun onPause() {
super.onPause()
remoteView?.onPause()
}
override fun onDestroy() {
super.onDestroy()
remoteView?.onDestroy()
}
override fun onStop() {
super.onStop()
remoteView?.onStop()
}
}
In activity_scan.xml, we create the UI screen.
XML:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:orientation="vertical"
android:gravity="center"
tools:context=".scan.ScanActivity">
<Button
android:id="@+id/btn_click"
android:layout_width="wrap_content"
android:layout_height="50dp"
android:textAllCaps="false"
android:textSize="20sp"
android:layout_gravity="center"
android:text="Click to Scan"
android:onClick="onCustomizedViewClick"
tools:ignore="OnClick" />
<TextView
android:id="@+id/result"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:textSize="18sp"
android:layout_marginTop="80dp"
android:textColor="#C0F81E"/>
</LinearLayout>
In activity_barcode_scan.xml, we create the frame layout.
XML:
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".scan.BarcodeScanActivity">
<!-- Customized layout for the camera preview used for scanning. -->
<FrameLayout
android:id="@+id/rim1"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:background="#C0C0C0" />
<!-- Customized scanning mask. -->
<ImageView
android:layout_width="match_parent"
android:layout_height="match_parent"
android:layout_centerInParent="true"
android:layout_centerHorizontal="true"
android:alpha="0.1"
android:background="#FF000000"/>
<!-- Scanning viewfinder -->
<ImageView
android:id="@+id/scan_view_finder"
android:layout_width="300dp"
android:layout_height="300dp"
android:layout_centerInParent="true"
android:layout_centerHorizontal="true"
android:background="#1f00BCD4"
tools:ignore="MissingConstraints" />
</RelativeLayout>
Demo
The demo is available as an attachment in the original post.
Tips and Tricks
1. Make sure you are already registered as a Huawei developer.
2. Set minSdkVersion to 19 or later; otherwise, you will get an AndroidManifest merge issue.
3. Make sure you have added the agconnect-services.json file to the app folder.
4. Make sure you have added the SHA-256 fingerprint without fail.
5. Make sure all the dependencies are added properly.
Conclusion
In this article, we have learned how to save hospital details by scanning a barcode and storing them in the contacts directory using Huawei Scan Kit. On busy days filled with travel, office work, and personal tasks, users rarely have time to type in such details manually, so this app lets them save hospital information such as the hospital name, contact number, email address, and website with a single scan of a barcode on their phone.
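As a rough illustration of the save-to-contacts step described above, the hypothetical helper below hands the scanned details to the system Contacts app through a standard insert Intent. The assumption that the barcode encodes the fields as semicolon-separated text is purely for illustration; a real barcode might use vCard data or another format, and the original sample's parsing logic is not shown in this article.
Code:
import android.app.Activity
import android.content.Intent
import android.provider.ContactsContract
import com.huawei.hms.ml.scan.HmsScan

// Hypothetical helper: passes scanned hospital details to the system Contacts app.
fun saveHospitalToContacts(activity: Activity, scan: HmsScan) {
    val rawText = scan.getOriginalValue() ?: return
    // Assumption: the barcode encodes "name;phone;email;website" separated by semicolons.
    val parts = rawText.split(";")
    val intent = Intent(ContactsContract.Intents.Insert.ACTION).apply {
        type = ContactsContract.RawContacts.CONTENT_TYPE
        putExtra(ContactsContract.Intents.Insert.NAME, parts.getOrNull(0))
        putExtra(ContactsContract.Intents.Insert.PHONE, parts.getOrNull(1))
        putExtra(ContactsContract.Intents.Insert.EMAIL, parts.getOrNull(2))
        putExtra(ContactsContract.Intents.Insert.NOTES, parts.getOrNull(3))
    }
    activity.startActivity(intent)
}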
Reference
Scan Kit - Customized View
Scan Kit - Training Video
Introduction
In this article, we will learn how to integrate the Rewarded Ads feature of Huawei Ads Kit into an Android app. Rewarded ads are full-screen video ads that users can choose to watch in exchange for in-app rewards.
Ads Kit
Huawei Ads provides developers with a wide range of capabilities for delivering high-quality ad content to users. It is an effective way to reach a target audience and measure ad performance, and it is particularly useful when you publish a free app and want to monetize it.
HMS Ads Kit supports seven ad formats; in this application we will implement Rewarded Ads.
Requirements
1. Any operating system (MacOS, Linux and Windows).
2. Must have a Huawei phone with HMS 4.0.0.300 or later.
3. Must have a laptop or desktop with Android Studio, JDK 1.8, SDK Platform 26, and Gradle 4.6 or later installed.
4. Minimum API Level 24 is required.
5. A device running EMUI 9.0.0 or later is required.
How to integrate HMS Dependencies
1. First, register as a Huawei developer and complete identity verification on the Huawei Developers website. Refer to Register a Huawei ID.
2. Create a project in Android Studio. Refer to Creating an Android Studio Project.
3. Generate a SHA-256 certificate fingerprint.
4. To generate the SHA-256 certificate fingerprint, click Gradle in the upper-right corner of the Android Studio window, choose Project Name > Tasks > android, and then click signingReport.
Note: Project Name depends on the name you chose when creating the project.
5. Create an app in AppGallery Connect.
6. Download the agconnect-services.json file from App information, and copy it into the app directory of your Android project.
7. Enter the SHA-256 certificate fingerprint and click Save.
Note: Steps 1 to 7 above are common to all Huawei kits.
8. Add the following Maven URL and AGC classpath in the project-level build.gradle file, under the repositories of buildscript and allprojects, and the dependencies of buildscript. Refer to Add Configuration.
Code:
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.6.0.300'
9. Add the following plugin and dependencies in the app-level build.gradle file.
Code:
apply plugin: 'com.huawei.agconnect'
// Huawei AGC
implementation 'com.huawei.agconnect:agconnect-core:1.6.0.300'
// Huawei Ads Kit
implementation 'com.huawei.hms:ads-lite:13.4.51.300'
10. Now sync the Gradle files.
11. Add the required permission to the AndroidManifest.xml file.
XML:
<!-- Ads Kit -->
<uses-permission android:name="android.permission.INTERNET" />
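Before requesting any ads, the HUAWEI Ads SDK should also be initialized once, typically at app startup. A minimal sketch is shown below; the MyApplication class name and registering it in the manifest via android:name are assumptions added for illustration.
Code:
import android.app.Application
import com.huawei.hms.ads.HwAds

// Hypothetical Application class: initializes the HUAWEI Ads SDK once when the app starts.
// Remember to register it in AndroidManifest.xml with android:name=".MyApplication".
class MyApplication : Application() {
    override fun onCreate() {
        super.onCreate()
        HwAds.init(this)
    }
}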
Let us move to development.
I have created a project in Android Studio with an empty activity; let us start coding.
In MainActivity.kt, we can find the business logic for the ads.
Code:
class MainActivity : AppCompatActivity() {
companion object {
private const val PLUS_SCORE = 1
private const val MINUS_SCORE = 5
private const val RANGE = 2
}
private var rewardedTitle: TextView? = null
private var scoreView: TextView? = null
private var reStartButton: Button? = null
private var watchAdButton: Button? = null
private var rewardedAd: RewardAd? = null
private var score = 1
private val defaultScore = 10
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
title = getString(R.string.reward_ad)
rewardedTitle = findViewById(R.id.text_reward)
rewardedTitle!!.setText(R.string.reward_ad_title)
// Load a rewarded ad.
loadRewardAd()
// Load a score view.
loadScoreView()
// Load the button for watching a rewarded ad.
loadWatchButton()
// Load the button for starting a game.
loadPlayButton()
}
// Load a rewarded ad.
private fun loadRewardAd() {
if (rewardedAd == null) {
rewardedAd = RewardAd(this@MainActivity, getString(R.string.ad_id_reward))
}
val rewardAdLoadListener: RewardAdLoadListener = object : RewardAdLoadListener() {
override fun onRewardAdFailedToLoad(errorCode: Int) {
showToast("onRewardAdFailedToLoad errorCode is :$errorCode");
}
override fun onRewardedLoaded() {
showToast("onRewardedLoaded")
}
}
rewardedAd!!.loadAd(AdParam.Builder().build(), rewardAdLoadListener)
}
// Display a rewarded ad.
private fun rewardAdShow() {
if (rewardedAd!!.isLoaded) {
rewardedAd!!.show(this@MainActivity, object : RewardAdStatusListener() {
override fun onRewardAdClosed() {
showToast("onRewardAdClosed")
loadRewardAd()
}
override fun onRewardAdFailedToShow(errorCode: Int) {
showToast("onRewardAdFailedToShow errorCode is :$errorCode")
}
override fun onRewardAdOpened() {
showToast("onRewardAdOpened")
}
override fun onRewarded(reward: Reward) {
// You are advised to grant a reward immediately and at the same time, check whether the reward
// takes effect on the server. If no reward information is configured, grant a reward based on the
// actual scenario.
val addScore = if (reward.amount == 0) defaultScore else reward.amount
showToast("Watch video show finished , add $addScore scores")
score += addScore
setScore(score)
loadRewardAd()
}
})
}
}
// Set a Score
private fun setScore(score: Int) {
scoreView!!.text = "Score:$score"
}
// Load the button for watching a rewarded ad
private fun loadWatchButton() {
watchAdButton = findViewById(R.id.show_video_button)
watchAdButton!!.setOnClickListener(View.OnClickListener { rewardAdShow() })
}
// Load the button for starting a game
private fun loadPlayButton() {
reStartButton = findViewById(R.id.play_button)
reStartButton!!.setOnClickListener(View.OnClickListener { play() })
}
private fun loadScoreView() {
scoreView = findViewById(R.id.score_count_text)
scoreView!!.text = "Score:$score"
}
// Used to play a game
private fun play() {
// If the score is 0, a message is displayed, asking users to watch the ad in exchange for scores.
if (score == 0) {
Toast.makeText(this@MainActivity, "Watch video ad to add score", Toast.LENGTH_SHORT).show()
return
}
// The value 0 or 1 is returned randomly. If the value is 1, the score increases by 1. If the value is 0, the
// score decreases by 5. If the score is a negative number, the score is set to 0.
val random = Random().nextInt(RANGE)
if (random == 1) {
score += PLUS_SCORE
Toast.makeText(this@MainActivity, "You win!", Toast.LENGTH_SHORT).show()
} else {
score -= MINUS_SCORE
score = if (score < 0) 0 else score
Toast.makeText(this@MainActivity, "You lose!", Toast.LENGTH_SHORT).show()
}
setScore(score)
}
private fun showToast(text: String) {
runOnUiThread {
Toast.makeText(this@MainActivity, text, Toast.LENGTH_SHORT).show()
}
}
}
In activity_main.xml, we can create the UI screen.
XML:
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<TextView
android:id="@+id/text_reward"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_marginTop="16dp"
android:textAlignment="center"
android:textSize="20sp"
android:text="This is rewarded ads sample"/>
<Button
android:id="@+id/play_button"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_below="@+id/text_reward"
android:layout_centerHorizontal="true"
android:layout_marginTop="20dp"
android:text="Play" />
<Button
android:id="@+id/show_video_button"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_below="@+id/play_button"
android:layout_centerHorizontal="true"
android:layout_marginTop="20dp"
android:text="Watch Video" />
<TextView
android:id="@+id/score_count_text"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_below="@+id/show_video_button"
android:layout_centerHorizontal="true"
android:layout_marginTop="30dp"
android:textAppearance="?android:attr/textAppearanceLarge" />
</RelativeLayout>
Demo
Tips and Tricks
1. Make sure you are already registered as a Huawei developer.
2. Set minSdkVersion to 24 or later; otherwise, you will get an AndroidManifest merge issue.
3. Make sure you have added the agconnect-services.json file to the app folder.
4. Make sure you have added the SHA-256 fingerprint without fail.
5. Make sure all the dependencies are added properly.
Conclusion
In this article, we have learned how to integrate the Rewarded Ads feature of Huawei Ads Kit into an Android app. In upcoming articles, I will integrate other Huawei kits into this app.
I hope you have found this article helpful. If so, please like and comment.
Reference
Ads Kit - Rewarded Ads
Ads Kit - Training Video