Introduction
Flutter is a mobile application development kit for crafting high-quality native experiences on iOS and Android in record time. Although Flutter alone is enough to build great mobile apps, features such as map integration are often needed to enrich the user experience.
Huawei Map Kit
Huawei Map Kit is a development kit and map service developed by Huawei to easily integrate map-based functions into your apps. The kit currently covers map data of more than 200 countries and regions, supports 40+ languages, provides UI elements such as markers, shapes, and layers to customize your map, and also enables users to interact with the map in your app through gestures and buttons in different scenarios.
With the recently released Huawei Map Kit Flutter Plugin, Huawei developers can now use these features and integrate map-based functions into their Flutter projects. Hence, in this article, to explore the kit and Huawei services, we will build a mobile app featuring Huawei Map using the plugin and the Flutter SDK.
HMS Core Github: https://github.com/HMS-Core/hms-flutter-plugin/tree/master/flutter-hms-map
Required Configurations
Before we get started, note that you need a Huawei Developer Account to use Huawei Map Kit and the other Huawei Mobile Services. For more detailed information on developer accounts and how to apply for one, please refer to this link.
Creating an App
· Sign in to AppGallery Connect using your Huawei ID and create a new project to work with by clicking My projects > Add Project.
· Click the Add app button to add a new application to your project by filling in the required fields such as name, category, and default language.
· The Map Kit API of your app is enabled by default, but just to be sure, you can check it on the Manage APIs tab of your project in AppGallery Connect. You can also refer to the Enabling Services article if you need any help.
· Open Android Studio and create a Flutter application. The package name of your Flutter application should be the same as the package name of the app you created on AppGallery Connect.
· Android requires a Signing Certificate to verify the authenticity of apps, so you need to generate one for your app. If you don't know how to generate a Signing Certificate, please click here for the related article. Copy the generated keystore file to the android/app directory of your project.
A side note: a Flutter project contains platform-specific folders, such as ios and android, yet Android Studio treats them as plain Flutter project folders and may report errors on their files. For this reason, before changing anything related to the Android platform, right-click the android folder in your project directory and select Flutter > Open Android module in Android Studio. You can then easily modify the files, and also generate Signing Certificates, from the Android Studio window that opens.
· After generating your Signing Certificate (keystore), you should extract its SHA-256 fingerprint using keytool, which is provided by the JDK, and add it to AppGallery Connect by navigating to the App Information section of your app. You may also refer to the Generating Fingerprint from a Keystore and Adding a Fingerprint Certificate to AppGallery Connect articles for further help.
Integrating HMS and Map Plugin to Your Flutter Project
You also need to configure your Flutter application so that it can communicate with Huawei services and use Map Kit.
· Add the Huawei Map Kit Flutter Plugin as a dependency to your project's pubspec.yaml file and run the flutter pub get command to integrate the Map Kit plugin into your project.
Code:
dependencies:
  flutter:
    sdk: flutter
  huawei_map: ^4.0.4+300
· Download the agconnect-services.json file from the App Information section of AppGallery Connect and copy it to the android/app directory of your project.
Your project's android directory should look like this after adding both the agconnect-services.json and the keystore file.
· Add the Maven repository address and AppGallery Connect plugin to the project level build.gradle (android/build.gradle) file.
Code:
buildscript {
    repositories {
        //other repositories
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        //other dependencies
        classpath 'com.huawei.agconnect:agcp:1.2.1.301'
    }
}

allprojects {
    repositories {
        //other repositories
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
· Open your app level build.gradle (android/app/build.gradle) file and add the AppGallery Connect plugin.
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect' //Added Line
apply plugin: 'kotlin-android'
· In the same file (android/app/build.gradle), add the signing configurations, and change the minSdkVersion of your project as shown below.
Code:
android {
    /*
     * Other configurations
     */
    defaultConfig {
        applicationId "<package_name>" //Your unique package name
        minSdkVersion 19 //Change minSdkVersion to 19
        targetSdkVersion 28
        versionCode flutterVersionCode.toInteger()
        versionName flutterVersionName
    }
    signingConfigs {
        config {
            storeFile file('<keystore_file>')
            storePassword '<keystore_password>'
            keyAlias '<key_alias>'
            keyPassword '<key_password>'
        }
    }
    buildTypes {
        debug {
            signingConfig signingConfigs.config
        }
        release {
            signingConfig signingConfigs.config
        }
    }
}
· Finally, to call the capabilities of Huawei Map Kit, declare the following permissions for your app in your AndroidManifest.xml file.
Code:
<uses-permission android:name="android.permission.INTERNET"/>
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>
— To obtain the current device location, the following permissions also need to be declared in your AndroidManifest.xml file (on Android 6.0 and later, you need to request these permissions dynamically at runtime). Since we won't use the current location in our app, this step is optional.
Code:
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION"/>
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>
Using Huawei Map Kit Flutter Plugin
Creating a Map
Now that we are ready to use Huawei’s Map features, let’s implement a simple map.
Huawei Map Kit Flutter Plugin provides a single widget, called HuaweiMap, for developers to easily create and manage map fragments. By using this widget, developers can enable or disable attributes or gestures of a map, set initial markers, circles, or other shapes, decide the map type, and also set an initial camera position to focus on an area when the map is ready.
Let's choose a random location in İstanbul as the initial camera position. While declaring the initial camera position, the zoom level, which indicates the magnification, and the target, which indicates the latitude and longitude of the location, are required. You can find my target coordinates and zoom level below; we will use them while creating our map.
Code:
static const LatLng _center = const LatLng(41.027470, 28.999339);
static const double _zoom = 12;
Now that we have an initial position, we can implement the map itself. We will first create a simple Scaffold with an AppBar, then create a HuaweiMap widget as the Scaffold's body. The HuaweiMap widget, as mentioned before, has different attributes, which you can see below. The following code creates a HuaweiMap that is full-screen, scrollable, tiltable, and also shows buildings and traffic.
Code:
class HomeScreen extends StatefulWidget {
@override
_HomeScreenState createState() => _HomeScreenState();
}
class _HomeScreenState extends State<HomeScreen> {
static const LatLng _center = const LatLng(41.027470, 28.999339);
static const double _zoom = 12;
@override
Widget build(BuildContext context) {
return new Scaffold(
appBar: AppBar(
title: Text("Map Demo"),
centerTitle: true,
backgroundColor: Colors.red,
),
body: HuaweiMap(
initialCameraPosition: CameraPosition(
target: _center,
zoom: _zoom,
),
mapType: MapType.normal,
tiltGesturesEnabled: true,
buildingsEnabled: true,
compassEnabled: true,
zoomControlsEnabled: false,
rotateGesturesEnabled: true,
myLocationButtonEnabled: false,
myLocationEnabled: false,
trafficEnabled: true,
),
);
}
}
To keep the map 'clean', I disabled the myLocationEnabled, myLocationButtonEnabled, and zoomControlsEnabled attributes of the map, but do not forget to explore these attributes yourself, since they can greatly boost the user experience of your app.
Huawei Map
Resizing the Map
A full-screen map is not always useful. Since HuaweiMap is a standalone widget, we can resize the map by wrapping it in a Container or ConstrainedBox widget.
For this project, I will create a layout in the Scaffold by using Expanded, Column, and Container widgets. The following code shows a HuaweiMap widget which fills only one-third of the screen.
Code:
class HomeScreen extends StatefulWidget {
@override
_HomeScreenState createState() => _HomeScreenState();
}
class _HomeScreenState extends State<HomeScreen> {
static const LatLng _center = const LatLng(41.027470, 28.999339);
static const double _zoom = 12;
@override
Widget build(BuildContext context) {
return new Scaffold(
appBar: AppBar(
title: Text("Locations"),
centerTitle: true,
backgroundColor: Colors.red,
),
body: Column(
children: [
Expanded(
child: Padding(
padding: EdgeInsets.all(8),
child: Container(
decoration: BoxDecoration(
border: Border.all(color: Colors.green, width: 2)),
child: HuaweiMap(
initialCameraPosition: CameraPosition(
target: _center,
zoom: _zoom,
),
mapType: MapType.normal,
tiltGesturesEnabled: true,
buildingsEnabled: true,
compassEnabled: true,
zoomControlsEnabled: false,
rotateGesturesEnabled: true,
myLocationButtonEnabled: false,
myLocationEnabled: false,
trafficEnabled: true,
),
),
),
),
Expanded(flex: 2, child: Container()),
],
),
);
}
}
Adding Interactivity by Using Markers and CameraUpdate
Let's assume that we are building an app that shows different restaurant locations as markers on our HuaweiMap object. To do this, we will set some initial markers on our map using the HuaweiMap widget's markers field.
The markers field of the widget takes a Set of Marker objects, hence we should first create such a set and then use it as the initial markers of our HuaweiMap widget.
As you know, two-thirds of our screen is empty. To fill the space, we will create some Card widgets that hold each location's name, motto, and address as Strings. To reduce redundant code, I created a separate widget, called LocationCard, which returns a styled custom Card widget. To keep this article focused, I will not cover how to build the custom card widget step by step; you can find its code at the project's GitHub link, and a minimal sketch is shown right after the listing below.
Code:
class HomeScreen extends StatefulWidget {
@override
_HomeScreenState createState() => _HomeScreenState();
}
class _HomeScreenState extends State<HomeScreen> {
static const LatLng _center = const LatLng(41.027470, 28.999339);
static const double _zoom = 12;
//Marker locations
static const LatLng _location1 = const LatLng(41.0329109, 28.9840904);
static const LatLng _location2 = const LatLng(41.0155957, 28.9827176);
static const LatLng _location3 = const LatLng(41.0217315, 29.0111898);
//Set of markers
Set<Marker> _markers = {
Marker(markerId: MarkerId("Location1"), position: _location1),
Marker(markerId: MarkerId("Location2"), position: _location2),
Marker(markerId: MarkerId("Location3"), position: _location3),
};
@override
Widget build(BuildContext context) {
return new Scaffold(
appBar: AppBar(
title: Text("Locations"),
centerTitle: true,
backgroundColor: Colors.red,
),
body: Column(
children: [
Expanded(
child: Padding(
padding: EdgeInsets.all(8),
child: Container(
decoration: BoxDecoration(
border: Border.all(color: Colors.green, width: 2)),
child: HuaweiMap(
initialCameraPosition: CameraPosition(
target: _center,
zoom: _zoom,
),
mapType: MapType.normal,
tiltGesturesEnabled: true,
buildingsEnabled: true,
compassEnabled: true,
zoomControlsEnabled: false,
rotateGesturesEnabled: true,
myLocationButtonEnabled: false,
myLocationEnabled: false,
trafficEnabled: true,
markers: _markers, //Using the set
),
),
),
),
//Styled Card widgets
Expanded(flex: 2, child: Padding(
padding: EdgeInsets.all(8),
child: SingleChildScrollView(
child: Column(
children: [
LocationCard(
title: "Location 1",
motto: "A Fine Dining Restaurant",
address:
"Avrupa Yakası, Cihangir, 34433 Beyoğlu/İstanbul Türkiye",
),
LocationCard(
title: "Location 2",
motto: "A Restaurant with an Extraordinary View",
address:
"Avrupa Yakası, Hoca Paşa, 34110 Fatih/İstanbul Türkiye",
),
LocationCard(
title: "Location 3",
motto: "A Casual Dining Restaurant",
address:
"Anadolu Yakası, Aziz Mahmut Hüdayi, 34672 Üsküdar/İstanbul Türkiye",
)
],
),
),),
)],
),
);
}
}
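As mentioned above, the LocationCard widget used in this listing is just a styled custom Card whose full code lives in the project's GitHub repository. For reference, here is a minimal, unstyled sketch with the parameter names inferred from the usage above:
Code:
//Minimal sketch of the LocationCard helper; the styled version is in the GitHub project.
class LocationCard extends StatelessWidget {
  final String title;
  final String motto;
  final String address;

  const LocationCard({Key key, this.title, this.motto, this.address})
      : super(key: key);

  @override
  Widget build(BuildContext context) {
    return Card(
      child: ListTile(
        title: Text(title),
        subtitle: Text("$motto\n$address"),
      ),
    );
  }
}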
Now we have some custom cards beneath the map object, which also has some initial markers. We will use these custom cards as buttons to zoom in on the desired marker with a smooth camera animation. This way, users can simply tap a card to see the zoomed-in location on the Huawei map and explore the surroundings without leaving the page.
Resized Map with Location Cards
Before turning the cards into buttons, we should first declare a HuaweiMapController to control the HuaweiMap, then use the HuaweiMap widget's onMapCreated field to pair the map with its controller. Below, I declared a controller and, with the help of a simple function, assigned it to our HuaweiMap object.
Code:
HuaweiMapController mapController;
void _onMapCreated(HuaweiMapController controller) {
mapController = controller;
}
/*
This section only shows the added line. Remaining code is not changed.
*/
child: HuaweiMap(
initialCameraPosition: CameraPosition(
target: _center,
zoom: _zoom,
),
onMapCreated: _onMapCreated, // Added Line
mapType: MapType.normal,
tiltGesturesEnabled: true,
/*
Rest of the code
*/
We now have a controller for programmatic camera moves, so let's use it. I wrapped the LocationCards with InkWell widgets to provide onTap functionality. The plugin's CameraUpdate class has several useful methods that enable us to zoom in, zoom out, or change the camera position. We will use the newLatLngZoom method to zoom in on the given location and then animate the camera move with the controller's animateCamera method. You can find the wrapped LocationCard with the CameraUpdate and controller below.
Code:
InkWell(
onTap: () {
CameraUpdate cameraUpdate =
CameraUpdate.newLatLngZoom(
_location1, _zoomMarker);
mapController.animateCamera(cameraUpdate);
},
child: LocationCard(
title: "Location 1",
motto: "A Fine Dining Restaurant",
address:
"Avrupa Yakası, Cihangir, 34433 Beyoğlu/İstanbul Türkiye",
)),
The _zoomMarker variable is a constant double with a value of 18, and _location1 is the variable we set while creating our markers.
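For reference, the zoom constant can be declared next to the other constants in _HomeScreenState:
Code:
static const double _zoomMarker = 18;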
After implementing these steps, tap a card and you will see a smooth camera animation to the new position and zoom level in your HuaweiMap widget. Voila!
As I mentioned before, you can also add circles, polylines, or polygons to the map, similar to the markers. Furthermore, you can add on-click actions both to the map and to the shapes or markers you set. Do not forget to explore the other functionalities that Huawei Map Kit Flutter Plugin offers.
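For example, a circle overlay could be defined much like the marker set and passed to the map through the widget's circles field. This is only a rough sketch assuming the plugin's Circle and CircleId classes follow the same pattern as Marker and MarkerId; please verify the exact parameters against the plugin documentation.
Code:
//Assumed API: a circle overlay around the first location (radius in meters).
//Pass it to the HuaweiMap widget with: circles: _circles
Set<Circle> _circles = {
  Circle(
    circleId: CircleId("Circle1"),
    center: _location1,
    radius: 300,
  ),
};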
I am leaving the project's GitHub link below in case you want to check or try this example yourself. You can also find the LocationCard code and the other minor adjustments I made at the link.
https://github.com/SerdarCanDev/FlutterHuaweiMapTutorial
Conclusion
Since Huawei created its own services, the demand for support for cross-platform frameworks such as Flutter, React Native, Cordova, and Xamarin has increased. To meet this demand, Huawei continuously releases plugins and updates to support its developers. We have learned how to use Huawei Map Kit in our Flutter projects, yet there are several more official plugins for Huawei services to explore.
For further reading, I have provided some links below, including an article that showcases another service. You may also ask any questions related to this article in the comments section.
https://developer.huawei.com/consumer/en/hms/huawei-MapKit
https://pub.dev/publishers/developer.huawei.com/packages
Sending Push Notifications on Flutter with Huawei Push Kit Plugin:
https://medium.com/huawei-developers/sending-push-notifications-on-flutter-with-huawei-push-kit-plugin-534787862b4d
Related
In this article I will talk about HUAWEI Scene Kit. HUAWEI Scene Kit is a lightweight rendering engine that features high performance and low consumption. It provides advanced descriptive APIs for us to edit, operate, and render 3D materials. Scene Kit adopts physically based rendering (PBR) pipelines to achieve realistic rendering effects. With this Kit, we only need to call some APIs to easily load and display complicated 3D objects on Android phones.
It was previously released with just the SceneView feature, but with Scene Kit SDK version 5.0.2.300, Huawei announced two new features: FaceView and ARView. With these new features, Scene Kit makes the integration of plane detection and face tracking much easier.
At this stage, the following question may come to mind: "Since there are ML Kit and AR Engine, why would we use Scene Kit?" Let's answer this question with an example.
Differences Between Scene Kit and AR Engine or ML Kit:
For example, we have a shopping application. Let's assume that, in the glasses purchasing section, the user can try on glasses using AR to see how they look in real life. Here, we do not need to track facial gestures using the facial expression tracking feature provided by AR Engine. All we have to do is render a 3D object over the user's eyes; face tracking is enough for this. If we used AR Engine, we would have to deal with graphics libraries like OpenGL. But by using Scene Kit's FaceView, we can easily add this feature to our application without dealing with any graphics library, because it is a basic feature and Scene Kit provides it for us.
So, what distinguishes AR Engine and ML Kit from Scene Kit is that they provide more detailed controls, whereas Scene Kit only provides the basic features (I'll talk about these features later). For this reason, its integration is much simpler.
Let's examine what these features provide us.
SceneView:
With SceneView, we are able to load and render 3D materials in common scenes.
It allows us to:
Load and render 3D materials.
Load the cubemap texture of a skybox to make the scene look larger and more impressive than it actually is.
Load lighting maps to mimic real-world lighting conditions through PBR pipelines.
Swipe on the screen to view rendered materials from different angles.
ARView:
ARView uses the plane detection capability of AR Engine, together with the graphics rendering capability of Scene Kit, to provide us with the capability of loading and rendering 3D materials in common AR scenes.
With ARView, we can:
Load and render 3D materials in AR scenes.
Set whether to display the lattice plane (consisting of white lattice points) to help select a plane in a real-world view.
Tap an object placed onto the lattice plane to select it. Once selected, the object will change to red. Then we can move, resize, or rotate it.
FaceView:
FaceView can use the face detection capability provided by ML Kit or AR Engine to dynamically detect faces. Along with the graphics rendering capability of Scene Kit, FaceView provides us with superb AR scene rendering dedicated for faces.
With FaceView we can:
Dynamically detect faces and apply 3D materials to the detected faces.
As I mentioned above, ARView uses the plane detection capability of AR Engine, and FaceView uses the face detection capability provided by either ML Kit or AR Engine. When using the FaceView feature, we can choose the SDK we want by specifying it in the layout.
Here, we should consider the devices to be supported when choosing the SDK. You can see the supported devices in the table on the official documentation page; for more detailed information, you can visit this page. (In addition to the table on that page, Scene Kit's SceneView feature also supports P40 Lite devices.)
Also, I think it is useful to mention some important working principles of Scene Kit:
Scene Kit
Provides a Full-SDK, which we can integrate into our app to access 3D graphics rendering capabilities, even though our app runs on phones without HMS Core.
Uses the Entity Component System (ECS) to reduce coupling and implement multi-threaded parallel rendering.
Adopts real-time PBR pipelines to make rendered images look realistic.
Supports the general-purpose GPU Turbo to significantly reduce power consumption.
Demo App
Let's learn in more detail by integrating these three features of Scene Kit into a demo application that we will develop in this section.
To configure the Maven repository address for the HMS Core SDK, add the code below to the project-level build.gradle file under both:
〉 project level build.gradle > buildscript > repositories
〉 project level build.gradle > allprojects > repositories
Code:
maven { url 'https://developer.huawei.com/repo/' }
After that, go to module level build.gradle > dependencies and add the build dependency for the Full-SDK of Scene Kit in the dependencies block.
Code:
implementation 'com.huawei.scenekit:full-sdk:5.0.2.302'
Note: When adding build dependencies, replace the version here “full-sdk: 5.0.2.302” with the latest Full-SDK version. You can find all the SDK and Full-SDK version numbers in Version Change History.
Then click Sync Now to sync the project.
After the build successfully completes, add the following line to the AndroidManifest.xml file for the camera permission.
Code:
<uses-permission android:name="android.permission.CAMERA" />
Now our project is ready for development, and we can use all the functionalities of Scene Kit.
Let's say this demo app is a shopping app and I want to use Scene Kit features in it. We'll use Scene Kit's ARView feature in the "office" section of our application to test how a plant and an aquarium look on our desk.
And in the sunglasses section, we’ll use the FaceView feature to test how sunglasses look on our face.
Finally, we will use the SceneView feature in the shoes section of our application. We’ll test a shoe to see how it looks.
We will need materials to test these features, so let's get them first. I will use 3D models that you can download from the links below; you can use the same or different materials if you want.
Capability: ARView, Used Models: Plant , Aquarium
Capability: FaceView, Used Model: Sunglasses
Capability: SceneView, Used Model: Shoe
Note: I used 3D models in .glb format as assets for the ARView and FaceView features. However, the links above provide 3D models in .gltf format, so I converted the .gltf files to .glb. You can obtain a .glb model by uploading all the files of a downloaded model (textures, scene.bin, and scene.gltf) to any online conversion website.
All materials must be stored in the assets directory, so we place them under app > src > main > assets in our project. After placing them, our file structure will be as follows.
After adding the materials, we will start by adding the ARView feature first. Since we assume that there are office supplies in the activity where we will use the ARView feature, let’s create an activity named OfficeActivity and first develop its layout.
Note: Activities must extend the Activity class. Update any activities that extend AppCompatActivity to extend Activity instead.
Example: It should be "OfficeActivity extends Activity".
ARView
In order to use the ARView feature of the Scene Kit, we add the following ARView code to the layout (activity_office.xml file).
Code:
<com.huawei.hms.scene.sdk.ARView
android:id="@+id/ar_view"
android:layout_width="match_parent"
android:layout_height="match_parent">
</com.huawei.hms.scene.sdk.ARView>
Overview of the activity_office.xml file:
Code:
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:gravity="bottom"
tools:context=".OfficeActivity">
<com.huawei.hms.scene.sdk.ARView
android:id="@+id/ar_view"
android:layout_width="match_parent"
android:layout_height="match_parent"/>
<LinearLayout
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentBottom="true"
android:layout_centerInParent="true"
android:layout_centerHorizontal="true"
android:layout_centerVertical="true"
android:gravity="bottom"
android:layout_marginBottom="30dp"
android:orientation="horizontal">
<Button
android:id="@+id/button_flower"
android:layout_width="110dp"
android:layout_height="wrap_content"
android:onClick="onButtonFlowerToggleClicked"
android:text="Load Flower"/>
<Button
android:id="@+id/button_aquarium"
android:layout_width="110dp"
android:layout_height="wrap_content"
android:onClick="onButtonAquariumToggleClicked"
android:text="Load Aquarium"/>
</LinearLayout>
</RelativeLayout>
We specified two buttons, one for loading the aquarium and the other for loading a plant. Now, let's do the initialization in OfficeActivity and activate the ARView feature in our application. First, let's override the onCreate() function to obtain the ARView and the buttons that will trigger the object-loading code.
Code:
private ARView mARView;
private Button mButtonFlower;
private boolean isLoadFlowerResource = false;
private boolean isLoadAquariumResource = false;
private Button mButtonAquarium;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_office);
mARView = findViewById(R.id.ar_view);
mButtonFlower = findViewById(R.id.button_flower);
mButtonAquarium = findViewById(R.id.button_aquarium);
Toast.makeText(this, "Please move the mobile phone slowly to find the plane", Toast.LENGTH_LONG).show();
}
Then add the methods that will be triggered when the buttons are clicked. Here we check the loading status of the object and load or clear it accordingly.
For plant button:
Code:
public void onButtonFlowerToggleClicked(View view) {
mARView.enablePlaneDisplay(true);
if (!isLoadFlowerResource) {
// Load 3D model.
mARView.loadAsset("ARView/flower.glb");
float[] scale = new float[] { 0.15f, 0.15f, 0.15f };
float[] rotation = new float[] { 0.707f, 0.0f, -0.500f, 0.0f };
// (Optional) Set the initial status.
mARView.setInitialPose(scale, rotation);
isLoadFlowerResource = true;
mButtonFlower.setText("Clear Flower");
} else {
// Clear the resources loaded in the ARView.
mARView.clearResource();
mARView.loadAsset("");
isLoadFlowerResource = false;
mButtonFlower.setText("Load Flower");
}
}
For the aquarium button:
Code:
public void onButtonAquariumToggleClicked(View view) {
mARView.enablePlaneDisplay(true);
if (!isLoadAquariumResource) {
// Load 3D model.
mARView.loadAsset("ARView/aquarium.glb");
float[] scale = new float[] { 0.015f, 0.015f, 0.015f };
float[] rotation = new float[] { 0.0f, 0.0f, 0.0f, 0.0f };
// (Optional) Set the initial status.
mARView.setInitialPose(scale, rotation);
isLoadAquariumResource = true;
mButtonAquarium.setText("Clear Aquarium");
} else {
// Clear the resources loaded in the ARView.
mARView.clearResource();
mARView.loadAsset("");
isLoadAquariumResource = false;
mButtonAquarium.setText("Load Aquarium");
}
}
Now let's go through the code line by line. First, we call ARView.enablePlaneDisplay() with true so that, when a plane is detected in the real world, a lattice plane is displayed on it.
Code:
mARView.enablePlaneDisplay(true);
Then we check whether the object has already been loaded. If it is not loaded, we specify the path to the 3D model we selected with the mARView.loadAsset() function and load it (assets > ARView > flower.glb).
Code:
mARView.loadAsset("ARView/flower.glb");
Then we create and initialize scale and rotation arrays for the starting pose. For now, we enter hardcoded values here; in a future version, the starting pose could be set interactively, for example by long-pressing the screen.
Note: The Scene Kit ARView feature already allows us to move, resize, and rotate the object we have created on the screen. For this, we select the object and move our fingers on the screen to change its position, size, or direction.
Here we can adjust the direction or size of the object by adjusting the rotation and scale values. (These values will be used as parameters of the setInitialPose() function.)
Note: These values may need to change according to the model used; to find appropriate values, experiment yourself. For details of these values, see the documentation of the ARView setInitialPose() function.
Code:
float[] scale = new float[] { 0.15f, 0.15f, 0.15f };
float[] rotation = new float[] { 0.707f, 0.0f, -0.500f, 0.0f };
Then we set the scale and rotation values we created as the starting position.
Code:
mARView.setInitialPose(scale, rotation);
After this process, we set the boolean value to indicate that the object has been created and we update the text of the button.
Code:
isLoadFlowerResource = true;
mButtonFlower.setText("Clear Flower");
If the object is already loaded, we clear the resource and load the empty object so that we remove the object from the screen.
Code:
mARView.clearResource();
mARView.loadAsset("");
Then we reset the boolean value and finish by updating the button text.
Code:
isLoadFlowerResource = false;
mButtonFlower.setText("Load Flower");
Finally, we should not forget to override the following methods as in the code to ensure synchronization.
Code:
@Override
protected void onPause() {
super.onPause();
mARView.onPause();
}
@Override
protected void onResume() {
super.onResume();
mARView.onResume();
}
@Override
protected void onDestroy() {
super.onDestroy();
mARView.destroy();
}
The overview of OfficeActivity.java should be as follows.
Code:
import android.app.Activity;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;
import android.widget.Toast;
import com.huawei.hms.scene.sdk.ARView;
public class OfficeActivity extends Activity {
private ARView mARView;
private Button mButtonFlower;
private boolean isLoadFlowerResource = false;
private boolean isLoadAquariumResource = false;
private Button mButtonAquarium;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_office);
mARView = findViewById(R.id.ar_view);
mButtonFlower = findViewById(R.id.button_flower);
mButtonAquarium = findViewById(R.id.button_aquarium);
Toast.makeText(this, "Please move the mobile phone slowly to find the plane", Toast.LENGTH_LONG).show();
}
/**
* Synchronously call the onPause() method of the ARView.
*/
@Override
protected void onPause() {
super.onPause();
mARView.onPause();
}
/**
* Synchronously call the onResume() method of the ARView.
*/
@Override
protected void onResume() {
super.onResume();
mARView.onResume();
}
/**
* If quick rebuilding is allowed for the current activity, destroy() of ARView must be invoked synchronously.
*/
@Override
protected void onDestroy() {
super.onDestroy();
mARView.destroy();
}
public void onButtonFlowerToggleClicked(View view) {
mARView.enablePlaneDisplay(true);
if (!isLoadFlowerResource) {
// Load 3D model.
mARView.loadAsset("ARView/flower.glb");
float[] scale = new float[] { 0.15f, 0.15f, 0.15f };
float[] rotation = new float[] { 0.707f, 0.0f, -0.500f, 0.0f };
// (Optional) Set the initial status.
mARView.setInitialPose(scale, rotation);
isLoadFlowerResource = true;
mButtonFlower.setText("Clear Flower");
} else {
// Clear the resources loaded in the ARView.
mARView.clearResource();
mARView.loadAsset("");
isLoadFlowerResource = false;
mButtonFlower.setText("Load Flower");
}
}
public void onButtonAquariumToggleClicked(View view) {
mARView.enablePlaneDisplay(true);
if (!isLoadAquariumResource) {
// Load 3D model.
mARView.loadAsset("ARView/aquarium.glb");
float[] scale = new float[] { 0.015f, 0.015f, 0.015f };
float[] rotation = new float[] { 0.0f, 0.0f, 0.0f, 0.0f };
// (Optional) Set the initial status.
mARView.setInitialPose(scale, rotation);
isLoadAquariumResource = true;
mButtonAquarium.setText("Clear Aquarium");
} else {
// Clear the resources loaded in the ARView.
mARView.clearResource();
mARView.loadAsset("");
isLoadAquariumResource = false;
mButtonAquarium.setText("Load Aquarium");
}
}
}
In this way, we have added the ARView feature of Scene Kit to our application. Now let's test it on a device that supports the Scene Kit ARView feature.
Let’s place plants and aquariums on our table as below and see how it looks.
For ARView to recognize the ground, first move the camera slowly until the plane points you see in the photo appear on the screen. After the plane points appear on the ground, click the Load Flower button to indicate that we will add a plant, then tap the point on the screen where we want to place it. Doing the same with the Load Aquarium button adds an aquarium.
I placed an aquarium and plants on my table. You can test how it looks by placing plants or aquariums on your table or anywhere. You can see how it looks in the photo below.
Note: “Clear Flower” and “Clear Aquarium” buttons will remove the objects we have placed on the screen.
After creating the objects, we can select the object we want to move, resize, or rotate, as you can see in the picture below. Normally, the selected object turns red (the color of some models doesn't change; for example, the aquarium model does not turn red when selected).
If we want to change the size of the object after selecting it, we can pinch with two fingers; in the picture above, you can see that I changed the plants' sizes. We can also move the selected object by dragging it, and change its direction by moving two fingers in a circular motion.
For more information, you can visit https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0201380649772650451&fid=0101187876626530001
About This Document
The COVID-19 outbreak has changed the shopping habits of consumers. A great proportion of consumers are turning to live streaming e-commerce, a trend that is currently enjoying great success. To achieve a high transaction volume, a live streamer needs to spend a lot of time preparing information about products, discounts, and usage tips. The background used in the live streaming room is another deciding factor. A background that doesn't change is likely to bore the audience, especially when their interest in the products is weak, and cause them to leave the live stream. Unless the live streamer is incredibly charismatic, it's very difficult to keep the audience interested in all products being advertised. As a result, the audience size may decrease during the course of the live stream.
With the image segmentation service launched by HUAWEI ML Kit, various static and dynamic backgrounds are available and can be changed in real time according to product categories or custom requirements, making live streaming more lively and interesting. This service can crop out the portrait of the live streamer based on semantic segmentation. For example, when introducing a household product, the live streamer can change to a home background in real time. This helps give the audience a more immersive shopping experience.
Functional Description
This demo is developed based on the image segmentation and hand keypoint detection services launched by HUAWEI ML Kit, and demonstrates the function of allowing the user to change the background by waving his/her hand. To avoid misoperations, the background only changes when the hand waving motion reaches a certain threshold. Once custom backgrounds are loaded, the user can wave their hand over the phone from right to left to go back to the last background, or wave their hand from left to right to go to the next background. Dynamic backgrounds are also supported. If you want to allow users to change the background with custom hand gestures or implement other hand gesture effects, you can integrate HUAWEI ML Kit's hand keypoint detection service to develop such features.
Now, let's take a detailed look at the development process.
Development Process
1. Add the AppGallery Connect plug-in and the Maven repository.
Code:
buildscript {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
dependencies {
...
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
}
}
allprojects {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
}
2. Perform integration in full SDK mode.
Code:
dependencies{
// Import the base SDK for image segmentation.
implementation 'com.huawei.hms:ml-computer-vision-segmentation:2.0.4.300'
// Import the multiclass segmentation model package.
implementation 'com.huawei.hms:ml-computer-vision-image-segmentation-multiclass-model:2.0.4.300'
// Import the portrait segmentation model package.
implementation 'com.huawei.hms:ml-computer-vision-image-segmentation-body-model:2.0.4.300'
// Import the base SDK for gesture detection.
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint:2.0.4.300'
// Import the hand keypoint detection model package.
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint-model:2.0.4.300'
}
3. Add configurations to the file header.
Add apply plugin: 'com.huawei.agconnect' after apply plugin: 'com.android.application'.
4. Add the function of automatically updating the machine learning model.
Add the following configurations to the AndroidManifest.xml file:
Code:
<manifest
...
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value="imgseg,handkeypoint" />
...
</manifest>
5. Create an image segmentation analyzer.
Code:
MLImageSegmentationAnalyzer imageSegmentationAnalyzer = MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer(); // Image segmentation analyzer.
MLHandKeypointAnalyzer handKeypointAnalyzer = MLHandKeypointAnalyzerFactory.getInstance().getHandKeypointAnalyzer(); // Gesture detection analyzer.
MLCompositeAnalyzer analyzer = new MLCompositeAnalyzer.Creator()
.add(imageSegmentationAnalyzer)
.add(handKeypointAnalyzer)
.create();
6. Create a detection result processing class.
Code:
public class ImageSegmentAnalyzerTransactor implements MLAnalyzer.MLTransactor<MLImageSegmentation> {
@Override
public void transactResult(MLAnalyzer.Result<MLImageSegmentation> results) {
SparseArray<MLImageSegmentation> items = results.getAnalyseList();
// Process the detection result as required.
// Other detection-related APIs provided by ML Kit cannot be called.
}
@Override
public void destroy() {
// Callback method used to release resources when detection completes.
}
}
public class HandKeypointTransactor implements MLAnalyzer.MLTransactor<List<MLHandKeypoints>> {
@Override
public void transactResult(MLAnalyzer.Result<List<MLHandKeypoints>> results) {
SparseArray<List<MLHandKeypoints>> analyseList = results.getAnalyseList();
// Process the detection result as required.
// Other detection-related APIs provided by ML Kit cannot be called.
}
@Override
public void destroy() {
// Callback method used to release resources when detection completes.
}
}
7. Set the detection result processor, and bind it to the analyzer.
Code:
imageSegmentationAnalyzer.setTransactor(new ImageSegmentAnalyzerTransactor());
handKeypointAnalyzer.setTransactor(new HandKeypointTransactor());
8. Create a LensEngine instance.
Code:
Context context = this.getApplicationContext();
LensEngine lensEngine = new LensEngine.Creator(context,analyzer)
// Use the front- or rear-facing camera. LensEngine.BACK_LENS indicates the rear-facing camera, and LensEngine.FRONT_LENS indicates the front-facing camera.
.setLensType(LensEngine.FRONT_LENS)
.applyDisplayDimension(1280, 720)
.applyFps(20.0f)
.enableAutomaticFocus(true)
.create();
9. Start the camera, read video streams, and perform detection.
Code:
// Implement other logic of the SurfaceView control by yourself.
SurfaceView mSurfaceView = new SurfaceView(this);
try {
lensEngine.run(mSurfaceView.getHolder());
} catch (IOException e) {
// Handle exceptions.
}
10. After the detection is complete, stop the analyzer, and release detection resources.
Code:
if (analyzer != null) {
try {
analyzer.stop();
} catch (IOException e) {
// Handle exceptions.
}
}
if (lensEngine != null) {
lensEngine.release();
}
Summary
ML Kit's background changing function can be quickly developed by importing packages, performing gesture detection, analyzing, and processing the detection result. Performing image segmentation allows you to do even more. For example, you can apply front-end rendering technologies to prevent bullet comments from covering the people on the screen or use graphical assets to create exquisite images of various sizes. With semantic segmentation, you can accurately segment any object from an image. In addition to portraits, food items, pets, buildings, landscapes, flowers, and grasses can also be segmented. With these functions integrated in your app, users no longer need to use professional photo editing software.
Reference
Official website of Huawei Developers
Development Guide
HMS Core official community on Reddit
Demo and sample code
Discussions on Stack Overflow
Article Introduction
This article demonstrates document skew correction on the Flutter platform using the Huawei ML Kit plugin.
Huawei ML Kit
HUAWEI ML Kit allows your apps to easily leverage Huawei's long-term proven expertise in machine learning to support diverse artificial intelligence (AI) applications throughout a wide range of industries. Thanks to Huawei's technology accumulation, ML Kit provides diversified leading machine learning capabilities that are easy to use, helping you develop various AI apps.
Huawei ML Kit Document Skew Correction
The Huawei ML Kit document skew correction service can automatically identify the location of a document in an image and adjust the viewing angle to face the document, even if the document is tilted. In addition, you can perform document skew correction at custom boundary points.
Use Cases
The HMS ML Kit document skew correction service can be used in many daily-life scenarios, for example:
If a paper document needs to be backed up in an electronic version but the document in the image is tilted, you can use this service to correct the document position.
When recording a card, you can take a photo of the front side of the card without directly facing the card.
When road signs on either side of the road cannot be accurately identified because they are viewed at an angle, you can use this service to obtain front-facing photos of the signs to facilitate travel.
Development Preparation
Environment Setup
Android Studio 3.0 or later
Java JDK 1.8 or later
This document assumes that you have already set up the Flutter SDK on your PC and installed the Flutter and Dart plugins in Android Studio.
Project Setup
Create a project in App Gallery Connect
Configuring the Signing Certificate Fingerprint inside project settings
Enable the ML Kit API from the Manage Api Settings inside App Gallery Console
Create a Flutter Project using Android Studio
Copy the agconnect-services.json file to the android/app directory of your project
Maven Repository Configuration
1. Open the build.gradle file in the android directory of your project.
Navigate to the buildscript section and configure the Maven repository address and agconnect plugin for the HMS SDK.
Code:
buildscript {
repositories {
google()
jcenter()
maven { url 'https://developer.huawei.com/repo/' }
}
dependencies {
/*
* <Other dependencies>
*/
classpath 'com.huawei.agconnect:agcp:1.4.2.301'
}
}
Go to allprojects and configure the Maven repository address for the HMS SDK.
Code:
allprojects {
repositories {
google()
jcenter()
maven { url 'https://developer.huawei.com/repo/' }
}
}
2. Open the build.gradle file in the android/app/ directory and add apply plugin: 'com.huawei.agconnect' line after other apply entries.
Code:
apply plugin: 'com.android.application'
apply from: "$flutterRoot/packages/flutter_tools/gradle/flutter.gradle"
apply plugin: 'com.huawei.agconnect'
3. Set your package name in defaultConfig > applicationId and set minSdkVersion to 19 or higher. The package name must match the package_name entry in the agconnect-services.json file.
4. Set multiDexEnabled to true so that the app won't crash, since the ML plugin brings in many APIs.
Code:
defaultConfig {
applicationId "<package_name>"
minSdkVersion 19
multiDexEnabled true
/*
* <Other configurations>
*/
}
Adding the ML Kit Plugin
In your Flutter project directory, find and open your pubspec.yaml file and add the huawei_ml library to dependencies.
Code:
dependencies:
  huawei_ml: {library version}
You can find the latest version of the Huawei ML Kit plugin at the URL below.
huawei_ml | Flutter Package (pub.dev): HUAWEI ML Kit plugin for Flutter. It provides diversified leading machine learning capabilities that are easy to use, helping you develop various AI apps.
After adding the plugin, update the packages using the command below.
Code:
[project_path]> flutter pub get
Adding the Permissions
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
Development Process
The following build function constructs the basic UI for the app, which demonstrates document skew correction by taking document photos with the camera or picking them from the gallery.
Code:
@override
Widget build(BuildContext context) {
return Scaffold(
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.end,
children: <Widget>[
Container(
margin: const EdgeInsets.only(
bottom: 10.0, top: 10.0, right: 10.0, left: 10.0),
child: Image.file(
File(_imagePath),
),
),
Padding(
padding: EdgeInsets.symmetric(vertical: 20),
child: CustomButton(
onQueryChange: () => {
log('Skew button pressed'),
skewCorrection(),
},
buttonLabel: 'Skew Correction',
disabled: _imagePath.isEmpty,
),
),
CustomButton(
onQueryChange: () => {
log('Camera button pressed'),
_initiateSkewCorrectionByCamera(),
},
buttonLabel: 'Camera',
disabled: false,
),
Padding(
padding: EdgeInsets.symmetric(vertical: 20),
child: CustomButton(
onQueryChange: () => {
log('Gallery button pressed'),
_initiateSkewCorrectionByGallery(),
},
buttonLabel: 'Gallery',
disabled: false,
),
),
],
),
),
);
}
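The CustomButton widget used in the build function above is a small helper from the sample project and is not part of the plugin. A minimal sketch, with the parameter names inferred from the usage above (the styling in the actual project may differ), could look like this:
Code:
//Minimal sketch of the CustomButton helper; parameter names follow the usage above.
class CustomButton extends StatelessWidget {
  final VoidCallback onQueryChange;
  final String buttonLabel;
  final bool disabled;

  const CustomButton(
      {Key key, this.onQueryChange, this.buttonLabel, this.disabled})
      : super(key: key);

  @override
  Widget build(BuildContext context) {
    return RaisedButton(
      // Disable the button when no action should be possible.
      onPressed: disabled ? null : onQueryChange,
      child: Text(buttonLabel),
    );
  }
}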
The following function is responsible for the skew correction operation. First, we create an analyzer for skew detection and receive the skew detection result asynchronously. After getting the detection result, we retrieve the corrected image with another asynchronous call. When the operation completes, the analyzer should be stopped properly.
Code:
void skewCorrection() async {
if (_imagePath.isNotEmpty) {
// Create an analyzer for skew detection.
MLDocumentSkewCorrectionAnalyzer analyzer =
new MLDocumentSkewCorrectionAnalyzer();
// Get skew detection result asynchronously.
MLDocumentSkewDetectResult detectionResult =
await analyzer.asyncDocumentSkewDetect(_imagePath);
// After getting skew detection results, you can get the corrected image by
// using asynchronous call
MLDocumentSkewCorrectionResult corrected =
await analyzer.asyncDocumentSkewResult();
// After recognitions, stop the analyzer.
bool result = await analyzer.stopDocumentSkewCorrection();
var path = await FlutterAbsolutePath.getAbsolutePath(corrected.imagePath);
setState(() {
_imagePath = path;
});
}
}
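The _initiateSkewCorrectionByCamera and _initiateSkewCorrectionByGallery helpers referenced in the build function simply pick a photo and store its path in _imagePath; their full code is in the GitHub project. A minimal sketch is shown below, assuming the community image_picker package (which is not part of this article's dependency list) is used for picking images:
Code:
import 'package:image_picker/image_picker.dart';

Future<void> _initiateSkewCorrectionByCamera() async {
  // Take a photo with the camera and remember its path.
  final PickedFile pickedFile =
      await ImagePicker().getImage(source: ImageSource.camera);
  if (pickedFile != null) {
    setState(() {
      _imagePath = pickedFile.path;
    });
  }
}

Future<void> _initiateSkewCorrectionByGallery() async {
  // Pick an existing photo from the gallery and remember its path.
  final PickedFile pickedFile =
      await ImagePicker().getImage(source: ImageSource.gallery);
  if (pickedFile != null) {
    setState(() {
      _imagePath = pickedFile.path;
    });
  }
}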
Conclusion
This article explains how to perform document skew correction using Huawei ML Kit on the Flutter platform. Developers can use it as a quick reference to jump-start with Huawei ML Kit.
References
https://developer.huawei.com/consum...S-Plugin-Guides/introduction-0000001051432503
GitHub
https://github.com/mudasirsharif/Huawei-ML-Kit-Document-Skew-Correction
Introduction
In this article, we will learn how to implement Huawei Awareness Kit features with a local notification service to send a notification when a certain condition is met. We can define our own conditions, observe them, and notify the user even when the application is not running.
The Awareness Kit is quite a comprehensive detection and event SDK. If you're developing a context-aware app for Huawei devices, this is definitely the library for you.
What can we do using Awareness kit?
With HUAWEI Awareness Kit, you can obtain a lot of contextual information, such as the user's location, behavior, the weather, the current time, device status, ambient light, and audio device status, which makes it easier to provide a more refined user experience.
Basic Usage
There are quite a few awareness "modules" in this SDK: Time Awareness, Location Awareness, Behavior Awareness, Beacon Awareness, Audio Device Status Awareness, Ambient Light Awareness, and Weather Awareness. Read on to find out how and when to use them.
Each of these modules has two modes: capture, which is an on-demand information retrieval; and barrier, which triggers an action when a specified condition is met.
Use case
The Barrier API allows us to set barriers for specific conditions in our app; when these conditions are satisfied, our app is notified so we can take action. In this sample, when the user starts an activity, the app reminds the user to connect a headset to listen to music during the activity.
Table of contents
1. Project setup
2. Headset capture API
3. Headset Barrier API
4. Flutter Local notification plugin
Advantages
1. Converged: Multi-dimensional and evolvable awareness capabilities can be called in a unified manner.
2. Accurate: The synergy of hardware and software makes data acquisition more accurate and efficient.
3. Fast: On-chip processing of local service requests and nearby access of cloud services promise a faster service response.
4. Economical: Sharing awareness capabilities avoids separate interactions between apps and the device, reducing system resource consumption. Collaborating with the EMUI (or Magic UI) and Kirin chip, Awareness Kit can even achieve optimal performance with the lowest power consumption.
Requirements
1. Any operating system (e.g., macOS, Linux, or Windows)
2. Any IDE with the Flutter SDK installed (e.g., IntelliJ IDEA, Android Studio, or VS Code)
3. A little knowledge of Dart and Flutter
4. A brain to think
5. Minimum API level 24 is required.
6. Devices running EMUI 5.0 or later are required.
Setting up the Awareness kit
1. First, create a developer account in AppGallery Connect. After creating your developer account, you can create a new project and a new app. For more information, check this link.
2. Generate a signing certificate keystore using the command below.
Code:
keytool -genkey -keystore <application_project_dir>\android\app\<signing_certificate_fingerprint_filename>.jks
-storepass <store_password> -alias <alias> -keypass <key_password> -keysize 2048 -keyalg RSA -validity 36500
3. The above command creates the keystore file in appdir/android/app.
4. Now we need to obtain the SHA-256 fingerprint using the command below.
Code:
keytool -list -v -keystore <application_project_dir>\android\app\<signing_certificate_fingerprint_filename>.jks
5. Enable the Awareness kit in the Manage API section and add the plugin.
6. After configuring the project, download the agconnect-services.json file and add it to your project.
7. After that, follow the URL for cross-platform plugins and download the required plugins.
8. The following dependencies for HMS usage need to be added to the build.gradle file under the android directory.
Code:
buildscript {
    ext.kotlin_version = '1.3.50'
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:3.5.0'
        classpath 'com.huawei.agconnect:agcp:1.4.1.300'
        classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
    }
}

allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
9. Then add the following line of code to the build.gradle file under the android/app directory.
Code:
apply plugin: 'com.huawei.agconnect'
10. Add the required permissions to the AndroidManifest.xml file under the app > src > main folder.
Code:
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_BACKGROUND_LOCATION" />
<uses-permission android:name="android.permission.ACTIVITY_RECOGNITION" />
<uses-permission android:name="android.permission.BLUETOOTH" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
<uses-permission android:name="android.permission.INTERNET" />
11. After completing all the above steps, you need to add the required kits' Flutter plugins as dependencies in the pubspec.yaml file. You can find all the plugins on pub.dev with their latest versions.
Code:
huawei_awareness:
  path: ../huawei_awareness/
12. To display local notifications, we need to add the flutter_local_notifications plugin.
Code:
flutter_local_notifications: ^3.0.1+6
After adding them, run the flutter pub get command. Now all the plugins are ready to use.
Note: Set multiDexEnabled to true in the app-level build.gradle file (in the android/app directory) so that the app will not crash.
Code Implementation
Use Capture and Barrier API to get Headset status
This service helps your application remind the user, before starting an activity, to connect a headset to play music.
We need to request the runtime permissions before calling any service.
Code:
@override
void initState() {
  checkPermissions();
  requestPermissions();
  super.initState();
}

void checkPermissions() async {
  locationPermission = await AwarenessUtilsClient.hasLocationPermission();
  backgroundLocationPermission =
      await AwarenessUtilsClient.hasBackgroundLocationPermission();
  activityRecognitionPermission =
      await AwarenessUtilsClient.hasActivityRecognitionPermission();
  if (locationPermission &&
      backgroundLocationPermission &&
      activityRecognitionPermission) {
    setState(() {
      permissions = true;
      notifyAwareness();
    });
  }
}

void requestPermissions() async {
  if (locationPermission == false) {
    bool status = await AwarenessUtilsClient.requestLocationPermission();
    setState(() {
      locationPermission = status;
    });
  }
  if (backgroundLocationPermission == false) {
    bool status =
        await AwarenessUtilsClient.requestBackgroundLocationPermission();
    setState(() {
      backgroundLocationPermission = status;
    });
  }
  if (activityRecognitionPermission == false) {
    bool status =
        await AwarenessUtilsClient.requestActivityRecognitionPermission();
    setState(() {
      activityRecognitionPermission = status;
    });
    checkPermissions();
  }
}
Capture API: Now that all the permissions are granted, we can check the headset status when the app is launched by calling the Capture API's getHeadsetStatus() method. This service is called only once.
Code:
checkHeadsetStatus() async {
  HeadsetResponse response = await AwarenessCaptureClient.getHeadsetStatus();
  setState(() {
    switch (response.headsetStatus) {
      case HeadsetStatus.Disconnected:
        _showNotification(
            "Music", "Please connect headset before start activity");
        break;
    }
  });
  log(response.toJson(), name: "captureHeadset");
}
Barrier API: If you want to set conditions in your app, use the Barrier API. This service keeps listening for events and automatically notifies the user once the conditions are satisfied. For example, if we want music to play once an activity starts, the app can notify the user that the headset is connected and music is playing as soon as the user plugs in a headset.
First, we need to set the barrier condition; the barrier will be triggered when this condition is satisfied.
Code:
String headSetBarrier = "HeadSet";
AwarenessBarrier headsetBarrier = HeadsetBarrier.keeping(
    barrierLabel: headSetBarrier, headsetStatus: HeadsetStatus.Connected);
Add the barrier using updateBarriers(); this method returns whether the barrier was added or not.
Code:
bool status = await AwarenessBarrierClient.updateBarriers(barrier: headsetBarrier);
If status returns true, the barrier was successfully added. Now we need to declare a StreamSubscription to listen for events; it keeps receiving updates and is triggered whenever the condition is satisfied.
Code:
if (status) {
  log("Headset Barrier added.");
  StreamSubscription<dynamic> subscription;
  subscription = AwarenessBarrierClient.onBarrierStatusStream.listen((event) {
    if (mounted) {
      setState(() {
        switch (event.presentStatus) {
          case HeadsetStatus.Connected:
            _showNotification(
                "Cool HeadSet", "Headset connected. Want to listen to some music?");
            isPlaying = true;
            print("Headset Status: Connected");
            break;
          case HeadsetStatus.Disconnected:
            _showNotification("HeadSet", "Headset disconnected, your music stopped");
            print("Headset Status: Disconnected");
            isPlaying = false;
            break;
          case HeadsetStatus.Unknown:
            _showNotification("HeadSet", "Headset status unknown");
            print("Headset Status: Unknown");
            isPlaying = false;
            break;
        }
      });
    }
  }, onError: (error) {
    log(error.toString());
  });
} else {
  log("Headset Barrier not added.");
}
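The stream above keeps emitting barrier updates for as long as it is listened to. As a housekeeping sketch (an assumption, not part of the original sample), you can keep the subscription in a State field, hypothetically named _barrierSubscription here, and cancel it when the widget is disposed.
Code:
// Hypothetical field holding the subscription created above.
StreamSubscription<dynamic> _barrierSubscription;

@override
void dispose() {
  // Stop listening to barrier status updates so the callback is not
  // invoked after this State has been disposed.
  _barrierSubscription?.cancel();
  super.dispose();
}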
Why do we need local push notifications?
1. We can schedule notifications (see the scheduling sketch after the notification code below).
2. No web request is required to display a local notification.
3. There is no limit on the number of notifications per user.
4. They originate from and are displayed on the same device.
5. They alert the user or remind the user to perform some task.
The flutter_local_notifications package provides the required local notification functionality and lets us integrate local notifications into both Android and iOS apps.
Add the following permissions to enable scheduled notifications in your app.
Code:
<uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED" />
<uses-permission android:name="android.permission.VIBRATE" />
Add the following receivers inside the application tag of the AndroidManifest.xml file.
Code:
<receiver android:name="com.dexterous.flutterlocalnotifications.ScheduledNotificationBootReceiver">
    <intent-filter>
        <action android:name="android.intent.action.BOOT_COMPLETED"/>
    </intent-filter>
</receiver>
<receiver android:name="com.dexterous.flutterlocalnotifications.ScheduledNotificationReceiver" />
Let's create the FlutterLocalNotificationsPlugin object and initialize it.
Code:
FlutterLocalNotificationsPlugin localNotification = FlutterLocalNotificationsPlugin();

@override
void initState() {
  super.initState();
  var initializationSettingsAndroid =
      AndroidInitializationSettings('ic_launcher');
  var initializationSettingsIOs = IOSInitializationSettings();
  var initSettings = InitializationSettings(
      android: initializationSettingsAndroid, iOS: initializationSettingsIOs);
  localNotification.initialize(initSettings);
}
Now create the logic to display a simple notification.
Code:
Future _showNotification(String title, String description) async {
  var androidDetails = AndroidNotificationDetails(
      "channelId", "channelName", "content",
      importance: Importance.high);
  var iosDetails = IOSNotificationDetails();
  var generateNotification =
      NotificationDetails(android: androidDetails, iOS: iosDetails);
  await localNotification.show(0, title, description, generateNotification);
}
Demo
Tips and Tricks
1. Download the latest HMS Flutter plugin.
2. Set the minSdkVersion to 24 or later.
3. HMS Core APK 4.0.2.300 is required.
4. Currently, this plugin does not support background tasks.
5. Do not forget to run flutter pub get after adding dependencies.
Conclusion
In this article, we have learned how to use Barrier API of Huawei Awareness Kit with a Local Notification to observe the changes in environmental factors even when the application is not running.
As you may notice, the permanent notification indicating that the application is running in the background cannot be dismissed by the user, which can be annoying.
Based on your requirements, you can utilize different APIs; Huawei Awareness Kit has many other great features that we can use with foreground services in our applications.
Thanks for reading! If you enjoyed this story, please click the Like button and Follow. Feel free to leave a Comment below.
Reference
Awareness Kit URL.
Awareness Capture API Article URL.
Original Source
What's it like to date a programmer?
John is a Huawei programmer. His girlfriend Jenny, a teacher, has an interesting answer to that question: "Thanks to my programmer boyfriend, my course ranked among the most popular online courses at my school".
Let's go over how this came to be. Due to COVID-19, the school where Jenny taught went entirely online. Jenny, who was new to live streaming, wanted her students to experience the full immersion of traveling to Tokyo, New York, Paris, the Forbidden City, Catherine Palace, and the Louvre Museum, so that they could absorb all of the relevant geographic and historical knowledge related to those places. But how to do so?
Jenny was stuck on this issue, but John quickly came to her rescue.
After analyzing her requirements in detail, John developed a tailored online course app that brings its users an uncannily immersive experience. It enables users to change the background while live streaming. The video imagery within the app looks true-to-life, as each pixel is labeled, and the entire body image — down to a single strand of hair — is completely cut out.
Actual Effects
How to Implement
Changing live-streaming backgrounds by gesture can be realized by using image segmentation and hand gesture recognition in HUAWEI ML Kit.
The image segmentation service segments specific elements from static images or dynamic video streams, with 11 types of image elements supported: human bodies, sky scenes, plants, foods, cats and dogs, flowers, water, sand, buildings, mountains, and others.
The hand gesture recognition service offers two capabilities: hand keypoint detection and hand gesture recognition. Hand keypoint detection is capable of detecting 21 hand keypoints (including fingertips, knuckles, and wrists) and returning positions of the keypoints. The hand gesture recognition capability detects and returns the positions of all rectangular areas of the hand from images and videos, as well as the type and confidence of a gesture. This capability can recognize 14 different gestures, including the thumbs-up/down, OK sign, fist, finger heart, and number gestures from 1 to 9. Both capabilities support detection from static images and real-time video streams.
Development Process
1. Add the AppGallery Connect plugin and the Maven repository.
Code:
buildscript {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        ...
        classpath 'com.huawei.agconnect:agcp:1.4.1.300'
    }
}

allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
2. Integrate required services in the full SDK mode.
Code:
dependencies {
    // Import the basic SDK of image segmentation.
    implementation 'com.huawei.hms:ml-computer-vision-segmentation:2.0.4.300'
    // Import the multiclass segmentation model package.
    implementation 'com.huawei.hms:ml-computer-vision-image-segmentation-multiclass-model:2.0.4.300'
    // Import the human body segmentation model package.
    implementation 'com.huawei.hms:ml-computer-vision-image-segmentation-body-model:2.0.4.300'
    // Import the basic SDK of hand gesture recognition.
    implementation 'com.huawei.hms:ml-computer-vision-handkeypoint:2.0.4.300'
    // Import the model package of hand keypoint detection.
    implementation 'com.huawei.hms:ml-computer-vision-handkeypoint-model:2.0.4.300'
}
3. Add configurations in the file header.
Add apply plugin: 'com.huawei.agconnect' after apply plugin: 'com.android.application'.
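For clarity, the top of the app-level build.gradle then looks roughly like this (a sketch; your own file may list additional plugins):
Code:
apply plugin: 'com.android.application'
// The AppGallery Connect plugin must be applied after the Android application plugin.
apply plugin: 'com.huawei.agconnect'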
4. Automatically update the machine learning model.
Add the following statements to the AndroidManifest.xml file:
Code:
<manifest
...
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value="imgseg,handkeypoint" />
...
</manifest>
5. Create the image segmentation and hand keypoint analyzers, and combine them into a composite analyzer.
Code:
// Image segmentation analyzer.
MLImageSegmentationAnalyzer imageSegmentationAnalyzer =
        MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer();
// Hand gesture recognition analyzer.
MLHandKeypointAnalyzer handKeypointAnalyzer =
        MLHandKeypointAnalyzerFactory.getInstance().getHandKeypointAnalyzer();
MLCompositeAnalyzer analyzer = new MLCompositeAnalyzer.Creator()
        .add(imageSegmentationAnalyzer)
        .add(handKeypointAnalyzer)
        .create();
6. Create a class for processing the recognition result.
Code:
public class ImageSegmentAnalyzerTransactor implements MLAnalyzer.MLTransactor<MLImageSegmentation> {
    @Override
    public void transactResult(MLAnalyzer.Result<MLImageSegmentation> results) {
        SparseArray<MLImageSegmentation> items = results.getAnalyseList();
        // Process the recognition result as required. Note that only the detection results are processed.
        // Other detection-related APIs provided by ML Kit cannot be called.
    }

    @Override
    public void destroy() {
        // Callback method used to release resources when the detection ends.
    }
}

public class HandKeypointTransactor implements MLAnalyzer.MLTransactor<List<MLHandKeypoints>> {
    @Override
    public void transactResult(MLAnalyzer.Result<List<MLHandKeypoints>> results) {
        SparseArray<List<MLHandKeypoints>> analyseList = results.getAnalyseList();
        // Process the recognition result as required. Note that only the detection results are processed.
        // Other detection-related APIs provided by ML Kit cannot be called.
    }

    @Override
    public void destroy() {
        // Callback method used to release resources when the detection ends.
    }
}
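As an illustration of what "process the recognition result" could look like for image segmentation, the sketch below retrieves the segmented foreground bitmap from each result. getForeground() is provided by MLImageSegmentation, while replaceBackground() stands for your own (hypothetical) rendering code.
Code:
@Override
public void transactResult(MLAnalyzer.Result<MLImageSegmentation> results) {
    SparseArray<MLImageSegmentation> items = results.getAnalyseList();
    for (int i = 0; i < items.size(); i++) {
        // The cut-out human body as a Bitmap with a transparent background.
        Bitmap foreground = items.valueAt(i).getForeground();
        // Hypothetical helper that draws the cut-out person over the
        // currently selected background image.
        replaceBackground(foreground);
    }
}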
7. Set the detection result processor to bind the analyzer to the result processor.
Code:
imageSegmentationAnalyzer.setTransactor(new ImageSegmentAnalyzerTransactor());
handKeypointAnalyzer.setTransactor(new HandKeypointTransactor());
8. Create a LensEngine object.
Code:
Context context = this.getApplicationContext();
LensEngine lensEngine = new LensEngine.Creator(context, analyzer)
        // Set the front or rear camera mode. LensEngine.BACK_LENS indicates the rear camera, and LensEngine.FRONT_LENS indicates the front camera.
        .setLensType(LensEngine.FRONT_LENS)
        .applyDisplayDimension(1280, 720)
        .applyFps(20.0f)
        .enableAutomaticFocus(true)
        .create();
9. Start the camera, read video streams, and start recognition.
Code:
// Implement other logics of the SurfaceView control by yourself.
SurfaceView mSurfaceView = new SurfaceView(this);
try {
    lensEngine.run(mSurfaceView.getHolder());
} catch (IOException e) {
    // Exception handling logic.
}
10. Stop the analyzer and release the recognition resources when recognition ends.
Code:
if (analyzer != null) {
    try {
        analyzer.stop();
    } catch (IOException e) {
        // Exception handling.
    }
}
if (lensEngine != null) {
    lensEngine.release();
}
For more details, check the following:
Our official website
Our Development Documentation page, to find the documents you need
Experience the easy-integration process on Codelabs
GitHub to download demos and sample codes
Stack Overflow to solve any integration problem
Original Source
Can I integrate any video background?