Body Skeleton Tracking with Huawei AR Engine - Huawei Developers

Introduction
The use of augmented reality (AR) is increasing every day in many areas, from shopping to games and from design to education. If we ask what augmented reality is, Wikipedia gives the following definition.
What is AR?
Augmented reality (AR) is an interactive experience of a real-world environment where the objects that reside in the real world are enhanced by computer-generated perceptual information, sometimes across multiple sensory modalities, including visual, auditory, haptic, somatosensory, and olfactory.
However, today we will focus on the advantages of AR Engine, the augmented reality SDK offered by HUAWEI, rather than on this classic definition. We will also look at how it differs from HUAWEI ML Kit, which looks similar but serves a different purpose, and we will develop a demo application using HUAWEI AR Engine. Along the way, we will see how easily augmented reality can be integrated into our application with HUAWEI AR Engine.
What is the HUAWEI AR Engine?
HUAWEI AR Engine is a platform for building augmented reality (AR) apps on Android smartphones. It is based on the HiSilicon chipset and integrates core AR algorithms to provide basic AR capabilities such as motion tracking, environment tracking, body tracking, and face tracking, allowing our app to bridge the virtual world with the real world for a brand-new, visually interactive user experience. AR Engine accurately understands the environment and provides virtual-physical convergence capabilities for our applications.
AR Engine Advantages
Normally, integrating augmented reality features into an application is a complicated and laborious task, so vendors offer SDKs to make this work easier. Apple ARKit, Google ARCore, and HUAWEI AR Engine, today's main topic, are examples of such SDKs. However, these SDKs differ in performance, capabilities, and supported devices.
For example, Google ARCore does not support face tracking or human body tracking, while AR Engine and Apple ARKit do. Similarly, AR Engine and ARKit support hand gestures, but ARCore does not.
Another advantage of HUAWEI AR Engine is that it enables your device to understand how people move. It can help place a virtual object or apply a special effect on a hand by locating hand positions and recognizing specific gestures. With the depth component, the hand tracking capability can track 21 hand skeleton points to implement precise interactive controls and special-effect overlays. For body recognition, the capability can track 23 body skeleton points with specific names (Left Hand, etc.) to detect human posture in real time. AR Engine also supports third-party applications and the Depth API, and it supports these features for both the front and rear cameras. In addition, a feature that provides directions to certain locations is planned for the near future.
With the AR Engine, HUAWEI mobile phones provide interaction capabilities such as face, gesture, and body recognition, and more than 240 APIs, in addition to the basic motion tracking and environment tracking capabilities.
Differences from HUAWEI ML Kit
HUAWEI AR Engine body tracking and ML Kit skeleton detection may look the same, but there is quite a difference between them. ML Kit provides general-purpose vision capabilities, while AR Engine tracks skeleton information within an AR scenario. In AR Engine, skeleton tracking and motion tracking are both enabled, so AR Engine also has coordinate-system information that the ML Kit service cannot provide, because the two services serve different purposes.
The AR Engine service is used to create AR apps, while the ML Kit service can only track the skeleton in an image in the smartphone's coordinate system. The two services are implemented in different ways, and their models are different as well.
● ● ●
Demo App Development
We will create a simple demo application by using HUAWEI AR Engine's body tracking capability. In this demo application, we will draw the points and lines that represent the body skeleton over the human body viewed by the camera. First, you need to meet the software and hardware requirements.
Hardware Requirements
The current version of HUAWEI AR Engine supports only HUAWEI devices. So, you need a HUAWEI phone that supports HUAWEI AR Engine, can be connected to a computer via a USB cable, and has a properly working camera. (You can see the supported devices in the table below.)
Software Requirements
Java JDK (1.8 or later).
Android Studio (3.1 or later).
Latest HUAWEI AR Engine SDK, which is available on HUAWEI Developers.
Latest HUAWEI AR Engine APK, which is available in HUAWEI AppGallery and has been installed on the phone.
After meeting the requirements, you need to add the HUAWEI AR Engine SDK dependency and the graphics helper dependency used for rendering to your app-level build.gradle file.
Code:
dependencies {
    // HUAWEI AR Engine SDK dependency
    implementation 'com.huawei.hms:arenginesdk:2.12.0.1'
    // Wavefront OBJ loader, used for loading 3D models in the rendering code
    implementation 'de.javagl:obj:0.3.0'
}
To use the camera, you need to add the camera permission to your AndroidManifest.xml file.
Code:
<uses-permission android:name="android.permission.CAMERA" />
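Note that on Android 6.0 and later, the CAMERA permission must also be requested at runtime. The snippet below is a minimal sketch of such a check inside your activity; the request code constant is an arbitrary value you define yourself.
Code:
import android.Manifest;
import android.content.pm.PackageManager;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;

// Inside your activity: ask for the camera permission if it has not been granted yet.
private static final int CAMERA_REQUEST_CODE = 1;

private void requestCameraPermission() {
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
            != PackageManager.PERMISSION_GRANTED) {
        ActivityCompat.requestPermissions(this,
                new String[]{Manifest.permission.CAMERA}, CAMERA_REQUEST_CODE);
    }
}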
Now we are ready to develop our app. Before starting development, we should understand the general process of using the HUAWEI AR Engine SDK. Our demo application follows the AR Engine process shown in the photo.
Note: The photo shows the general AR Engine usage process; the development steps below do not follow the same order.
Now that you have had a look at the general AR Engine process, we can continue with development.
Note: If you get confused by this process, look at the picture again after development is complete and you will fully understand the general usage flow of the HUAWEI AR Engine SDK.
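To make this flow concrete, here is a heavily simplified sketch of how an activity can drive an AR Engine body-tracking session. It follows the naming used by the official sample, but treat it as an assumption-based outline rather than the final demo code: exception handling, the AR Engine APK availability check, and attaching the GLSurfaceView renderer are omitted.
Code:
import androidx.appcompat.app.AppCompatActivity;

import com.huawei.hiar.ARBodyTrackingConfig;
import com.huawei.hiar.ARConfigBase;
import com.huawei.hiar.ARSession;

public class BodyActivity extends AppCompatActivity {
    private ARSession mArSession;

    // 1. Create the session, 2. configure body tracking, 3. resume tracking.
    private void startArSession() {
        mArSession = new ARSession(this);
        ARBodyTrackingConfig config = new ARBodyTrackingConfig(mArSession);
        config.setEnableItem(ARConfigBase.ENABLE_DEPTH | ARConfigBase.ENABLE_MASK);
        mArSession.configure(config);
        mArSession.resume();
    }

    // 4. Pause the session together with the activity.
    @Override
    protected void onPause() {
        super.onPause();
        if (mArSession != null) {
            mArSession.pause();
        }
    }

    // 5. Release the session when the activity is destroyed.
    @Override
    protected void onDestroy() {
        super.onDestroy();
        if (mArSession != null) {
            mArSession.stop();
        }
    }
}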
First, we need to create a class for body rendering shader utilities. We will use this utility class to create the vertex and fragment shaders for body rendering. (We take advantage of the OpenGL ES graphics functions to create the shaders.)
Code:
import android.opengl.GLES20;
import android.util.Log;
/**
* This class provides code and programs related to body rendering shader.
*/
class BodyShaderUtil {
private static final String TAG = BodyShaderUtil.class.getSimpleName();
/**
* Newline character.
*/
public static final String LS = System.lineSeparator();
/**
* Code for the vertex shader.
*/
public static final String BODY_VERTEX =
"uniform vec4 inColor;" + LS
+ "attribute vec4 inPosition;" + LS
+ "uniform float inPointSize;" + LS
+ "varying vec4 varColor;" + LS
+ "uniform mat4 inProjectionMatrix;" + LS
+ "uniform float inCoordinateSystem;" + LS
+ "void main() {" + LS
+ " vec4 position = vec4(inPosition.xyz, 1.0);" + LS
+ " if (inCoordinateSystem == 2.0) {" + LS
+ " position = inProjectionMatrix * position;" + LS
+ " }" + LS
+ " gl_Position = position;" + LS
+ " varColor = inColor;" + LS
+ " gl_PointSize = inPointSize;" + LS
+ "}";
/**
* Code for the fragment shader.
*/
public static final String BODY_FRAGMENT =
"precision mediump float;" + LS
+ "varying vec4 varColor;" + LS
+ "void main() {" + LS
+ " gl_FragColor = varColor;" + LS
+ "}";
private BodyShaderUtil() {
}
/**
* Create a shader.
*
* @return Shader program.
*/
static int createGlProgram() {
int vertex = loadShader(GLES20.GL_VERTEX_SHADER, BODY_VERTEX);
if (vertex == 0) {
return 0;
}
int fragment = loadShader(GLES20.GL_FRAGMENT_SHADER, BODY_FRAGMENT);
if (fragment == 0) {
return 0;
}
int program = GLES20.glCreateProgram();
if (program != 0) {
GLES20.glAttachShader(program, vertex);
GLES20.glAttachShader(program, fragment);
GLES20.glLinkProgram(program);
int[] linkStatus = new int[1];
GLES20.glGetProgramiv(program, GLES20.GL_LINK_STATUS, linkStatus, 0);
if (linkStatus[0] != GLES20.GL_TRUE) {
Log.e(TAG, "Could not link program " + GLES20.glGetProgramInfoLog(program));
GLES20.glDeleteProgram(program);
program = 0;
}
}
return program;
}
private static int loadShader(int shaderType, String source) {
int shader = GLES20.glCreateShader(shaderType);
if (0 != shader) {
GLES20.glShaderSource(shader, source);
GLES20.glCompileShader(shader);
int[] compiled = new int[1];
GLES20.glGetShaderiv(shader, GLES20.GL_COMPILE_STATUS, compiled, 0);
if (compiled[0] == 0) {
Log.e(TAG, "glError: Could not compile shader " + shaderType);
Log.e(TAG, "glError: " + GLES20.glGetShaderInfoLog(shader));
GLES20.glDeleteShader(shader);
shader = 0;
}
}
return shader;
}
}
Then we need to create an interface for rendering body-related AR data. We will implement this interface in the display classes, and call their overridden methods from the BodyRenderManager class, which is created in the onCreate method of our activity.
You can see that we pass a Collection of ARBody objects to the onDrawFrame method. There are two reasons for this. The first is that HUAWEI AR Engine can identify two human bodies at a time by default, so it always returns two body objects. The second is that we will draw the body skeleton in the overridden onDrawFrame methods. We use the HUAWEI ARBody class because it returns the tracking result during body skeleton tracking, including the body skeleton data used for drawing.
Code:
import com.huawei.hiar.ARBody;
import java.util.Collection;
/**
* Rendering body AR type related data.
*/
interface BodyRelatedDisplay {
/**
* Init render.
*/
void init();
/**
* Render objects, call per frame.
*
* @param bodies ARBodies.
* @param projectionMatrix Camera projection matrix.
*/
void onDrawFrame(Collection<ARBody> bodies, float[] projectionMatrix);
}
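The display classes below also call ShaderUtil.checkGlError, a small helper that is not listed in this article. A minimal sketch of such a helper, assuming it only drains and logs pending OpenGL errors, could look like this:
Code:
import android.opengl.GLES20;
import android.util.Log;

/**
 * Minimal OpenGL error-checking helper (sketch; the official sample may differ in detail).
 */
class ShaderUtil {
    private ShaderUtil() {
    }

    static void checkGlError(String tag, String label) {
        int error;
        // Log every pending OpenGL error together with the label of the calling step.
        while ((error = GLES20.glGetError()) != GLES20.GL_NO_ERROR) {
            Log.e(tag, label + ": glError " + error);
        }
    }
}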
Now we need to create display classes to pass the data to OpenGL ES.
The body skeleton display class:
(We will use this class to pass the skeleton data to OpenGL ES, which renders and displays it on the screen.)
Code:
import android.opengl.GLES20;

import com.huawei.hiar.ARBody;
import com.huawei.hiar.ARCoordinateSystemType;
import com.huawei.hiar.ARTrackable;

import java.nio.FloatBuffer;
import java.util.Collection;
/**
* Obtain and pass the skeleton data to OpenGL ES, which renders the data and displays it on the screen.
*/
public class BodySkeletonDisplay implements BodyRelatedDisplay {
private static final String TAG = BodySkeletonDisplay.class.getSimpleName();
// Number of bytes occupied by each 3D coordinate. Float data occupies 4 bytes.
// Each skeleton point represents a 3D coordinate.
private static final int BYTES_PER_POINT = 4 * 3;
private static final int INITIAL_POINTS_SIZE = 150;
private static final float DRAW_COORDINATE = 2.0f;
private int mVbo;
private int mVboSize;
private int mProgram;
private int mPosition;
private int mProjectionMatrix;
private int mColor;
private int mPointSize;
private int mCoordinateSystem;
private int mNumPoints = 0;
private int mPointsNum = 0;
private FloatBuffer mSkeletonPoints;
/**
* Create a body skeleton shader on the GL thread.
* This method is called when {@link BodyRenderManager#onSurfaceCreated}.
*/
@Override
public void init() {
ShaderUtil.checkGlError(TAG, "Init start.");
int[] buffers = new int[1];
GLES20.glGenBuffers(1, buffers, 0);
mVbo = buffers[0];
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVbo);
mVboSize = INITIAL_POINTS_SIZE * BYTES_PER_POINT;
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, mVboSize, null, GLES20.GL_DYNAMIC_DRAW);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
ShaderUtil.checkGlError(TAG, "Before create gl program.");
createProgram();
ShaderUtil.checkGlError(TAG, "Init end.");
}
private void createProgram() {
ShaderUtil.checkGlError(TAG, "Create gl program start.");
mProgram = BodyShaderUtil.createGlProgram();
mColor = GLES20.glGetUniformLocation(mProgram, "inColor");
mPosition = GLES20.glGetAttribLocation(mProgram, "inPosition");
mPointSize = GLES20.glGetUniformLocation(mProgram, "inPointSize");
mProjectionMatrix = GLES20.glGetUniformLocation(mProgram, "inProjectionMatrix");
mCoordinateSystem = GLES20.glGetUniformLocation(mProgram, "inCoordinateSystem");
ShaderUtil.checkGlError(TAG, "Create gl program end.");
}
private void updateBodySkeleton() {
ShaderUtil.checkGlError(TAG, "Update Body Skeleton data start.");
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVbo);
mNumPoints = mPointsNum;
if (mVboSize < mNumPoints * BYTES_PER_POINT) {
while (mVboSize < mNumPoints * BYTES_PER_POINT) {
// If the size of VBO is insufficient to accommodate the new point cloud, resize the VBO.
mVboSize *= 2;
}
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, mVboSize, null, GLES20.GL_DYNAMIC_DRAW);
}
GLES20.glBufferSubData(GLES20.GL_ARRAY_BUFFER, 0, mNumPoints * BYTES_PER_POINT, mSkeletonPoints);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
ShaderUtil.checkGlError(TAG, "Update Body Skeleton data end.");
}
/**
* Update the node data and draw by using OpenGL.
* This method is called when {@link BodyRenderManager#onDrawFrame}.
*
* @param bodies Body data.
* @param projectionMatrix projection matrix.
*/
@Override
public void onDrawFrame(Collection<ARBody> bodies, float[] projectionMatrix) {
for (ARBody body : bodies) {
if (body.getTrackingState() == ARTrackable.TrackingState.TRACKING) {
float coordinate = 1.0f;
if (body.getCoordinateSystemType() == ARCoordinateSystemType.COORDINATE_SYSTEM_TYPE_3D_CAMERA) {
coordinate = DRAW_COORDINATE;
}
findValidSkeletonPoints(body);
updateBodySkeleton();
drawBodySkeleton(coordinate, projectionMatrix);
}
}
}
private void drawBodySkeleton(float coordinate, float[] projectionMatrix) {
ShaderUtil.checkGlError(TAG, "Draw body skeleton start.");
GLES20.glUseProgram(mProgram);
GLES20.glEnableVertexAttribArray(mPosition);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVbo);
// The size of the vertex attribute is 4, and each vertex has four coordinate components.
GLES20.glVertexAttribPointer(
mPosition, 4, GLES20.GL_FLOAT, false, BYTES_PER_POINT, 0);
GLES20.glUniform4f(mColor, 0.0f, 0.0f, 1.0f, 1.0f);
GLES20.glUniformMatrix4fv(mProjectionMatrix, 1, false, projectionMatrix, 0);
// Set the size of the skeleton points.
GLES20.glUniform1f(mPointSize, 30.0f);
GLES20.glUniform1f(mCoordinateSystem, coordinate);
GLES20.glDrawArrays(GLES20.GL_POINTS, 0, mNumPoints);
GLES20.glDisableVertexAttribArray(mPosition);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
ShaderUtil.checkGlError(TAG, "Draw body skeleton end.");
}
private void findValidSkeletonPoints(ARBody arBody) {
int index = 0;
int[] isExists;
int validPointNum = 0;
float[] points;
float[] skeletonPoints;
// Determine whether the data returned by the algorithm is 3D human
// skeleton data or 2D human skeleton data, and obtain valid skeleton points.
if (arBody.getCoordinateSystemType() == ARCoordinateSystemType.COORDINATE_SYSTEM_TYPE_3D_CAMERA) {
isExists = arBody.getSkeletonPointIsExist3D();
points = new float[isExists.length * 3];
skeletonPoints = arBody.getSkeletonPoint3D();
} else {
isExists = arBody.getSkeletonPointIsExist2D();
points = new float[isExists.length * 3];
skeletonPoints = arBody.getSkeletonPoint2D();
}
// Save the three coordinates of each joint point(each point has three coordinates).
for (int i = 0; i < isExists.length; i++) {
if (isExists[i] != 0) {
points[index++] = skeletonPoints[3 * i];
points[index++] = skeletonPoints[3 * i + 1];
points[index++] = skeletonPoints[3 * i + 2];
validPointNum++;
}
}
mSkeletonPoints = FloatBuffer.wrap(points);
mPointsNum = validPointNum;
}
}
The body skeleton line display class:
(This class will be used to pass the skeleton point connection data to OpenGL ES for rendering on the screen.)
Code:
import android.opengl.GLES20;

import com.huawei.hiar.ARBody;
import com.huawei.hiar.ARCoordinateSystemType;
import com.huawei.hiar.ARTrackable;

import java.nio.FloatBuffer;
import java.util.Collection;
/**
* Gets the skeleton point connection data and passes it to OpenGL ES for rendering on the screen.
*/
public class BodySkeletonLineDisplay implements BodyRelatedDisplay {
private static final String TAG = BodySkeletonLineDisplay.class.getSimpleName();
// Number of bytes occupied by each 3D coordinate. Float data occupies 4 bytes.
// Each skeleton point represents a 3D coordinate.
private static final int BYTES_PER_POINT = 4 * 3;
private static final int INITIAL_BUFFER_POINTS = 150;
private static final float COORDINATE_SYSTEM_TYPE_3D_FLAG = 2.0f;
private static final int LINE_POINT_RATIO = 6;
private int mVbo;
private int mVboSize = INITIAL_BUFFER_POINTS * BYTES_PER_POINT;
private int mProgram;
private int mPosition;
private int mProjectionMatrix;
private int mColor;
private int mPointSize;
private int mCoordinateSystem;
private int mNumPoints = 0;
private int mPointsLineNum = 0;
private FloatBuffer mLinePoints;
/**
* Constructor.
*/
BodySkeletonLineDisplay() {
}
/**
* Create a body skeleton line shader on the GL thread.
* This method is called when {@link BodyRenderManager#onSurfaceCreated}.
*/
@Override
public void init() {
ShaderUtil.checkGlError(TAG, "Init start.");
int[] buffers = new int[1];
GLES20.glGenBuffers(1, buffers, 0);
mVbo = buffers[0];
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVbo);
ShaderUtil.checkGlError(TAG, "Before create gl program.");
createProgram();
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, mVboSize, null, GLES20.GL_DYNAMIC_DRAW);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
ShaderUtil.checkGlError(TAG, "Init end.");
}
private void createProgram() {
ShaderUtil.checkGlError(TAG, "Create gl program start.");
mProgram = BodyShaderUtil.createGlProgram();
mPosition = GLES20.glGetAttribLocation(mProgram, "inPosition");
mColor = GLES20.glGetUniformLocation(mProgram, "inColor");
mPointSize = GLES20.glGetUniformLocation(mProgram, "inPointSize");
mProjectionMatrix = GLES20.glGetUniformLocation(mProgram, "inProjectionMatrix");
mCoordinateSystem = GLES20.glGetUniformLocation(mProgram, "inCoordinateSystem");
ShaderUtil.checkGlError(TAG, "Create gl program end.");
}
private void drawSkeletonLine(float coordinate, float[] projectionMatrix) {
ShaderUtil.checkGlError(TAG, "Draw skeleton line start.");
GLES20.glUseProgram(mProgram);
GLES20.glEnableVertexAttribArray(mPosition);
GLES20.glEnableVertexAttribArray(mColor);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVbo);
// Set the width of the rendered skeleton line.
GLES20.glLineWidth(18.0f);
// The size of the vertex attribute is 4, and each vertex has four coordinate components.
GLES20.glVertexAttribPointer(
mPosition, 4, GLES20.GL_FLOAT, false, BYTES_PER_POINT, 0);
GLES20.glUniform4f(mColor, 1.0f, 0.0f, 0.0f, 1.0f);
GLES20.glUniformMatrix4fv(mProjectionMatrix, 1, false, projectionMatrix, 0);
// Set the size of the points.
GLES20.glUniform1f(mPointSize, 100.0f);
GLES20.glUniform1f(mCoordinateSystem, coordinate);
GLES20.glDrawArrays(GLES20.GL_LINES, 0, mNumPoints);
GLES20.glDisableVertexAttribArray(mPosition);
GLES20.glDisableVertexAttribArray(mColor);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
ShaderUtil.checkGlError(TAG, "Draw skeleton line end.");
}
/**
* Rendering lines between body bones.
* This method is called when {@link BodyRenderManager#onDrawFrame}.
*
* @param bodies Bodies data.
* @param projectionMatrix Projection matrix.
*/
@Override
public void onDrawFrame(Collection<ARBody> bodies, float[] projectionMatrix) {
for (ARBody body : bodies) {
if (body.getTrackingState() == ARTrackable.TrackingState.TRACKING) {
float coordinate = 1.0f;
if (body.getCoordinateSystemType() == ARCoordinateSystemType.COORDINATE_SYSTEM_TYPE_3D_CAMERA) {
coordinate = COORDINATE_SYSTEM_TYPE_3D_FLAG;
}
updateBodySkeletonLineData(body);
drawSkeletonLine(coordinate, projectionMatrix);
}
}
}
/**
* Update body connection data.
*/
private void updateBodySkeletonLineData(ARBody body) {
findValidConnectionSkeletonLines(body);
ShaderUtil.checkGlError(TAG, "Update body skeleton line data start.");
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVbo);
mNumPoints = mPointsLineNum;
if (mVboSize < mNumPoints * BYTES_PER_POINT) {
while (mVboSize < mNumPoints * BYTES_PER_POINT) {
// If the storage space is insufficient, allocate double the space.
mVboSize *= 2;
}
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, mVboSize, null, GLES20.GL_DYNAMIC_DRAW);
}
GLES20.glBufferSubData(GLES20.GL_ARRAY_BUFFER, 0, mNumPoints * BYTES_PER_POINT, mLinePoints);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
ShaderUtil.checkGlError(TAG, "Update body skeleton line data end.");
}
private void findValidConnectionSkeletonLines(ARBody arBody) {
mPointsLineNum = 0;
int[] connections = arBody.getBodySkeletonConnection();
float[] linePoints = new float[LINE_POINT_RATIO * connections.length];
float[] coors;
int[] isExists;
if (arBody.getCoordinateSystemType() == ARCoordinateSystemType.COORDINATE_SYSTEM_TYPE_3D_CAMERA) {
coors = arBody.getSkeletonPoint3D();
isExists = arBody.getSkeletonPointIsExist3D();
} else {
coors = arBody.getSkeletonPoint2D();
isExists = arBody.getSkeletonPointIsExist2D();
}
// Filter out valid skeleton connection lines based on the returned results,
// which consist of indexes of two ends, for example, [p0,p1;p0,p3;p0,p5;p1,p2].
// The loop takes out the 3D coordinates of the end points of the valid connection
// line and saves them in sequence.
for (int j = 0; j < connections.length; j += 2) {
if (isExists[connections[j]] != 0 && isExists[connections[j + 1]] != 0) {
linePoints[mPointsLineNum * 3] = coors[3 * connections[j]];
linePoints[mPointsLineNum * 3 + 1] = coors[3 * connections[j] + 1];
linePoints[mPointsLineNum * 3 + 2] = coors[3 * connections[j] + 2];
linePoints[mPointsLineNum * 3 + 3] = coors[3 * connections[j + 1]];
linePoints[mPointsLineNum * 3 + 4] = coors[3 * connections[j + 1] + 1];
linePoints[mPointsLineNum * 3 + 5] = coors[3 * connections[j + 1] + 2];
mPointsLineNum += 2;
}
}
mLinePoints = FloatBuffer.wrap(linePoints);
}
}
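Both display classes are driven by the BodyRenderManager mentioned earlier, which is not listed in this article. The sketch below shows one way such a manager can look, assuming it implements GLSurfaceView.Renderer, receives the ARSession from the activity, and forwards each frame's ARBody collection to the display classes; the camera background rendering done by the official sample is omitted here.
Code:
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;

import com.huawei.hiar.ARBody;
import com.huawei.hiar.ARFrame;
import com.huawei.hiar.ARSession;

import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

/**
 * Simplified render manager that drives the BodyRelatedDisplay implementations (sketch).
 */
public class BodyRenderManager implements GLSurfaceView.Renderer {
    private final List<BodyRelatedDisplay> mBodyRelatedDisplays = new ArrayList<>();
    private ARSession mSession;

    public BodyRenderManager() {
        mBodyRelatedDisplays.add(new BodySkeletonDisplay());
        mBodyRelatedDisplays.add(new BodySkeletonLineDisplay());
    }

    public void setArSession(ARSession session) {
        mSession = session;
    }

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        // Create the shaders and buffers of every display class.
        for (BodyRelatedDisplay display : mBodyRelatedDisplays) {
            display.init();
        }
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        GLES20.glViewport(0, 0, width, height);
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
        if (mSession == null) {
            return;
        }
        // Update the session and obtain the camera projection matrix for this frame.
        ARFrame frame = mSession.update();
        float[] projectionMatrix = new float[16];
        frame.getCamera().getProjectionMatrix(projectionMatrix, 0, 0.1f, 100.0f);

        // Let every display class render the tracked bodies.
        Collection<ARBody> bodies = mSession.getAllTrackables(ARBody.class);
        for (BodyRelatedDisplay display : mBodyRelatedDisplays) {
            display.onDrawFrame(bodies, projectionMatrix);
        }
    }
}
In the activity, this manager would typically be set as the renderer of a GLSurfaceView in onCreate, and the ARSession created in onResume would be handed to it through setArSession.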
For more details, you can check https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0202387404360340481&fid=0101187876626530001&channelname=HuoDong59&ha_source=xda

How much time does it take to integrate this service?

Related

How to Integrate Huawei Map Kit JavaScript API to Cross-Platforms

For more information like this, you can visit the HUAWEI Developer Forum.
Original link: https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0202330537081990041&fid=0101187876626530001
Article Introduction
In this article, we are going to cover an introduction to the HUAWEI Map Kit JavaScript API. Next, we are going to implement HUAWEI Map in an Ionic/Cordova project. Lastly, we will implement the HUAWEI Map Kit JavaScript API in a native application.
Technology Introduction
HUAWEI Map Kit provides JavaScript APIs for you to easily build map apps applicable to browsers.
It provides basic map display, map interaction, route planning, place search, geocoding, and other functions to meet the requirements of most developers.
Restriction
Before using the service, you need to apply for an API key on the HUAWEI Developers website. For details, please refer to "Creating an API Key" in API Console Operation Guide. To enhance the API key security, you are advised to restrict the API key. You can configure restrictions by app and API on the API key editing page.
Generating API Key
Go to HMS API Services > Credentials and click Create credential.
Click API key to generate new API Key.
In the dialog box that is displayed, click Restrict to set restrictions on the key to prevent unauthorized use or quota theft. This step is optional.
The restrictions include App restrictions and API restriction.
App restrictions: control which websites or apps can use your key. Set up to one app restriction per key.
API restrictions: specify the enabled APIs that this key can call.
After setting up the app restrictions and API restrictions, the API key is generated. Copy the API key to use it in your project.
Huawei Web Map API introduction
1. Make a Basic Map
Code:
function loadMapScript() {
  const apiKey = encodeURIComponent("API_KEY");
  const src = `https://mapapi.cloud.huawei.com/mapjs/v1/api/js?callback=initMap&key=${apiKey}`;
  const mapScript = document.createElement("script");
  mapScript.setAttribute("src", src);
  document.head.appendChild(mapScript);
}

function initMap() {
  const mapOptions = {};
  mapOptions.center = { lat: 48.856613, lng: 2.352222 };
  mapOptions.zoom = 8;
  mapOptions.language = "ENG";
  const map = new HWMapJsSDK.HWMap(
    document.getElementById("map"),
    mapOptions
  );
}

loadMapScript();
Note: Please replace API_KEY with the key you generated. In the script URL we declare the callback function, which is invoked automatically once the Huawei Map API has loaded successfully.
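The script above also assumes that the page contains a container element with the id map and that this element has an explicit height; otherwise nothing is rendered. A minimal host page could look like the following (the file name and styling values are just examples):
Code:
<!DOCTYPE html>
<html>
  <head>
    <style>
      /* The map container must have a visible size. */
      #map { width: 100%; height: 400px; }
    </style>
  </head>
  <body>
    <div id="map"></div>
    <!-- map.js contains the loadMapScript/initMap code shown above. -->
    <script src="map.js"></script>
  </body>
</html>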
2. Map Interactions
Map Controls
Code:
var mapOptions = {};
mapOptions.center = {lat: 48.856613, lng: 2.352222};
mapOptions.zoom = 10;
scaleControl
Code:
mapOptions.scaleControl = true; // Set to display the scale.
mapOptions.scaleControlOptions = {
  units: "imperial" // Set the scale unit to inch.
};
zoomSlider
Code:
mapOptions.zoomSlider = true ; // Set to display the zoom slider.
zoomControl
Code:
mapOptions.zoomControl = false; // Set not to display the zoom button.
rotateControl (Manage Compass)
Code:
mapOptions.rotateControl = true; // Set to display the compass.
navigationControl
Code:
mapOptions.navigationControl = true; // Set to display the pan button.
copyrightControl
Code:
mapOptions.copyrightControl = true; // Set to display the copyright information.
mapOptions.copyrightControlOptions = {value: "HUAWEI",} // Set the copyright information.
locationControl
Code:
mapOptions.locationControl= true; // Set to display the current location.
Camera
Map moving: You can call the map.panTo(latLng)
Map shift: You can call the map.panBy(x, y)
Zoom: You can use the map.setZoom(zoom) method to set the zoom level of a map.
Area control: You can use map.fitBounds(bounds) to set the map display scope.
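As a quick illustration of the camera calls listed above (the coordinates and values are arbitrary examples), assuming map is the HWMap instance created in initMap:
Code:
map.panTo({lat: 48.8566, lng: 2.3522}); // Move the camera center to a new position.
map.panBy(100, 0);                      // Shift the view by 100 pixels horizontally.
map.setZoom(12);                        // Change the zoom level.
map.fitBounds({                         // Fit the view to a bounding box.
  north: 49.0,
  south: 48.5,
  east: 2.6,
  west: 2.1
});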
Map Events
Map click event:
Code:
map.on('click', () => {
map.zoomIn();
});
Map center change event:
Code:
map.onCenterChanged(centerChangePost);
function centerChangePost() {
var center = map.getCenter();
alert('Lng:' + map.getCenter().lng + '\n' + 'Lat:' + map.getCenter().lat);
}
Map heading change event:
Code:
map.onHeadingChanged(headingChangePost);
function headingChangePost() {
alert('Heading Changed!');
}
Map zoom level change event:
Code:
map.onZoomChanged(zoomChangePost);
function zoomChangePost() {
alert('Zoom Changed!')
}
3. Drawing on Map
Marker:
You can add markers to a map to identify locations such as stores and buildings, and provide additional information with information windows.
Code:
var map;
var mMarker;
function initMap() {
var mapOptions = {};
mapOptions.center = {lat: 48.856613, lng: 2.352222};
mapOptions.zoom = 8;
map = new HWMapJsSDK.HWMap(document.getElementById('map'), mapOptions);
mMarker = new HWMapJsSDK.HWMarker({
map: map,
position: {lat: 48.85, lng: 2.35},
zIndex: 10,
label: 'A',
icon: {
opacity: 0.5
}
});
}
Marker Result:
Marker Clustering:
The HMS Core Map SDK allows you to cluster markers to effectively manage them on the map at different zoom levels. When a user zooms in on the map to a high level, all markers are displayed on the map. When the user zooms out, the markers are clustered on the map for orderly display.
Code:
var map;
var markers = [];
var markerCluster;
var locations = [
{lat: 51.5145160, lng: -0.1270060},
{ lat : 51.5064490, lng : -0.1244260 },
{ lat : 51.5097080, lng : -0.1200450 },
{ lat : 51.5090680, lng : -0.1421420 },
{ lat : 51.4976080, lng : -0.1456320 },
···
{ lat : 51.5061590, lng : -0.140280 },
{ lat : 51.5047420, lng : -0.1470490 },
{ lat : 51.5126760, lng : -0.1189760 },
{ lat : 51.5108480, lng : -0.1208480 }
];
function initMap() {
var mapOptions = {};
mapOptions.center = {lat: 48.856613, lng: 2.352222};
mapOptions.zoom = 3;
map = new HWMapJsSDK.HWMap(document.getElementById('map'), mapOptions);
generateMarkers(locations);
markerCluster = new HWMapJsSDK.HWMarkerCluster(map, markers);
}
function generateMarkers(locations) {
for (let i = 0; i < locations.length; i++) {
var opts = {
position: locations[i]
};
markers.push(new HWMapJsSDK.HWMarker(opts));
}
}
Cluster markers Result:
Information Window:
The HMS Core Map SDK supports the display of information windows on the map. There are two types of information windows: One is to display text or image independently, and the other is to display text or image in a popup above a marker. The information window provides details about a marker.
Code:
var infoWindow;
function initMap() {
var mapOptions = {};
mapOptions.center = {lat: 48.856613, lng: 2.352222};
mapOptions.zoom = 8;
var map = new HWMapJsSDK.HWMap(document.getElementById('map'), mapOptions);
infoWindow = new HWMapJsSDK.HWInfoWindow({
map,
position: {lat: 48.856613, lng: 2.352222},
content: 'This is to show mouse event of another marker',
offset: [0, -40],
});
}
Info window Result:
Ground Overlay
The builder function of GroundOverlay uses the URL, LatLngBounds, and GroundOverlayOptions of an image as the parameters to display the image in a specified area on the map. The sample code is as follows:
Code:
var map;
var mGroundOverlay;
function initMap() {
var mapOptions = {};
mapOptions.center = {lat: 48.856613, lng: 2.352222};
mapOptions.zoom = 8;
map = new HWMapJsSDK.HWMap(document.getElementById('map'), mapOptions);
var imageBounds = {
north: 49,
south: 48.5,
east: 2.5,
west: 1.5,
};
mGroundOverlay = new HWMapJsSDK.HWGroundOverlay(
// Path to a local image or URL of an image.
'huawei_logo.png',
imageBounds,
{
map: map,
opacity: 1,
zIndex: 1
}
);
}
Ground Overlay Result:
Ionic / Cordova Map Implementation
In this part of the article, we will add the Huawei Map JavaScript API to an Ionic/Cordova project.
Update index.html to implement the Huawei Map JS scripts:
You need to update src/index.html and include the Huawei Map JavaScript cloud script URL.
Code:
function loadMapScript() {
  const apiKey = encodeURIComponent("API_KEY");
  const src = `https://mapapi.cloud.huawei.com/mapjs/v1/api/js?callback=initMap&key=${apiKey}`;
  const mapScript = document.createElement("script");
  mapScript.setAttribute("src", src);
  document.head.appendChild(mapScript);
}

function initMap() { }

loadMapScript();
Make a new map page:
Code:
ionic g page maps
Update the maps.page.ts file with the following TypeScript:
Code:
import { Component, OnInit, ChangeDetectorRef } from "@angular/core";
import { Observable } from "rxjs";
declare var HWMapJsSDK: any;
declare var cordova: any;
@Component({
selector: "app-maps",
templateUrl: "./maps.page.html",
styleUrls: ["./maps.page.scss"],
})
export class MapsPage implements OnInit {
map: any;
baseLat = 24.713552;
baseLng = 46.675297;
ngOnInit() {
this.showMap(this.baseLat, this.baseLng);
}
ionViewWillEnter() {
}
ionViewDidEnter() {
}
showMap(lat = this.baseLat, lng = this.baseLng) {
const mapOptions: any = {};
mapOptions.center = { lat: lat, lng: lng };
mapOptions.zoom = 10;
mapOptions.language = "ENG";
this.map = new HWMapJsSDK.HWMap(document.getElementById("map"), mapOptions);
this.map.setCenter({ lat: lat, lng: lng });
}
}
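The component above points to maps.page.html, which is not listed in the article. A minimal template, assuming the usual Ionic page structure and that the map container is given an explicit height in maps.page.scss, could be:
Code:
<ion-header>
  <ion-toolbar>
    <ion-title>Huawei Map</ion-title>
  </ion-toolbar>
</ion-header>

<ion-content>
  <!-- Container that the HWMap instance is attached to in showMap(). -->
  <div id="map"></div>
</ion-content>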
Ionic / Cordova App Result:
Native Application Huawei JS API Implementation
In this part of the article, we will add a JavaScript-based Huawei Map HTML page into our native application through a WebView. This part of the implementation will be helpful for developers who need a very minimal map integration.
Create the assets/www/map.html file.
Add the following script to the map.html file (the page also needs a map container div and the API loader script shown earlier):
Code:
var map;
var mMarker;
var infoWindow;
function initMap() {
const LatLng = { lat: 24.713552, lng: 46.675297 };
const mapOptions = {};
mapOptions.center = LatLng;
mapOptions.zoom = 10;
mapOptions.scaleControl = true;
mapOptions.locationControl= true;
mapOptions.language = "ENG";
map = new HWMapJsSDK.HWMap(
document.getElementById("map"),
mapOptions
);
map.setCenter(LatLng);
mMarker = new HWMapJsSDK.HWMarker({
map: map,
position: LatLng,
zIndex: 10,
label: 'A',
icon: {
opacity: 0.5
}
});
mMarker.addListener('click', () => {
infoWindow.open();
});
infoWindow = new HWMapJsSDK.HWInfoWindow({
map,
position: LatLng,
content: 'This is to info window of marker',
offset: [0, -40],
});
infoWindow.close();
}
Add the webview in your layout:
Code:
<WebView
    android:id="@+id/webView_map"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />
Update your activity class to load the HTML file:
Code:
class MainActivity : AppCompatActivity() {
lateinit var context: Context
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
context = this
val mWebview = findViewById<WebView>(R.id.webView_map)
mWebview.webChromeClient = WebChromeClient()
mWebview.webViewClient = WebViewClient()
mWebview.settings.javaScriptEnabled = true
mWebview.settings.setAppCacheEnabled(true)
mWebview.settings.mediaPlaybackRequiresUserGesture = true
mWebview.settings.domStorageEnabled = true
mWebview.loadUrl("file:///android_asset/www/map.html")
}
}
Internet permission:
Don't forget to add the Internet permissions to the AndroidManifest.xml file.
Code:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
Native app Result:
References:
Huawei Map JavaScript API:
https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides/javascript-api-introduction-0000001050164063
Complete Ionic JS Map Project:
https://github.com/salmanyaqoob/Ionic-All-HMS-Kits
Conclusion
The Huawei Map JavaScript API is helpful for JavaScript developers who want to implement Huawei Map on cross-platform frameworks such as Cordova, Ionic, and React Native, and also for native developers who want to embed it in their projects. Developers can also use it to implement Huawei Maps on websites.
Thank you very much, very helpful.

Create and Monitor Geofences with HuaweiMap in Xamarin.Android Application

A geofence is a virtual perimeter set on a real geographic area. By combining a user's position with a geofence perimeter, it is possible to know whether the user is inside the geofence, or entering or exiting the area.
In this article, we will discuss how to use geofences to notify the user when the device enters/exits an area, using HMS Location Kit in a Xamarin.Android application. We will also add and customize HuaweiMap, including drawing circles, adding markers, and using Nearby Search to find places. We are going to learn how to use the features below together:
Geofence
Reverse Geocode
HuaweiMap
Nearby Search
First of all, you need to be a registered Huawei mobile developer and create an application in the Huawei AppGallery Connect console in order to use the HMS Map, Location, and Site Kits. You can follow the steps below to complete the configuration required for development.
Configuring App Information in AppGallery Connect --> shorturl.at/rL347
Creating Xamarin Android Binding Libraries --> shorturl.at/rBP46
Integrating the HMS Map Kit Libraries for Xamarin --> shorturl.at/vAHPX
Integrating the HMS Location Kit Libraries for Xamarin --> shorturl.at/dCX07
Integrating the HMS Site Kit Libraries for Xamarin --> shorturl.at/bmDX6
Integrating the HMS Core SDK --> shorturl.at/qBISV
Setting Package in Xamarin --> shorturl.at/brCU1
When we create our Xamarin.Android application in the steps above, we need to make sure that the package name is the same as the one we entered in the console. Also, don't forget to enable the kits in the console.
Manifest & Permissions
We have to update the application’s manifest file by declaring permissions that we need as shown below.
Code:
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.ACCESS_BACKGROUND_LOCATION" />
Also, add a meta-data element to embed your app ID in the application tag; it is required for the app to authenticate on Huawei's cloud server. You can find this ID in the agconnect-services.json file.
Code:
<meta-data android:name="com.huawei.hms.client.appid" android:value="appid=YOUR_APP_ID" />
Request location permission
Code:
private void RequestPermissions()
{
if (ContextCompat.CheckSelfPermission(this, Manifest.Permission.AccessCoarseLocation) != (int)Permission.Granted ||
ContextCompat.CheckSelfPermission(this, Manifest.Permission.AccessFineLocation) != (int)Permission.Granted ||
ContextCompat.CheckSelfPermission(this, Manifest.Permission.WriteExternalStorage) != (int)Permission.Granted ||
ContextCompat.CheckSelfPermission(this, Manifest.Permission.ReadExternalStorage) != (int)Permission.Granted ||
ContextCompat.CheckSelfPermission(this, Manifest.Permission.Internet) != (int)Permission.Granted)
{
ActivityCompat.RequestPermissions(this,
new System.String[]
{
Manifest.Permission.AccessCoarseLocation,
Manifest.Permission.AccessFineLocation,
Manifest.Permission.WriteExternalStorage,
Manifest.Permission.ReadExternalStorage,
Manifest.Permission.Internet
},
100);
}
else
GetCurrentPosition();
}
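When the user responds to the permission dialog, Android calls OnRequestPermissionsResult with the request code used above (100 in this sample). A minimal sketch of handling that callback could look like this:
Code:
public override void OnRequestPermissionsResult(int requestCode, string[] permissions, Permission[] grantResults)
{
    base.OnRequestPermissionsResult(requestCode, permissions, grantResults);
    if (requestCode != 100)
        return;

    // Continue only if every requested permission was granted.
    bool allGranted = grantResults.Length > 0;
    foreach (var result in grantResults)
        allGranted &= result == Permission.Granted;

    if (allGranted)
        GetCurrentPosition();
    else
        Toast.MakeText(this, "Location permissions are required for geofencing.", ToastLength.Long).Show();
}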
Add a Map
Add a <fragment> element to your activity’s layout file, activity_main.xml. This element defines a MapFragment to act as a container for the map and to provide access to the HuaweiMap object.
Code:
<fragment
android:id="@+id/mapfragment"
class="com.huawei.hms.maps.MapFragment"
android:layout_width="match_parent"
android:layout_height="match_parent"/>
Implement the IOnMapReadyCallback interface to MainActivity and override OnMapReady method which is triggered when the map is ready to use. Then use GetMapAsync to register for the map callback.
Code:
public class MainActivity : AppCompatActivity, IOnMapReadyCallback
{
...
public void OnMapReady(HuaweiMap map)
{
hMap = map;
hMap.UiSettings.MyLocationButtonEnabled = true;
hMap.UiSettings.CompassEnabled = true;
hMap.UiSettings.ZoomControlsEnabled = true;
hMap.UiSettings.ZoomGesturesEnabled = true;
hMap.MyLocationEnabled = true;
hMap.MapClick += HMap_MapClick;
if (selectedCoordinates == null)
selectedCoordinates = new GeofenceModel { LatLng = CurrentPosition, Radius = 30 };
}
}
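The OnCreate wiring that loads the layout, creates the FusedLocationProviderClient, and registers for the map callback is not shown above. A minimal sketch, assuming the mapfragment id from the layout, the fusedLocationProviderClient field used later, and the Xamarin HMS bindings, could be:
Code:
protected override void OnCreate(Bundle savedInstanceState)
{
    base.OnCreate(savedInstanceState);
    SetContentView(Resource.Layout.activity_main);

    // Client used later by GetCurrentPosition() through the LastLocation API.
    fusedLocationProviderClient = LocationServices.GetFusedLocationProviderClient(this);

    // Obtain the MapFragment declared in activity_main.xml and register the map callback.
    var mapFragment = (MapFragment)FragmentManager.FindFragmentById(Resource.Id.mapfragment);
    mapFragment.GetMapAsync(this);

    RequestPermissions();
}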
As you can see above, with the UiSettings property of the HuaweiMap object we enable the My Location button, the compass, and so on. Now, when the app launches, we directly get the current location and move the camera to it. To do that, we use the FusedLocationProviderClient we instantiated and call the LastLocation API.
The LastLocation API returns a Task object whose result we can check by implementing the relevant listeners for success and failure. In the success listener, we move the map's camera to the last known position.
Code:
private void GetCurrentPosition()
{
var locationTask = fusedLocationProviderClient.LastLocation;
locationTask.AddOnSuccessListener(new LastLocationSuccess(this));
locationTask.AddOnFailureListener(new LastLocationFail(this));
}
...
public class LastLocationSuccess : Java.Lang.Object, IOnSuccessListener
{
...
public void OnSuccess(Java.Lang.Object location)
{
Toast.MakeText(mainActivity, "LastLocation request successful", ToastLength.Long).Show();
if (location != null)
{
MainActivity.CurrentPosition = new LatLng((location as Location).Latitude, (location as Location).Longitude);
mainActivity.RepositionMapCamera((location as Location).Latitude, (location as Location).Longitude);
}
}
}
To change the position of the camera, we must specify where we want to move the camera, using a CameraUpdate. The Map Kit allows us to create many different types of CameraUpdate using CameraUpdateFactory.
There are several methods for changing the camera position. Briefly, these are:
NewLatLng: Change camera’s latitude and longitude, while keeping other properties
NewLatLngZoom: Changes the camera’s latitude, longitude, and zoom, while keeping other properties
NewCameraPosition: Full flexibility in changing the camera position
We are going to use NewCameraPosition. A CameraPosition can be obtained with a CameraPosition.Builder. And then we can set target, bearing, tilt and zoom properties.
Code:
public void RepositionMapCamera(double lat, double lng)
{
var cameraPosition = new CameraPosition.Builder();
cameraPosition.Target(new LatLng(lat, lng));
cameraPosition.Zoom(1000);
cameraPosition.Bearing(45);
cameraPosition.Tilt(20);
CameraUpdate cameraUpdate = CameraUpdateFactory.NewCameraPosition(cameraPosition.Build());
hMap.MoveCamera(cameraUpdate);
}
Creating Geofence
In this part, we will choose the location where we want to set geofence in two different ways. The first is to select the location by clicking on the map, and the second is to search for nearby places by keyword and select one after placing them on the map with the marker.
Set the geofence location by clicking on the map
It is always easier to select a location by seeing it. After this section, we will be able to set a geofence around the point that is tapped on the map. We attached the MapClick event to our map in the OnMapReady method. In this click handler, we will add a marker at the clicked point and draw a circle around it.
Also, we will use the SeekBar at the bottom of the page to adjust the circle radius, and we set the selectedCoordinates variable when adding the marker. Let's create the following methods to handle the click and add the marker:
Code:
private void HMap_MapClick(object sender, HuaweiMap.MapClickEventArgs e)
{
selectedCoordinates.LatLng = e.P0;
if (circle != null)
{
circle.Remove();
circle = null;
}
AddMarkerOnMap();
}
void AddMarkerOnMap()
{
if (marker != null) marker.Remove();
var markerOption = new MarkerOptions()
.InvokeTitle("You are here now")
.InvokePosition(selectedCoordinates.LatLng);
hMap.SetInfoWindowAdapter(new MapInfoWindowAdapter(this));
marker = hMap.AddMarker(markerOption);
bool isInfoWindowShown = marker.IsInfoWindowShown;
if (isInfoWindowShown)
marker.HideInfoWindow();
else
marker.ShowInfoWindow();
}
Add the MapInfoWindowAdapter class to our project to render the custom info window, and implement the HuaweiMap.IInfoWindowAdapter interface on it. Whenever an information window needs to be displayed for a marker, the methods provided by this adapter are called.
Now let's create a custom info window layout and name it map_info_view.xml:
Code:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:orientation="vertical"
android:layout_width="match_parent"
android:layout_height="match_parent">
<Button
android:text="Add geofence"
android:width="100dp"
style="@style/Widget.AppCompat.Button.Colored"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:id="@+id/btnInfoWindow" />
</LinearLayout>
We customize and return it in the GetInfoWindow() method. The full code of the adapter is below:
Code:
internal class MapInfoWindowAdapter : Java.Lang.Object, HuaweiMap.IInfoWindowAdapter
{
private MainActivity activity;
private GeofenceModel selectedCoordinates;
private View addressLayout;
public MapInfoWindowAdapter(MainActivity currentActivity){activity = currentActivity;}
public View GetInfoContents(Marker marker){return null;}
public View GetInfoWindow(Marker marker)
{
if (marker == null)
return null;
selectedCoordinates = new GeofenceModel { LatLng = new LatLng(marker.Position.Latitude, marker.Position.Longitude) };
View mapInfoView = activity.LayoutInflater.Inflate(Resource.Layout.map_info_view, null);
var radiusBar = activity.FindViewById<SeekBar>(Resource.Id.radiusBar);
if (radiusBar.Visibility == Android.Views.ViewStates.Invisible)
{
radiusBar.Visibility = Android.Views.ViewStates.Visible;
radiusBar.SetProgress(30, true);
}
activity.FindViewById<SeekBar>(Resource.Id.radiusBar)?.SetProgress(30, true);
activity.DrawCircleOnMap(selectedCoordinates);
Button button = mapInfoView.FindViewById<Button>(Resource.Id.btnInfoWindow);
button.Click += btnInfoWindow_ClickAsync;
return mapInfoView;
}
}
Now we create a method that draws a circle around the marker representing the geofence radius. Create a new DrawCircleOnMap method in MainActivity for this. To construct a circle, we must specify the center and radius; we also set other properties such as the stroke color.
Code:
public void DrawCircleOnMap(GeofenceModel geoModel)
{
if (circle != null)
{
circle.Remove();
circle = null;
}
CircleOptions circleOptions = new CircleOptions()
.InvokeCenter(geoModel.LatLng)
.InvokeRadius(geoModel.Radius)
.InvokeFillColor(Color.Argb(50, 0, 14, 84))
.InvokeStrokeColor(Color.Yellow)
.InvokeStrokeWidth(15);
circle = hMap.AddCircle(circleOptions);
}
private void radiusBar_ProgressChanged(object sender, SeekBar.ProgressChangedEventArgs e)
{
selectedCoordinates.Radius = e.Progress;
DrawCircleOnMap(selectedCoordinates);
}
We will use SeekBar to change the radius of the circle. As the value changes, the drawn circle will expand or shrink.
Reverse Geocoding
Now let’s handle the click event of the info window.
But before opening that window, we need to reverse geocode the selected coordinates to get a formatted address. HUAWEI Site Kit provides a set of HTTP APIs, including the one we need: reverseGeocode.
Let’s add the GeocodeManager class to our project and update it as follows:
Code:
public async Task<ReverseGeocodeResponse> ReverseGeocode(double lat, double lng)
{
string result = "";
using (var client = new HttpClient())
{
MyLocation location = new MyLocation();
location.Lat = lat;
location.Lng = lng;
var root = new ReverseGeocodeRequest();
root.Location = location;
var settings = new JsonSerializerSettings();
settings.ContractResolver = new LowercaseSerializer();
var json = JsonConvert.SerializeObject(root, Formatting.Indented, settings);
var data = new StringContent(json, Encoding.UTF8, "application/json");
var url = "siteapi.cloud.huawei.com/mapApi/v1/siteService/reverseGeocode?key=" + Android.Net.Uri.Encode(ApiKey);
var response = await client.PostAsync(url, data);
result = response.Content.ReadAsStringAsync().Result;
}
return JsonConvert.DeserializeObject<ReverseGeocodeResponse>(result);
}
In the code above, we request the address corresponding to a given latitude/longitude and send the request body in JSON format.
https://siteapi.cloud.huawei.com/mapApi/v1/siteService/reverseGeocode?key=APIKEY
Request model:
Code:
public class MyLocation
{
public double Lat { get; set; }
public double Lng { get; set; }
}
public class ReverseGeocodeRequest
{
public MyLocation Location { get; set; }
}
Note that the JSON response contains three root elements:
“returnCode”: For details, please refer to Result Codes.
“returnDesc”: description
“sites” contains an array of geocoded address information
Generally, only one entry in the “sites” array is returned for address lookups, though the geocoder may return several results when address queries are ambiguous.
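The ReverseGeocodeResponse and Site model classes used for deserializing this JSON are not listed in the article. A reduced sketch containing only the members used in this sample (the property names are assumptions matching the JSON keys above) could be:
Code:
public class ReverseGeocodeResponse
{
    public int ReturnCode { get; set; }
    public string ReturnDesc { get; set; }
    public List<Site> Sites { get; set; }
}

public class Site
{
    public string Name { get; set; }
    public string FormatAddress { get; set; }
    public MyLocation Location { get; set; }
}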
Add the following code to our MapInfoWindowAdapter, where we get the result from the reverse geocode API and set the UI elements.
Code:
private async void btnInfoWindow_ClickAsync(object sender, System.EventArgs e)
{
addressLayout = activity.LayoutInflater.Inflate(Resource.Layout.reverse_alert_layout, null);
GeocodeManager geocodeManager = new GeocodeManager(activity);
var addressResult = await geocodeManager.ReverseGeocode(selectedCoordinates.LatLng.Latitude, selectedCoordinates.LatLng.Longitude);
if (addressResult.ReturnCode != 0)
return;
var address = addressResult.Sites.FirstOrDefault();
var txtAddress = addressLayout.FindViewById<TextView>(Resource.Id.txtAddress);
var txtRadius = addressLayout.FindViewById<TextView>(Resource.Id.txtRadius);
txtAddress.Text = address.FormatAddress;
txtRadius.Text = selectedCoordinates.Radius.ToString();
AlertDialog.Builder builder = new AlertDialog.Builder(activity);
builder.SetView(addressLayout);
builder.SetTitle(address.Name);
builder.SetPositiveButton("Save", (sender, arg) =>
{
selectedCoordinates.Conversion = GetSelectedConversion();
GeofenceManager geofenceManager = new GeofenceManager(activity);
geofenceManager.AddGeofences(selectedCoordinates);
});
builder.SetNegativeButton("Cancel", (sender, arg) => { builder.Dispose(); });
AlertDialog alert = builder.Create();
alert.Show();
}
Now, after selecting the conversion, we can complete the process by pressing the Save button in the dialog window, which calls the AddGeofences method in the GeofenceManager class.
Code:
public void AddGeofences(GeofenceModel geofenceModel)
{
//Set parameters
geofenceModel.Id = Guid.NewGuid().ToString();
if (geofenceModel.Conversion == 5) //Expiration value that indicates the geofence should never expire.
geofenceModel.Timeout = Geofence.GeofenceNeverExpire;
else
geofenceModel.Timeout = 10000;
List<IGeofence> geofenceList = new List<IGeofence>();
//Geofence Service
GeofenceService geofenceService = LocationServices.GetGeofenceService(activity);
PendingIntent pendingIntent = CreatePendingIntent();
GeofenceBuilder somewhereBuilder = new GeofenceBuilder()
.SetUniqueId(geofenceModel.Id)
.SetValidContinueTime(geofenceModel.Timeout)
.SetRoundArea(geofenceModel.LatLng.Latitude, geofenceModel.LatLng.Longitude, geofenceModel.Radius)
.SetDwellDelayTime(10000)
.SetConversions(geofenceModel.Conversion); ;
//Create geofence request
geofenceList.Add(somewhereBuilder.Build());
GeofenceRequest geofenceRequest = new GeofenceRequest.Builder()
.CreateGeofenceList(geofenceList)
.Build();
//Register geofence
var geoTask = geofenceService.CreateGeofenceList(geofenceRequest, pendingIntent);
geoTask.AddOnSuccessListener(new CreateGeoSuccessListener(activity));
geoTask.AddOnFailureListener(new CreateGeoFailListener(activity));
}
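AddGeofences calls a CreatePendingIntent helper that is not listed. A minimal sketch, assuming it targets the GeofenceBroadcastReceiver shown below, could be:
Code:
private PendingIntent CreatePendingIntent()
{
    // Broadcast intent that GeofenceBroadcastReceiver handles when a geofence conversion occurs.
    var intent = new Intent(activity, typeof(GeofenceBroadcastReceiver));
    intent.SetAction(GeofenceBroadcastReceiver.ActionGeofence);
    return PendingIntent.GetBroadcast(activity, 0, intent, PendingIntentFlags.UpdateCurrent);
}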
In the AddGeofences method, we set the geofence request parameters with GeofenceBuilder, such as the selected conversion, a unique ID, and a timeout that depends on the conversion. We also create a GeofenceBroadcastReceiver and display a toast message when a geofence action occurs.
Code:
[BroadcastReceiver(Enabled = true)]
[IntentFilter(new[] { "com.huawei.hms.geofence.ACTION_PROCESS_ACTIVITY" })]
class GeofenceBroadcastReceiver : BroadcastReceiver
{
public static readonly string ActionGeofence = "com.huawei.hms.geofence.ACTION_PROCESS_ACTIVITY";
public override void OnReceive(Context context, Intent intent)
{
if (intent != null)
{
var action = intent.Action;
if (action == ActionGeofence)
{
GeofenceData geofenceData = GeofenceData.GetDataFromIntent(intent);
if (geofenceData != null)
{
Toast.MakeText(context, "Geofence triggered: " + geofenceData.ConvertingLocation.Latitude +"\n" + geofenceData.ConvertingLocation.Longitude + "\n" + geofenceData.Conversion.ToConversionName(), ToastLength.Long).Show();
}
}
}
}
}
After that, in CreateGeoSuccessListener and CreateGeoFailListener, which implement IOnSuccessListener and IOnFailureListener respectively, we display a toast message to the user like this:
Code:
public class CreateGeoFailListener : Java.Lang.Object, IOnFailureListener
{
public void OnFailure(Java.Lang.Exception ex)
{
Toast.MakeText(mainActivity, "Geofence request failed: " + GeofenceErrorCodes.GetErrorMessage((ex as ApiException).StatusCode), ToastLength.Long).Show();
}
}
public class CreateGeoSuccessListener : Java.Lang.Object, IOnSuccessListener
{
public void OnSuccess(Java.Lang.Object data)
{
Toast.MakeText(mainActivity, "Geofence request successful", ToastLength.Long).Show();
}
}
Set geofence location using Nearby Search
On the main layout, when the user clicks the Search Nearby Places button, a search dialog like the one below appears:
Create search_alert_layout.xml with a search input. In MainActivity, create the click event of that button and open an alert dialog whose view is set to search_alert_layout, then perform a nearby search when the Search button is clicked:
Code:
private void btnGeoWithAddress_Click(object sender, EventArgs e)
{
search_view = base.LayoutInflater.Inflate(Resource.Layout.search_alert_layout, null);
AlertDialog.Builder builder = new AlertDialog.Builder(this);
builder.SetView(search_view);
builder.SetTitle("Search Location");
builder.SetNegativeButton("Cancel", (sender, arg) => { builder.Dispose(); });
search_view.FindViewById<Button>(Resource.Id.btnSearch).Click += btnSearchClicked;
alert = builder.Create();
alert.Show();
}
private void btnSearchClicked(object sender, EventArgs e)
{
string searchText = search_view.FindViewById<TextView>(Resource.Id.txtSearch).Text;
GeocodeManager geocodeManager = new GeocodeManager(this);
geocodeManager.NearbySearch(CurrentPosition, searchText);
}
We pass the search text and the current location into the GeocodeManager NearbySearch method as parameters. We need to modify the GeocodeManager class and add a nearby search method to it.
Code:
public void NearbySearch(LatLng currentLocation, string searchText)
{
ISearchService searchService = SearchServiceFactory.Create(activity, Android.Net.Uri.Encode("YOUR_API_KEY"));
NearbySearchRequest nearbySearchRequest = new NearbySearchRequest();
nearbySearchRequest.Query = searchText;
nearbySearchRequest.Language = "en";
nearbySearchRequest.Location = new Coordinate(currentLocation.Latitude, currentLocation.Longitude);
nearbySearchRequest.Radius = (Integer)2000;
nearbySearchRequest.PageIndex = (Integer)1;
nearbySearchRequest.PageSize = (Integer)5;
nearbySearchRequest.PoiType = LocationType.Address;
searchService.NearbySearch(nearbySearchRequest, new NearbySearchResultListener(activity as MainActivity));
}
To handle the result, we must create a listener class that implements the ISearchResultListener interface.
Code:
public class NearbySearchResultListener : Java.Lang.Object, ISearchResultListener
{
public void OnSearchError(SearchStatus status)
{
Toast.MakeText(context, "Error Code: " + status.ErrorCode + " Error Message: " + status.ErrorMessage, ToastLength.Long);
}
public void OnSearchResult(Java.Lang.Object results)
{
NearbySearchResponse nearbySearchResponse = (NearbySearchResponse)results;
if (nearbySearchResponse != null && nearbySearchResponse.TotalCount > 0)
context.SetSearchResultOnMap(nearbySearchResponse.Sites);
}
}
In the OnSearchResult method, a NearbySearchResponse object is returned. We will insert a marker on the map for each site in this response. The map will look like this:
In MainActivity, create a method named SetSearchResultOnMap that takes an IList<Site> parameter and inserts multiple markers on the map.
Code:
public void SetSearchResultOnMap(IList<Com.Huawei.Hms.Site.Api.Model.Site> sites)
{
hMap.Clear();
if (searchMarkers != null && searchMarkers.Count > 0)
foreach (var item in searchMarkers)
item.Remove();
searchMarkers = new List<Marker>();
for (int i = 0; i < sites.Count; i++)
{
MarkerOptions marker1Options = new MarkerOptions()
.InvokePosition(new LatLng(sites[i].Location.Lat, sites[i].Location.Lng))
.InvokeTitle(sites[i].Name).Clusterable(true);
hMap.SetInfoWindowAdapter(new MapInfoWindowAdapter(this));
var marker1 = hMap.AddMarker(marker1Options);
searchMarkers.Add(marker1);
RepositionMapCamera(sites[i].Location.Lat, sites[i].Location.Lng);
}
hMap.SetMarkersClustering(true);
alert.Dismiss();
}
Now we add markers as we did above, but here we use SetMarkersClustering(true) to consolidate markers into clusters when zooming out of the map.
You can download the source code from below:
github.com/stugcearar/HMSCore-Xamarin-Android-Samples/tree/master/LocationKit/HMS_Geofence
Also if you have any questions, ask away in Huawei Developer Forums.
Errors
If the location permission is set to "Allowed only while in use" instead of "Allowed all the time", the exception below will be thrown.
int GEOFENCE_INSUFFICIENT_PERMISSION
Insufficient permission to perform geofence-related operations.
You can see all result codes, including errors, here for the Location service.
You can find the result codes with details here for geofence requests.

Health Kit | Data Controller Sample

Hello everyone. In this article, we'll develop an Android application using the Huawei Health Kit's Data Controller feature. Let's get started.
About the Service
HUAWEI Health Kit (Health Kit for short) allows ecosystem apps to access fitness and health data of users based on their HUAWEI ID and authorization. For consumers, Health Kit provides a mechanism for fitness and health data storage and sharing based on flexible authorization. For developers and partners, Health Kit provides a data platform and fitness and health open capabilities, so that they can build related apps and services based on a multitude of data types. Health Kit connects the hardware devices and ecosystem apps to provide consumers with health care, workout guidance, and ultimate service experience.
Configure your project on AppGallery Connect
Registering a Huawei ID
You need to register a Huawei ID to use the kit. If you don't have one, follow the instructions here.
Preparations for Integrating HUAWEI HMS Core
First of all, you need to integrate Huawei Mobile Services into your application. I will not go into the details of the integration here, but you can use this tutorial as a step-by-step guide.
Make sure your defaultConfig is set up as shown below, and add the required Health Kit dependency to the app-level build.gradle file.
Code:
android {
defaultConfig {
minSdkVersion 24
targetSdkVersion 30
versionCode 1
versionName "1.0"
testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
resConfigs "en", "zh-rCN", "tr"
}
}
dependencies {
implementation 'com.huawei.hms:health:5.0.3.300'
}
Let's add the required permissions to the AndroidManifest.xml file.
Code:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>
Applying for Health Kit
You should select the data access permissions that your product needs to apply for.
For more details, visit: https://developer.huawei.com/consumer/en/doc/apply-kitservice-0000001050071707-V5
Developing Your App
Signing In and Applying for Scopes
The app calls the related APIs to display the HUAWEI ID sign-in and authorization screens. The app can only access data upon user authorization. The user can select the data types to be authorized and grant only some of the data permissions.
Code:
public class HealthkitActivity extends AppCompatActivity {
private static final String TAG = "KitConnectActivity";
// Request code for displaying the authorization screen using the startActivityForResult method.
// The value can be defined by developers.
private static final int REQUEST_SIGN_IN_LOGIN = 1002;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_healthkit);
signIn();
}
@Override
protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
super.onActivityResult(requestCode, resultCode, data);
// Handle the sign-in response.
handleSignInResult(requestCode, data);
}
private void signIn() {
Log.i(TAG, "begin sign in");
List<Scope> scopeList = new ArrayList<>();
// Add scopes to apply for. The following only shows an example.
// Developers need to add scopes according to their specific needs.
// View and save steps in HUAWEI Health Kit.
scopeList.add(new Scope(Scopes.HEALTHKIT_STEP_BOTH));
// View and save height and weight in HUAWEI Health Kit.
scopeList.add(new Scope(Scopes.HEALTHKIT_HEIGHTWEIGHT_BOTH));
// View and save the heart rate data in HUAWEI Health Kit.
scopeList.add(new Scope(Scopes.HEALTHKIT_HEARTRATE_BOTH));
// Used for recording real-time steps in HUAWEI Health Kit.
// scopeList.add(new Scope(Scopes.HEALTHKIT_STEP_REALTIME));
// Used for recording real-time heartRate in HUAWEI Health Kit.
//scopeList.add(new Scope(Scopes.HEALTHKIT_HEARTRATE_REALTIME));
// View and save activityRecord in HUAWEI Health Kit.
// scopeList.add(new Scope(Scopes.HEALTHKIT_ACTIVITY_RECORD_BOTH));
// Configure authorization parameters.
HuaweiIdAuthParamsHelper authParamsHelper =
new HuaweiIdAuthParamsHelper(HuaweiIdAuthParams.DEFAULT_AUTH_REQUEST_PARAM);
HuaweiIdAuthParams authParams =
authParamsHelper.setIdToken().setAccessToken().setScopeList(scopeList).createParams();
// Initialize the HuaweiIdAuthService object.
final HuaweiIdAuthService authService = HuaweiIdAuthManager.getService(getApplicationContext(), authParams);
Task<AuthHuaweiId> authHuaweiIdTask = authService.silentSignIn();
final Context context = this;
// Add the callback for the call result.
authHuaweiIdTask.addOnSuccessListener(new OnSuccessListener<AuthHuaweiId>() {
@Override
public void onSuccess(AuthHuaweiId huaweiId) {
// The silent sign-in is successful.
Log.i(TAG, "silentSignIn success");
Toast.makeText(context, "silentSignIn success", Toast.LENGTH_LONG).show();
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception exception) {
// The silent sign-in fails.
// This indicates that the authorization has not been granted by the current account.
if (exception instanceof ApiException) {
ApiException apiException = (ApiException) exception;
Log.i(TAG, "sign failed status:" + apiException.getStatusCode());
Toast.makeText(context, "sign failed status:" + apiException.getStatusCode(), Toast.LENGTH_LONG).show();
Log.i(TAG, "begin sign in by intent");
Toast.makeText(context, "begin sign in by intent", Toast.LENGTH_LONG).show();
// Call the sign-in API using the getSignInIntent() method.
Intent signInIntent = authService.getSignInIntent();
startActivityForResult(signInIntent, REQUEST_SIGN_IN_LOGIN);
}
}
});
}
private void handleSignInResult(int requestCode, Intent data) {
// Handle only the authorized responses
if (requestCode != REQUEST_SIGN_IN_LOGIN) {
return;
}
// Obtain the authorization response from the intent.
HuaweiIdAuthResult result = HuaweiIdAuthAPIManager.HuaweiIdAuthAPIService.parseHuaweiIdFromIntent(data);
if (result != null) {
Log.d(TAG, "handleSignInResult status = " + result.getStatus() + ", result = " + result.isSuccess());
Toast.makeText(this, "handleSignInResult status = "+ result.getStatus() + ", result = " + result.isSuccess(), Toast.LENGTH_LONG).show();
if (result.isSuccess()) {
Log.d(TAG, "sign in is success");
Toast.makeText(this, "sign in is success", Toast.LENGTH_LONG).show();
// Obtain the authorization result.
HuaweiIdAuthResult authResult =
HuaweiIdAuthAPIManager.HuaweiIdAuthAPIService.parseHuaweiIdFromIntent(data);
}
}
}
}
For details about the sign-in process, please refer to HUAWEI Account Kit Development Guide.
DataController
After integrating Health Kit, the app can call the methods of DataController to perform operations on fitness and health data. The methods used in this article include:
insert: inserts data.
delete: deletes data.
update: updates data.
read: reads data.
readTodaySummation: queries the statistical data of the current day.
readDailySummation: queries the statistical data of multiple days.
clearAll: clears data of the app from the device and cloud.
Inserting the User’s Fitness and Health Data
Insert the user’s fitness and health data into the Health platform.
Code:
public class HealthDataControllerActivity extends AppCompatActivity {
private static final String TAG = "DataController";
// Object of controller for fitness and health data, providing APIs for read/write, batch read/write, and listening
private DataController dataController;
// Internal context object of the activity
private Context context;
// PendingIntent, required when registering or unregistering a listener within the data controller
private PendingIntent pendingIntent;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_health_data_controller);
context = this;
logInfoView = (TextView) findViewById(R.id.data_controller_log_info);
logInfoView.setMovementMethod(ScrollingMovementMethod.getInstance());
initDataController();
syncAllData = findViewById(R.id.syncAllData);
syncAllData.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
syncAllData(dataController);
}
});
}
/**
* Initialize a data controller object.
*/
private void initDataController() {
// Obtain and set the read & write permissions for DT_CONTINUOUS_STEPS_DELTA and DT_INSTANTANEOUS_HEIGHT.
// Use the obtained permissions to obtain the data controller object.
HiHealthOptions hiHealthOptions = HiHealthOptions.builder()
.addDataType(DataType.DT_CONTINUOUS_STEPS_DELTA, HiHealthOptions.ACCESS_READ)
.addDataType(DataType.DT_CONTINUOUS_STEPS_DELTA, HiHealthOptions.ACCESS_WRITE)
.addDataType(DataType.DT_INSTANTANEOUS_HEIGHT, HiHealthOptions.ACCESS_READ)
.addDataType(DataType.DT_INSTANTANEOUS_HEIGHT, HiHealthOptions.ACCESS_WRITE)
.build();
AuthHuaweiId signInHuaweiId = HuaweiIdAuthManager.getExtendedAuthResult(hiHealthOptions);
dataController = HuaweiHiHealth.getDataController(context, signInHuaweiId);
}
/**
* Use the data controller to add a sampling dataset.
*
* @param view (indicating a UI object)
* @throws ParseException (indicating a failure to parse the time string)
*/
public void insertData(View view) throws ParseException {
// 1. Build a DataCollector object.
DataCollector dataCollector = new DataCollector.Builder().setPackageName(context)
.setDataType(DataType.DT_CONTINUOUS_STEPS_DELTA)
.setDataStreamName("STEPS_DELTA")
.setDataGenerateType(DataCollector.DATA_TYPE_RAW)
.build();
// 2. Create a sampling dataset set based on the data collector.
final SampleSet sampleSet = SampleSet.create(dataCollector);
// 3. Build the start time, end time, and incremental step count for a DT_CONTINUOUS_STEPS_DELTA sampling point.
SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd hh:mm:ss");
Date startDate = dateFormat.parse("2020-09-23 09:00:00");
//Date startDate = dateFormat.parse(start_Date.getText().toString());
Date endDate = dateFormat.parse("2020-09-23 09:05:00");
//Date startDate = dateFormat.parse(end_Date.getText().toString());
/*try {
// Enter the start time and end time. The standard UNIX timestamp is used for storage, without considering the time zone differences.
startDate = dateFormat.parse("2020-03-17 09:00:00");
endDate = dateFormat.parse("2020-03-17 09:05:00");
} catch (ParseException e) {
logger("Time parsing error");
}*/
int stepsDelta = 1000;
// 4. Build a DT_CONTINUOUS_STEPS_DELTA sampling point.
SamplePoint samplePoint = sampleSet.createSamplePoint()
.setTimeInterval(startDate.getTime(), endDate.getTime(), TimeUnit.MILLISECONDS);
samplePoint.getFieldValue(Field.FIELD_STEPS_DELTA).setIntValue(stepsDelta);
// 5. Save a DT_CONTINUOUS_STEPS_DELTA sampling point to the sampling dataset.
// You can repeat steps 3 through 5 to add more sampling points to the sampling dataset.
sampleSet.addSample(samplePoint);
// 6. Call the data controller to insert the sampling dataset into the Health platform.
Task<Void> insertTask = dataController.insert(sampleSet);
// 7. Calling the data controller to insert the sampling dataset is an asynchronous operation.
// Therefore, a listener needs to be registered to monitor whether the data insertion is successful or not.
insertTask.addOnSuccessListener(new OnSuccessListener<Void>() {
@Override
public void onSuccess(Void result) {
logger("Success insert an SampleSet into HMS core");
showSampleSet(sampleSet);
logger(SPLIT);
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
printFailureMessage(e, "insert");
}
});
}
Deleting the User’s Fitness and Health Data
Only historical data that has been inserted by the current app can be deleted from the Health platform.
Code:
/**
* Use the data controller to delete the sampling data by specific criteria.
*
* @param view (indicating a UI object)
* @throws ParseException (indicating a failure to parse the time string)
*/
public void deleteData(View view) throws ParseException {
// 1. Build the condition for data deletion: a DataCollector object.
DataCollector dataCollector = new DataCollector.Builder().setPackageName(context)
.setDataType(DataType.DT_CONTINUOUS_STEPS_DELTA)
.setDataStreamName("STEPS_DELTA")
.setDataGenerateType(DataCollector.DATA_TYPE_RAW)
.build();
// 2. Build the time range for the deletion: start time and end time.
SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd hh:mm:ss");
Date startDate = dateFormat.parse("2020-08-27 09:00:00");
Date endDate = dateFormat.parse("2020-08-27 09:05:00");
// 3. Build a parameter object as the conditions for the deletion.
DeleteOptions deleteOptions = new DeleteOptions.Builder().addDataCollector(dataCollector)
.setTimeInterval(startDate.getTime(), endDate.getTime(), TimeUnit.MILLISECONDS)
.build();
// 4. Use the specified condition deletion object to call the data controller to delete the sampling dataset.
Task<Void> deleteTask = dataController.delete(deleteOptions);
// 5. Calling the data controller to delete the sampling dataset is an asynchronous operation.
// Therefore, a listener needs to be registered to monitor whether the data deletion is successful or not.
deleteTask.addOnSuccessListener(new OnSuccessListener<Void>() {
@Override
public void onSuccess(Void result) {
logger("Success delete sample data from HMS core");
logger(SPLIT);
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
String errorCode = e.getMessage();
String errorMsg = HiHealthStatusCodes.getStatusCodeMessage(Integer.parseInt(errorCode));
logger(errorCode + ": " + errorMsg);
printFailureMessage(e, "delete");
}
});
}
Updating the User’s Fitness and Health Data
Code:
/**
* Use the data controller to modify the sampling data by specific criteria.
*
* @param view (indicating a UI object)
* @throws ParseException (indicating a failure to parse the time string)
*/
public void updateData(View view) throws ParseException {
// 1. Build the condition for data update: a DataCollector object.
DataCollector dataCollector = new DataCollector.Builder().setPackageName(context)
.setDataType(DataType.DT_CONTINUOUS_STEPS_DELTA)
.setDataStreamName("STEPS_DELTA")
.setDataGenerateType(DataCollector.DATA_TYPE_RAW)
.build();
// 2. Build the sampling dataset for the update: create a sampling dataset
// for the update based on the data collector.
SampleSet sampleSet = SampleSet.create(dataCollector);
// 3. Build the start time, end time, and incremental step count for
// a DT_CONTINUOUS_STEPS_DELTA sampling point for the update.
SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd hh:mm:ss");
Date startDate = dateFormat.parse("2020-08-27 09:00:00");
Date endDate = dateFormat.parse("2020-08-27 09:05:00");
int stepsDelta = 2000;
// 4. Build a DT_CONTINUOUS_STEPS_DELTA sampling point for the update.
SamplePoint samplePoint = sampleSet.createSamplePoint()
.setTimeInterval(startDate.getTime(), endDate.getTime(), TimeUnit.MILLISECONDS);
samplePoint.getFieldValue(Field.FIELD_STEPS_DELTA).setIntValue(stepsDelta);
// 5. Add an updated DT_CONTINUOUS_STEPS_DELTA sampling point to the sampling dataset for the update.
// You can repeat steps 3 through 5 to add more updated sampling points to the sampling dataset for the update.
sampleSet.addSample(samplePoint);
// 6. Build a parameter object for the update.
// Note: (1) The start time of the modified object updateOptions cannot be greater than the minimum
// value of the start time of all sample data points in the modified data sample set
// (2) The end time of the modified object updateOptions cannot be less than the maximum value of the
// end time of all sample data points in the modified data sample set
UpdateOptions updateOptions =
new UpdateOptions.Builder().setTimeInterval(startDate.getTime(), endDate.getTime(), TimeUnit.MILLISECONDS)
.setSampleSet(sampleSet)
.build();
// 7. Use the specified parameter object for the update to call the
// data controller to modify the sampling dataset.
Task<Void> updateTask = dataController.update(updateOptions);
// 8. Calling the data controller to modify the sampling dataset is an asynchronous operation.
// Therefore, a listener needs to be registered to monitor whether the data update is successful or not.
updateTask.addOnSuccessListener(new OnSuccessListener<Void>() {
@Override
public void onSuccess(Void result) {
logger("Success update sample data from HMS core");
logger(SPLIT);
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
printFailureMessage(e, "update");
String errorCode = e.getMessage();
String errorMsg = HiHealthStatusCodes.getStatusCodeMessage(Integer.parseInt(errorCode));
logger(errorCode + ": " + errorMsg);
}
});
}
Querying the User’s Fitness and Health Data
To read historical data from the Health platform, for example, to read the number of steps taken within a period of time, you can specify the read conditions in ReadOptions. For example, you can specify the data collector, data type, and detailed data. The dataset that matches the query criteria will be returned.
Code:
/**
* Use the data controller to query the sampling dataset by specific criteria.
*
* @param view (indicating a UI object)
* @throws ParseException (indicating a failure to parse the time string)
*/
public void readData(View view) throws ParseException {
// 1. Build the condition for data query: a DataCollector object.
DataCollector dataCollector = new DataCollector.Builder().setPackageName(context)
.setDataType(DataType.DT_CONTINUOUS_STEPS_DELTA)
.setDataStreamName("STEPS_DELTA")
.setDataGenerateType(DataCollector.DATA_TYPE_RAW)
.build();
// 2. Build the time range for the query: start time and end time.
SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd hh:mm:ss");
Date startDate = dateFormat.parse("2020-08-27 09:00:00");
Date endDate = dateFormat.parse("2020-08-27 09:05:00");
try {
// Enter the start time and end time. The standard UNIX timestamp is used for storage, without considering the time zone differences. Data points within the specified timestamp range will be queried.
startDate = dateFormat.parse("2020-03-17 09:00:00");
endDate = dateFormat.parse("2020-03-17 09:05:00");
} catch (ParseException exception) {
logger("Time parsing error");
}
// 3. Build the condition-based query objec
ReadOptions readOptions = new ReadOptions.Builder().read(dataCollector)
.setTimeRange(startDate.getTime(), endDate.getTime(), TimeUnit.MILLISECONDS)
.build();
// 4. Use the specified condition query object to call the data controller to query the sampling dataset.
Task<ReadReply> readReplyTask = dataController.read(readOptions);
// 5. Calling the data controller to query the sampling dataset is an asynchronous operation.
// Therefore, a listener needs to be registered to monitor whether the data query is successful or not.
readReplyTask.addOnSuccessListener(new OnSuccessListener<ReadReply>() {
@Override
public void onSuccess(ReadReply readReply) {
logger("Success read an SampleSets from HMS core");
for (SampleSet sampleSet : readReply.getSampleSets()) {
showSampleSet(sampleSet);
}
logger(SPLIT);
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
printFailureMessage(e, "read");
String errorCode = e.getMessage();
String errorMsg = HiHealthStatusCodes.getStatusCodeMessage(Integer.parseInt(errorCode));
logger(errorCode + ": " + errorMsg);
}
});
}
Querying the Statistical Fitness and Health Data of the User of the Day
Code:
/**
* Use the data controller to query the summary data of the current day by data type.
*
* @param view (indicating a UI object)
*/
public void readToday(View view) {
// 1. Use the specified data type (DT_CONTINUOUS_STEPS_DELTA) to call the data controller to query
// the summary data of this data type of the current day.
Task<SampleSet> todaySummationTask = dataController.readTodaySummation(DataType.DT_CONTINUOUS_STEPS_DELTA);
// 2. Calling the data controller to query the summary data of the current day is an
// asynchronous operation. Therefore, a listener needs to be registered to monitor whether
// the data query is successful or not.
// Note: In this example, the inserted data time is fixed at 2020-08-27 09:05:00.
// When commissioning the API, you need to change the inserted data time to the current date
// for data to be queried.
todaySummationTask.addOnSuccessListener(new OnSuccessListener<SampleSet>() {
@Override
public void onSuccess(SampleSet sampleSet) {
logger("Success read today summation from HMS core");
if (sampleSet != null) {
showSampleSet(sampleSet);
}
logger(SPLIT);
}
});
todaySummationTask.addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
printFailureMessage(e, "readTodaySummation");
String errorCode = e.getMessage();
String errorMsg = HiHealthStatusCodes.getStatusCodeMessage(Integer.parseInt(errorCode));
logger(errorCode + ": " + errorMsg);
}
});
}
Querying the Statistical Fitness and Health Data of the User of Multiple Days
Code:
/**
* Use the data controller to query the daily summary data over a period of multiple days.
*
* @param view (indicating a UI object)
*/
public void currentDay(View view) {
// Call the DataController to query the daily statistics of the DT_CONTINUOUS_STEPS_DELTA data type
// within the specified date range. The start time and end time are integers in yyyyMMdd format.
// Calling this API will query all data points whose start time or end time falls within the range,
// and the daily sums of the queried data points will be returned.
int endTime = 20200827;
int startTime = 20200818;
Task<SampleSet> dailySummationTask = dataController.readDailySummation(DataType.DT_CONTINUOUS_STEPS_DELTA, startTime, endTime);
//Calling the data controller to query the summary data of the current day is an asynchronous operation.
// Therefore, a listener needs to be registered to monitor whether the data query is successful or not.
dailySummationTask.addOnSuccessListener(new OnSuccessListener<SampleSet>() {
@Override
public void onSuccess(SampleSet sampleSet) {
logger("Success read daily summation from HMS core");
if (sampleSet != null) {
showSampleSet(sampleSet);
}
logger(SPLIT);
}
});
dailySummationTask.addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
logger("readDailySummation" + e.toString());
}
});
}
Clearing the User’s Fitness and Health Data from the Device and Cloud
Call the clearAll method of the DataController to delete data inserted by the current app from the device and cloud
Code:
/**
* Clear all user data from the device and cloud.
*
* @param view (indicating a UI object)
*/
public void clearCloudData(View view) {
// 1. Call the clearAll method of the data controller to delete data
// inserted by the current app from the device and cloud.
Task<Void> clearTask = dataController.clearAll();
// 2. Calling the data controller to clear user data from the device and cloud is an asynchronous operation.
// Therefore, a listener needs to be registered to monitor whether the clearance is successful or not.
clearTask.addOnSuccessListener(new OnSuccessListener<Void>() {
@Override
public void onSuccess(Void result) {
logger("clearAll success");
logger(SPLIT);
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
printFailureMessage(e, "clearAll");
}
});
}
We have successfully integrated Huawei Health Kit's Data Controller feature into our project.
Resources:
https://developer.huawei.com/consumer/en/doc/datacontroller-develop-0000001050071677-V5
Related Links
Original post: https://medium.com/huawei-developers/health-kit-data-controller-sample-a9d29b3ba651
Hi, nice information. Does Huawei Health Kit provide automatic data sync from health devices (e.g., Fitbit), or do we need to gather the data and provide it to HMS Health Kit ourselves?

Building an Android QUIC REST Client with HQUIC [kotlin]

In a previous post I showed you how to use the HQUIC kit to perform a simple GET request to download the latest local news using a third-party API. At this point everything is fine, but what if I want to send a request with headers? Or how can I perform a POST request? If you have asked yourself the same questions, please keep reading.
Previous requirements
An Android Studio project
Integrating the HQUIC SDK
HQUIC performs HTTP requests over the QUIC protocol to let your users enjoy faster connections with lower bandwidth usage. If the remote server does not support QUIC, the kit will fall back to HTTP/2 instead, so you only need to code once.
To add the HQUIC kit to your app, add the following dependency to your app-level build.gradle file:
Code:
implementation 'com.huawei.hms:hquic-provider:5.0.0.300'
Sync your project and you will be ready to use the HQUIC SDK. We will reuse the HQUICService class provided in the HQUIC sample code, but with a few modifications:
Code:
class HQUICService(val context: Context) {
private val TAG = "HQUICService"
private val DEFAULT_PORT = 443
private val DEFAULT_ALTERNATEPORT = 443
private val executor: Executor = Executors.newSingleThreadExecutor()
private var cronetEngine: CronetEngine? = null
private var callback: UrlRequest.Callback? = null
/**
* Asynchronous initialization.
*/
init {
HQUICManager.asyncInit(
context,
object : HQUICManager.HQUICInitCallback {
override fun onSuccess() {
Log.i(TAG, "HQUICManager asyncInit success")
}
override fun onFail(e: Exception?) {
Log.w(TAG, "HQUICManager asyncInit fail")
}
})
}
/**
* Create a Cronet engine.
*
* @param url URL.
* @return cronetEngine Cronet engine.
*/
private fun createCronetEngine(url: String): CronetEngine? {
if (cronetEngine != null) {
return cronetEngine
}
val builder = CronetEngine.Builder(context)
builder.enableQuic(true)
builder.addQuicHint(getHost(url), DEFAULT_PORT, DEFAULT_ALTERNATEPORT)
cronetEngine = builder.build()
return cronetEngine
}
/**
* Construct a request
*
* @param url Request URL.
* @param method Request method type.
* @return UrlRequest urlrequest instance.
*/
private fun buildRequest(
url: String,
method: String,
headers: HashMap<String, String>?,
body:ByteArray?
): UrlRequest? {
val cronetEngine: CronetEngine? = createCronetEngine(url)
val requestBuilder = cronetEngine?.newUrlRequestBuilder(url, callback, executor)
requestBuilder?.apply {
setHttpMethod(method)
if(method=="POST"){
body?.let {
setUploadDataProvider(UploadDataProviders.create(ByteBuffer.wrap(it)), executor) }
}
headers?.let{
for (key in it.keys) {
addHeader(key, headers[key])
}
}
return build()
}
return null
}
/**
* Send a request to the URL.
*
* @param url Request URL.
* @param method Request method type.
*/
fun sendRequest(url: String, method: String, headers: HashMap<String, String>?=null,body:ByteArray?=null) {
Log.i(TAG, "callURL: url is " + url + "and method is " + method)
val urlRequest: UrlRequest? = buildRequest(url, method, headers, body)
urlRequest?.apply { urlRequest.start() }
}
/**
* Parse the domain name to obtain the host name.
*
* @param url Request URL.
* @return host Host name.
*/
private fun getHost(url: String): String? {
var host: String? = null
try {
val url1 = URL(url)
host = url1.host
} catch (e: MalformedURLException) {
Log.e(TAG, "getHost: ", e)
}
return host
}
fun setCallback(mCallback: UrlRequest.Callback?) {
callback = mCallback
}
}
The sendRequest method has been modified to receive a HashMap with the headers and a ByteArray with the body payload. Note that, in buildRequest, the headers and the body are added to the request only if they are not null.
Code:
requestBuilder?.apply {
setHttpMethod(method)
if(method=="POST"){
body?.let {//Adding the request Body
setUploadDataProvider(UploadDataProviders.create(ByteBuffer.wrap(it)), executor) }
}
headers?.let{
for (key in it.keys) {//Adding all the headers
addHeader(key, headers[key])
}
}
With those modifications, we can perform an HTTP request this way:
Code:
val map=HashMap<String,String>()
map["Content-Type"] = "application/json"
val body=JSONObject().apply {
put("key1","value1")
put("key2","value2")
}
HQUICService(context).sendRequest(HOST,"POST",map,body.toString().toByteArray())
That's enough to send a request, but what about the response? HQUIC provides an abstract class for listening to the request events. All we need to do is inherit from UrlRequest.Callback. Let's do it.
Code:
class HQUICClient(context: Context) : UrlRequest.Callback() {
var hquicService: HQUICService? = null
val CAPACITY = 10240
val TAG="QUICClient"
val response=ByteArrayOutputStream()
var listener: HQUICClientListener? = null
init {
hquicService = HQUICService(context)
hquicService?.setCallback(this)
}
fun makeRequest(url: String, method: String, headers: HashMap<String, String>?=null,body:ByteArray?=null){
hquicService?.sendRequest(url,method,headers,body)
}
override fun onRedirectReceived(
request: UrlRequest?,
info: UrlResponseInfo?,
newLocationUrl: String?
) {
request?.followRedirect()
}
override fun onResponseStarted(request: UrlRequest?, info: UrlResponseInfo?) {
val byteBuffer = ByteBuffer.allocateDirect(CAPACITY)
request?.read(byteBuffer)
}
override fun onReadCompleted(
request: UrlRequest?,
info: UrlResponseInfo?,
byteBuffer: ByteBuffer?
) {
byteBuffer?.apply {
response.write(array(),arrayOffset(),position())
response.flush()
}
request?.read(ByteBuffer.allocateDirect(CAPACITY))
}
override fun onSucceeded(request: UrlRequest?, info: UrlResponseInfo?) {
listener?.onSuccess(response.toByteArray())
}
override fun onFailed(request: UrlRequest?, info: UrlResponseInfo?, error: CronetException?) {
listener?.apply { onFailure(error.toString()) }
}
}
Remember, only a certain number of bytes can be read at a time, so for long responses the onReadCompleted method will be called multiple times until the response has been fully read or an error occurs. When the operation is complete, the onSucceeded callback is invoked and you will be able to parse the response. If the request fails, you will get an exception in the onFailed callback.
To report the request result, you can create a public interface:
Code:
interface HQUICClientListener{
fun onSuccess(response: ByteArray)
fun onFailure(error: String)
}
Then, if the response is successful, you can parse your byte array accordingly:
Code:
override fun onSuccess(response: ByteArray) {
//For text
Log.i(TAG, String(response))
//For images
BitmapFactory.decodeByteArray(response,0,response.size)
}
Conclusion
With HQUIC you can easily create a REST client for your Android app, taking advantage of QUIC features while keeping HTTP/2 compatibility.
Reference
HQUIC developer guide

Huawei AR Engine Face Tracking Feature

Here I will try to explain the Facial Expression Tracking feature of HUAWEI AR Engine as thoroughly as I can by developing a demo application. In addition, if you want to learn about the Body Tracking feature offered by HUAWEI AR Engine and the comparison I made with its competitors, I recommend you read the first article of this series.
This feature of the HUAWEI AR Engine provides meticulous control over the virtual character’s facial expressions by providing the calculated values of the facial poses and the parameter values corresponding to the expressions in real time. It provides this capability in order to track and obtain facial image information, comprehend facial expressions in real time, and convert the facial expressions into various expression parameters, thereby enabling the expressions of virtual characters to be controlled. In addition, AR Engine supports the recognition of 64 types of facial expressions covering eyes, eyebrows, eyeballs, mouth and tongue.
Also, the Face Mesh feature of HUAWEI AR Engine calculates the pose and mesh model data of a face in real time. The mesh model data changes to account for facial movements.
By providing high-precision face mesh modeling and tracking capabilities, HUAWEI AR Engine delivers a highly-realistic mesh model in real time, after obtaining face image information. The mesh model changes its location and shape in accordance with the face, for accurate real time responsivity.
Also, AR Engine provides a mesh with more than 4,000 vertices and 7,000 triangles to precisely outline face contours, and enhance the overall user experience.
Now I will develop a demo application and try to explain this feature and what it provides in more detail.
The figure below shows the general usage process of HUAWEI AR Engine SDK. We will start this process with ARSession, which we will start in Activity’s onResume function.
[Figure: general usage process of the HUAWEI AR Engine SDK]
While developing this application, we will start with the engine functionality part. Then we will develop the render manager class, and after that we will complete this article by writing the activity, i.e., the UI part.
While developing this demo application, we will use the OpenGL library for rendering, as in my other article. For this, we will create a class called FaceRenderManager that implements Android's GLSurfaceView.Renderer interface. We will create the shader programs in the onSurfaceCreated method of this interface. First of all, we will start with the face geometry drawing part.
1- Face Geometry
In this section, we will draw the Face Geometry features.
a. Create and Attach Shaders
First of all, we define our vertex shader and fragment shader programs that we will use.
Code:
private static final String LS = System.lineSeparator();
private static final String FACE_GEOMETRY_VERTEX =
"attribute vec2 inTexCoord;" + LS
+ "uniform mat4 inMVPMatrix;" + LS
+ "uniform float inPointSize;" + LS
+ "attribute vec4 inPosition;" + LS
+ "uniform vec4 inColor;" + LS
+ "varying vec4 varAmbient;" + LS
+ "varying vec4 varColor;" + LS
+ "varying vec2 varCoord;" + LS
+ "void main() {" + LS
+ " varAmbient = vec4(1.0, 1.0, 1.0, 1.0);" + LS
+ " gl_Position = inMVPMatrix * vec4(inPosition.xyz, 1.0);" + LS
+ " varColor = inColor;" + LS
+ " gl_PointSize = inPointSize;" + LS
+ " varCoord = inTexCoord;" + LS
+ "}";
private static final String FACE_GEOMETRY_FRAGMENT =
"precision mediump float;" + LS
+ "uniform sampler2D inTexture;" + LS
+ "varying vec4 varColor;" + LS
+ "varying vec2 varCoord;" + LS
+ "varying vec4 varAmbient;" + LS
+ "void main() {" + LS
+ " vec4 objectColor = texture2D(inTexture, vec2(varCoord.x, 1.0 - varCoord.y));" + LS
+ " if(varColor.x != 0.0) {" + LS
+ " gl_FragColor = varColor * varAmbient;" + LS
+ " }" + LS
+ " else {" + LS
+ " gl_FragColor = objectColor * varAmbient;" + LS
+ " }" + LS
+ "}";
We have now written our shader source code. These vertex and fragment shaders provide the code for the programmable stages of the face rendering pipeline.
Now it is time to create the shader objects. For this, we add the following code. When we call this method with the required shader type and shader source code as parameters, we first create an empty shader object, attach the source code to it, and compile it. The method returns an integer handle that references the compiled shader, which we will use in the next steps.
Code:
private static int loadShader(int shaderType, String source) {
int shader = GLES20.glCreateShader(shaderType);
if (0 != shader) {
GLES20.glShaderSource(shader, source);
GLES20.glCompileShader(shader);
int[] compiled = new int[1];
GLES20.glGetShaderiv(shader, GLES20.GL_COMPILE_STATUS, compiled, 0);
if (compiled[0] == 0) {
Log.e(TAG, "glError: Could not compile shader " + shaderType);
Log.e(TAG, "GLES20 Error: " + GLES20.glGetShaderInfoLog(shader));
GLES20.glDeleteShader(shader);
shader = 0;
}
}
return shader;
}
We will now call this method to create both the vertex shader and the fragment shader for the face geometry, passing the FACE_GEOMETRY_VERTEX and FACE_GEOMETRY_FRAGMENT source code. Then we attach the compiled shaders to a newly created program object and link it so that it can be used.
Code:
private static int createGlProgram() {
int vertex = loadShader(GLES20.GL_VERTEX_SHADER, FACE_GEOMETRY_VERTEX);
if (vertex == 0) {
return 0;
}
int fragment = loadShader(GLES20.GL_FRAGMENT_SHADER, FACE_GEOMETRY_FRAGMENT);
if (fragment == 0) {
return 0;
}
int program = GLES20.glCreateProgram();
if (program != 0) {
GLES20.glAttachShader(program, vertex);
GLES20.glAttachShader(program, fragment);
GLES20.glLinkProgram(program);
int[] linkStatus = new int[1];
GLES20.glGetProgramiv(program, GLES20.GL_LINK_STATUS, linkStatus, 0);
if (linkStatus[0] != GLES20.GL_TRUE) {
Log.e(TAG, "Could not link program: " + GLES20.glGetProgramInfoLog(program));
GLES20.glDeleteProgram(program);
program = 0;
}
}
return program;
}
Now let's call the method we wrote and store the attribute and uniform locations we will need. We keep these handles because we will use them to draw the points and triangles while drawing each frame (in the onDrawFrame method).
Code:
private int mProgram;
private int mPositionAttribute;
private int mColorUniform;
private int mModelViewProjectionUniform;
private int mPointSizeUniform;
private int mTextureUniform;
private int mTextureCoordAttribute;
private void createProgram() {
ShaderUtil.checkGlError(TAG, "Create gl program start.");
mProgram = createGlProgram();
mPositionAttribute = GLES20.glGetAttribLocation(mProgram, "inPosition");
mColorUniform = GLES20.glGetUniformLocation(mProgram, "inColor");
mModelViewProjectionUniform = GLES20.glGetUniformLocation(mProgram, "inMVPMatrix");
mPointSizeUniform = GLES20.glGetUniformLocation(mProgram, "inPointSize");
mTextureUniform = GLES20.glGetUniformLocation(mProgram, "inTexture");
mTextureCoordAttribute = GLES20.glGetAttribLocation(mProgram, "inTexCoord");
ShaderUtil.checkGlError(TAG, "Create gl program end.");
}
Using the functions we have written, we can now perform the OpenGL ES initialization for the face geometry, including creating the shader program. In the next steps, we will call this initialization from the onSurfaceCreated function of the GLSurfaceView.Renderer interface, so that everything is created once the surface is ready (a minimal sketch of this wiring is shown at the end of section b).
b. OpenGL Initialization for Face Geometry
First, we create two buffer objects. These buffers will hold our vertex data and our triangle index data, and we will update them with the latest face data to produce the visuals.
We bind the first buffer object, used for the vertex attributes, to the array buffer. We specify the size of the buffer without putting any data into it for now. With the GL_DYNAMIC_DRAW usage hint, we tell OpenGL that we will frequently update the contents of this buffer. Finally, we unbind the buffer when we are done.
Code:
private static final int BUFFER_OBJECT_NUMBER = 2;
private int mVerticeId;
private int mVerticeBufferSize = 8000; // Initialize the size of the vertex VBO.
private int mTriangleId;
private int mTriangleBufferSize = 5000; // Initialize the size of the triangle VBO.
void init(Context context) {
ShaderUtil.checkGlError(TAG, "Init start.");
//Create Buffer objects
int[] buffers = new int[BUFFER_OBJECT_NUMBER];
GLES20.glGenBuffers(BUFFER_OBJECT_NUMBER, buffers, 0);
mVerticeId = buffers[0];
mTriangleId = buffers[1];
//Bind Array Buffer and set parameters
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVerticeId);
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, mVerticeBufferSize * BYTES_PER_POINT, null, GLES20.GL_DYNAMIC_DRAW);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
}
Then, by adding the following code to the init function above, we bind the second buffer object we created to the element array buffer binding point with GL_ELEMENT_ARRAY_BUFFER.
Code:
void init(Context context) {
//...
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, mTriangleId);
// Each floating-point number occupies 4 bytes.
GLES20.glBufferData(GLES20.GL_ELEMENT_ARRAY_BUFFER, mTriangleBufferSize * 4, null, GLES20.GL_DYNAMIC_DRAW);
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, 0);
//...
}
After adding that, we add the following code to the init function to create a texture object and bind it to the GL_TEXTURE_2D target.
Code:
int[] texNames = new int[1];
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glGenTextures(1, texNames, 0);
mTextureName = texNames[0];
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureName);
Now that we have bound the texture object, we add the following code to the init() function, call the createProgram() function that we created in section "a" to attach the shaders, and set the texture parameters.
Code:
void init(Context context) {
//...
int[] texNames = new int[1];
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glGenTextures(1, texNames, 0);
mTextureName = texNames[0];
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureName);
createProgram();
Bitmap textureBitmap;
try (InputStream inputStream = context.getAssets().open("face_geometry.png")) {
textureBitmap = BitmapFactory.decodeStream(inputStream);
} catch (IllegalArgumentException | IOException e) {
Log.e(TAG, "Open bitmap error!");
return;
}
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR_MIPMAP_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, textureBitmap, 0);
GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0);
ShaderUtil.checkGlError(TAG, "Init end.");
}
The final version of the init() function:
Code:
void init(Context context) {
ShaderUtil.checkGlError(TAG, "Init start.");
int[] texNames = new int[1];
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glGenTextures(1, texNames, 0);
mTextureName = texNames[0];
int[] buffers = new int[BUFFER_OBJECT_NUMBER];
GLES20.glGenBuffers(BUFFER_OBJECT_NUMBER, buffers, 0);
mVerticeId = buffers[0];
mTriangleId = buffers[1];
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVerticeId);
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, mVerticeBufferSize * BYTES_PER_POINT, null, GLES20.GL_DYNAMIC_DRAW);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, mTriangleId);
// Each floating-point number occupies 4 bytes.
GLES20.glBufferData(GLES20.GL_ELEMENT_ARRAY_BUFFER, mTriangleBufferSize * 4, null,
GLES20.GL_DYNAMIC_DRAW);
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureName);
createProgram();
Bitmap textureBitmap;
try (InputStream inputStream = context.getAssets().open("face_geometry.png")) {
textureBitmap = BitmapFactory.decodeStream(inputStream);
} catch (IllegalArgumentException | IOException e) {
Log.e(TAG, "Open bitmap error!");
return;
}
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR_MIPMAP_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, textureBitmap, 0);
GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0);
ShaderUtil.checkGlError(TAG, "Init end.");
}
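Before moving on to the drawing step, here is a minimal sketch of how a FaceRenderManager class could wire the init() function into onSurfaceCreated. This is not the complete renderer from the demo: the constructor, the setArSession helper, and the clear color are my own assumptions, and the camera background rendering is omitted.
Code:
public class FaceRenderManager implements GLSurfaceView.Renderer {
    private static final String TAG = "FaceRenderManager";
    private Context mContext;
    private ARSession mArSession;

    public FaceRenderManager(Context context) {
        mContext = context;
    }

    // Called from the activity (for example in onResume) once the session is ready.
    public void setArSession(ARSession arSession) {
        mArSession = arSession;
    }

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        // Create the shader program, buffer objects, and texture as soon as the GL surface exists.
        GLES20.glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
        init(mContext);
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        GLES20.glViewport(0, 0, width, height);
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        // Per-frame drawing is covered in section c below.
    }

    // The loadShader, createProgram, init, updateFaceGeometryData, and drawFaceGeometry
    // functions shown in this article are assumed to be members of this class.
}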
c. Draw Face Display
In this section, we perform the face drawing operations, that is, updating the face geometry data in the buffers. First, we need to obtain the face geometry. For this, we use the ARFaceGeometry class offered by HUAWEI AR Engine. We can obtain the tracked faces with ARSession's getAllTrackables() function, for example: arSession.getAllTrackables(ARFace.class).
After obtaining the ARFaceGeometry object, we use it to get the vertices, texture coordinates, triangle count, and triangle indices of the faces seen by the camera. Then, using these data, we update the contents of the buffer objects we created earlier, identified by "mVerticeId" and "mTriangleId".
Now let's create a function called updateFaceGeometryData that takes an object of type ARFaceGeometry as a parameter and write the code for the steps described above into this function.
Code:
private void updateFaceGeometryData(ARFaceGeometry faceGeometry) {
ShaderUtil.checkGlError(TAG, "Before update data.");
FloatBuffer faceVertices = faceGeometry.getVertices();
// Obtain the number of geometric vertices of a face.
mPointsNum = faceVertices.limit() / 3;
FloatBuffer textureCoordinates = faceGeometry.getTextureCoordinates();
// Obtain the number of geometric texture coordinates of the
// face (the texture coordinates are two-dimensional).
int texNum = textureCoordinates.limit() / 2;
Log.d(TAG, "Update face geometry data: texture coordinates size:" + texNum);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVerticeId);
if (mVerticeBufferSize < (mPointsNum + texNum) * BYTES_PER_POINT) {
while (mVerticeBufferSize < (mPointsNum + texNum) * BYTES_PER_POINT) {
// If the capacity of the vertex VBO buffer is insufficient, expand the capacity.
mVerticeBufferSize *= 2;
}
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, mVerticeBufferSize, null, GLES20.GL_DYNAMIC_DRAW);
}
GLES20.glBufferSubData(GLES20.GL_ARRAY_BUFFER, 0, mPointsNum * BYTES_PER_POINT, faceVertices);
GLES20.glBufferSubData(GLES20.GL_ARRAY_BUFFER, mPointsNum * BYTES_PER_POINT, texNum * BYTES_PER_COORD,
textureCoordinates);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
mTrianglesNum = faceGeometry.getTriangleCount();
IntBuffer faceTriangleIndices = faceGeometry.getTriangleIndices();
Log.d(TAG, "update face geometry data: faceTriangleIndices.size: " + faceTriangleIndices.limit());
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, mTriangleId);
if (mTriangleBufferSize < mTrianglesNum * BYTES_PER_POINT) {
while (mTriangleBufferSize < mTrianglesNum * BYTES_PER_POINT) {
// If the capacity of the vertex VBO buffer is insufficient, expand the capacity.
mTriangleBufferSize *= 2;
}
GLES20.glBufferData(GLES20.GL_ELEMENT_ARRAY_BUFFER, mTriangleBufferSize, null, GLES20.GL_DYNAMIC_DRAW);
}
GLES20.glBufferSubData(GLES20.GL_ELEMENT_ARRAY_BUFFER, 0, mTrianglesNum * BYTES_PER_POINT, faceTriangleIndices);
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, 0);
ShaderUtil.checkGlError(TAG, "After update data.");
}
As you know, model, view, and projection matrices are required for 3D rendering on the screen. So we need the pose of the face for the model view matrix. For this, we use the ARFace class of HUAWEI AR Engine: its getPose() function returns an ARPose object, from which we obtain the model view matrix. We obtain the projection matrix from the ARCamera object of HUAWEI AR Engine and multiply the two, which gives us the matrix used to update the model view projection (MVP) data. Now let's do this with the following function.
Code:
private static final float PROJECTION_MATRIX_NEAR = 0.1f;
private static final float PROJECTION_MATRIX_FAR = 100.0f;
// The size of the MVP matrix is 4 x 4.
private float[] mModelViewProjections = new float[16];
private void updateModelViewProjectionData(ARCamera camera, ARFace face) {
// The size of the projection matrix is 4 * 4.
float[] projectionMatrix = new float[16];
camera.getProjectionMatrix(projectionMatrix, 0, PROJECTION_MATRIX_NEAR, PROJECTION_MATRIX_FAR);
ARPose facePose = face.getPose();
// The size of viewMatrix is 4 * 4.
float[] facePoseViewMatrix = new float[16];
facePose.toMatrix(facePoseViewMatrix, 0);
Matrix.multiplyMM(mModelViewProjections, 0, projectionMatrix, 0, facePoseViewMatrix, 0);
}
As the last step of the face drawing phase, we complete the drawing with the following function. With this function, we will draw the geometric features of the face using the values we have created / defined up to this stage.
Note: These drawing functions will be called for each frame.
Code:
private void drawFaceGeometry() {
ShaderUtil.checkGlError(TAG, "Before draw.");
Log.d(TAG, "Draw face geometry: mPointsNum: " + mPointsNum + " mTrianglesNum: " + mTrianglesNum);
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureName);
GLES20.glUniform1i(mTextureUniform, 0);
ShaderUtil.checkGlError(TAG, "Init texture.");
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
GLES20.glEnable(GLES20.GL_CULL_FACE);
// Draw point.
GLES20.glUseProgram(mProgram);
GLES20.glEnableVertexAttribArray(mPositionAttribute);
GLES20.glEnableVertexAttribArray(mTextureCoordAttribute);
GLES20.glEnableVertexAttribArray(mColorUniform);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVerticeId);
GLES20.glVertexAttribPointer(mPositionAttribute, POSITION_COMPONENTS_NUMBER, GLES20.GL_FLOAT, false, BYTES_PER_POINT, 0);
GLES20.glVertexAttribPointer(mTextureCoordAttribute, TEXCOORD_COMPONENTS_NUMBER, GLES20.GL_FLOAT, false, BYTES_PER_COORD, 0);
GLES20.glUniform4f(mColorUniform, 1.0f, 0.0f, 0.0f, 1.0f);
GLES20.glUniformMatrix4fv(mModelViewProjectionUniform, 1, false, mModelViewProjections, 0);
GLES20.glUniform1f(mPointSizeUniform, 5.0f); // Set the size of Point to 5.
GLES20.glDrawArrays(GLES20.GL_POINTS, 0, mPointsNum);
GLES20.glDisableVertexAttribArray(mColorUniform);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
ShaderUtil.checkGlError(TAG, "Draw point.");
// Draw triangles.
GLES20.glEnableVertexAttribArray(mColorUniform);
// Clear the color and use the texture color to draw triangles.
GLES20.glUniform4f(mColorUniform, 0.0f, 0.0f, 0.0f, 0.0f);
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, mTriangleId);
// The number of input triangle points
GLES20.glDrawElements(GLES20.GL_TRIANGLES, mTrianglesNum * 3, GLES20.GL_UNSIGNED_INT, 0);
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, 0);
GLES20.glDisableVertexAttribArray(mColorUniform);
ShaderUtil.checkGlError(TAG, "Draw triangles.");
GLES20.glDisableVertexAttribArray(mTextureCoordAttribute);
GLES20.glDisableVertexAttribArray(mPositionAttribute);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0);
GLES20.glDisable(GLES20.GL_DEPTH_TEST);
GLES20.glDisable(GLES20.GL_CULL_FACE);
ShaderUtil.checkGlError(TAG, "Draw after.");
}
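To tie everything together, these update and draw functions need to be called for every frame. Below is a simplified, hedged sketch of what the renderer's onDrawFrame could look like: the mArSession field comes from the earlier sketches, the camera background rendering is omitted for brevity, and error handling is reduced to a catch-all.
Code:
@Override
public void onDrawFrame(GL10 gl) {
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
    if (mArSession == null) {
        return;
    }
    try {
        // Update the session and obtain the camera and the tracked faces.
        ARFrame frame = mArSession.update();
        ARCamera camera = frame.getCamera();
        Collection<ARFace> faces = mArSession.getAllTrackables(ARFace.class);
        for (ARFace face : faces) {
            if (face.getTrackingState() != ARTrackable.TrackingState.TRACKING) {
                continue;
            }
            // Refresh the vertex/triangle buffers and the MVP matrix for this face, then draw it.
            ARFaceGeometry faceGeometry = face.getFaceGeometry();
            updateFaceGeometryData(faceGeometry);
            updateModelViewProjectionData(camera, face);
            drawFaceGeometry();
            faceGeometry.release();
        }
    } catch (Exception e) {
        Log.e(TAG, "Exception on the OpenGL thread.", e);
    }
}
Note that in this sketch only faces whose tracking state is TRACKING are drawn, and the ARFaceGeometry object is released after each use.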
For more details, you can check https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0202387445597520482&fid=0101187876626530001&channelname=HuoDong59&ha_source=xda
