In this article, I will explain the Facial Expression Tracking feature of HUAWEI AR Engine by developing a demo application. If you want to learn about the Body Tracking feature of HUAWEI AR Engine and the comparison I made with its competitors, I recommend reading the first article of this series.
This feature of HUAWEI AR Engine gives you fine-grained control over a virtual character’s facial expressions by providing the calculated facial poses and the corresponding expression parameter values in real time. AR Engine tracks and obtains facial image information, interprets facial expressions in real time, and converts them into expression parameters, so that the expressions of virtual characters can be driven directly. It recognizes 64 types of facial expressions covering the eyes, eyebrows, eyeballs, mouth, and tongue.
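As a quick, hedged illustration of what these expression parameters look like in code, the sketch below reads the blend shape values of a tracked ARFace and logs them. The accessor names (getFaceBlendShapes, getBlendShapeDataMapKeys, getBlendShapeData) follow HUAWEI’s public face-tracking sample as I remember it; treat them as assumptions and verify them against the current SDK reference.
Code:
// Hedged sketch: accessor names assumed from the public AR Engine face sample.
private void logBlendShapes(ARFace face) {
    ARFaceBlendShapes blendShapes = face.getFaceBlendShapes();
    if (blendShapes == null) {
        return;
    }
    // Expression names (eyes, eyebrows, mouth, tongue, ...) and their values.
    String[] keys = blendShapes.getBlendShapeDataMapKeys();
    float[] values = blendShapes.getBlendShapeData();
    for (int i = 0; i < keys.length && i < values.length; i++) {
        Log.d(TAG, "Expression " + keys[i] + " = " + values[i]);
    }
}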
The Face Mesh feature of HUAWEI AR Engine calculates the pose and mesh model data of a face in real time, and the mesh model data changes to follow facial movements.
By providing high-precision face mesh modeling and tracking capabilities, HUAWEI AR Engine delivers a highly realistic mesh model in real time after obtaining the face image information. The mesh model changes its location and shape in accordance with the face, for accurate real-time responsiveness.
AR Engine also provides a mesh with more than 4,000 vertices and 7,000 triangles to precisely outline face contours and enhance the overall user experience.
Now I will develop a demo application and try to explain this feature and what it provides in more detail.
The figure below shows the general usage process of the HUAWEI AR Engine SDK. We will start this process with ARSession, which we will create and resume in the Activity’s onResume function.
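Before diving into the rendering code, here is a minimal sketch of how the session could be set up in onResume, assuming the Activity holds an ARSession (mArSession), a GLSurfaceView (mSurfaceView), and our render manager (mFaceRenderManager); the setArSession setter is a hypothetical helper, and the availability check of the AR Engine service APK plus detailed exception handling are omitted.
Code:
@Override
protected void onResume() {
    super.onResume();
    try {
        if (mArSession == null) {
            // Create the session and configure it for face tracking.
            mArSession = new ARSession(this);
            ARFaceTrackingConfig faceConfig = new ARFaceTrackingConfig(mArSession);
            mArSession.configure(faceConfig);
            // Hypothetical setter so the render manager can read faces each frame.
            mFaceRenderManager.setArSession(mArSession);
        }
        mArSession.resume();
        mSurfaceView.onResume();
    } catch (Exception e) {
        Log.e(TAG, "Failed to create or resume the ARSession.", e);
    }
}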
While developing this application, we will start with the Engine Functionality section that you can see in the figure below. Then we will develop the render manager class, and after that we will complete the article by writing the activity, i.e. the UI part.
While developing this demo application, we will use OpenGL for rendering, as in my other article. For this, we will create a class called FaceRenderManager that implements the GLSurfaceView.Renderer interface. We will create the shader programs in the onSurfaceCreated method of this interface. First of all, we will start with the Face Geometry drawing part.
1- Face Geometry
In this section, we will draw the Face Geometry features.
a. Create and Attach Shaders
First of all, we define our vertex shader and fragment shader programs that we will use.
Code:
private static final String LS = System.lineSeparator();
private static final String FACE_GEOMETRY_VERTEX =
"attribute vec2 inTexCoord;" + LS
+ "uniform mat4 inMVPMatrix;" + LS
+ "uniform float inPointSize;" + LS
+ "attribute vec4 inPosition;" + LS
+ "uniform vec4 inColor;" + LS
+ "varying vec4 varAmbient;" + LS
+ "varying vec4 varColor;" + LS
+ "varying vec2 varCoord;" + LS
+ "void main() {" + LS
+ " varAmbient = vec4(1.0, 1.0, 1.0, 1.0);" + LS
+ " gl_Position = inMVPMatrix * vec4(inPosition.xyz, 1.0);" + LS
+ " varColor = inColor;" + LS
+ " gl_PointSize = inPointSize;" + LS
+ " varCoord = inTexCoord;" + LS
+ "}";
private static final String FACE_GEOMETRY_FRAGMENT =
"precision mediump float;" + LS
+ "uniform sampler2D inTexture;" + LS
+ "varying vec4 varColor;" + LS
+ "varying vec2 varCoord;" + LS
+ "varying vec4 varAmbient;" + LS
+ "void main() {" + LS
+ " vec4 objectColor = texture2D(inTexture, vec2(varCoord.x, 1.0 - varCoord.y));" + LS
+ " if(varColor.x != 0.0) {" + LS
+ " gl_FragColor = varColor * varAmbient;" + LS
+ " }" + LS
+ " else {" + LS
+ " gl_FragColor = objectColor * varAmbient;" + LS
+ " }" + LS
+ "}";
We have now written our shader sources. These vertex and fragment shaders provide the code for the programmable stages of the face rendering pipeline.
Now it is time to create the shader objects. For this, we add the following method. When we call it with the required shader type and shader source code as parameters, it creates an empty shader object, attaches the source code, compiles it, and returns an integer handle referencing the shader. We will use this handle in the next steps.
Code:
private static int loadShader(int shaderType, String source) {
int shader = GLES20.glCreateShader(shaderType);
if (0 != shader) {
GLES20.glShaderSource(shader, source);
GLES20.glCompileShader(shader);
int[] compiled = new int[1];
GLES20.glGetShaderiv(shader, GLES20.GL_COMPILE_STATUS, compiled, 0);
if (compiled[0] == 0) {
Log.e(TAG, "glError: Could not compile shader " + shaderType);
Log.e(TAG, "GLES20 Error: " + GLES20.glGetShaderInfoLog(shader));
GLES20.glDeleteShader(shader);
shader = 0;
}
}
return shader;
}
We will now call this method to create both the vertex shader and the fragment shader for the face geometry, passing the FACE_GEOMETRY_VERTEX and FACE_GEOMETRY_FRAGMENT source code. Then we attach the compiled shaders to a newly created program object and link the program so that it can be used.
Code:
private static int createGlProgram() {
int vertex = loadShader(GLES20.GL_VERTEX_SHADER, FACE_GEOMETRY_VERTEX);
if (vertex == 0) {
return 0;
}
int fragment = loadShader(GLES20.GL_FRAGMENT_SHADER, FACE_GEOMETRY_FRAGMENT);
if (fragment == 0) {
return 0;
}
int program = GLES20.glCreateProgram();
if (program != 0) {
GLES20.glAttachShader(program, vertex);
GLES20.glAttachShader(program, fragment);
GLES20.glLinkProgram(program);
int[] linkStatus = new int[1];
GLES20.glGetProgramiv(program, GLES20.GL_LINK_STATUS, linkStatus, 0);
if (linkStatus[0] != GLES20.GL_TRUE) {
Log.e(TAG, "Could not link program: " + GLES20.glGetProgramInfoLog(program));
GLES20.glDeleteProgram(program);
program = 0;
}
}
return program;
}
Now let’s call the method we wrote and cache the attribute and uniform locations we will use. We store these handles because we will need them to draw the points and triangles for each frame (in the onDrawFrame method).
Code:
private int mProgram;
private int mPositionAttribute;
private int mColorUniform;
private int mModelViewProjectionUniform;
private int mPointSizeUniform;
private int mTextureUniform;
private int mTextureCoordAttribute;
private void createProgram() {
ShaderUtil.checkGlError(TAG, "Create gl program start.");
mProgram = createGlProgram();
mPositionAttribute = GLES20.glGetAttribLocation(mProgram, "inPosition");
mColorUniform = GLES20.glGetUniformLocation(mProgram, "inColor");
mModelViewProjectionUniform = GLES20.glGetUniformLocation(mProgram, "inMVPMatrix");
mPointSizeUniform = GLES20.glGetUniformLocation(mProgram, "inPointSize");
mTextureUniform = GLES20.glGetUniformLocation(mProgram, "inTexture");
mTextureCoordAttribute = GLES20.glGetAttribLocation(mProgram, "inTexCoord");
ShaderUtil.checkGlError(TAG, "Create gl program end.");
}
Now we will initialize the OpenGL ES resources for the face geometry, including the shader program, using the functions we have written. In the next steps, we will call this initialization from the onSurfaceCreated method of the GLSurfaceView.Renderer interface, so that everything is created as soon as the surface is ready.
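As a rough sketch of how this wiring could look inside FaceRenderManager (onSurfaceCreated and onSurfaceChanged are the standard GLSurfaceView.Renderer callbacks; mFaceGeometryDisplay and mContext are assumed fields holding the class we are writing and the application context):
Code:
// Inside FaceRenderManager implements GLSurfaceView.Renderer
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    // Set the clear color used before each frame.
    GLES20.glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
    // Create the shader program, buffers, and texture of the face geometry drawer.
    mFaceGeometryDisplay.init(mContext);
}
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
    // Keep the viewport in sync with the surface size.
    GLES20.glViewport(0, 0, width, height);
}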
b. OpenGL Initialization for Face Geometry
First, we create two buffer objects. These buffers will hold our vertex data and our triangle index data, and we will update them as the face moves to produce the visuals.
We bind the first buffer object, which is for the vertex attributes, to GL_ARRAY_BUFFER. We allocate its size without uploading any data for now. The GL_DYNAMIC_DRAW usage hint tells OpenGL that we will update the contents of this buffer frequently. Finally, we unbind the buffer.
Code:
private static final int BUFFER_OBJECT_NUMBER = 2;
private int mVerticeId;
private int mVerticeBufferSize = 8000; // Initialize the size of the vertex VBO.
private int mTriangleId;
private int mTriangleBufferSize = 5000; // Initialize the size of the triangle VBO.
void init(Context context) {
ShaderUtil.checkGlError(TAG, "Init start.");
//Create Buffer objects
int[] buffers = new int[BUFFER_OBJECT_NUMBER];
GLES20.glGenBuffers(BUFFER_OBJECT_NUMBER, buffers, 0);
mVerticeId = buffers[0];
mTriangleId = buffers[1];
//Bind Array Buffer and set parameters
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVerticeId);
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, mVerticeBufferSize * BYTES_PER_POINT, null, GLES20.GL_DYNAMIC_DRAW);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
}
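Side note: these snippets reference a few constants (BYTES_PER_POINT, BYTES_PER_COORD, POSITION_COMPONENTS_NUMBER, TEXCOORD_COMPONENTS_NUMBER) that are not defined in the article. Plausible definitions, assuming three floats per vertex position and two floats per texture coordinate (4 bytes per float), would be:
Code:
// Assumed vertex layout; adjust if your data differs.
private static final int POSITION_COMPONENTS_NUMBER = 3; // x, y, z per vertex
private static final int TEXCOORD_COMPONENTS_NUMBER = 2; // u, v per vertex
private static final int BYTES_PER_POINT = 4 * POSITION_COMPONENTS_NUMBER; // 12 bytes
private static final int BYTES_PER_COORD = 4 * TEXCOORD_COMPONENTS_NUMBER; // 8 bytes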
Then, by adding the following code to the init function above, we bind the second buffer object to the element array (index) binding point, GL_ELEMENT_ARRAY_BUFFER, and allocate its size in the same way.
Code:
void init(Context context) {
//...
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, mTriangleId);
// Each triangle index (an int) occupies 4 bytes.
GLES20.glBufferData(GLES20.GL_ELEMENT_ARRAY_BUFFER, mTriangleBufferSize * 4, null, GLES20.GL_DYNAMIC_DRAW);
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, 0);
//...
}
After that, we add the following code to the init function to create a texture object and bind it to the GL_TEXTURE_2D target.
Code:
int[] texNames = new int[1];
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glGenTextures(1, texNames, 0);
mTextureName = texNames[0];
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureName);
Now that we have bound the texture object, we add the following code to the init() function: we call the createProgram() function that we created in section “a”, load the texture bitmap from the assets, and set the texture parameters.
Code:
void init(Context context) {
//...
int[] texNames = new int[1];
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glGenTextures(1, texNames, 0);
mTextureName = texNames[0];
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureName);
createProgram();
Bitmap textureBitmap;
try (InputStream inputStream = context.getAssets().open("face_geometry.png")) {
textureBitmap = BitmapFactory.decodeStream(inputStream);
} catch (IllegalArgumentException | IOException e) {
Log.e(TAG, "Open bitmap error!");
return;
}
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR_MIPMAP_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, textureBitmap, 0);
GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0);
ShaderUtil.checkGlError(TAG, "Init end.");
}
The final version of the init() function:
Code:
void init(Context context) {
ShaderUtil.checkGlError(TAG, "Init start.");
int[] texNames = new int[1];
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glGenTextures(1, texNames, 0);
mTextureName = texNames[0];
int[] buffers = new int[BUFFER_OBJECT_NUMBER];
GLES20.glGenBuffers(BUFFER_OBJECT_NUMBER, buffers, 0);
mVerticeId = buffers[0];
mTriangleId = buffers[1];
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVerticeId);
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, mVerticeBufferSize * BYTES_PER_POINT, null, GLES20.GL_DYNAMIC_DRAW);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, mTriangleId);
// Each triangle index (an int) occupies 4 bytes.
GLES20.glBufferData(GLES20.GL_ELEMENT_ARRAY_BUFFER, mTriangleBufferSize * 4, null,
GLES20.GL_DYNAMIC_DRAW);
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureName);
createProgram();
Bitmap textureBitmap;
try (InputStream inputStream = context.getAssets().open("face_geometry.png")) {
textureBitmap = BitmapFactory.decodeStream(inputStream);
} catch (IllegalArgumentException | IOException e) {
Log.e(TAG, "Open bitmap error!");
return;
}
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR_MIPMAP_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, textureBitmap, 0);
GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0);
ShaderUtil.checkGlError(TAG, "Init end.");
}
c. Draw Face Display
This section covers the face drawing operations, that is, updating the face geometry data in the buffers. First, we need to get the face geometry. For this, we will use the ARFaceGeometry class offered by HUAWEI AR Engine. We can obtain the tracked face(s) using ARSession’s getAllTrackables() function, for example arSession.getAllTrackables(ARFace.class).
After obtaining an ARFaceGeometry instance, we will use it to get the vertices, texture coordinates, triangle count, and triangle indices of the face seen by the camera. Then, using these data, we will update the contents of the buffer objects we created earlier and referenced with “mVerticeId” and “mTriangleId”, as sketched below.
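As a short, hedged sketch of how the geometry could be obtained each frame (assuming the class keeps a reference to the ARSession in mSession; the tracking-state check and the getFaceGeometry() call follow HUAWEI’s public face sample and should be verified against the SDK reference):
Code:
// Obtain all tracked faces and read the geometry of each one.
Collection<ARFace> faces = mSession.getAllTrackables(ARFace.class);
for (ARFace face : faces) {
    if (face.getTrackingState() != ARTrackable.TrackingState.TRACKING) {
        continue; // Skip faces that are not currently tracked.
    }
    ARFaceGeometry faceGeometry = face.getFaceGeometry();
    updateFaceGeometryData(faceGeometry);
}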
Now let’s create a function called updateFaceGeometryData that takes an object of type ARFaceGeometry as a parameter, and write the operations described above into it.
Code:
private void updateFaceGeometryData(ARFaceGeometry faceGeometry) {
ShaderUtil.checkGlError(TAG, "Before update data.");
FloatBuffer faceVertices = faceGeometry.getVertices();
// Obtain the number of geometric vertices of a face.
mPointsNum = faceVertices.limit() / 3;
FloatBuffer textureCoordinates = faceGeometry.getTextureCoordinates();
// Obtain the number of geometric texture coordinates of the
// face (the texture coordinates are two-dimensional).
int texNum = textureCoordinates.limit() / 2;
Log.d(TAG, "Update face geometry data: texture coordinates size:" + texNum);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVerticeId);
if (mVerticeBufferSize < (mPointsNum + texNum) * BYTES_PER_POINT) {
while (mVerticeBufferSize < (mPointsNum + texNum) * BYTES_PER_POINT) {
// If the capacity of the vertex VBO buffer is insufficient, expand the capacity.
mVerticeBufferSize *= 2;
}
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, mVerticeBufferSize, null, GLES20.GL_DYNAMIC_DRAW);
}
GLES20.glBufferSubData(GLES20.GL_ARRAY_BUFFER, 0, mPointsNum * BYTES_PER_POINT, faceVertices);
GLES20.glBufferSubData(GLES20.GL_ARRAY_BUFFER, mPointsNum * BYTES_PER_POINT, texNum * BYTES_PER_COORD,
textureCoordinates);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
mTrianglesNum = faceGeometry.getTriangleCount();
IntBuffer faceTriangleIndices = faceGeometry.getTriangleIndices();
Log.d(TAG, "update face geometry data: faceTriangleIndices.size: " + faceTriangleIndices.limit());
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, mTriangleId);
if (mTriangleBufferSize < mTrianglesNum * BYTES_PER_POINT) {
while (mTriangleBufferSize < mTrianglesNum * BYTES_PER_POINT) {
// If the capacity of the vertex VBO buffer is insufficient, expand the capacity.
mTriangleBufferSize *= 2;
}
GLES20.glBufferData(GLES20.GL_ELEMENT_ARRAY_BUFFER, mTriangleBufferSize, null, GLES20.GL_DYNAMIC_DRAW);
}
GLES20.glBufferSubData(GLES20.GL_ELEMENT_ARRAY_BUFFER, 0, mTrianglesNum * BYTES_PER_POINT, faceTriangleIndices);
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, 0);
ShaderUtil.checkGlError(TAG, "After update data.");
}
As you know, model, view, and projection matrices are required to render 3D content on the screen, so we need the face’s pose matrix for the model. For this, we will use the ARFace class of HUAWEI AR Engine: its getPose() function returns an ARPose object, from which we obtain the model-view matrix. We obtain the projection matrix from HUAWEI AR Engine’s ARCamera object and multiply the two matrices. In this way, we obtain the matrix used to update the model-view-projection (MVP) data. Now let’s do this with the following function.
Code:
private static final float PROJECTION_MATRIX_NEAR = 0.1f;
private static final float PROJECTION_MATRIX_FAR = 100.0f;
// The size of the MVP matrix is 4 x 4.
private float[] mModelViewProjections = new float[16];
private void updateModelViewProjectionData(ARCamera camera, ARFace face) {
// The size of the projection matrix is 4 * 4.
float[] projectionMatrix = new float[16];
camera.getProjectionMatrix(projectionMatrix, 0, PROJECTION_MATRIX_NEAR, PROJECTION_MATRIX_FAR);
ARPose facePose = face.getPose();
// The size of viewMatrix is 4 * 4.
float[] facePoseViewMatrix = new float[16];
facePose.toMatrix(facePoseViewMatrix, 0);
Matrix.multiplyMM(mModelViewProjections, 0, projectionMatrix, 0, facePoseViewMatrix, 0);
}
As the last step of the face drawing phase, we complete the drawing with the following function, which draws the geometric features of the face using the values we have created and defined up to this stage.
Note: These drawing functions will be called for each frame.
Code:
private void drawFaceGeometry() {
ShaderUtil.checkGlError(TAG, "Before draw.");
Log.d(TAG, "Draw face geometry: mPointsNum: " + mPointsNum + " mTrianglesNum: " + mTrianglesNum);
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureName);
GLES20.glUniform1i(mTextureUniform, 0);
ShaderUtil.checkGlError(TAG, "Init texture.");
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
GLES20.glEnable(GLES20.GL_CULL_FACE);
// Draw point.
GLES20.glUseProgram(mProgram);
GLES20.glEnableVertexAttribArray(mPositionAttribute);
GLES20.glEnableVertexAttribArray(mTextureCoordAttribute);
GLES20.glEnableVertexAttribArray(mColorUniform);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVerticeId);
GLES20.glVertexAttribPointer(mPositionAttribute, POSITION_COMPONENTS_NUMBER, GLES20.GL_FLOAT, false, BYTES_PER_POINT, 0);
GLES20.glVertexAttribPointer(mTextureCoordAttribute, TEXCOORD_COMPONENTS_NUMBER, GLES20.GL_FLOAT, false, BYTES_PER_COORD, 0);
GLES20.glUniform4f(mColorUniform, 1.0f, 0.0f, 0.0f, 1.0f);
GLES20.glUniformMatrix4fv(mModelViewProjectionUniform, 1, false, mModelViewProjections, 0);
GLES20.glUniform1f(mPointSizeUniform, 5.0f); // Set the size of Point to 5.
GLES20.glDrawArrays(GLES20.GL_POINTS, 0, mPointsNum);
GLES20.glDisableVertexAttribArray(mColorUniform);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
ShaderUtil.checkGlError(TAG, "Draw point.");
// Draw triangles.
GLES20.glEnableVertexAttribArray(mColorUniform);
// Clear the color and use the texture color to draw triangles.
GLES20.glUniform4f(mColorUniform, 0.0f, 0.0f, 0.0f, 0.0f);
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, mTriangleId);
// The number of input triangle points
GLES20.glDrawElements(GLES20.GL_TRIANGLES, mTrianglesNum * 3, GLES20.GL_UNSIGNED_INT, 0);
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, 0);
GLES20.glDisableVertexAttribArray(mColorUniform);
ShaderUtil.checkGlError(TAG, "Draw triangles.");
GLES20.glDisableVertexAttribArray(mTextureCoordAttribute);
GLES20.glDisableVertexAttribArray(mPositionAttribute);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0);
GLES20.glDisable(GLES20.GL_DEPTH_TEST);
GLES20.glDisable(GLES20.GL_CULL_FACE);
ShaderUtil.checkGlError(TAG, "Draw after.");
}
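To tie the pieces together, here is a minimal sketch of how these functions could be called every frame from the render manager’s onDrawFrame, assuming mSession holds the ARSession; the camera background drawing, display rotation handling, and full exception handling of a real app are omitted.
Code:
@Override
public void onDrawFrame(GL10 gl) {
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
    if (mSession == null) {
        return;
    }
    // Advance the AR Engine session and get the current camera.
    ARFrame frame = mSession.update();
    ARCamera camera = frame.getCamera();
    for (ARFace face : mSession.getAllTrackables(ARFace.class)) {
        if (face.getTrackingState() != ARTrackable.TrackingState.TRACKING) {
            continue;
        }
        // Refresh the buffers, compute the MVP matrix, and draw the face.
        updateFaceGeometryData(face.getFaceGeometry());
        updateModelViewProjectionData(camera, face);
        drawFaceGeometry();
    }
}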
For more details, you can check https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0202387445597520482&fid=0101187876626530001&channelname=HuoDong59&ha_source=xda
Related
Hi!
I have a problem... again
In my layout I have a container which I want to replace with
a) a Google map view
b) a text view
This must be configurable by a settings fragment. Everything works fine, but every time I switch from text to map or vice versa my app crashes with an exception.
java.lang.IllegalStateException: Fragment already active
I don't know what I am doing wrong. After switching from map to text within my settings I call this getMainActivity().setView() method:
Code:
protected void setView(boolean setMap) {
if (setMap == true) {
Log.i(TAG, "Inflating Google Maps fragment");
// Get a new Fragment Manager
fm = getFragmentManager();
FragmentTransaction fta = fm.beginTransaction();
// New instance if none exists
if (this.fragmentMap == null)
{
Log.d(TAG, "map fragment is null");
this.fragmentMap = new FragmentMap();
}
// Ok... if the fragment is not already added we will do it here right now
if (!this.fragmentMap.isAdded())
{
Log.d(TAG, "map fragment not added yet");
// Create a new bundle for the fragment
Bundle args = new Bundle();
this.fragmentMap.setArguments(args);
fta.replace(R.id.container, this.fragmentMap).commit();
} else
{
fta.show(this.fragmentMap).commit();
}
this.useMap = true;
} else if (setMap == false) {
Log.i(TAG, "Inflating text fragment");
// Get a new Fragment Manager
fm = getFragmentManager();
FragmentTransaction fta = fm.beginTransaction();
// New instance if none exists
if (this.fragmentText == null)
{
Log.d(TAG, "textfragment is null");
this.fragmentText = new FragmentText();
}
// Ok... if the fragment is not already added we will do it here right now
if (this.fragmentText != null)
{
// Create a new bundle for the fragment
Bundle args = new Bundle();
Log.d(TAG, "textfragment not yet added");
this.fragmentText.setArguments(args);
fta.replace(R.id.container, this.fragmentText).commit();
} else
{
fta.show(this.fragmentText).commit();
}
this.useMap = false;
}
}
The last messages I get before the exception is thrown are the "....fragment not yet added" messages.
How can I switch between a map fragment and a text fragment?
Thorsten
Hi there
I am trying to run Google Maps on the Android emulator, but the map is not working. I mean, the Google Map is displayed in the fragment, but none of the markers that I place in my code appear.
This is my activity class:
Java:
double mLatitude=0;
double mLongitude=0;
private GoogleMap map;
Spinner mSprPlaceType;
String[] mPlaceType=null;
String[] mPlaceTypeName=null;
@SuppressLint("NewApi")
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_show_places1);
// Array of place types
mPlaceType = getResources().getStringArray(R.array.place_type);
// Array of place type names
mPlaceTypeName = getResources().getStringArray(R.array.place_type_name);
// Creating an array adapter with an array of Place types
// to populate the spinner
ArrayAdapter<String> adapter = new ArrayAdapter<String>(this, android.R.layout.simple_spinner_dropdown_item, mPlaceTypeName);
// Getting reference to the Spinner
mSprPlaceType = (Spinner) findViewById(R.id.spr_place_type);
// Setting adapter on Spinner to set place types
mSprPlaceType.setAdapter(adapter);
Button btnFind;
// Getting reference to Find Button
btnFind = ( Button ) findViewById(R.id.button1);
// Getting Google Play availability status
int status = GooglePlayServicesUtil.isGooglePlayServicesAvailable(getBaseContext());
if(status!=ConnectionResult.SUCCESS){ // Google Play Services are not available
int requestCode = 10;
Dialog dialog = GooglePlayServicesUtil.getErrorDialog(status, this, requestCode);
dialog.show();
}
else
{
map=((MapFragment)getFragmentManager().findFragmentById(R.id.map)).getMap();
map.setMyLocationEnabled(true);
// Getting LocationManager object from System Service LOCATION_SERVICE
LocationManager locationManager = (LocationManager) getSystemService(LOCATION_SERVICE);
// Creating a criteria object to retrieve provider
Criteria criteria = new Criteria();
// Getting the name of the best provider
String provider = locationManager.getBestProvider(criteria, true);
// Getting Current Location From GPS
Location location = locationManager.getLastKnownLocation(provider);
if(location!=null){
onLocationChanged(location);
}
locationManager.requestLocationUpdates(provider, 20000, 0, this);
// Setting click event lister for the find button
btnFind.setOnClickListener(new OnClickListener() {
@Override
public void onClick(View v) {
int selectedPosition = mSprPlaceType.getSelectedItemPosition();
String type = mPlaceType[selectedPosition];
StringBuilder sb = new StringBuilder("https://maps.googleapis.com/maps/api/place/nearbysearch/json?");
sb.append("location="+mLatitude+","+mLongitude);
sb.append("&radius=10000");
sb.append("&types="+type);
sb.append("&sensor=true");
sb.append("&key=AIzaSyCba6q28XzWhcq5wPaB7ek7HWqh3Sq2Q3A");
// Creating a new non-ui thread task to download json data
PlacesTask placesTask = new PlacesTask();
// Invokes the "doInBackground()" method of the class PlaceTask
placesTask.execute(sb.toString());
}
});
}
}
/** A method to download json data from url */
private String downloadUrl(String strUrl) throws IOException{
String data = "";
InputStream iStream = null;
HttpURLConnection urlConnection = null;
try{
URL url = new URL(strUrl);
// Creating an http connection to communicate with url
urlConnection = (HttpURLConnection) url.openConnection();
// Connecting to url
urlConnection.connect();
// Reading data from url
iStream = urlConnection.getInputStream();
BufferedReader br = new BufferedReader(new InputStreamReader(iStream));
StringBuffer sb = new StringBuffer();
String line = "";
while( ( line = br.readLine()) != null){
sb.append(line);
}
data = sb.toString();
br.close();
}catch(Exception e){
Log.d("Exception while downloading url", e.toString());
}finally{
iStream.close();
urlConnection.disconnect();
}
return data;
}
/** A class, to download Google Places */
private class PlacesTask extends AsyncTask<String, Integer, String>{
String data = null;
// Invoked by execute() method of this object
@Override
protected String doInBackground(String... url) {
try{
data = downloadUrl(url[0]);
}catch(Exception e){
Log.d("Background Task",e.toString());
}
return data;
}
// Executed after the complete execution of doInBackground() method
@Override
protected void onPostExecute(String result){
ParserTask parserTask = new ParserTask();
// Start parsing the Google places in JSON format
// Invokes the "doInBackground()" method of the class ParseTask
parserTask.execute(result);
}
}
/** A class to parse the Google Places in JSON format */
private class ParserTask extends AsyncTask<String, Integer, List<HashMap<String,String>>>{
JSONObject jObject;
// Invoked by execute() method of this object
@Override
protected List<HashMap<String,String>> doInBackground(String... jsonData) {
List<HashMap<String, String>> places = null;
PlaceJSONParser placeJsonParser = new PlaceJSONParser();
try{
jObject = new JSONObject(jsonData[0]);
/** Getting the parsed data as a List construct */
places = placeJsonParser.parse(jObject);
}catch(Exception e){
Log.d("Exception",e.toString());
}
return places;
}
// Executed after the complete execution of doInBackground() method
@Override
protected void onPostExecute(List<HashMap<String,String>> list){
// Clears all the existing markers
map.clear();
for(int i=0;i<list.size();i++){
// Creating a marker
MarkerOptions markerOptions = new MarkerOptions();
// Getting a place from the places list
HashMap<String, String> hmPlace = list.get(i);
// Getting latitude of the place
double lat = Double.parseDouble(hmPlace.get("lat"));
// Getting longitude of the place
double lng = Double.parseDouble(hmPlace.get("lng"));
// Getting name
String name = hmPlace.get("place_name");
// Getting vicinity
String vicinity = hmPlace.get("vicinity");
LatLng latLng = new LatLng(lat, lng);
// Setting the position for the marker
markerOptions.position(latLng);
// Setting the title for the marker.
//This will be displayed on tapping the marker
markerOptions.title(name + " : " + vicinity);
// Placing a marker on the touched position
map.addMarker(markerOptions);
}
}
}
@Override
public boolean onCreateOptionsMenu(Menu menu) {
// Inflate the menu; this adds items to the action bar if it is present.
getMenuInflater().inflate(R.menu.show_places1, menu);
return true;
}
@Override
public boolean onOptionsItemSelected(MenuItem item) {
// Handle action bar item clicks here. The action bar will
// automatically handle clicks on the Home/Up button, so long
// as you specify a parent activity in AndroidManifest.xml.
int id = item.getItemId();
if (id == R.id.action_settings) {
return true;
}
return super.onOptionsItemSelected(item);
}
@Override
public void onLocationChanged(Location location) {
// TODO Auto-generated method stub
mLatitude=location.getLatitude();
mLongitude=location.getLongitude();
LatLng latLng = new LatLng(mLatitude, mLongitude);
map.moveCamera(CameraUpdateFactory.newLatLng(latLng));
map.animateCamera(CameraUpdateFactory.zoomTo(12));
}
@Override
public void onStatusChanged(String provider, int status, Bundle extras) {
// TODO Auto-generated method stub
}
@Override
public void onProviderEnabled(String provider) {
// TODO Auto-generated method stub
}
@Override
public void onProviderDisabled(String provider) {
// TODO Auto-generated method stub
}
}
And This is My Emulator Defination
Phone_Test2
Nexus S(4.0,480*800hdpi)
Google API(x86 System Image)
Intel Atomx86
HVGA
RAM:500MB
VM Heap:16
Internal Storage:200
SD Card:50
I have added all the JARs and permissions in the manifest. The application is working fine on an Android-powered mobile phone, but not on the Android emulator.
any guess?
Thanks
For more information like this, you can visit the HUAWEI Developer Forum.
Original link: https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0202330537081990041&fid=0101187876626530001
Article Introduction
In this article we are going to cover an introduction to the HUAWEI Map Kit JavaScript API. Next, we are going to implement HUAWEI Map in an Ionic/Cordova project. Lastly, we will implement the HUAWEI Map Kit JavaScript API in a native application.
Technology Introduction
HUAWEI Map Kit provides JavaScript APIs for you to easily build map apps applicable to browsers.
It provides the basic map display, map interaction, route planning, place search, geocoding, and other functions to meet requirements of most developers.
Restriction
Before using the service, you need to apply for an API key on the HUAWEI Developers website. For details, please refer to "Creating an API Key" in API Console Operation Guide. To enhance the API key security, you are advised to restrict the API key. You can configure restrictions by app and API on the API key editing page.
Generating API Key
Go to HMS API Services > Credentials and click Create credential.
Click API key to generate new API Key.
In the dialog box that is displayed, click Restrict to set restrictions on the key to prevent unauthorized use or quota theft. This step is optional.
The restrictions include App restrictions and API restriction.
App restrictions: control which websites or apps can use your key. Set up to one app restriction per key.
API restrictions: specify the enabled APIs that this key can call.
After setting up the app restriction and API restriction, the API key will be generated.
The API key is successfully created. Copy API Key to use in your project.
Huawei Web Map API introduction
1. Make a Basic Map
Code:
function loadMapScript() {
const apiKey = encodeURIComponent(
"API_KEY"
);
const src = `https://mapapi.cloud.huawei.com/mapjs/v1/api/js?callback=initMap&key=${apiKey}`;
const mapScript = document.createElement("script");
mapScript.setAttribute("src", src);
document.head.appendChild(mapScript);
}
function initMap() { }
function initMap() {
const mapOptions = {};
mapOptions.center = { lat: 48.856613, lng: 2.352222 };
mapOptions.zoom = 8;
mapOptions.language = "ENG";
const map = new HWMapJsSDK.HWMap(
document.getElementById("map"),
mapOptions
);
}
loadMapScript();
Note: Please replace API_KEY with the key you generated. In the script URL we declare a callback function, which will be invoked automatically once the Huawei Map API has loaded successfully.
2. Map Interactions
Map Controls
Code:
var mapOptions = {};
mapOptions.center = {lat: 48.856613, lng: 2.352222};
mapOptions.zoom = 10;
scaleControl
Code:
mapOptions.scaleControl = true; // Set to display the scale.
mapOptions.scaleControlOptions = {
units: "imperial" // Set the scale unit to inch.
};
zoomSlider
Code:
mapOptions.zoomSlider = true ; // Set to display the zoom slider.
zoomControl
Code:
mapOptions.zoomControl = false; // Set not to display the zoom button.
rotateControl (Manage Compass)
Code:
mapOptions.rotateControl = true; // Set to display the compass.
navigationControl
Code:
mapOptions.navigationControl = true; // Set to display the pan button.
copyrightControl
Code:
mapOptions.copyrightControl = true; // Set to display the copyright information.
mapOptions.copyrightControlOptions = {value: "HUAWEI",} // Set the copyright information.
locationControl
Code:
mapOptions.locationControl= true; // Set to display the current location.
Camera
Map moving: You can call the map.panTo(latLng)
Map shift: You can call the map.panBy(x, y)
Zoom: You can use the map.setZoom(zoom) method to set the zoom level of a map.
Area control: You can use map.fitBounds(bounds) to set the map display scope.
Map Events
Map click event:
Code:
map.on('click', () => {
map.zoomIn();
});
Map center change event:
Code:
map.onCenterChanged(centerChangePost);
function centerChangePost() {
var center = map.getCenter();
alert('Lng:' + map.getCenter().lng + '\n' + 'Lat:' + map.getCenter().lat);
}
Map heading change event:
Code:
map.onHeadingChanged(headingChangePost);
function headingChangePost() {
alert('Heading Changed!');
}
Map zoom level change event:
Code:
map.onZoomChanged(zoomChangePost);
function zoomChangePost() {
alert('Zoom Changed!')
}
3. Drawing on Map
Marker:
You can add markers to a map to identify locations such as stores and buildings, and provide additional information with information windows.
Code:
var map;
var mMarker;
function initMap() {
var mapOptions = {};
mapOptions.center = {lat: 48.856613, lng: 2.352222};
mapOptions.zoom = 8;
map = new HWMapJsSDK.HWMap(document.getElementById('map'), mapOptions);
mMarker = new HWMapJsSDK.HWMarker({
map: map,
position: {lat: 48.85, lng: 2.35},
zIndex: 10,
label: 'A',
icon: {
opacity: 0.5
}
});
}
Marker Result:
Marker Clustering:
The HMS Core Map SDK allows you to cluster markers to effectively manage them on the map at different zoom levels. When a user zooms in on the map to a high level, all markers are displayed on the map. When the user zooms out, the markers are clustered on the map for orderly display.
Code:
var map;
var markers = [];
var markerCluster;
var locations = [
{lat: 51.5145160, lng: -0.1270060},
{ lat : 51.5064490, lng : -0.1244260 },
{ lat : 51.5097080, lng : -0.1200450 },
{ lat : 51.5090680, lng : -0.1421420 },
{ lat : 51.4976080, lng : -0.1456320 },
···
{ lat : 51.5061590, lng : -0.140280 },
{ lat : 51.5047420, lng : -0.1470490 },
{ lat : 51.5126760, lng : -0.1189760 },
{ lat : 51.5108480, lng : -0.1208480 }
];
function initMap() {
var mapOptions = {};
mapOptions.center = {lat: 48.856613, lng: 2.352222};
mapOptions.zoom = 3;
map = new HWMapJsSDK.HWMap(document.getElementById('map'), mapOptions);
generateMarkers(locations);
markerCluster = new HWMapJsSDK.HWMarkerCluster(map, markers);
}
function generateMarkers(locations) {
for (let i = 0; i < locations.length; i++) {
var opts = {
position: locations[i]
};
markers.push(new HWMapJsSDK.HWMarker(opts));
}
}
Cluster markers Result:
Information Window:
The HMS Core Map SDK supports the display of information windows on the map. There are two types of information windows: One is to display text or image independently, and the other is to display text or image in a popup above a marker. The information window provides details about a marker.
Code:
var infoWindow;
function initMap() {
var mapOptions = {};
mapOptions.center = {lat: 48.856613, lng: 2.352222};
mapOptions.zoom = 8;
var map = new HWMapJsSDK.HWMap(document.getElementById('map'), mapOptions);
infoWindow = new HWMapJsSDK.HWInfoWindow({
map,
position: {lat: 48.856613, lng: 2.352222},
content: 'This is to show mouse event of another marker',
offset: [0, -40],
});
}
Info window Result:
Ground Overlay
The builder function of GroundOverlay uses the URL, LatLngBounds, and GroundOverlayOptions of an image as the parameters to display the image in a specified area on the map. The sample code is as follows:
Code:
var map;
var mGroundOverlay;
function initMap() {
var mapOptions = {};
mapOptions.center = {lat: 48.856613, lng: 2.352222};
mapOptions.zoom = 8;
map = new HWMapJsSDK.HWMap(document.getElementById('map'), mapOptions);
var imageBounds = {
north: 49,
south: 48.5,
east: 2.5,
west: 1.5,
};
mGroundOverlay = new HWMapJsSDK.HWGroundOverlay(
// Path to a local image or URL of an image.
'huawei_logo.png',
imageBounds,
{
map: map,
opacity: 1,
zIndex: 1
}
);
}
Ground overlay result:
Ionic / Cordova Map Implementation
In this part of the article we are going to add the Huawei Map JavaScript API to an Ionic/Cordova project.
Update index.html to include the Huawei Map JS script:
You need to update src/index.html and include the Huawei Map JavaScript cloud script URL.
Code:
function loadMapScript() {
const apiKey = encodeURIComponent(
"API_KEY"
);
const src = `https://mapapi.cloud.huawei.com/mapjs/v1/api/js?callback=initMap&key=${apiKey}`;
const mapScript = document.createElement("script");
mapScript.setAttribute("src", src);
document.head.appendChild(mapScript);
}
function initMap() { }
loadMapScript();
Make new Map page:
Code:
ionic g page maps
Update the maps.page.ts file with the following TypeScript:
Code:
import { Component, OnInit, ChangeDetectorRef } from "@angular/core";
import { Observable } from "rxjs";
declare var HWMapJsSDK: any;
declare var cordova: any;
@Component({
selector: "app-maps",
templateUrl: "./maps.page.html",
styleUrls: ["./maps.page.scss"],
})
export class MapsPage implements OnInit {
map: any;
baseLat = 24.713552;
baseLng = 46.675297;
ngOnInit() {
this.showMap(this.baseLat, this.baseLng);
}
ionViewWillEnter() {
}
ionViewDidEnter() {
}
showMap(lat = this.baseLat, lng = this.baseLng) {
const mapOptions: any = {};
mapOptions.center = { lat: lat, lng: lng };
mapOptions.zoom = 10;
mapOptions.language = "ENG";
this.map = new HWMapJsSDK.HWMap(document.getElementById("map"), mapOptions);
this.map.setCenter({ lat: lat, lng: lng });
}
}
Ionic / Cordova App Result:
Native Application Huawei JS API Implementation
In this part of the article we are going to load the JavaScript-based Huawei Map HTML page into our native app through a WebView. This part of the implementation will be helpful for developers who need a very minimal map integration.
Make assets/www/map.html file
Add the following script code inside the map.html file (the page also needs to load the Map JS SDK script, as shown earlier):
Code:
var map;
var mMarker;
var infoWindow;
function initMap() {
const LatLng = { lat: 24.713552, lng: 46.675297 };
const mapOptions = {};
mapOptions.center = LatLng;
mapOptions.zoom = 10;
mapOptions.scaleControl = true;
mapOptions.locationControl= true;
mapOptions.language = "ENG";
map = new HWMapJsSDK.HWMap(
document.getElementById("map"),
mapOptions
);
map.setCenter(LatLng);
mMarker = new HWMapJsSDK.HWMarker({
map: map,
position: LatLng,
zIndex: 10,
label: 'A',
icon: {
opacity: 0.5
}
});
mMarker.addListener('click', () => {
infoWindow.open();
});
infoWindow = new HWMapJsSDK.HWInfoWindow({
map,
position: LatLng,
content: 'This is to info window of marker',
offset: [0, -40],
});
infoWindow.close();
}
Add the webview in your layout:
Code:
<WebView
android:id="@+id/webView_map"
android:layout_width="match_parent"
android:layout_height="match_parent"
/>
Update your Activity class to load the HTML file:
Code:
class MainActivity : AppCompatActivity() {
lateinit var context: Context
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
context = this
val mWebview = findViewById<WebView>(R.id.webView_map)
mWebview.webChromeClient = WebChromeClient()
mWebview.webViewClient = WebViewClient()
mWebview.settings.javaScriptEnabled = true
mWebview.settings.setAppCacheEnabled(true)
mWebview.settings.mediaPlaybackRequiresUserGesture = true
mWebview.settings.domStorageEnabled = true
mWebview.loadUrl("file:///android_asset/www/map.html")
}
}
Internet permission:
Don’t forget to add the Internet permissions to the AndroidManifest.xml file.
Code:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
Native app Result:
References:
Huawei Map JavaScript API:
https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides/javascript-api-introduction-0000001050164063
Complete Ionic JS Map Project:
https://github.com/salmanyaqoob/Ionic-All-HMS-Kits
Conclusion
The Huawei Map JavaScript API is helpful for JavaScript developers who want to implement Huawei Map on cross-platform frameworks such as Cordova, Ionic, and React Native, and also for native developers who want to embed a map in their projects. Developers can also use it to implement Huawei Maps on websites.
Thank you very much, very helpful.
A geofence is a virtual perimeter set on a real geographic area. Combining a user position with a geofence perimeter, it is possible to know if the user is inside the geofence or if he is exiting or entering the area.
In this article, we will discuss how to use the geofence to notify the user when the device enters/exits an area using the HMS Location Kit in a Xamarin.Android application. We will also add and customize HuaweiMap, which includes drawing circles, adding pointers, and using nearby searches in search places. We are going to learn how to use the below features together:
Geofence
Reverse Geocode
HuaweiMap
Nearby Search
First of all, you need to be a registered Huawei Mobile Developer and create an application in the Huawei App Console in order to use the HMS Map, Location, and Site Kits. You can follow the steps below to complete the configuration required for development.
Configuring App Information in AppGallery Connect --> shorturl.at/rL347
Creating Xamarin Android Binding Libraries --> shorturl.at/rBP46
Integrating the HMS Map Kit Libraries for Xamarin --> shorturl.at/vAHPX
Integrating the HMS Location Kit Libraries for Xamarin --> shorturl.at/dCX07
Integrating the HMS Site Kit Libraries for Xamarin --> shorturl.at/bmDX6
Integrating the HMS Core SDK --> shorturl.at/qBISV
Setting Package in Xamarin --> shorturl.at/brCU1
When we create our Xamarin.Android application in the steps above, we need to make sure that the package name is the same as the one we entered in the console. Also, don’t forget to enable the kits in the console.
Manifest & Permissions
We have to update the application’s manifest file by declaring permissions that we need as shown below.
Code:
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.ACCESS_BACKGROUND_LOCATION" />
Also, add a meta-data element to embed your app ID in the application tag; it is required for this app to authenticate on Huawei’s cloud server. You can find this ID in the agconnect-services.json file.
Code:
<meta-data android:name="com.huawei.hms.client.appid" android:value="appid=YOUR_APP_ID" />
Request location permission
Code:
private void RequestPermissions()
{
if (ContextCompat.CheckSelfPermission(this, Manifest.Permission.AccessCoarseLocation) != (int)Permission.Granted ||
ContextCompat.CheckSelfPermission(this, Manifest.Permission.AccessFineLocation) != (int)Permission.Granted ||
ContextCompat.CheckSelfPermission(this, Manifest.Permission.WriteExternalStorage) != (int)Permission.Granted ||
ContextCompat.CheckSelfPermission(this, Manifest.Permission.ReadExternalStorage) != (int)Permission.Granted ||
ContextCompat.CheckSelfPermission(this, Manifest.Permission.Internet) != (int)Permission.Granted)
{
ActivityCompat.RequestPermissions(this,
new System.String[]
{
Manifest.Permission.AccessCoarseLocation,
Manifest.Permission.AccessFineLocation,
Manifest.Permission.WriteExternalStorage,
Manifest.Permission.ReadExternalStorage,
Manifest.Permission.Internet
},
100);
}
else
GetCurrentPosition();
}
Add a Map
Add a <fragment> element to your activity’s layout file, activity_main.xml. This element defines a MapFragment to act as a container for the map and to provide access to the HuaweiMap object.
Code:
<fragment
android:id="@+id/mapfragment"
class="com.huawei.hms.maps.MapFragment"
android:layout_width="match_parent"
android:layout_height="match_parent"/>
Implement the IOnMapReadyCallback interface in MainActivity and override the OnMapReady method, which is triggered when the map is ready to use. Then use GetMapAsync to register for the map callback.
Code:
public class MainActivity : AppCompatActivity, IOnMapReadyCallback
{
...
public void OnMapReady(HuaweiMap map)
{
hMap = map;
hMap.UiSettings.MyLocationButtonEnabled = true;
hMap.UiSettings.CompassEnabled = true;
hMap.UiSettings.ZoomControlsEnabled = true;
hMap.UiSettings.ZoomGesturesEnabled = true;
hMap.MyLocationEnabled = true;
hMap.MapClick += HMap_MapClick;
if (selectedCoordinates == null)
selectedCoordinates = new GeofenceModel { LatLng = CurrentPosition, Radius = 30 };
}
}
As you can see above, with the UiSettings property of the HuaweiMap object we enable the my-location button, the compass, and so on. Now, when the app launches, we directly get the current location and move the camera to it. To do that, we use the FusedLocationProviderClient we instantiated and call the LastLocation API.
The LastLocation API returns a Task object whose result we can check by implementing the relevant success and failure listeners. In the success listener we move the map’s camera to the last known position.
Code:
private void GetCurrentPosition()
{
var locationTask = fusedLocationProviderClient.LastLocation;
locationTask.AddOnSuccessListener(new LastLocationSuccess(this));
locationTask.AddOnFailureListener(new LastLocationFail(this));
}
...
public class LastLocationSuccess : Java.Lang.Object, IOnSuccessListener
{
...
public void OnSuccess(Java.Lang.Object location)
{
Toast.MakeText(mainActivity, "LastLocation request successful", ToastLength.Long).Show();
if (location != null)
{
MainActivity.CurrentPosition = new LatLng((location as Location).Latitude, (location as Location).Longitude);
mainActivity.RepositionMapCamera((location as Location).Latitude, (location as Location).Longitude);
}
}
}
To change the position of the camera, we must specify where we want to move the camera, using a CameraUpdate. The Map Kit allows us to create many different types of CameraUpdate using CameraUpdateFactory.
There are several methods for changing the camera position. In short, these are:
NewLatLng: Change camera’s latitude and longitude, while keeping other properties
NewLatLngZoom: Changes the camera’s latitude, longitude, and zoom, while keeping other properties
NewCameraPosition: Full flexibility in changing the camera position
We are going to use NewCameraPosition. A CameraPosition can be obtained with a CameraPosition.Builder. And then we can set target, bearing, tilt and zoom properties.
Code:
public void RepositionMapCamera(double lat, double lng)
{
var cameraPosition = new CameraPosition.Builder();
cameraPosition.Target(new LatLng(lat, lng));
cameraPosition.Zoom(1000);
cameraPosition.Bearing(45);
cameraPosition.Tilt(20);
CameraUpdate cameraUpdate = CameraUpdateFactory.NewCameraPosition(cameraPosition.Build());
hMap.MoveCamera(cameraUpdate);
}
Creating Geofence
In this part, we will choose the location where we want to set geofence in two different ways. The first is to select the location by clicking on the map, and the second is to search for nearby places by keyword and select one after placing them on the map with the marker.
Set the geofence location by clicking on the map
It is always easier to select a location by seeing it. After this section, we will be able to set a geofence around the clicked point whenever the map is clicked. We attached the MapClick event to our map in the OnMapReady method. In this event handler, we will add a marker at the clicked point and draw a circle around it.
Also, we will use the SeekBar at the bottom of the page to adjust the circle radius. We set the selectedCoordinates variable when adding the marker. Let’s create the following method to create the marker:
Code:
private void HMap_MapClick(object sender, HuaweiMap.MapClickEventArgs e)
{
selectedCoordinates.LatLng = e.P0;
if (circle != null)
{
circle.Remove();
circle = null;
}
AddMarkerOnMap();
}
void AddMarkerOnMap()
{
if (marker != null) marker.Remove();
var markerOption = new MarkerOptions()
.InvokeTitle("You are here now")
.InvokePosition(selectedCoordinates.LatLng);
hMap.SetInfoWindowAdapter(new MapInfoWindowAdapter(this));
marker = hMap.AddMarker(markerOption);
bool isInfoWindowShown = marker.IsInfoWindowShown;
if (isInfoWindowShown)
marker.HideInfoWindow();
else
marker.ShowInfoWindow();
}
We add a MapInfoWindowAdapter class to our project to render the custom info window and implement the HuaweiMap.IInfoWindowAdapter interface on it. Whenever an information window needs to be displayed for a marker, the methods provided by this adapter are called.
Now let’s create a custom info window layout and name it map_info_view.xml:
Code:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:orientation="vertical"
android:layout_width="match_parent"
android:layout_height="match_parent">
<Button
android:text="Add geofence"
android:width="100dp"
style="@style/Widget.AppCompat.Button.Colored"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:id="@+id/btnInfoWindow" />
</LinearLayout>
We customize and return it in the GetInfoWindow() method. The full code of the adapter is below:
Code:
internal class MapInfoWindowAdapter : Java.Lang.Object, HuaweiMap.IInfoWindowAdapter
{
private MainActivity activity;
private GeofenceModel selectedCoordinates;
private View addressLayout;
public MapInfoWindowAdapter(MainActivity currentActivity){activity = currentActivity;}
public View GetInfoContents(Marker marker){return null;}
public View GetInfoWindow(Marker marker)
{
if (marker == null)
return null;
selectedCoordinates = new GeofenceModel { LatLng = new LatLng(marker.Position.Latitude, marker.Position.Longitude) };
View mapInfoView = activity.LayoutInflater.Inflate(Resource.Layout.map_info_view, null);
var radiusBar = activity.FindViewById<SeekBar>(Resource.Id.radiusBar);
if (radiusBar.Visibility == Android.Views.ViewStates.Invisible)
{
radiusBar.Visibility = Android.Views.ViewStates.Visible;
radiusBar.SetProgress(30, true);
}
activity.FindViewById<SeekBar>(Resource.Id.radiusBar)?.SetProgress(30, true);
activity.DrawCircleOnMap(selectedCoordinates);
Button button = mapInfoView.FindViewById<Button>(Resource.Id.btnInfoWindow);
button.Click += btnInfoWindow_ClickAsync;
return mapInfoView;
}
}
Now we create a method to draw a circle around the marker that represents the geofence radius. Create a new DrawCircleOnMap method in MainActivity for this. To construct a circle, we must specify the Center and Radius. I also set other properties such as StrokeColor.
Code:
public void DrawCircleOnMap(GeofenceModel geoModel)
{
if (circle != null)
{
circle.Remove();
circle = null;
}
CircleOptions circleOptions = new CircleOptions()
.InvokeCenter(geoModel.LatLng)
.InvokeRadius(geoModel.Radius)
.InvokeFillColor(Color.Argb(50, 0, 14, 84))
.InvokeStrokeColor(Color.Yellow)
.InvokeStrokeWidth(15);
circle = hMap.AddCircle(circleOptions);
}
private void radiusBar_ProgressChanged(object sender, SeekBar.ProgressChangedEventArgs e)
{
selectedCoordinates.Radius = e.Progress;
DrawCircleOnMap(selectedCoordinates);
}
We will use SeekBar to change the radius of the circle. As the value changes, the drawn circle will expand or shrink.
Reverse Geocoding
Now let’s handle the click event of the info window.
But before opening that window, we need to reverse-geocode the selected coordinates to get a formatted address. HUAWEI Site Kit provides a set of HTTP APIs, including the one we need: reverseGeocode.
Let’s add the GeocodeManager class to our project and update it as follows:
Code:
public async Task<Site> ReverseGeocode(double lat, double lng)
{
string result = "";
using (var client = new HttpClient())
{
MyLocation location = new MyLocation();
location.Lat = lat;
location.Lng = lng;
var root = new ReverseGeocodeRequest();
root.Location = location;
var settings = new JsonSerializerSettings();
settings.ContractResolver = new LowercaseSerializer();
var json = JsonConvert.SerializeObject(root, Formatting.Indented, settings);
var data = new StringContent(json, Encoding.UTF8, "application/json");
var url = "siteapi.cloud.huawei.com/mapApi/v1/siteService/reverseGeocode?key=" + Android.Net.Uri.Encode(ApiKey);
var response = await client.PostAsync(url, data);
result = response.Content.ReadAsStringAsync().Result;
}
return JsonConvert.DeserializeObject<ReverseGeocodeResponse>(result).sites.FirstOrDefault();
}
In the above code, we request the address corresponding to a given latitude/longitude and specify that the content is in JSON format.
siteapi.cloud.huawei.com/mapApi/v1/siteService/reverseGeocode?key=APIKEY
Request model:
Code:
public class MyLocation
{
public double Lat { get; set; }
public double Lng { get; set; }
}
public class ReverseGeocodeRequest
{
public MyLocation Location { get; set; }
}
Note that the JSON response contains three root elements:
“returnCode”: For details, please refer to Result Codes.
“returnDesc”: description
“sites” contains an array of geocoded address information
Generally, only one entry in the “sites” array is returned for address lookups, though the geocoder may return several results when address queries are ambiguous.
Add the following code to our MapInfoWindowAdapter, where we get the result from the reverse geocode API and set the UI elements.
Code:
private async void btnInfoWindow_ClickAsync(object sender, System.EventArgs e)
{
addressLayout = activity.LayoutInflater.Inflate(Resource.Layout.reverse_alert_layout, null);
GeocodeManager geocodeManager = new GeocodeManager(activity);
var addressResult = await geocodeManager.ReverseGeocode(selectedCoordinates.LatLng.Latitude, selectedCoordinates.LatLng.Longitude);
if (addressResult.ReturnCode != 0)
return;
var address = addressResult.Sites.FirstOrDefault();
var txtAddress = addressLayout.FindViewById<TextView>(Resource.Id.txtAddress);
var txtRadius = addressLayout.FindViewById<TextView>(Resource.Id.txtRadius);
txtAddress.Text = address.FormatAddress;
txtRadius.Text = selectedCoordinates.Radius.ToString();
AlertDialog.Builder builder = new AlertDialog.Builder(activity);
builder.SetView(addressLayout);
builder.SetTitle(address.Name);
builder.SetPositiveButton("Save", (s, arg) =>
{
selectedCoordinates.Conversion = GetSelectedConversion();
GeofenceManager geofenceManager = new GeofenceManager(activity);
geofenceManager.AddGeofences(selectedCoordinates);
});
builder.SetNegativeButton("Cancel", (s, arg) => { builder.Dispose(); });
AlertDialog alert = builder.Create();
alert.Show();
}
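The Save handler above calls a GetSelectedConversion helper to read which geofence events the user selected. Its exact implementation depends on your reverse_alert_layout, so treat the following as a hedged sketch only: it assumes the layout contains check boxes with hypothetical ids cbEnter, cbExit, and cbDwell, and that the Xamarin binding exposes the conversion constants as Geofence.EnterGeofenceConversion, Geofence.ExitGeofenceConversion, and Geofence.DwellGeofenceConversion.
Code:
private int GetSelectedConversion()
{
// Combine the selected geofence events into a single conversion bit mask.
int conversion = 0;
if (addressLayout.FindViewById<CheckBox>(Resource.Id.cbEnter).Checked)
conversion |= Geofence.EnterGeofenceConversion;
if (addressLayout.FindViewById<CheckBox>(Resource.Id.cbExit).Checked)
conversion |= Geofence.ExitGeofenceConversion;
if (addressLayout.FindViewById<CheckBox>(Resource.Id.cbDwell).Checked)
conversion |= Geofence.DwellGeofenceConversion;
return conversion;
}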
Now, after selecting the conversion type, we can complete the process by pressing the Save button in the dialog, which calls the AddGeofences method of the GeofenceManager class.
Code:
public void AddGeofences(GeofenceModel geofenceModel)
{
//Set parameters
geofenceModel.Id = Guid.NewGuid().ToString();
if (geofenceModel.Conversion == 5) //Expiration value that indicates the geofence should never expire.
geofenceModel.Timeout = Geofence.GeofenceNeverExpire;
else
geofenceModel.Timeout = 10000;
List<IGeofence> geofenceList = new List<IGeofence>();
//Geofence Service
GeofenceService geofenceService = LocationServices.GetGeofenceService(activity);
PendingIntent pendingIntent = CreatePendingIntent();
GeofenceBuilder somewhereBuilder = new GeofenceBuilder()
.SetUniqueId(geofenceModel.Id)
.SetValidContinueTime(geofenceModel.Timeout)
.SetRoundArea(geofenceModel.LatLng.Latitude, geofenceModel.LatLng.Longitude, geofenceModel.Radius)
.SetDwellDelayTime(10000)
.SetConversions(geofenceModel.Conversion);
//Create geofence request
geofenceList.Add(somewhereBuilder.Build());
GeofenceRequest geofenceRequest = new GeofenceRequest.Builder()
.CreateGeofenceList(geofenceList)
.Build();
//Register geofence
var geoTask = geofenceService.CreateGeofenceList(geofenceRequest, pendingIntent);
geoTask.AddOnSuccessListener(new CreateGeoSuccessListener(activity));
geoTask.AddOnFailureListener(new CreateGeoFailListener(activity));
}
In the AddGeofences method, we set the geofence request parameters with GeofenceBuilder: the selected conversion, a unique ID, and a timeout that depends on the conversion. Next, we create a GeofenceBroadcastReceiver and display a toast message when a geofence event occurs.
Code:
[BroadcastReceiver(Enabled = true)]
[IntentFilter(new[] { "com.huawei.hms.geofence.ACTION_PROCESS_ACTIVITY" })]
class GeofenceBroadcastReceiver : BroadcastReceiver
{
public static readonly string ActionGeofence = "com.huawei.hms.geofence.ACTION_PROCESS_ACTIVITY";
public override void OnReceive(Context context, Intent intent)
{
if (intent != null)
{
var action = intent.Action;
if (action == ActionGeofence)
{
GeofenceData geofenceData = GeofenceData.GetDataFromIntent(intent);
if (geofenceData != null)
{
Toast.MakeText(context, "Geofence triggered: " + geofenceData.ConvertingLocation.Latitude +"\n" + geofenceData.ConvertingLocation.Longitude + "\n" + geofenceData.Conversion.ToConversionName(), ToastLength.Long).Show();
}
}
}
}
}
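The AddGeofences method above also calls a CreatePendingIntent helper, which wires this broadcast receiver to the geofence service. In case it is not already defined elsewhere in your GeofenceManager, a minimal sketch (using the same activity field that AddGeofences uses) could look like this:
Code:
private PendingIntent CreatePendingIntent()
{
// Target the GeofenceBroadcastReceiver with the action it filters on.
Intent intent = new Intent(activity, typeof(GeofenceBroadcastReceiver));
intent.SetAction(GeofenceBroadcastReceiver.ActionGeofence);
return PendingIntent.GetBroadcast(activity, 0, intent, PendingIntentFlags.UpdateCurrent);
}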
After that, in CreateGeoSuccessListener and CreateGeoFailListener, which implement IOnSuccessListener and IOnFailureListener respectively, we display a toast message to the user like this:
Code:
public class CreateGeoFailListener : Java.Lang.Object, IOnFailureListener
{
private readonly Context mainActivity;
public CreateGeoFailListener(Context context)
{
mainActivity = context;
}
public void OnFailure(Java.Lang.Exception ex)
{
Toast.MakeText(mainActivity, "Geofence request failed: " + GeofenceErrorCodes.GetErrorMessage((ex as ApiException).StatusCode), ToastLength.Long).Show();
}
}
public class CreateGeoSuccessListener : Java.Lang.Object, IOnSuccessListener
{
private readonly Context mainActivity;
public CreateGeoSuccessListener(Context context)
{
mainActivity = context;
}
public void OnSuccess(Java.Lang.Object data)
{
Toast.MakeText(mainActivity, "Geofence request successful", ToastLength.Long).Show();
}
}
Set geofence location using Nearby Search
On the main layout, when the user clicks the Search Nearby Places button, a search dialog appears.
Create search_alert_layout.xml with a search input. In MainActivity, handle the click event of that button and open an alert dialog whose view is set to search_alert_layout, then perform a nearby search when the Search button is clicked:
Code:
private void btnGeoWithAddress_Click(object sender, EventArgs e)
{
search_view = base.LayoutInflater.Inflate(Resource.Layout.search_alert_layout, null);
AlertDialog.Builder builder = new AlertDialog.Builder(this);
builder.SetView(search_view);
builder.SetTitle("Search Location");
builder.SetNegativeButton("Cancel", (s, arg) => { builder.Dispose(); });
search_view.FindViewById<Button>(Resource.Id.btnSearch).Click += btnSearchClicked;
alert = builder.Create();
alert.Show();
}
private void btnSearchClicked(object sender, EventArgs e)
{
string searchText = search_view.FindViewById<TextView>(Resource.Id.txtSearch).Text;
GeocodeManager geocodeManager = new GeocodeManager(this);
geocodeManager.NearbySearch(CurrentPosition, searchText);
}
We pass the search text and the current location into the GeocodeManager NearbySearch method as parameters. Now we need to modify the GeocodeManager class and add the NearbySearch method to it.
Code:
public void NearbySearch(LatLng currentLocation, string searchText)
{
ISearchService searchService = SearchServiceFactory.Create(activity, Android.Net.Uri.Encode("YOUR_API_KEY"));
NearbySearchRequest nearbySearchRequest = new NearbySearchRequest();
nearbySearchRequest.Query = searchText;
nearbySearchRequest.Language = "en";
nearbySearchRequest.Location = new Coordinate(currentLocation.Latitude, currentLocation.Longitude);
nearbySearchRequest.Radius = (Integer)2000;
nearbySearchRequest.PageIndex = (Integer)1;
nearbySearchRequest.PageSize = (Integer)5;
nearbySearchRequest.PoiType = LocationType.Address;
searchService.NearbySearch(nearbySearchRequest, new NearbySearchResultListener(activity as MainActivity));
}
And to handle the result, we must create a listener class that implements the ISearchResultListener interface.
Code:
public class NearbySearchResultListener : Java.Lang.Object, ISearchResultListener
{
private readonly MainActivity context;
public NearbySearchResultListener(MainActivity context)
{
this.context = context;
}
public void OnSearchError(SearchStatus status)
{
Toast.MakeText(context, "Error Code: " + status.ErrorCode + " Error Message: " + status.ErrorMessage, ToastLength.Long).Show();
}
public void OnSearchResult(Java.Lang.Object results)
{
NearbySearchResponse nearbySearchResponse = (NearbySearchResponse)results;
if (nearbySearchResponse != null && nearbySearchResponse.TotalCount > 0)
context.SetSearchResultOnMap(nearbySearchResponse.Sites);
}
}
In the OnSearchResult method, a NearbySearchResponse object is returned. We will add a marker to the map for each site in this response.
In MainActivity, create a method named SetSearchResultOnMap that takes an IList<Site> parameter and adds multiple markers to the map.
Code:
public void SetSearchResultOnMap(IList<Com.Huawei.Hms.Site.Api.Model.Site> sites)
{
hMap.Clear();
if (searchMarkers != null && searchMarkers.Count > 0)
foreach (var item in searchMarkers)
item.Remove();
searchMarkers = new List<Marker>();
for (int i = 0; i < sites.Count; i++)
{
MarkerOptions marker1Options = new MarkerOptions()
.InvokePosition(new LatLng(sites[i].Location.Lat, sites[i].Location.Lng))
.InvokeTitle(sites[i].Name).Clusterable(true);
hMap.SetInfoWindowAdapter(new MapInfoWindowAdapter(this));
var marker1 = hMap.AddMarker(marker1Options);
searchMarkers.Add(marker1);
RepositionMapCamera(sites[i].Location.Lat, sites[i].Location.Lng);
}
hMap.SetMarkersClustering(true);
alert.Dismiss();
}
We add the markers as we did above, but here we also call SetMarkersClustering(true) to consolidate the markers into clusters when the map is zoomed out.
You can download the source code from below:
github.com/stugcearar/HMSCore-Xamarin-Android-Samples/tree/master/LocationKit/HMS_Geofence
Also if you have any questions, ask away in Huawei Developer Forums.
Errors
If the location permission is set to "Allow only while using the app" instead of "Allow all the time", the exception below will be thrown.
int GEOFENCE_INSUFFICIENT_PERMISSION
Insufficient permission to perform geofence-related operations.
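To avoid this error on Android 10 and later, you can check for background location access at runtime and ask the user to grant "Allow all the time" before registering geofences. A minimal sketch, assuming the check runs in MainActivity and that ACCESS_BACKGROUND_LOCATION is already declared in the manifest, might look like this:
Code:
private void CheckBackgroundLocationPermission()
{
// Background location is a separate runtime permission starting with Android 10 (Q).
if (Build.VERSION.SdkInt >= BuildVersionCodes.Q
&& CheckSelfPermission(Android.Manifest.Permission.AccessBackgroundLocation) != Permission.Granted)
{
// 100 is an arbitrary request code; handle the result in OnRequestPermissionsResult.
RequestPermissions(new string[] { Android.Manifest.Permission.AccessBackgroundLocation }, 100);
}
}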
You can see all result codes for the Location service, including error codes, here.
You can find detailed result codes for geofence requests here.
Introduction
The use of augmented reality (AR) is increasing every day in many areas, from shopping to games and from design to education. If we ask what augmented reality is, Wikipedia gives exactly the following answer.
What is the AR?
Augmented reality(AR) is an interactive experience of a real-world environment where the objects that reside in the real world are enhanced by computer-generated perceptual information, sometimes across multiple sensory modalities, including visual, auditory, haptic, somatosensory and olfactory.
However, today we will talk about the advantages of HUAWEI AR Engine, the augmented reality SDK offered by HUAWEI, rather than this classic definition. We will also discuss how it differs from HUAWEI ML Kit, which looks similar but serves a different purpose, and we will develop a demo application using HUAWEI AR Engine. In this way, we will learn how to easily integrate augmented reality into our application.
What is the HUAWEI AR Engine?
HUAWEI AR Engine is a platform for building augmented reality (AR) apps on Android smartphones. It is based on the HiSilicon chipset and integrates AR core algorithms to provide basic AR capabilities such as motion tracking, environment tracking, body tracking, and face tracking, allowing our app to bridge the virtual world with the real world for a brand-new, visually interactive user experience. AR Engine accurately understands the real environment and provides virtual-physical convergence capabilities for our applications.
AR Engine Advantages
Normally, integrating augmented reality features into an application is a complicated and laborious task. To make this complex work easier, companies offer SDKs: Apple ARKit, Google ARCore, and HUAWEI AR Engine, which is our main topic today. However, there are differences between these SDKs in performance, capabilities, and supported devices.
For example, while Google ARCore does not support face tracking and human body tracking, AR Engine and ARKit do. Also, while AR Engine and ARKit support hand gestures, ARCore does not.
Another advantage of HUAWEI AR Engine is that it enables your device to understand how people move. HUAWEI AR Engine can assist in placing a virtual object or applying a special effect on a hand by locating hand positions and recognizing specific gestures. With the depth component, the hand skeleton tracking capability can track 21 hand skeleton points to implement precise interactive controls and special effect overlays. Regarding body recognition, the capability can track 23 body skeleton points with specific names (Left Hand, etc.) to detect human posture in real time. AR Engine also supports use with third-party applications and the Depth API. In addition to all of this, HUAWEI AR Engine supports these features for both the front and rear cameras, and a feature that provides directions to certain locations is planned for the near future.
With the AR Engine, HUAWEI mobile phones provide interaction capabilities such as face, gesture, and body recognition, and more than 240 APIs, in addition to the basic motion tracking and environment tracking capabilities.
Differences from HUAWEI ML Kit
HUAWEI AR Engine body tracking and ML Kit skeleton detection may look the same, but there is quite a difference between them. ML Kit provides general-purpose capabilities, while AR Engine tracks skeleton information within an AR scenario. In AR Engine, skeleton tracking and motion tracking are both enabled, so AR Engine also has pose information from the coordinate system; the service in ML Kit cannot provide this, because the two services serve different purposes.
The service in AR Engine is used to create AR apps, while the service in ML Kit can only track the skeleton in an image, in the smartphone's coordinate system. The two services are implemented in different ways, and their underlying models are different as well.
● ● ●
Demo App Development
We will create a simple demo application using HUAWEI AR Engine's body tracking capability. In this demo application, I will try to draw lines that represent the body skeleton on the human body viewed by the camera. First, you need to meet the software and hardware requirements.
Hardware Requirements
The current version of HUAWEI AR Engine supports only HUAWEI devices. So, you need a HUAWEI phone that supports HUAWEI AR Engine, can be connected to a computer via a USB cable, and has a properly working camera. (You can find the list of supported devices in the HUAWEI AR Engine documentation on HUAWEI Developers.)
Software Requirements
Java JDK (1.8 or later).
Android Studio (3.1 or later).
Latest HUAWEI AR Engine SDK, which is available on HUAWEI Developers.
Latest HUAWEI AR Engine APK, which is available in HUAWEI AppGallery and has been installed on the phone.
After meeting the requirements, you need to add the HUAWEI AR Engine SDK dependency and the graphics library dependency used for OpenGL rendering to your app-level build.gradle file:
Code:
dependencies{
//HUAWEI AR Engine SDK dependency
implementation 'com.huawei.hms:arenginesdk:2.12.0.1'
//opengl graphics library dependency
implementation 'de.javagl:obj:0.3.0'
}
To use the camera, you need to add the camera permission to your AndroidManifest.xml file.
Code:
<uses-permission android:name="android.permission.CAMERA" />
Now we are ready to develop our app. Before starting development, we should understand the general process of using the HUAWEI AR Engine SDK. Our demo application has to follow the steps shown in the figure.
Note: The general AR Engine usage process follows the steps in the figure; the development steps below do not map to them one-to-one.
Now that we have looked at the general process of AR Engine, we can continue with development.
Note: If you get confused during these steps, take another look at the figure after development is complete and you will fully understand the general usage process of the HUAWEI AR Engine SDK.
First, we need to create a utility class for the body rendering shaders. We will use this class to create the vertex and fragment shaders used for body rendering. (We will take advantage of OpenGL ES functions to create the shaders.)
Code:
import android.opengl.GLES20;
import android.util.Log;
/**
* This class provides code and programs related to body rendering shader.
*/
class BodyShaderUtil {
private static final String TAG = BodyShaderUtil.class.getSimpleName();
/**
* Newline character.
*/
public static final String LS = System.lineSeparator();
/**
* Code for the vertex shader.
*/
public static final String BODY_VERTEX =
"uniform vec4 inColor;" + LS
+ "attribute vec4 inPosition;" + LS
+ "uniform float inPointSize;" + LS
+ "varying vec4 varColor;" + LS
+ "uniform mat4 inProjectionMatrix;" + LS
+ "uniform float inCoordinateSystem;" + LS
+ "void main() {" + LS
+ " vec4 position = vec4(inPosition.xyz, 1.0);" + LS
+ " if (inCoordinateSystem == 2.0) {" + LS
+ " position = inProjectionMatrix * position;" + LS
+ " }" + LS
+ " gl_Position = position;" + LS
+ " varColor = inColor;" + LS
+ " gl_PointSize = inPointSize;" + LS
+ "}";
/**
* Code for the segment shader.
*/
public static final String BODY_FRAGMENT =
"precision mediump float;" + LS
+ "varying vec4 varColor;" + LS
+ "void main() {" + LS
+ " gl_FragColor = varColor;" + LS
+ "}";
private BodyShaderUtil() {
}
/**
* Create a shader.
*
* @return Shader program.
*/
static int createGlProgram() {
int vertex = loadShader(GLES20.GL_VERTEX_SHADER, BODY_VERTEX);
if (vertex == 0) {
return 0;
}
int fragment = loadShader(GLES20.GL_FRAGMENT_SHADER, BODY_FRAGMENT);
if (fragment == 0) {
return 0;
}
int program = GLES20.glCreateProgram();
if (program != 0) {
GLES20.glAttachShader(program, vertex);
GLES20.glAttachShader(program, fragment);
GLES20.glLinkProgram(program);
int[] linkStatus = new int[1];
GLES20.glGetProgramiv(program, GLES20.GL_LINK_STATUS, linkStatus, 0);
if (linkStatus[0] != GLES20.GL_TRUE) {
Log.e(TAG, "Could not link program " + GLES20.glGetProgramInfoLog(program));
GLES20.glDeleteProgram(program);
program = 0;
}
}
return program;
}
private static int loadShader(int shaderType, String source) {
int shader = GLES20.glCreateShader(shaderType);
if (0 != shader) {
GLES20.glShaderSource(shader, source);
GLES20.glCompileShader(shader);
int[] compiled = new int[1];
GLES20.glGetShaderiv(shader, GLES20.GL_COMPILE_STATUS, compiled, 0);
if (compiled[0] == 0) {
Log.e(TAG, "glError: Could not compile shader " + shaderType);
Log.e(TAG, "glError: " + GLES20.glGetShaderInfoLog(shader));
GLES20.glDeleteShader(shader);
shader = 0;
}
}
return shader;
}
}
Then we need to create an interface for rendering body-related AR data. We will implement this interface in the display classes, and we will call the overridden methods of the implementing classes from the BodyRenderManager class, which will be created in the onCreate method of our activity.
You can see that we pass a Collection of ARBody objects to the onDrawFrame method. There are two reasons for this. The first is that HUAWEI AR Engine can identify two human bodies at a time by default, so it always returns two body objects. The second is that we will draw the body skeleton in the overridden onDrawFrame methods. We use the HUAWEI ARBody class because it returns the tracking result during body skeleton tracking, including the body skeleton data that will be used for drawing.
Code:
import com.huawei.hiar.ARBody;
import java.util.Collection;
/**
* Rendering body AR type related data.
*/
interface BodyRelatedDisplay {
/**
* Init render.
*/
void init();
/**
* Render objects, call per frame.
*
* @param bodies ARBodies.
* @param projectionMatrix Camera projection matrix.
*/
void onDrawFrame(Collection<ARBody> bodies, float[] projectionMatrix);
}
Now we need to create display classes to pass the data to OpenGL ES.
The body skeleton display class:
(We will use this class to pass the skeleton data to OpenGL ES, which renders it and displays it on the screen.)
Code:
import android.opengl.GLES20;
import com.huawei.hiar.ARBody;
import com.huawei.hiar.ARCoordinateSystemType;
import com.huawei.hiar.ARTrackable;
import java.nio.FloatBuffer;
import java.util.Collection;
/**
* Obtain and pass the skeleton data to openGL ES, which will render the data and displays it on the screen.
*/
public class BodySkeletonDisplay implements BodyRelatedDisplay {
private static final String TAG = BodySkeletonDisplay.class.getSimpleName();
// Number of bytes occupied by each 3D coordinate. Float data occupies 4 bytes.
// Each skeleton point represents a 3D coordinate.
private static final int BYTES_PER_POINT = 4 * 3;
private static final int INITIAL_POINTS_SIZE = 150;
private static final float DRAW_COORDINATE = 2.0f;
private int mVbo;
private int mVboSize;
private int mProgram;
private int mPosition;
private int mProjectionMatrix;
private int mColor;
private int mPointSize;
private int mCoordinateSystem;
private int mNumPoints = 0;
private int mPointsNum = 0;
private FloatBuffer mSkeletonPoints;
/**
* Create a body skeleton shader on the GL thread.
* This method is called when {@link BodyRenderManager#onSurfaceCreated}.
*/
@Override
public void init() {
ShaderUtil.checkGlError(TAG, "Init start.");
int[] buffers = new int[1];
GLES20.glGenBuffers(1, buffers, 0);
mVbo = buffers[0];
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVbo);
mVboSize = INITIAL_POINTS_SIZE * BYTES_PER_POINT;
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, mVboSize, null, GLES20.GL_DYNAMIC_DRAW);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
ShaderUtil.checkGlError(TAG, "Before create gl program.");
createProgram();
ShaderUtil.checkGlError(TAG, "Init end.");
}
private void createProgram() {
ShaderUtil.checkGlError(TAG, "Create gl program start.");
mProgram = BodyShaderUtil.createGlProgram();
mColor = GLES20.glGetUniformLocation(mProgram, "inColor");
mPosition = GLES20.glGetAttribLocation(mProgram, "inPosition");
mPointSize = GLES20.glGetUniformLocation(mProgram, "inPointSize");
mProjectionMatrix = GLES20.glGetUniformLocation(mProgram, "inProjectionMatrix");
mCoordinateSystem = GLES20.glGetUniformLocation(mProgram, "inCoordinateSystem");
ShaderUtil.checkGlError(TAG, "Create gl program end.");
}
private void updateBodySkeleton() {
ShaderUtil.checkGlError(TAG, "Update Body Skeleton data start.");
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVbo);
mNumPoints = mPointsNum;
if (mVboSize < mNumPoints * BYTES_PER_POINT) {
while (mVboSize < mNumPoints * BYTES_PER_POINT) {
// If the size of VBO is insufficient to accommodate the new point cloud, resize the VBO.
mVboSize *= 2;
}
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, mVboSize, null, GLES20.GL_DYNAMIC_DRAW);
}
GLES20.glBufferSubData(GLES20.GL_ARRAY_BUFFER, 0, mNumPoints * BYTES_PER_POINT, mSkeletonPoints);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
ShaderUtil.checkGlError(TAG, "Update Body Skeleton data end.");
}
/**
* Update the node data and draw by using OpenGL.
* This method is called when {@link BodyRenderManager#onDrawFrame}.
*
* @param bodies Body data.
* @param projectionMatrix projection matrix.
*/
@Override
public void onDrawFrame(Collection<ARBody> bodies, float[] projectionMatrix) {
for (ARBody body : bodies) {
if (body.getTrackingState() == ARTrackable.TrackingState.TRACKING) {
float coordinate = 1.0f;
if (body.getCoordinateSystemType() == ARCoordinateSystemType.COORDINATE_SYSTEM_TYPE_3D_CAMERA) {
coordinate = DRAW_COORDINATE;
}
findValidSkeletonPoints(body);
updateBodySkeleton();
drawBodySkeleton(coordinate, projectionMatrix);
}
}
}
private void drawBodySkeleton(float coordinate, float[] projectionMatrix) {
ShaderUtil.checkGlError(TAG, "Draw body skeleton start.");
GLES20.glUseProgram(mProgram);
GLES20.glEnableVertexAttribArray(mPosition);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVbo);
// The size of the vertex attribute is 4, and each vertex has four coordinate components.
GLES20.glVertexAttribPointer(
mPosition, 4, GLES20.GL_FLOAT, false, BYTES_PER_POINT, 0);
GLES20.glUniform4f(mColor, 0.0f, 0.0f, 1.0f, 1.0f);
GLES20.glUniformMatrix4fv(mProjectionMatrix, 1, false, projectionMatrix, 0);
// Set the size of the skeleton points.
GLES20.glUniform1f(mPointSize, 30.0f);
GLES20.glUniform1f(mCoordinateSystem, coordinate);
GLES20.glDrawArrays(GLES20.GL_POINTS, 0, mNumPoints);
GLES20.glDisableVertexAttribArray(mPosition);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
ShaderUtil.checkGlError(TAG, "Draw body skeleton end.");
}
private void findValidSkeletonPoints(ARBody arBody) {
int index = 0;
int[] isExists;
int validPointNum = 0;
float[] points;
float[] skeletonPoints;
// Determine whether the data returned by the algorithm is 3D human
// skeleton data or 2D human skeleton data, and obtain valid skeleton points.
if (arBody.getCoordinateSystemType() == ARCoordinateSystemType.COORDINATE_SYSTEM_TYPE_3D_CAMERA) {
isExists = arBody.getSkeletonPointIsExist3D();
points = new float[isExists.length * 3];
skeletonPoints = arBody.getSkeletonPoint3D();
} else {
isExists = arBody.getSkeletonPointIsExist2D();
points = new float[isExists.length * 3];
skeletonPoints = arBody.getSkeletonPoint2D();
}
// Save the three coordinates of each joint point(each point has three coordinates).
for (int i = 0; i < isExists.length; i++) {
if (isExists[i] != 0) {
points[index++] = skeletonPoints[3 * i];
points[index++] = skeletonPoints[3 * i + 1];
points[index++] = skeletonPoints[3 * i + 2];
validPointNum++;
}
}
mSkeletonPoints = FloatBuffer.wrap(points);
mPointsNum = validPointNum;
}
}
The body skeleton line display class:
(And this class will be used to pass the skeleton point connection data to OpenGL ES for rendering on the screen)
Code:
import android.opengl.GLES20;
import com.huawei.hiar.ARBody;
import com.huawei.hiar.ARCoordinateSystemType;
import com.huawei.hiar.ARTrackable;
import java.nio.FloatBuffer;
import java.util.Collection;
/**
* Gets the skeleton point connection data and pass it to OpenGL ES for rendering on the screen.
*/
public class BodySkeletonLineDisplay implements BodyRelatedDisplay {
private static final String TAG = BodySkeletonLineDisplay.class.getSimpleName();
// Number of bytes occupied by each 3D coordinate. Float data occupies 4 bytes.
// Each skeleton point represents a 3D coordinate.
private static final int BYTES_PER_POINT = 4 * 3;
private static final int INITIAL_BUFFER_POINTS = 150;
private static final float COORDINATE_SYSTEM_TYPE_3D_FLAG = 2.0f;
private static final int LINE_POINT_RATIO = 6;
private int mVbo;
private int mVboSize = INITIAL_BUFFER_POINTS * BYTES_PER_POINT;
private int mProgram;
private int mPosition;
private int mProjectionMatrix;
private int mColor;
private int mPointSize;
private int mCoordinateSystem;
private int mNumPoints = 0;
private int mPointsLineNum = 0;
private FloatBuffer mLinePoints;
/**
* Constructor.
*/
BodySkeletonLineDisplay() {
}
/**
* Create a body skeleton line shader on the GL thread.
* This method is called when {@link BodyRenderManager#onSurfaceCreated}.
*/
@Override
public void init() {
ShaderUtil.checkGlError(TAG, "Init start.");
int[] buffers = new int[1];
GLES20.glGenBuffers(1, buffers, 0);
mVbo = buffers[0];
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVbo);
ShaderUtil.checkGlError(TAG, "Before create gl program.");
createProgram();
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, mVboSize, null, GLES20.GL_DYNAMIC_DRAW);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
ShaderUtil.checkGlError(TAG, "Init end.");
}
private void createProgram() {
ShaderUtil.checkGlError(TAG, "Create gl program start.");
mProgram = BodyShaderUtil.createGlProgram();
mPosition = GLES20.glGetAttribLocation(mProgram, "inPosition");
mColor = GLES20.glGetUniformLocation(mProgram, "inColor");
mPointSize = GLES20.glGetUniformLocation(mProgram, "inPointSize");
mProjectionMatrix = GLES20.glGetUniformLocation(mProgram, "inProjectionMatrix");
mCoordinateSystem = GLES20.glGetUniformLocation(mProgram, "inCoordinateSystem");
ShaderUtil.checkGlError(TAG, "Create gl program end.");
}
private void drawSkeletonLine(float coordinate, float[] projectionMatrix) {
ShaderUtil.checkGlError(TAG, "Draw skeleton line start.");
GLES20.glUseProgram(mProgram);
GLES20.glEnableVertexAttribArray(mPosition);
GLES20.glEnableVertexAttribArray(mColor);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVbo);
// Set the width of the rendered skeleton line.
GLES20.glLineWidth(18.0f);
// The size of the vertex attribute is 4, and each vertex has four coordinate components.
GLES20.glVertexAttribPointer(
mPosition, 4, GLES20.GL_FLOAT, false, BYTES_PER_POINT, 0);
GLES20.glUniform4f(mColor, 1.0f, 0.0f, 0.0f, 1.0f);
GLES20.glUniformMatrix4fv(mProjectionMatrix, 1, false, projectionMatrix, 0);
// Set the size of the points.
GLES20.glUniform1f(mPointSize, 100.0f);
GLES20.glUniform1f(mCoordinateSystem, coordinate);
GLES20.glDrawArrays(GLES20.GL_LINES, 0, mNumPoints);
GLES20.glDisableVertexAttribArray(mPosition);
GLES20.glDisableVertexAttribArray(mColor);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
ShaderUtil.checkGlError(TAG, "Draw skeleton line end.");
}
/**
* Rendering lines between body bones.
* This method is called when {@link BodyRenderManager#onDrawFrame}.
*
* @param bodies Bodies data.
* @param projectionMatrix Projection matrix.
*/
@Override
public void onDrawFrame(Collection<ARBody> bodies, float[] projectionMatrix) {
for (ARBody body : bodies) {
if (body.getTrackingState() == ARTrackable.TrackingState.TRACKING) {
float coordinate = 1.0f;
if (body.getCoordinateSystemType() == ARCoordinateSystemType.COORDINATE_SYSTEM_TYPE_3D_CAMERA) {
coordinate = COORDINATE_SYSTEM_TYPE_3D_FLAG;
}
updateBodySkeletonLineData(body);
drawSkeletonLine(coordinate, projectionMatrix);
}
}
}
/**
* Update body connection data.
*/
private void updateBodySkeletonLineData(ARBody body) {
findValidConnectionSkeletonLines(body);
ShaderUtil.checkGlError(TAG, "Update body skeleton line data start.");
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, mVbo);
mNumPoints = mPointsLineNum;
if (mVboSize < mNumPoints * BYTES_PER_POINT) {
while (mVboSize < mNumPoints * BYTES_PER_POINT) {
// If the storage space is insufficient, allocate double the space.
mVboSize *= 2;
}
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, mVboSize, null, GLES20.GL_DYNAMIC_DRAW);
}
GLES20.glBufferSubData(GLES20.GL_ARRAY_BUFFER, 0, mNumPoints * BYTES_PER_POINT, mLinePoints);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
ShaderUtil.checkGlError(TAG, "Update body skeleton line data end.");
}
private void findValidConnectionSkeletonLines(ARBody arBody) {
mPointsLineNum = 0;
int[] connections = arBody.getBodySkeletonConnection();
float[] linePoints = new float[LINE_POINT_RATIO * connections.length];
float[] coors;
int[] isExists;
if (arBody.getCoordinateSystemType() == ARCoordinateSystemType.COORDINATE_SYSTEM_TYPE_3D_CAMERA) {
coors = arBody.getSkeletonPoint3D();
isExists = arBody.getSkeletonPointIsExist3D();
} else {
coors = arBody.getSkeletonPoint2D();
isExists = arBody.getSkeletonPointIsExist2D();
}
// Filter out valid skeleton connection lines based on the returned results,
// which consist of indexes of two ends, for example, [p0,p1;p0,p3;p0,p5;p1,p2].
// The loop takes out the 3D coordinates of the end points of the valid connection
// line and saves them in sequence.
for (int j = 0; j < connections.length; j += 2) {
if (isExists[connections[j]] != 0 && isExists[connections[j + 1]] != 0) {
linePoints[mPointsLineNum * 3] = coors[3 * connections[j]];
linePoints[mPointsLineNum * 3 + 1] = coors[3 * connections[j] + 1];
linePoints[mPointsLineNum * 3 + 2] = coors[3 * connections[j] + 2];
linePoints[mPointsLineNum * 3 + 3] = coors[3 * connections[j + 1]];
linePoints[mPointsLineNum * 3 + 4] = coors[3 * connections[j + 1] + 1];
linePoints[mPointsLineNum * 3 + 5] = coors[3 * connections[j + 1] + 2];
mPointsLineNum += 2;
}
}
mLinePoints = FloatBuffer.wrap(linePoints);
}
}
For more details, you can check https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0202387404360340481&fid=0101187876626530001&channelname=HuoDong59&ha_source=xda
How much time does it take to integrate this service?