In the mobile Internet era, people are increasingly using mobile apps for a variety of different purposes, such as buying products online, hailing taxis, and much more. When using such an app, a user usually needs to manually enter their address for package delivery or search for an appropriate pick-up and drop-off location when they hail a taxi, which can be inconvenient.
To improve user experience, many apps nowadays allow users to select a point on the map and then use the selected point as the location, for example, for package delivery or getting on or off a taxi. Each location has a longitude-latitude coordinate that pinpoints its position precisely on the map. However, longitude-latitude coordinates are simply a string of numbers and provide little information to the average user. It would therefore be useful if there were a tool that apps could use to convert longitude-latitude coordinates into human-readable addresses.
Fortunately, the reverse geocoding function in HMS Core Location Kit can obtain the nearest address to a selected point on the map based on the longitude and latitude of the point. Reverse geocoding is the process of converting a location as described by geographic coordinates (longitude and latitude) to a human-readable address or place name, which is much more useful information for users. It permits the identification of nearby street addresses, places, and subdivisions such as neighborhoods, counties, states, and countries.
Generally, the reverse geocoding function can be used to obtain the nearest address to the current location of a device, show the address or place name when a user taps on the map, find the address of a geographic location, and more. For example, with reverse geocoding, an e-commerce app can show users the detailed address of a selected point on the map in the app; a ride-hailing or takeout delivery app can show the detailed address of a point that a user selects by dragging the map in the app or tapping the point on the map in the app, so that the user can select the address as the pick-up address or takeout delivery address; and an express delivery app can utilize reverse geocoding to show the locations of delivery vehicles based on the passed longitude-latitude coordinates, and intuitively display delivery points and delivery routes to users.
Bolstered by a powerful address parsing capability, the reverse geocoding function in this kit can display addresses of locations in accordance with local address formats with an accuracy as high as 90%. In addition, it supports 79 languages and boasts a parsing latency as low as 200 milliseconds.
Demo
The file below is a demo of the reverse geocoding function in this kit.
Preparations
Before getting started with the development, you will need to make the following preparations:
Register as a Huawei developer and complete identity verification on the HUAWEI Developers website. You can click here to find out the detailed registration and identity verification procedure.
Create a project and then create an app in the project in AppGallery Connect. Before doing so, you must have a Huawei developer account and complete identity verification.
Generate a signing certificate fingerprint and configure it in AppGallery Connect. The signing certificate fingerprint is used to verify the authenticity of an app. Before releasing an app, you must generate a signing certificate fingerprint locally based on the signing certificate and configure it in AppGallery Connect.
Integrate the Location SDK into your app. If you are using Android Studio, you can integrate the SDK via the Maven repository.
Here, I won't be describing how to generate and configure a signing certificate fingerprint and integrate the SDK. You can click here to learn about the detailed procedure.
Development Procedure
After making relevant preparations, you can perform the steps below to use the reverse geocoding service in your app. Before using the service, ensure that you have installed HMS Core (APK) on your device.
1. Create a geocoding service client.
In order to call geocoding APIs, you first need to create a GeocoderService instance in the onClick() method of GeocoderActivity in your project. The sample code is as follows:
Code:
// Set the language and country/region in which the returned addresses should be expressed.
Locale locale = new Locale("zh", "CN");
// Obtain a GeocoderService instance for the current activity and locale.
GeocoderService geocoderService = LocationServices.getGeocoderService(GeocoderActivity.this, locale);
2. Obtain the reverse geocoding information.
To empower your app to obtain the reverse geocoding information, you need to call the getFromLocation() method of the GeocoderService object in your app. This method will return a List<HWLocation> object containing the location information based on the set GetFromLocationRequest object.
a. Set reverse geocoding request parameters.
There are three request parameters in the GetFromLocationRequest object, which indicate the latitude, longitude, and maximum number of returned results respectively. The sample code is as follows:
Code:
// Parameter 1: latitude
// Parameter 2: longitude
// Parameter 3: maximum number of returned results
// Pass valid longitude-latitude coordinates. If the coordinates are invalid, no geographical information will be returned. Outside China, pass longitude-latitude coordinates located outside China and ensure that the coordinates are correct.
GetFromLocationRequest getFromLocationRequest = new GetFromLocationRequest(39.985071, 116.501717, 5);
b. Call the getFromLocation() method to obtain reverse geocoding information.
The obtained reverse geocoding information will be returned in a List<HWLocation> object. You can add listeners using the addOnSuccessListener() and addOnFailureListener() methods, and obtain the task execution result using the onSuccess() and onFailure() methods.
The sample code is as follows:
Code:
private void getReverseGeocoding() {
    // Initialize the GeocoderService object if it has not been created yet.
    if (geocoderService == null) {
        geocoderService = LocationServices.getGeocoderService(this, new Locale("zh", "CN"));
    }
    geocoderService.getFromLocation(getFromLocationRequest)
        .addOnSuccessListener(new OnSuccessListener<List<HWLocation>>() {
            @Override
            public void onSuccess(List<HWLocation> hwLocation) {
                // TODO: Define callback for API call success.
                if (null != hwLocation && hwLocation.size() > 0) {
                    Log.d(TAG, "hwLocation data set quantity: " + hwLocation.size());
                    Log.d(TAG, "CountryName: " + hwLocation.get(0).getCountryName());
                    Log.d(TAG, "City: " + hwLocation.get(0).getCity());
                    Log.d(TAG, "Street: " + hwLocation.get(0).getStreet());
                }
            }
        })
        .addOnFailureListener(new OnFailureListener() {
            @Override
            public void onFailure(Exception e) {
                // TODO: Define callback for API call failure.
            }
        });
}
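If you want to show the result to users instead of just logging it, you can assemble a display string from the returned fields. The snippet below is a minimal sketch that reuses only the HWLocation getters from the sample above; the helper name formatDisplayAddress is my own.
Code:
// Hypothetical helper: builds a one-line display address from a returned HWLocation,
// using only the getters already shown in the sample above.
private String formatDisplayAddress(HWLocation location) {
    return location.getStreet() + ", " + location.getCity() + ", " + location.getCountryName();
}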
Congratulations, your app is now able to use the reverse geocoding function to obtain the address of a location based on its longitude and latitude.
Conclusion
The quick development and popularization of the mobile Internet has caused many changes to our daily lives. One such change is that more and more people are using mobile apps on a daily basis, for example, to buy daily necessities or hail a taxi. These tasks traditionally require users to manually enter the delivery address or pick-up and drop-off location addresses. Manually entering such addresses is inconvenient and prone to mistakes.
To solve this issue, many apps allow users to select a point on the in-app map as the delivery address or the address for getting on or off a taxi. However, the point on the map is usually expressed as a set of longitude-latitude coordinates, which most users will find hard to understand.
As described in this post, my app resolves this issue using the reverse geocoding function, which has proven to be a very effective way of obtaining human-readable addresses based on longitude-latitude coordinates. If you are facing similar issues, give it a try to find out whether it is what your app needs.
Related
For more articles like this one, you can visit HUAWEI Developer Forum and Medium.
All About Maps
Let's talk about maps. I started an open source project called All About Maps (https://github.com/ulusoyca/AllAboutMaps). In this project I aim to demonstrate how we can implement the same map-related use cases with different map providers in one codebase. We will use Mapbox Maps, Google Maps, and Huawei HMS Map Kit. This project uses the following libraries and patterns:
MVVM pattern with Android Jetpack Libraries
Kotlin Coroutines for asynchronous operations
Dagger2 Dependency Injection
Android Clean Architecture
Note: The codebase changes over time. You can always find the latest code in the develop branch. The code as it was when this article was written can be seen by choosing the tag episode_1-parse-gpx:
https://github.com/ulusoyca/AllAboutMaps/tree/episode_1-parse-gpx/
Motivation
Why do we need maps in our apps? What are the features a developer would expect from a map SDK? Let's try to list some:
Showing a coordinate on a map with camera options (zoom, tilt, latitude, longitude, bearing)
Adding symbols, photos, polylines, and polygons to a map
Handling user gestures (click, pinch, and move events)
Showing maps with different map styles (Outdoor, Hybrid, Satellite, Winter, Dark, etc.)
Data visualization (heatmaps, charts, clusters, time-lapse effect)
Offline map visualization (providing map tiles without network connectivity)
Generating a snapshot image of a bounded region
We can probably add more items, but I believe this is the list of features that all map provider companies would most likely offer. Knowing that we can achieve the same tasks with different map providers, we should not create huge dependencies on any specific provider in our codebase. When a product owner (PO) tells developers to switch from Google Maps to Mapbox Maps or Huawei Maps, developers should never see it as a big deal. It is software development. Business as usual.
One might wonder why a PO would want to switch from one map provider to another. In many cases, the reason is not technical. For example, Google Play Services may not be available on some devices or in some regions, such as China. Another case is when a company X, which has a subscription to Mapbox, acquires a company Y that uses Google Maps; in this case, the transition to a single provider is more efficient. Changes in terms of service and pricing might be other motivations.
We need competition in the market! Let's be able to switch easily when needed. But how do dependencies make things worse? Problematic dependencies in a codebase are usually created by developing software as if there were no tomorrow. It is not always the developers' fault: tight schedules, anti-refactoring-minded teams, and disorganized planning can lead to careless coding and eventually to technical debt. In this project, I aim to show how we can encapsulate the import lines below, belonging to three different map providers, into the minimum number of classes with the fewest lines:
import com.huawei.hms.maps.*
import com.google.android.gms.maps.*
import com.mapbox.mapboxsdk.maps.*
It should be noted that the approach in this post is just one proposal. There are always alternative and possibly better implementations. In the end, as software developers, we should deliver our tasks time-efficiently, without over-engineering.
About the project
On the home page of the project you will see the list of tutorials. Since this is the first blog post, there is only one item for now. To make our life easier with RecyclerViews, I use the Epoxy library by Airbnb in the project. Once you click the buttons in the card, it will take you to the detail page. Using the bottom sheet, we can switch between map providers. Note that Huawei Map Kit requires a Huawei mobile phone.
In this first blog post, we will parse the GPX file of the 120 km route of the Cappadocia Ultra-Trail race and show the route and checkpoints (food stations) on the map. I finished this race in 23 hours 45 minutes, and you can read about my experience here (https://link.medium.com/uWmrWLAzR6). GPX is an open standard which contains route points that construct a polyline, and waypoints, which mark locations of interest. In this case, the waypoints represent the food and aid stations in the race. We will show the route with a polyline and the waypoints with markers on the map.
Architecture
Architecture is definitely not an overrated concept. Since the early days of Android, we have been searching for the architectural patterns that best suit Android development. We have heard of MVC, MVP, MVVM, and MVI, and many other patterns will emerge. Change and adaptation to new patterns is inevitable over time. We should keep in mind some basic and commonly accepted concepts like the SOLID principles, separation of concerns, maintainability, readability, testability, etc., so that we can switch between patterns easily when needed.
Nowadays, the widely accepted architecture in the Android community is modularization with Clean Architecture. If you have more time to invest, I would strongly suggest Joe Birch's clean architecture tutorials. As Joe suggests in his tutorials, we do not have to apply every rule to the letter; instead, we take whatever we feel is needed. Here is my take and how I modularized the All About Maps app:
Note that dependency injection with Dagger2 is at the core of this implementation. If you are not familiar with the concept, I strongly suggest reading Nimrod Dayan's Dagger2 tutorial, the best one in the wild Dagger2 world.
Domain Module
Many of us are excited to start implementing the UI to see results immediately, but we should patiently build our blocks. We shall start with the domain module, since that is where we put our business logic and define the entities and user interactions.
First question: What entities do we need for a Map app?
We don't have to add every entity at once. Since our first tutorial is about drawing polylines and symbols, we will need the following data:
LatLng: a class that holds a Latitude and a Longitude
Point: represents a geo-coordinate, with an optional altitude and name
RouteInfo: holds the points used to draw the route, plus the waypoints
Let's see the implementations:
Code:
inline class Latitude(val value: Float)
inline class Longitude(val value: Float)
Code:
data class LatLng(
    val latitude: Latitude,
    val longitude: Longitude
)
Code:
data class Point(
    val latitude: Latitude,
    val longitude: Longitude,
    val altitude: Float? = null,
    val name: String? = null
) {
    val latLng: LatLng
        get() = LatLng(latitude, longitude)
}
Code:
data class RouteInfo(
    val routePoints: List<Point> = emptyList(),
    val wayPoints: List<Point> = emptyList()
)
I could have used the Float primitive type for the Latitude and Longitude fields. However, I strongly suggest taking advantage of Kotlin inline classes. In my relatively long career of working on maps, I have spent hours on issues caused by mistakenly using longitude values for latitude.
Note that a LatLng class is available in all map SDKs. However, the domain module and the modules below it should use only our own LatLng, to prevent dependencies on map SDKs in those modules. In the app layer we can map our LatLng class to the corresponding classes:
Code:
import com.ulusoy.allaboutmaps.domain.entities.LatLng
import com.mapbox.mapboxsdk.geometry.LatLng as MapboxLatLng
import com.huawei.hms.maps.model.LatLng as HuaweiLatLng
import com.google.android.gms.maps.model.LatLng as GoogleLatLang
fun LatLng.toMapboxLatLng() = MapboxLatLng(
    latitude.value.toDouble(),
    longitude.value.toDouble()
)

fun LatLng.toHuaweiLatLng() = HuaweiLatLng(
    latitude.value.toDouble(),
    longitude.value.toDouble()
)

fun LatLng.toGoogleLatLng() = GoogleLatLang(
    latitude.value.toDouble(),
    longitude.value.toDouble()
)
Second question: What actions can a user trigger?
The domain module contains the use cases (interactors) that an application can perform to achieve goals based on user interactions. The code in this module is less likely to change compared to other modules. Business is business. For example, this application has one job for now: showing the route info with a polyline and markers. It can get the route info from a web server, a database, or, in this case, from an application resource file, which is a GPX file. Neither the app module nor the domain module cares where the route points and waypoints are retrieved from. It is not their concern. The concerns are separated.
Let's see the use case definition in our domain module:
Code:
class GetRouteInfoUseCase
@Inject constructor(
    private val routeInfoRepository: RouteInfoRepository
) {
    suspend operator fun invoke(): RouteInfo {
        return routeInfoRepository.getRouteInfo()
    }
}
Code:
interface RouteInfoRepository {
    suspend fun getRouteInfo(): RouteInfo
}
RouteInfoRepository is an interface that lives in the domain module and it is a contract between domain and datasource modules. Its concrete implementation lives in the datasource module.
Datasource Module
The datasource module is an abstraction world. Life here is based on interfaces. The domain module communicates with the datasource module through the repository interface; the datasource module then orchestrates the data flow in the repository class and returns the final value.
Here, the domain module asks for the route info. The datasource module decides what to return after retrieving data from different data sources. For the sake of simplicity, in this case we have only one datasource: a GPX parser. The route info is extracted from a GPX file. We don't know where or how. Let's see the code:
Here is the concrete implementation of RouteInfoRepository interface. Route info datasource is injected as constructor parameter to this class.
Code:
class RouteInfoDataRepository
@Inject constructor(
    @Named("GPX_DATA_SOURCE")
    private val gpxFileDatasource: RouteInfoDatasource
) : RouteInfoRepository {
    override suspend fun getRouteInfo(): RouteInfo {
        return gpxFileDatasource.parseGpxFile()
    }
}
Here is our one and only route info data source: GpxFileDatasource. It still doesn't know how to get the data from the GPX file. However, it knows where to get the data from, thanks to the GpxFileParser contract.
Code:
class GpxFileDatasource
@Inject constructor(
    private val gpxFileParser: GpxFileParser
) : RouteInfoDatasource {
    override suspend fun parseGpxFile(): RouteInfo {
        return gpxFileParser.parseGpxFile()
    }
}
What is a GPX file? How is it parsed? Where is the file located? Datasource doesn't care about these details. It only knows that the concrete implementation of GpxFileParser will return the RouteInfo. Here is the contract between the datasource and the concrete implementation:
Code:
interface GpxFileParser {
    suspend fun parseGpxFile(): RouteInfo
}
Is it already too confusing with so many abstractions around? Is it over-engineering? You might be right and choose to have fewer abstractions when you have a single datasource like in this case. However, in the real world, we have multiple datasources. Data is all around us. It may come from a web server, from a database, or from connected devices such as wearables. The benefit shows when things get more complicated with multiple datasources. Let's think through such a scenario.
The app asks for the route info through a use case class.
The domain module forwards the request to the datasource module.
The datasource module orchestrates the data in the repository class.
It first asks the web server (remote data source) for the route info. However, the user is offline, so the remote data source is not available.
Then it checks what we have locally by checking whether the route info is available in the database.
It is not available in the database, but we have a GPX file in our resource folder (I know it doesn't quite make sense, but it serves as an example).
The repository class asks the GPX parser to parse the file and return the desired RouteInfo data.
Too complicated? Implementing this scenario in the repository class is as easy as this:
Code:
class RouteInfoDataRepository
@Inject constructor(
    @Named("GPX_DATA_SOURCE")
    private val gpxFileDatasource: RouteInfoDatasource,
    @Named("REMOTE_DATA_SOURCE")
    private val remoteDatasource: RouteInfoDatasource,
    @Named("DATABASE_SOURCE")
    private val localDatasource: RouteInfoDatasource
) : RouteInfoRepository {
    override suspend fun getRouteInfo(): RouteInfo? {
        var routeInfo = remoteDatasource.parseGpxFile()
        if (routeInfo == null) {
            Timber.d("Route info is not available in remote source, now trying local database")
            routeInfo = localDatasource.parseGpxFile()
            if (routeInfo == null) {
                Timber.d("Route info is not available in local database. Let's hope we have a gpx file in the app resource folder")
                routeInfo = gpxFileDatasource.parseGpxFile()
            }
        }
        return routeInfo
    }
}
Thanks to Kotlin coroutines we can write these asynchronous operations sequentially.
For full content, you can visit HUAWEI Developer Forum.
Background
The real-time onscreen subtitle is a must-have function in an ordinary video app. However, developing such a function can prove costly for small- and medium-sized developers. And even when implemented, speech recognition is often prone to inaccuracy. Fortunately, there's a better way: HUAWEI ML Kit, which is remarkably easy to integrate, and makes real-time transcription an absolute breeze!
Introduction to ML Kit
ML Kit allows your app to leverage Huawei's longstanding machine learning prowess to apply cutting-edge artificial intelligence (AI) across a wide range of contexts. With Huawei's expertise built in, ML Kit is able to provide a broad array of easy-to-use machine learning capabilities, which serve as the building blocks for tomorrow's cutting-edge AI apps. ML Kit capabilities include those related to:
Text (including text recognition, document recognition, and ID card recognition)
Language/Voice (such as real-time/on-device translation, automatic speech recognition, and real-time transcription)
Image (such as image classification, object detection and tracking, and landmark recognition)
Face/Body (such as face detection, skeleton detection, liveness detection, and face verification)
Natural language processing (text embedding)
Custom model (including the on-device inference framework and model development tool)
Real-time transcription is required to implement the function mentioned above. Let's take a look at how this works in practice:
Now let's move on to how to integrate this service.
Integrating Real-Time Transcription
Steps
1. Registering as a Huawei developer on HUAWEI Developers
2. Creating an app
Create an app in AppGallery Connect. For details, see Getting Started with Android.
3. Enabling ML Kit
4. Integrating the HMS Core SDK.
Add the AppGallery Connect configuration file by completing the steps below:
Download and copy the agconnect-services.json file to the app directory of your Android Studio project.
Call setApiKey during app initialization.
To learn more, go to Adding the AppGallery Connect Configuration File.
5. Configuring the Maven repository address
Add build dependencies.
Import the real-time transcription SDK.
Code:
implementation 'com.huawei.hms:ml-computer-voice-realtimetranscription:2.2.0.300'
Add the AppGallery Connect plugin configuration.
Method 1: Add the following information under the declaration in the file header:
Code:
apply plugin: 'com.huawei.agconnect'
Method 2: Add the plugin configuration in the plugins block.
Code:
plugins {
    id 'com.android.application'
    // Add the following configuration:
    id 'com.huawei.agconnect'
}
Please refer to Integrating the Real-Time Transcription SDK to learn more.
Setting the cloud authentication information
When using on-cloud services of ML Kit, you can set the API key or access token (recommended) in either of the following ways:
Access token
You can use the following API to initialize the access token when the app is started. The access token does not need to be set again once initialized.
Code:
MLApplication.getInstance().setAccessToken("your access token");
API key
You can use the following API to initialize the API key when the app is started. The API key does not need to be set again once initialized.
Code:
MLApplication.getInstance().setApiKey("your ApiKey");
For details, see Notes on Using Cloud Authentication Information.
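Because the key or token only needs to be set once, a natural place for this call is a custom Application class. The sketch below is just one way to do it: the class name MyApplication is my own, you still need to register it in AndroidManifest.xml, and the MLApplication import path should be verified against your SDK version.
Code:
import android.app.Application;
// Verify this import path against the ML Kit SDK version you are using.
import com.huawei.hms.mlsdk.common.MLApplication;

public class MyApplication extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        // Set the API key once at app startup, as described above.
        MLApplication.getInstance().setApiKey("your ApiKey");
    }
}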
Code Development
Create and configure a speech recognizer.
Code:
MLSpeechRealTimeTranscriptionConfig config = new MLSpeechRealTimeTranscriptionConfig.Factory()
        // Set the language. Currently, this service supports Mandarin Chinese, English, and French.
        .setLanguage(MLSpeechRealTimeTranscriptionConstants.LAN_ZH_CN)
        // Punctuate the text recognized from the speech.
        .enablePunctuation(true)
        // Set the sentence offset.
        .enableSentenceTimeOffset(true)
        // Set the word offset.
        .enableWordTimeOffset(true)
        // Set the application scenario. MLSpeechRealTimeTranscriptionConstants.SCENES_SHOPPING indicates shopping, which is supported only for Chinese. Under this scenario, recognition for the name of Huawei products has been optimized.
        .setScenes(MLSpeechRealTimeTranscriptionConstants.SCENES_SHOPPING)
        .create();
MLSpeechRealTimeTranscription mSpeechRecognizer = MLSpeechRealTimeTranscription.getInstance();
MLSpeechRealTimeTranscription mSpeechRecognizer = MLSpeechRealTimeTranscription.getInstance();
Create a speech recognition result listener callback.
Code:
// Use the callback to implement the MLSpeechRealTimeTranscriptionListener API and methods in the API.
protected class SpeechRecognitionListener implements MLSpeechRealTimeTranscriptionListener {
    @Override
    public void onStartListening() {
        // The recorder starts to receive speech.
    }

    @Override
    public void onStartingOfSpeech() {
        // The user starts to speak, that is, the speech recognizer detects that the user starts to speak.
    }

    @Override
    public void onVoiceDataReceived(byte[] data, float energy, Bundle bundle) {
        // Return the original PCM stream and audio power to the user. This API is not running in the main thread, and the return result is processed in a sub-thread.
    }

    @Override
    public void onRecognizingResults(Bundle partialResults) {
        // Receive the recognized text from MLSpeechRealTimeTranscription.
    }

    @Override
    public void onError(int error, String errorMessage) {
        // Called when an error occurs in recognition.
    }

    @Override
    public void onState(int state, Bundle params) {
        // Notify the app of the status change.
    }
}
The recognition result can be obtained from the listener callbacks, including onRecognizingResults. Design the UI content according to the obtained results. For example, display the text transcribed from the input speech.
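As a rough illustration of that last point, the sketch below shows how the partial text might be surfaced in a subtitle view from inside onRecognizingResults(). Treat it as an assumption-heavy sketch: the Bundle key is a placeholder (look up the actual key in MLSpeechRealTimeTranscriptionConstants), and subtitleTextView is an assumed TextView field of the hosting activity.
Code:
@Override
public void onRecognizingResults(Bundle partialResults) {
    if (partialResults == null) {
        return;
    }
    // Placeholder key: check MLSpeechRealTimeTranscriptionConstants in your SDK version
    // for the exact key that carries the recognized text.
    String recognizedText = partialResults.getString("RESULTS_RECOGNIZING");
    if (recognizedText != null) {
        // Post to the main thread before touching views, in case this callback runs on a sub-thread.
        runOnUiThread(() -> subtitleTextView.setText(recognizedText));
    }
}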
Bind the speech recognizer.
Code:
mSpeechRecognizer.setRealTimeTranscriptionListener(new SpeechRecognitionListener());
Call startRecognizing to start speech recognition.
Code:
mSpeechRecognizer.startRecognizing(config);
Release resources after recognition is complete.
Code:
if (mSpeechRecognizer != null) {
    mSpeechRecognizer.destroy();
}
(Optional) Obtain the list of supported languages.
Code:
MLSpeechRealTimeTranscription.getInstance()
        .getLanguages(new MLSpeechRealTimeTranscription.LanguageCallback() {
            @Override
            public void onResult(List<String> result) {
                Log.i(TAG, "support languages==" + result.toString());
            }

            @Override
            public void onError(int errorCode, String errorMsg) {
                Log.e(TAG, "errorCode:" + errorCode + "errorMsg:" + errorMsg);
            }
        });
We have finished integration here, so let's test it out on a simple screen.
Tap START RECORDING. The text recognized from the input speech will display in the lower portion of the screen.
We've now built a simple audio transcription function.
Eager to build a fancier UI, with stunning animations, and other effects? By all means, take your shot!
For reference: Real-Time Transcription
Sample Code for ML Kit
To learn more, please visit:
HUAWEI Developers official website
Development Guide
Reddit to join developer discussions
GitHub or Gitee to download the demo and sample code
Stack Overflow to solve integration problems
Follow our official account for the latest HMS Core-related news and updates.
Original Source
If users have not installed GMS on their phones, they cannot directly sign in to your app using a Google account. In this case, you can let them sign in to your app in web mode by obtaining the access token for sign-in authentication from Google. The procedure is as follows:
Sign in to the Google API Console, click Credentials, and create an OAuth 2.0 client ID with Type set to Web application.
Go to the settings page and find Client ID and Client secret on the right side of the page, which are needed for integrating Google sign-in. Add a URI of the page to be displayed after the web sign-in is complete to Authorized redirect URIs.
When users want to sign in with their Google accounts, open the Google OAuth 2.0 endpoint https://accounts.google.com/o/oauth2/v2/auth and add the following query parameters to it.
client_id — Required — The client ID for your application. You can find this value in the API Console Credentials page.
redirect_uri — Required — Determines where the API server redirects the user after the user completes the authorization flow. The value must exactly match one of the authorized redirect URIs for the OAuth 2.0 client, which you configured in your client’s API Console Credentials page. If this value doesn’t match an authorized redirect URI for the provided client_id you will get a redirect_uri_mismatch error.
Note that the http or https scheme, case, and trailing slash (‘/’) must all match.
response_type — Required — Determines whether the Google OAuth 2.0 endpoint returns an authorization code.
Set the parameter value to code for web server applications.
scope — Required — A space-delimited list of scopes that identify the resources that your application could access on the user’s behalf. These values inform the consent screen that Google displays to the user.
Scopes enable your application to only request access to the resources that it needs while also enabling users to control the amount of access that they grant to your application. Thus, there is an inverse relationship between the number of scopes requested and the likelihood of obtaining user consent.
We recommend that your application request access to authorization scopes in context whenever possible. By requesting access to user data in context, via incremental authorization, you help users to more easily understand why your application needs the access it is requesting.
access_type — Recommended — Indicates whether your application can refresh access tokens when the user is not present at the browser. Valid parameter values are online, which is the default value, and offline.
Set the value to offline if your application needs to refresh access tokens when the user is not present at the browser. This is the method of refreshing access tokens described later in this document. This value instructs the Google authorization server to return a refresh token and an access token the first time that your application exchanges an authorization code for tokens.
state — Recommended — Specifies any string value that your application uses to maintain state between your authorization request and the authorization server’s response. The server returns the exact value that you send as a name=value pair in the URL fragment identifier (#) of the redirect_uri after the user consents to or denies your application’s access request.
You can use this parameter for several purposes, such as directing the user to the correct resource in your application, sending nonces, and mitigating cross-site request forgery. Since your redirect_uri can be guessed, using a state value can increase your assurance that an incoming connection is the result of an authentication request. If you generate a random string or encode the hash of a cookie or another value that captures the client’s state, you can validate the response to additionally ensure that the request and response originated in the same browser, providing protection against attacks such as cross-site request forgery. See the OpenID Connect documentation for an example of how to create and confirm a state token.
include_granted_scopes — Optional — Enables applications to use incremental authorization to request access to additional scopes in context. If you set this parameter’s value to true and the authorization request is granted, then the new access token will also cover any scopes to which the user previously granted the application access. See the incremental authorization section for examples.
login_hint — Optional — If your application knows which user is trying to authenticate, it can use this parameter to provide a hint to the Google Authentication Server. The server uses the hint to simplify the login flow either by prefilling the email field in the sign-in form or by selecting the appropriate multi-login session.
Set the parameter value to an email address or sub identifier, which is equivalent to the user’s Google ID.
prompt — Optional — A space-delimited, case-sensitive list of prompts to present the user. If you don’t specify this parameter, the user will be prompted only the first time your project requests access.
Possible values are:
none: Do not display any authentication or consent screens. Must not be specified with other values.
consent: Prompt the user for consent.
select_account: Prompt the user to select an account.
Examples
Code:
https://accounts.google.com/o/oauth2/v2/auth?
scope=profile&
access_type=offline&
include_granted_scopes=true&
response_type=code&
state=state_parameter_passthrough_value&
redirect_uri=https%3A//oauth2.example.com/code&
client_id=342746900306-vmpmbn8dgulp7eun1i8haiu86kocn8t6.apps.googleusercontent.com
redirect_uri is the URI we configured in step 2.
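In an Android app, rather than concatenating this URL by hand, you could build it with Uri.Builder (which also URL-encodes the parameter values) and hand it to the browser. The following is a minimal sketch run from an Activity; the client ID and redirect URI are the placeholder values from the example above.
Code:
import android.content.Intent;
import android.net.Uri;

Uri authUri = Uri.parse("https://accounts.google.com/o/oauth2/v2/auth")
        .buildUpon()
        .appendQueryParameter("scope", "profile")
        .appendQueryParameter("access_type", "offline")
        .appendQueryParameter("include_granted_scopes", "true")
        .appendQueryParameter("response_type", "code")
        .appendQueryParameter("state", "state_parameter_passthrough_value")
        .appendQueryParameter("redirect_uri", "https://oauth2.example.com/code")
        .appendQueryParameter("client_id", "your_client_id")
        .build();
// Open the Google sign-in page in the default browser.
startActivity(new Intent(Intent.ACTION_VIEW, authUri));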
When the web page we generated in step 3 is opened, the Google account sign-in page is displayed. Once users are signed in, they will be redirected to the URI we set as redirect_uri. You should save the code value appended to this URI.
Example:
Code:
https://oauth2.example.com/code?
state=state_parameter_passthrough_value&
code=4%2FzAFyRfnDjPJKLRlkcZCedy-P6GpYbmAPpOvbmeUwXCfv0lXkUWjjHRXGtrwpoordursX2wfKShoGakKbLGzS4Ac&
scope=email+profile+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email+openid+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.profile
&authuser=0
&prompt=none
In this case, the code value is as follows:
4%2FzAFyRfnDjPJKLRlkcZCedy-P6GpYbmAPpOvbmeUwXCfv0lXkUWjjHRXGtrwpoordursX2wfKShoGakKbLGzS4Ac
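If you intercept the redirect inside a WebView (or otherwise receive the full redirect URL), the code value can be extracted with Uri.getQueryParameter(), which also URL-decodes it (so the 4%2F... value above becomes 4/...). A minimal sketch:
Code:
// redirectUrl is the full URL the user was redirected to, for example:
// https://oauth2.example.com/code?state=...&code=...&scope=...
String authCode = Uri.parse(redirectUrl).getQueryParameter("code");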
After obtaining the code value, send an HTTP request to https://oauth2.googleapis.com/token to obtain the value of access_token required for AppGallery Connect authentication.
The parameters are as follows.
client_id — The client ID obtained from the API Console Credentials page.
client_secret — The client secret obtained from the API Console Credentials page.
code — The authorization code returned from the initial request.
grant_type — As defined in the OAuth 2.0 specification, this field must contain a value of authorization_code.
redirect_uri — One of the redirect URIs listed for your project in the API Console Credentials page for the given client_id.
Example:
Code:
POST /token HTTP/1.1
Host: oauth2.googleapis.com
Content-Type: application/x-www-form-urlencoded
code=4%2FzAFyRfnDjPJKLRlkcZCedy-P6GpYbmAPpOvbmeUwXCfv0lXkUWjjHRXGtrwpoordursX2wfKShoGakKbLGzS4Ac&
client_id=your_client_id&
client_secret=your_client_secret&
redirect_uri=https%3A//oauth2.example.com/code&
grant_type=authorization_code
The return result of the request carries the access_token you need.
Example of the return result:
JSON:
{
    "access_token": "1/fFAGRNJru1FTz70BzhT3Zg",
    "expires_in": 3920,
    "token_type": "Bearer",
    "scope": "https://www.googleapis.com/auth/drive.metadata.readonly",
    "refresh_token": "1//xEoDL4iW3cxlI7yDbSRFYNG01kVKM2C-259HOF2aQbI"
}
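For completeness, here is one way the token exchange might look in plain Java using HttpURLConnection and org.json, executed on a background thread. It is only a sketch under those assumptions; the client ID, client secret, and redirect URI are the placeholders from the example above, and error handling is left to you.
Code:
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;
import org.json.JSONObject;

private String exchangeCodeForAccessToken(String authCode) throws Exception {
    String body = "code=" + URLEncoder.encode(authCode, "UTF-8")
            + "&client_id=" + URLEncoder.encode("your_client_id", "UTF-8")
            + "&client_secret=" + URLEncoder.encode("your_client_secret", "UTF-8")
            + "&redirect_uri=" + URLEncoder.encode("https://oauth2.example.com/code", "UTF-8")
            + "&grant_type=authorization_code";
    HttpURLConnection conn = (HttpURLConnection) new URL("https://oauth2.googleapis.com/token").openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
    conn.setDoOutput(true);
    try (OutputStream os = conn.getOutputStream()) {
        os.write(body.getBytes(StandardCharsets.UTF_8));
    }
    // Read the JSON response and pick out access_token.
    try (Scanner scanner = new Scanner(conn.getInputStream(), "UTF-8")) {
        String response = scanner.useDelimiter("\\A").next();
        return new JSONObject(response).getString("access_token");
    }
}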
Use the token to pass AppGallery Connect authentication. Sample code:
Java:
// accesstoken is the value of access_token obtained in the previous step.
AGConnectAuthCredential credential = GoogleAuthProvider.credentialWithToken(accesstoken);
AGConnectAuth.getInstance().signIn(credential)
        .addOnSuccessListener(new OnSuccessListener<SignInResult>() {
            @Override
            public void onSuccess(SignInResult signInResult) {
                // onSuccess
                AGConnectUser user = signInResult.getUser();
            }
        })
        .addOnFailureListener(new OnFailureListener() {
            @Override
            public void onFailure(Exception e) {
                // onFail
            }
        });
References
Create a client ID in the Google API Console
Sign in to AppGallery Connect with a Google account
Can we use both HMS and GMS in application?
Basavaraj.navi said:
Can we use both HMS and GMS in application?
You certainly can! There are lots of ways you could implement this. https://stackoverflow.com/questions/59974428/have-both-gms-and-hms-in-the-project has some great examples of how you might check which services to use on a specific device.
With the increasing popularity of the mobile Internet, mobile apps are becoming an integral part of our daily lives and provide increasingly diverse functions that bring many benefits to users. One such function is searching for points of interest (POIs) or places, such as banks and restaurants, in an app.
When a user searches for a POI in an app, besides general information about the POI, such as the name and location, they also expect to be shown other relevant details. For example, when searching for a POI in a taxi-hailing app, a user usually expects the app to display both the searched POI and other nearby POIs, so that the user can select the most convenient pick-up and drop-off point. When searching for a bank branch in a mobile banking app, a user usually wants the app to show both the searched bank branch and nearby POIs of a similar type and their details such as business hours, telephone numbers, and nearby roads.
However, showing POI details in an app is usually a challenge for developers of non-map-related apps, because it requires a large amount of detailed POI data that is generally hard to collect for most app developers. So, wouldn't it be great if there was a service which an app can use to provide users with information about POI (such as the business hours and ratings) when they search for different types of POIs (such as hotels, restaurants, and scenic spots) in the app?
Fortunately, HMS Core Site Kit provides a one-stop POI search service, which boasts more than 260 million POIs in over 200 countries and regions around the world. In addition, the service supports more than 70 languages, empowering users to search for places in their own native languages. The place detail search function in the kit allows an app to obtain information about a POI, such as the name, address, and longitude and latitude, based on the unique ID of the POI. For example, a user can search for nearby bank branches in a mobile banking app, and view information about each branch, such as their business hours and telephone numbers, or search for the location of a scenic spot and view information about nearby hotels and weather forecasts in a travel app, thanks to the place detail search function. The place detail search function can even be utilized by location-based games that can use the function to show in-game tasks and rankings of other players at a POI when a player searches for the POI in the game.
The integration process for this kit is straightforward, as I'll demonstrate below.
Demo
Integration Procedure
Preparations
Before getting started, you'll need to make some preparations, such as configuring your app information in AppGallery Connect, integrating the Site SDK, and configuring the obfuscation configuration file.
If you use Android Studio, you can integrate the SDK into your project via the Maven repository. The purpose of configuring the obfuscation configuration file is to prevent the SDK from being obfuscated.
You can follow instructions here to make relevant preparations. In this article, I won't be describing the preparation steps.
Developing Place Detail Search
After making relevant preparations, you will need to implement the place detail search function for obtaining POI details. The process is as follows:
1. Declare a SearchService object and use SearchServiceFactory to instantiate the object.
2. Create a DetailSearchRequest object and set relevant parameters.
The object will be used as the request body for searching for POI details. Relevant parameters are as follows:
siteId: ID of a POI. This parameter is mandatory.
language: language in which search results are displayed. English will be used if no language is specified, and if English is unavailable, the local language will be used.
children: indicates whether to return information about child nodes of the POI. The default value is false, indicating that child node information is not returned. If this parameter is set to true, all information about child nodes of the POI will be returned.
3. Create a SearchResultListener object to listen for the search result.
4. Use the created SearchService object to call the detailSearch() method and pass the created DetailSearchRequest and SearchResultListener objects to the method.
5. Obtain the DetailSearchResponse object using the created SearchResultListener object. You can obtain a Site object from the DetailSearchResponse object and then parse it to obtain the search results.
The sample code is as follows:
Code:
// Declare a SearchService object.
private SearchService searchService;
// Create a SearchService instance.
searchService = SearchServiceFactory.create(this, "API key");
// Create a request body.
DetailSearchRequest request = new DetailSearchRequest();
request.setSiteId("C2B922CC4651907A1C463127836D3957");
request.setLanguage("fr");
request.setChildren(false);
// Create a search result listener.
SearchResultListener<DetailSearchResponse> resultListener = new SearchResultListener<DetailSearchResponse>() {
    // Return the search result when the search is successful.
    @Override
    public void onSearchResult(DetailSearchResponse result) {
        Site site;
        if (result == null || (site = result.getSite()) == null) {
            return;
        }
        Log.i("TAG", String.format("siteId: '%s', name: %s\r\n", site.getSiteId(), site.getName()));
    }

    // Return the result code and description when a search exception occurs.
    @Override
    public void onSearchError(SearchStatus status) {
        Log.i("TAG", "Error : " + status.getErrorCode() + " " + status.getErrorMessage());
    }
};
// Call the place detail search API.
searchService.detailSearch(request, resultListener);
You have now completed the integration process and your app should be able to show users details about the POIs they search for.
Conclusion
Mobile apps are now an integral part of our daily life. To improve user experience and make apps more convenient to use, developers are adding more and more functions, such as POI search.
When searching for POIs in an app, besides general information such as the name and location of the POI, users usually expect to be shown other context-relevant information as well, such as business hours and similar POIs nearby. However, showing POI details in an app can be challenging for developers of non-map-related apps, because it requires a large amount of detailed POI data that is usually hard to collect for most app developers.
In this article, I demonstrated how I solved this challenge using the place detail search function, which allows my app to show POI details to users. The whole integration process is straightforward and cost-efficient, and is an effective way to show POI details to users.
Conventional pop-up ads and roll ads in apps not only frustrate users, but are a headache for advertisers. This is because on the one hand, advertising is expensive, but on the other hand, these ads do not necessarily reach their target audience. The emergence of personalized ads has proved a game changer.
To ensure ads are actually sent to their intended audience, publishers usually need to collect the personal data of users to determine their characteristics, hobbies, recent requirements, and more, and then push targeted ads in apps. Some users are unwilling to share privacy data to receive personalized ads. Therefore, if an app needs to collect, use, and share users' personal data for the purpose of personalized ads, valid consent from users must be obtained first.
HUAWEI Ads provides the capability of obtaining user consent. In countries/regions with strict privacy requirements, it is recommended that publishers access the personalized ad service through the HUAWEI Ads SDK and share personal data that has been collected and processed with HUAWEI Ads. HUAWEI Ads reserves the right to monitor the privacy and data compliance of publishers. By default, personalized ads are returned for ad requests to HUAWEI Ads, and the ads are filtered based on the user's previously collected data. HUAWEI Ads also supports ad request settings for non-personalized ads. For details, please refer to "Personalized Ads and Non-personalized Ads" in the HUAWEI Ads Privacy and Data Security Policies.
To obtain user consent, you can use the Consent SDK provided by HUAWEI Ads or the CMP that complies with IAB TCF v2.0. For details, see Integration with IAB TCF v2.0.
Let's see how the Consent SDK can be used to request user consent and how to request ads accordingly.
Development Procedure
To begin with, you will need to integrate the HMS Core SDK and HUAWEI Ads SDK. For details, see the development guide.
Using the Consent SDK
1. Integrate the Consent SDK.
a. Configure the Maven repository address.
The repository configuration in Android Studio differs for Gradle plugin versions earlier than 7.0, version 7.0, and versions 7.1 and later. Follow the configuration procedure that corresponds to your Gradle plugin version.
b. Add build dependencies to the app-level build.gradle file.
Replace {version} with the actual version number. For details about the version number, please refer to the version updates. The sample code is as follows:
Code:
dependencies {
    implementation 'com.huawei.hms:ads-consent:3.4.54.300'
}
After completing all the preceding configurations, click Sync Project with Gradle Files on the toolbar to synchronize the build.gradle file and download the dependencies.
2. Update the user consent status.
When using the Consent SDK, ensure that the Consent SDK obtains the latest information about the ad technology providers of HUAWEI Ads. If the list of ad technology providers changes after the user consent is obtained, the Consent SDK will automatically set the user consent status to UNKNOWN. This means that every time the app is launched, you should call the requestConsentUpdate() method to determine the user consent status. The sample code is as follows:
Code:
...
import com.huawei.hms.ads.consent.*;
...
public class ConsentActivity extends BaseActivity {
    ...
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        ...
        // Check the user consent status.
        checkConsentStatus();
        ...
    }
    ...
    private void checkConsentStatus() {
        ...
        Consent consentInfo = Consent.getInstance(this);
        ...
        consentInfo.requestConsentUpdate(new ConsentUpdateListener() {
            @Override
            public void onSuccess(ConsentStatus consentStatus, boolean isNeedConsent, List<AdProvider> adProviders) {
                // User consent status successfully updated.
                ...
            }
            @Override
            public void onFail(String errorDescription) {
                // Failed to update user consent status.
                ...
            }
        });
        ...
    }
    ...
}
If the user consent status is successfully updated, the onSuccess() method of ConsentUpdateListener provides the updated ConsentStatus (specifies the consent status), isNeedConsent (specifies whether consent is required), and adProviders (specifies the list of ad technology providers).
3. Obtain user consent.
You need to obtain the consent (for example, in a dialog box) of a user and display a complete list of ad technology providers. The following example shows how to obtain user consent in a dialog box:
a. Collect consent in a dialog box.
The sample code is as follows:
Code:
...
import com.huawei.hms.ads.consent.*;
...
public class ConsentActivity extends BaseActivity {
    ...
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        ...
        // Check the user consent status.
        checkConsentStatus();
        ...
    }
    ...
    private void checkConsentStatus() {
        ...
        Consent consentInfo = Consent.getInstance(this);
        ...
        consentInfo.requestConsentUpdate(new ConsentUpdateListener() {
            @Override
            public void onSuccess(ConsentStatus consentStatus, boolean isNeedConsent, List<AdProvider> adProviders) {
                ...
                // The parameter indicating whether the consent is required is returned.
                if (isNeedConsent) {
                    // If ConsentStatus is set to UNKNOWN, ask for user consent again.
                    if (consentStatus == ConsentStatus.UNKNOWN) {
                        ...
                        showConsentDialog();
                    }
                    // If ConsentStatus is set to PERSONALIZED or NON_PERSONALIZED, no dialog box is displayed to ask for user consent.
                    else {
                        ...
                    }
                } else {
                    ...
                }
            }
            @Override
            public void onFail(String errorDescription) {
                ...
            }
        });
        ...
    }
    ...
    private void showConsentDialog() {
        // Start to process the consent dialog box.
        ConsentDialog dialog = new ConsentDialog(this, mAdProviders);
        dialog.setCallback(this);
        dialog.setCanceledOnTouchOutside(false);
        dialog.show();
    }
}
Sample dialog box
Note: This image is for reference only. Design the UI based on the privacy page.
More information will be displayed if users tap here.
b. Display the list of ad technology providers.
Display the names of ad technology providers to the user and allow the user to access the privacy policies of the ad technology providers.
After a user taps here on the information screen, the list of ad technology providers should appear in a dialog box, as shown in the following figure.
Note: This image is for reference only. Design the UI based on the privacy page.
c. Set consent status.
After obtaining the user's consent, use the setConsentStatus() method to set their consent status. The sample code is as follows:
Code:
Consent.getInstance(getApplicationContext()).setConsentStatus(ConsentStatus.PERSONALIZED);
d. Set the tag indicating whether a user is under the age of consent.
If you want to request ads for users under the age of consent, call setUnderAgeOfPromise to set the tag for such users before calling requestConsentUpdate().
Code:
// Set the tag indicating whether a user is under the age of consent.
Consent.getInstance(getApplicationContext()).setUnderAgeOfPromise(true);
If setUnderAgeOfPromise is set to true, the onFail (String errorDescription) method is called back each time requestConsentUpdate() is called, and the errorDescription parameter is provided. In this case, do not display the dialog box for obtaining consent. The value false indicates that a user has reached the age of consent.
4. Load ads according to user consent.
By default, if the setNonPersonalizedAd method is not called, both personalized and non-personalized ads are requested. Therefore, if a user has not selected a consent option, call this method so that only non-personalized ads are requested.
The parameter of the setNonPersonalizedAd method can be set to the following values:
ALLOW_ALL: personalized and non-personalized ads.
ALLOW_NON_PERSONALIZED: non-personalized ads.
The sample code is as follows:
Code:
// Set the parameter in setNonPersonalizedAd to ALLOW_NON_PERSONALIZED to request only non-personalized ads.
RequestOptions requestOptions = HwAds.getRequestOptions();
requestOptions = requestOptions.toBuilder().setNonPersonalizedAd(ALLOW_NON_PERSONALIZED).build();
HwAds.setRequestOptions(requestOptions);
AdParam adParam = new AdParam.Builder().build();
adView.loadAd(adParam);
Testing the Consent SDK
To simplify app testing, the Consent SDK provides debug options that you can set.
1. Call getTestDeviceId() to obtain the ID of your device.
The sample code is as follows:
Code:
String testDeviceId = Consent.getInstance(getApplicationContext()).getTestDeviceId();
2. Use the obtained device ID to add your device as a test device to the trustlist.
The sample code is as follows:
Code:
Consent.getInstance(getApplicationContext()).addTestDeviceId(testDeviceId);
3. Call setDebugNeedConsent to set whether consent is required.
The sample code is as follows:
Code:
// Require consent for debugging. In this case, the value of isNeedConsent returned by the ConsentUpdateListener method is true.
Consent.getInstance(getApplicationContext()).setDebugNeedConsent(DebugNeedConsent.DEBUG_NEED_CONSENT);
// Not to require consent for debugging. In this case, the value of isNeedConsent returned by the ConsentUpdateListener method is false.
Consent.getInstance(getApplicationContext()).setDebugNeedConsent(DebugNeedConsent.DEBUG_NOT_NEED_CONSENT);
After these steps are complete, the value of isNeedConsent will be returned based on your debug status when calls are made to update the consent status.
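Putting the three debug calls together, a debug-only setup might look like the sketch below. Guarding with BuildConfig.DEBUG is my own convention; the Consent calls are the ones shown above.
Code:
if (BuildConfig.DEBUG) {
    Consent consent = Consent.getInstance(getApplicationContext());
    // Add this device to the trustlist so that the debug settings apply to it.
    consent.addTestDeviceId(consent.getTestDeviceId());
    // Force isNeedConsent to true while debugging the consent dialog.
    consent.setDebugNeedConsent(DebugNeedConsent.DEBUG_NEED_CONSENT);
}
// Then call requestConsentUpdate() as shown earlier.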
For more information about the Consent SDK, please refer to the sample code.
References
Ads Kit
Development Guide of Ads Kit