Preface
HUAWEI Map Kit includes a route planning function that offers a set of HTTPS-based APIs. These APIs plan walking, cycling, and driving routes, calculate route distances, and return route data in JSON format; together they make up the route planning capability.
Route planning APIs are as follows:
Walking route planning API: Plans walking routes for distances of up to 100 kilometers.
Cycling route planning API: Plans cycling routes for distances of up to 100 kilometers.
Driving route planning API: Plans driving routes, with the following capabilities:
Up to 3 routes can be returned for each request.
Up to 5 waypoints can be specified.
Routes can be planned for future travel.
Routes can be planned based on real-time traffic conditions.
Use Cases
Ride hailing: Real-time route planning and route planning for future travel can provide accurate price estimates for ride-hailing orders. The estimated time of arrival (ETA) can be calculated for multiple routes in batches during order dispatch, for dramatically enhanced efficiency.
Logistics: Driving and cycling route planning provides accurate routes, ETAs, and estimated road tolls for trunk and branch road logistics and logistics delivery.
Tourism: When booking hotels and planning tourist routes, users can determine the distances between hotels, scenic spots, and transport stations with greater ease, thanks to high-level route planning capabilities, and enjoy efficient, hassle-free travel at all times.
Preparations
Before using the route planning function, first obtain the API key in AppGallery Connect.
Note
If the API key contains special characters, you need to encode it using encodeURI. For example, if the original API key is ABC/DFG+, the conversion result is ABC%2FDFG%2B.
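In an Android app, the same encoding can be done with URLEncoder from the JDK. A minimal sketch (the helper name is mine):
Code:
// Encode the API key so that characters such as '/' and '+' become %2F and %2B.
private static String encodeApiKey(String apiKey) {
    try {
        return URLEncoder.encode(apiKey, "UTF-8");
    } catch (UnsupportedEncodingException e) {
        // UTF-8 is always supported, so this should never happen.
        throw new IllegalStateException(e);
    }
}
// encodeApiKey("ABC/DFG+") returns "ABC%2FDFG%2B".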
Declare the network access permission in the AndroidManifest.xml file.
Code:
<!-- Network permission -->
<uses-permission android:name="android.permission.INTERNET" />
Development Procedure
1. Initialize a map for displaying planned routes.
Code:
private MapFragment mMapFragment;
private HuaweiMap hMap;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_directions);
mMapFragment = (MapFragment) getFragmentManager().findFragmentById(R.id.mapfragment_mapfragmentdemo);
mMapFragment.getMapAsync(this);
}
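Since getMapAsync(this) is called, the activity is assumed to implement OnMapReadyCallback; the HuaweiMap instance is delivered in onMapReady, which is where hMap gets assigned:
Code:
@Override
public void onMapReady(HuaweiMap huaweiMap) {
    // Keep a reference to the map; the location, marker, and polyline code below relies on it.
    hMap = huaweiMap;
}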
2. Obtain the current user location and use it as the start point for route planning.
Code:
private void getMyLocation() {
    Task<Location> locationTask = LocationServices.getFusedLocationProviderClient(this).getLastLocation();
    locationTask.addOnCompleteListener(task -> {
        // Only use the result when the task succeeded and a last known location exists.
        if (task.isSuccessful() && task.getResult() != null) {
            Location location = task.getResult();
            double lat = location.getLatitude();
            double lng = location.getLongitude();
            myLocation = new LatLng(lat, lng);
            Log.d(TAG, "lat is: " + lat + ", lng is: " + lng);
            // Move the camera to the current location, which is used as the route start point.
            CameraUpdate cameraUpdate = CameraUpdateFactory.newLatLng(myLocation);
            hMap.moveCamera(cameraUpdate);
        }
    }).addOnFailureListener(e -> Log.d(TAG, "getLastLocation failed: " + e.getMessage()));
}
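getLastLocation() only returns a location if the app actually holds location permission, so in addition to declaring ACCESS_FINE_LOCATION in the manifest, request it at runtime before calling getMyLocation(). A minimal sketch (the request code value is arbitrary):
Code:
private static final int REQUEST_LOCATION = 1001;

private void requestLocationPermissionIfNeeded() {
    if (ActivityCompat.checkSelfPermission(this, Manifest.permission.ACCESS_FINE_LOCATION)
            != PackageManager.PERMISSION_GRANTED) {
        // Ask the user for location access; call getMyLocation() again in onRequestPermissionsResult().
        ActivityCompat.requestPermissions(this,
                new String[]{Manifest.permission.ACCESS_FINE_LOCATION}, REQUEST_LOCATION);
    } else {
        getMyLocation();
    }
}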
3. Add a map long-press event listener to listen to the end point for route planning.
Code:
hMap.setOnMapLongClickListener(latLng -> {
if (null != mDestinationMarker) {
mDestinationMarker.remove();
}
if (null != mPolylines) {
    // Remove any previously drawn routes before planning a new one.
    for (Polyline polyline : mPolylines) {
        polyline.remove();
    }
    mPolylines.clear();
}
enableAllBtn();
MarkerOptions options = new MarkerOptions().position(latLng).title("dest");
mDestinationMarker = hMap.addMarker(options);
mDestinationMarker.setAnchor(0.5f,1f);
StringBuilder dest = new StringBuilder(String.format(Locale.getDefault(), "%.6f", latLng.latitude));
dest.append(", ").append(String.format(Locale.getDefault(), "%.6f", latLng.longitude));
((TextInputEditText)findViewById(R.id.dest_input)).setText(dest);
mDest = latLng;
});
4. Generate a route planning request based on the specified start point and end point.
Code:
private JSONObject buildRequest() {
JSONObject request = new JSONObject();
try {
JSONObject origin = new JSONObject();
origin.put("lng", myLocation.longitude);
origin.put("lat", myLocation.latitude);
JSONObject destination = new JSONObject();
destination.put("lng", mDest.longitude);
destination.put("lat", mDest.latitude);
request.put("origin", origin);
request.put("destination", destination);
} catch (JSONException e) {
e.printStackTrace();
}
return request;
}
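The request body then has to be POSTed over HTTPS to the route planning service, with the URL-encoded API key passed as the key parameter. The sketch below uses plain HttpURLConnection and must run on a worker thread; the walking route endpoint URL follows the Map Kit directions API documentation at the time of writing, so double-check it (and consider a client such as OkHttp) for production use.
Code:
// Requires java.net.HttpURLConnection, java.net.URL, java.io.BufferedReader and java.io.InputStreamReader.
private String requestWalkingRoute(JSONObject request, String encodedApiKey) throws IOException {
    URL url = new URL("https://mapapi.cloud.huawei.com/mapApi/v1/routeService/walking?key=" + encodedApiKey);
    HttpURLConnection connection = (HttpURLConnection) url.openConnection();
    try {
        connection.setRequestMethod("POST");
        connection.setRequestProperty("Content-Type", "application/json; charset=UTF-8");
        connection.setDoOutput(true);
        connection.getOutputStream().write(request.toString().getBytes("UTF-8"));
        // Read the JSON response; it is parsed and drawn in the next step.
        BufferedReader reader = new BufferedReader(new InputStreamReader(connection.getInputStream(), "UTF-8"));
        StringBuilder result = new StringBuilder();
        String line;
        while ((line = reader.readLine()) != null) {
            result.append(line);
        }
        return result.toString();
    } finally {
        connection.disconnect();
    }
}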
5. Draw planned routes on the map based on the route planning response.
Code:
// Parse the route planning response: take the first route and its first path.
JSONObject route = new JSONObject(result);
JSONArray routes = route.optJSONArray("routes");
JSONObject route1 = routes.optJSONObject(0);
JSONArray paths = route1.optJSONArray("paths");
JSONObject path1 = paths.optJSONObject(0);
// Each step of the path carries a polyline, i.e. a list of lat/lng points.
JSONArray steps = path1.optJSONArray("steps");
for (int i = 0; i < steps.length(); i++) {
PolylineOptions options = new PolylineOptions();
JSONObject step = steps.optJSONObject(i);
JSONArray polyline = step.optJSONArray("polyline");
for (int j = 0; j < polyline.length(); j++) {
JSONObject polyline_t = polyline.optJSONObject(j);
options.add(new LatLng(polyline_t.getDouble("lat"), polyline_t.getDouble("lng")));
}
Polyline pl = hMap.addPolyline(options.color(Color.BLUE).width(3));
mPolylines.add(pl);
}
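After the route is drawn, it is convenient to move the camera so that both the start point and the destination are visible. A small sketch, assuming Map Kit's LatLngBounds.Builder and CameraUpdateFactory.newLatLngBounds(), which mirror their Google Maps counterparts:
Code:
// Zoom the camera so the whole route between the start point and destination fits on screen.
LatLngBounds.Builder boundsBuilder = new LatLngBounds.Builder();
boundsBuilder.include(myLocation);
boundsBuilder.include(mDest);
// 100 px of padding around the bounds; adjust to taste.
hMap.animateCamera(CameraUpdateFactory.newLatLngBounds(boundsBuilder.build(), 100));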
Demo Effects
Related
Allow your users the freedom to choose their Android platform while providing the same features
Some time ago I developed a Word Search game solver Android application using the services from Firebase ML Kit.
Solve WordSearch games with Android and ML Kit
A Kotlin ML Kit Data Structure & Algorithm Story
It was an interesting journey discovering the features of a framework that lets developers use AI capabilities without knowing all the rocket science behind them.
Specifically, I used the document recognition feature to extract text from a word search game image.
After the text recognition phase, the output was cleaned and arranged into a matrix to be processed by the solver algorithm. The algorithm looked for all the words that could be formed by grouping letters according to the rules of the game: contiguous letters in all straight directions (vertical, horizontal, and diagonal).
This app ran well on all Android devices capable of running the Google Firebase SDK and Google Mobile Services (GMS).
Since the second half of last year, new Huawei devices can no longer run GMS due to government restrictions; you can read more about this here:
[Update 14: Temporary License Extended Again]
Google has revoked Huawei's Android license
www.xda-developers.com
My app was not able to run on the brand-new Huawei devices.
So I looked for a solution to make this case-study app run on the new Huawei terminals.
Let's follow my journey…
The Discovery of HMS ML Kit
I went through the documentation on HUAWEI Developer, the official site for Huawei developers, which covers AppGallery services, HUAWEI Mobile Services, AI SDKs, VR SDKs, and more.
There you can find many SDKs, known as Kits, that offer a set of smart features to developers.
I found one offering the features I was looking for: HMS ML Kit. It is quite similar to the Firebase offering, as it allows the developer to use machine learning capabilities such as image, text, and face recognition.
Huawei ML Kit
In particular, for my specific use case, I used the text analyzer, which can run locally and take advantage of neural processing on the NPU hardware.
Documentation HMS ML Kit Text recognition
Integrating HMS ML Kit was super easy. If you want to give it a try, it's just a matter of adding a dependency to your build.gradle file, enabling the service from the AppGallery web dashboard if you want to use the cloud API, and downloading the agconnect-services.json configuration file to use in your app.
You can refer to the official guide here for the needed steps:
Documentation HMS ML Kit
Architectural Approach
My first goal was to maintain and deploy only one APK, so I wanted to integrate both the Firebase ML Kit SDK and the HMS ML Kit one.
I thought about the main feature: decoding the image and getting back the detected text, together with the bounding boxes surrounding each character, to better display the spotted text to the user.
This was defined by the following interface:
Code:
package com.laquysoft.wordsearchai.textrecognizer
import android.graphics.Bitmap
interface DocumentTextRecognizer {
fun processImage(bitmap: Bitmap, success: (Document) -> Unit, error: (String?) -> Unit)
}
I also defined my own data classes to have a common output format from both services:
Code:
data class Symbol(
val text: String?,
val rect: Rect,
val idx: Int = 0,
val length: Int = 0
)
data class Document(val stringValue: String, val count: Int, val symbols: List<Symbol>)
Here, Document represents the text result returned by the ML Kit services. It contains a list of Symbol objects (the recognized characters), each with its own character, the bounding box surrounding it (Rect), and its index within the detected string, since both ML Kit services group several characters into a single string with one bounding box.
Then I created an object capable of instantiating the right recognizer depending on which service (HMS or GMS) is available on the device:
Code:
// ServiceType is a simple enum distinguishing the two mobile service providers.
enum class ServiceType { GOOGLE, HUAWEI }

object DocumentTextRecognizerService {

    private fun getServiceType(context: Context) = when {
        isGooglePlayServicesAvailable(context) -> ServiceType.GOOGLE
        isHuaweiMobileServicesAvailable(context) -> ServiceType.HUAWEI
        else -> ServiceType.GOOGLE
    }

    private fun isGooglePlayServicesAvailable(context: Context): Boolean {
        return GoogleApiAvailability.getInstance()
            .isGooglePlayServicesAvailable(context) == ConnectionResult.SUCCESS
    }

    private fun isHuaweiMobileServicesAvailable(context: Context): Boolean {
        return HuaweiApiAvailability.getInstance()
            .isHuaweiMobileServicesAvailable(context) == com.huawei.hms.api.ConnectionResult.SUCCESS
    }

    fun create(context: Context): DocumentTextRecognizer {
        return when (getServiceType(context)) {
            ServiceType.HUAWEI -> HMSDocumentTextRecognizer()
            else -> GMSDocumentTextRecognizer()
        }
    }
}
That was pretty much all it took to make it work.
The ViewModel can then use the service provided
Code:
class WordSearchAiViewModel(
private val resourceProvider: ResourceProvider,
private val recognizer: DocumentTextRecognizer
) : ViewModel() {
val resultList: MutableLiveData<List<String>> = MutableLiveData()
val resultBoundingBoxes: MutableLiveData<List<Symbol>> = MutableLiveData()
private lateinit var dictionary: List<String>
fun detectDocumentTextIn(bitmap: Bitmap) {
loadDictionary()
recognizer.processImage(bitmap, {
postWordsFound(it)
postBoundingBoxes(it)
},
{
Log.e("WordSearchAIViewModel", it)
})
    }

    // loadDictionary(), postWordsFound(), and postBoundingBoxes() are omitted here for brevity.
}
by the right recognizer, which is instantiated together with the WordSearchAiViewModel itself.
Running the app and choosing a word search game image on a Mate 30 Pro (an HMS device) shows this result
The Recognizer Brothers
You can check the code of the two recognizers below. What they do is use the corresponding SDK implementation to get the result and adapt it to the interface; you could use virtually any other service capable of doing the same.
Code:
package com.laquysoft.wordsearchai.textrecognizer
import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage
class GMSDocumentTextRecognizer : DocumentTextRecognizer {
private val detector = FirebaseVision.getInstance().onDeviceTextRecognizer
override fun processImage(
bitmap: Bitmap,
success: (Document) -> Unit,
error: (String?) -> Unit
) {
val firebaseImage = FirebaseVisionImage.fromBitmap(bitmap)
detector.processImage(firebaseImage)
.addOnSuccessListener { firebaseVisionDocumentText ->
if (firebaseVisionDocumentText != null) {
val words = firebaseVisionDocumentText.textBlocks
    .flatMap { it.lines }
    .flatMap { it.elements }
val symbols = mutableListOf<Symbol>()
words.forEach {
val rect = it.boundingBox
if (rect != null) {
it.text.forEachIndexed { idx, value ->
symbols.add(
Symbol(
value.toString(),
rect,
idx,
it.text.length
)
)
}
}
}
val document =
Document(
firebaseVisionDocumentText.text,
firebaseVisionDocumentText.textBlocks.size,
symbols
)
success(document)
}
}
.addOnFailureListener { error(it.localizedMessage) }
}
}
Code:
package com.laquysoft.wordsearchai.textrecognizer
import android.graphics.Bitmap
import com.huawei.hms.mlsdk.MLAnalyzerFactory
import com.huawei.hms.mlsdk.common.MLFrame
class HMSDocumentTextRecognizer : DocumentTextRecognizer {
//private val detector = MLAnalyzerFactory.getInstance().remoteDocumentAnalyzer
private val detector = MLAnalyzerFactory.getInstance().localTextAnalyzer
override fun processImage(
bitmap: Bitmap,
success: (Document) -> Unit,
error: (String?) -> Unit
) {
val hmsFrame = MLFrame.fromBitmap(bitmap)
detector.asyncAnalyseFrame(hmsFrame)
.addOnSuccessListener { mlDocument ->
if (mlDocument != null) {
val words = mlDocument.blocks
.flatMap { it.contents }
.flatMap { it.contents }
val symbols = mutableListOf<Symbol>()
words.forEach {
val rect = it.border
it.stringValue.forEachIndexed { idx, value ->
symbols.add(Symbol(
value.toString(),
rect,
idx,
it.stringValue.length
))
}
}
val document =
Document(
mlDocument.stringValue,
mlDocument.blocks.size,
symbols
)
success(document)
}
}
.addOnFailureListener { error(it.localizedMessage) }
}
}
Conclusion
As good Android developers, we should develop and deploy our apps on all the platforms our users can reach, love, and adopt, without excluding anyone.
We should spend some time trying to give users the same experience everywhere. This is a small sample of that, and more will come in the future.
For more articles like this, you can visit the HUAWEI Developer Forum and Medium.
Previously on All About Maps: Episode 1:
The principles of clean architecture
The importance of eliminating map provider dependencies with abstraction
Drawing polylines and markers on Mapbox Maps, Google Maps (GMS), and Huawei Maps (HMS)
Episode 2: Bounded Regions
Welcome to the second episode of AllAboutMaps. To understand this blog post better, I suggest reading Episode 1 first; otherwise, it will be difficult to follow the context.
In this episode we will talk about bounded regions:
The GPX parser datasource will parse the file to get the list of attraction points (waypoints in this case).
The datasource module will emit the bounded region information every 3 seconds.
A rectangular bounded region will be calculated from each attraction point's center with a given radius, using a utility method (no dependency on any map provider!).
We will move the map camera to the bounded region each time a new one is emitted.
ChangeLog since Episode 1
As we all know, software development is a continuous process. It helps a lot when you have reviewers who can comment on your code and point out issues or come up with suggestions. Since this project is a one-person task, it is not always easy to spot the flaws in the code during implementation. The software gets better and hopefully evolves in a good way when we add new features. Once again, I would like to add the disclaimer that my suggestions here are not silver bullets. There are always better approaches. I am more than happy to hear your suggestions in the comments!
You can see the full code change between episode 1 and 2 here:
https://github.com/ulusoyca/AllAboutMaps/compare/episode_1-parse-gpx...episode_2-bounded-region
Here are the main changes I would like to mention:
1- Added MapLifecycleHandlerFragment.kt base class
In episode 1, I had one feature: show the polyline and markers on the map. The base class of all three fragments (RouteInfoMapboxFragment, RouteInfoGoogleFragment, and RouteInfoHuaweiFragment) called these lifecycle methods. When I added another feature (showing bounded regions), I realized that the new feature's base class again implemented the same lifecycle methods. This is against the DRY rule (Don't Repeat Yourself)! Here is the base class I introduced, so that each feature's base class can extend it:
Code:
/**
* The base fragment handles map lifecycle. To use it, the mapview classes should implement
* [AllAboutMapView] interface.
*/
abstract class MapLifecycleHandlerFragment : DaggerFragment() {
protected lateinit var mapView: AllAboutMapView
override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
    super.onViewCreated(view, savedInstanceState)
    mapView.onMapViewCreate(savedInstanceState)
}
override fun onResume() {
super.onResume()
mapView.onMapViewResume()
}
override fun onPause() {
super.onPause()
mapView.onMapViewPause()
}
override fun onStart() {
super.onStart()
mapView.onMapViewStart()
}
override fun onStop() {
super.onStop()
mapView.onMapViewStop()
}
override fun onDestroyView() {
super.onDestroyView()
mapView.onMapViewDestroy()
}
override fun onSaveInstanceState(outState: Bundle) {
super.onSaveInstanceState(outState)
mapView.onMapViewSaveInstanceState(outState)
}
}
Let's see the big picture now:
2- Refactored the abstraction for styles, marker options, and line options.
In the first episode, we encapsulated a dark map style inside each custom MapView. When I intended to use an outdoor map style for the second episode, I realized that my first approach was a mistake. A specific style should not be encapsulated inside a MapView; each feature should be able to select a different style. I moved the responsibility of loading the style from the MapViews to the fragments. Once the style is loaded, the style object is passed to the MapView.
Code:
override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
mapView = binding.mapView
super.onViewCreated(view, savedInstanceState)
binding.mapView.getMapAsync { mapboxMap ->
binding.mapView.onMapReady(mapboxMap)
mapboxMap.setStyle(Style.OUTDOORS) {
binding.mapView.onStyleLoaded(it)
onMapStyleLoaded()
}
}
}
I also realized the need for MarkerOptions and LineOptions entities in our domain module:
Code:
data class MarkerOptions(
var latLng: LatLng,
var text: String? = null,
@DrawableRes var iconResId: Int,
var iconMapStyleId: String,
@ColorRes var iconColor: Int,
@ColorRes var textColor: Int
)
Code:
data class LineOptions(
var latLngs: List<LatLng>,
@DimenRes var lineWidth: Int,
@ColorRes var lineColor: Int
)
The above entities have properties based on the needs of my project. I only care about the color, text, location, and icon properties of the marker. For the polyline, I will customize the width, color, and text properties. If your project needs to customize the marker offset, opacity, line join type, or other properties, feel free to add them in your case.
These entities are mapped to corresponding map provider classes:
LineOptions:
Code:
private fun LineOptions.toGoogleLineOptions(context: Context) = PolylineOptions()
.color(ContextCompat.getColor(context, lineColor))
.width(resources.getDimension(lineWidth))
.addAll(latLngs.map { it.toGoogleLatLng() })
Code:
private fun LineOptions.toHuaweiLineOptions(context: Context) = PolylineOptions()
.color(ContextCompat.getColor(context, lineColor))
.width(resources.getDimension(lineWidth))
.addAll(latLngs.map { it.toHuaweiLatLng() })
Code:
private fun LineOptions.toMapboxLineOptions(context: Context): MapboxLineOptions {
val color = ColorUtils.colorToRgbaString(ContextCompat.getColor(context, lineColor))
return MapboxLineOptions()
.withLineColor(color)
.withLineWidth(resources.getDimension(lineWidth))
.withLatLngs(latLngs.map { it.toMapboxLatLng() })
}
MarkerOptions
Code:
private fun DomainMarkerOptions.toGoogleMarkerOptions(): GoogleMarkerOptions {
var markerOptions = GoogleMarkerOptions()
.icon(BitmapDescriptorFactory.fromResource(iconResId))
.position(latLng.toGoogleLatLng())
markerOptions = text?.let { markerOptions.title(it) } ?: markerOptions
return markerOptions
}
Code:
private fun DomainMarkerOptions.toHuaweiMarkerOptions(context: Context): HuaweiMarkerOptions {
BitmapDescriptorFactory.setContext(context)
var markerOptions = HuaweiMarkerOptions()
.icon(BitmapDescriptorFactory.fromResource(iconResId))
.position(latLng.toHuaweiLatLng())
markerOptions = text?.let { markerOptions.title(it) } ?: markerOptions
return markerOptions
}
Code:
private fun DomainMarkerOptions.toMapboxSymbolOptions(context: Context, style: Style): SymbolOptions {
val drawable = ContextCompat.getDrawable(context, iconResId)
val bitmap = BitmapUtils.getBitmapFromDrawable(drawable)!!
style.addImage(iconMapStyleId, bitmap)
val iconColor = ColorUtils.colorToRgbaString(ContextCompat.getColor(context, iconColor))
val textColor = ColorUtils.colorToRgbaString(ContextCompat.getColor(context, textColor))
var symbolOptions = SymbolOptions()
.withIconImage(iconMapStyleId)
.withLatLng(latLng.toMapboxLatLng())
.withIconColor(iconColor)
.withTextColor(textColor)
symbolOptions = text?.let { symbolOptions.withTextField(it) } ?: symbolOptions
return symbolOptions
}
There are minor technical details for handling the differences between the map provider APIs, but they are outside the scope of this blog post.
Earlier our methods for drawing polyline and marker looked like this:
Code:
fun drawPolyline(latLngs: List<LatLng>, @ColorRes mapLineColor: Int)
fun drawMarker(latLng: LatLng, icon: Bitmap, name: String?)
After this refactor they look like this:
Code:
fun drawPolyline(lineOptions: LineOptions)
fun drawMarker(markerOptions: MarkerOptions)
It is a code smell when the number of arguments in a method keeps growing as you add new features. That's why we created data holders to pass around.
3- A secondary constructor for LatLng
While working on this feature, I realized that a secondary constructor that builds the LatLng entity from double values would also be useful when mapping the entities of different map providers. I mentioned the reason why I use inline classes for Latitude and Longitude in the first episode.
Code:
inline class Latitude(val value: Float)
inline class Longitude(val value: Float)
data class LatLng(
val latitude: Latitude,
val longitude: Longitude
) {
constructor(latitude: Double, longitude: Double) : this(
Latitude(latitude.toFloat()),
Longitude(longitude.toFloat())
)
val latDoubleValue: Double
get() = latitude.value.toDouble()
val lngDoubleValue: Double
get() = longitude.value.toDouble()
}
Bounded Region
A bounded region is used to describe a particular area (in many cases it is rectangular) on a map. We usually need two coordinate pairs to describe a region: southwest and northeast. This Stack Overflow answer (https://stackoverflow.com/a/31029389) describes it well:
As expected, Mapbox, GMS, and HMS maps all provide LatLngBounds classes. However, they require a pair of coordinates to construct the bounds. In our case we only have one location for each attraction point, and we want to show a region with a given radius around that center on the map. We need to do a little extra work to calculate the coordinate pair, but first let's add a LatLngBounds entity to our domain module:
Code:
data class LatLngBounds(
val southwestCorner: LatLng,
val northeastCorner: LatLng
)
Implementation
First, let's see the big (literally!) picture:
Thanks to our clean architecture, it is very easy to add a new feature with a new use case. Let's start with the domain module as always:
Code:
/**
* Emits the list of waypoints with a given update interval
*/
class StartWaypointPlaybackUseCase
@Inject constructor(
private val routeInfoRepository: RouteInfoRepository
) {
suspend operator fun invoke(
points: List<Point>,
updateInterval: Long
): Flow<Point> {
return routeInfoRepository.startWaypointPlayback(points, updateInterval)
}
}
The user interacts with the app to start the playback of waypoints. I call this playback because playback is "the reproduction of previously recorded sounds or moving images." We have a list of points to be played back over a given time. We will move the map camera periodically from one bounded region to another. The waypoints are emitted from the datasource with a given update interval. The domain module doesn't know the implementation details; it sends the request to our datasource module.
Let's see our datasource module. We added a new method in RouteInfoDataRepository:
Code:
override suspend fun startWaypointPlayback(
points: List<Point>,
updateInterval: Long
): Flow<Point> = flow {
val routeInfo = gpxFileDatasource.parseGpxFile()
routeInfo.wayPoints.forEachIndexed { index, waypoint ->
if (index != 0) {
delay(updateInterval)
}
emit(waypoint)
}
}.flowOn(Dispatchers.Default)
Thanks to Kotlin coroutines, it is very simple to emit the points with a delay. Roman Elizarov describes the Flow API in the very neat diagram below. If you are interested in learning more about it, his talks are the best place to start.
Long story short, our app module invokes the use case from the domain module, and the domain module forwards the request to the datasource module. The corresponding repository class inside the datasource module gets the data from the GPX datasource, and the datasource module orchestrates the data flow.
For full content, you can visit HUAWEI Developer Forum.
Hello everyone, in this article we will learn how to use Huawei Awareness Kit with a foreground service to send a notification when a certain condition is met.
Huawei Awareness Kit enables us to observe some environmental factors such as time, location, behavior, audio device status, ambient light, weather and nearby beacons. So, why don’t we create our own conditions to be met and observe them even when the application is not running?
First of all, we need to do HMS Core integration to be able to use Awareness Kit. I will not go into the details of that because it is already covered here.
If you are done with the integration, let’s start coding.
Activity Class
We will keep our activity class pretty simple to prevent any confusion. It will only be responsible for starting the service:
Java:
public class MainActivity extends AppCompatActivity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
Intent serviceStartIntent = new Intent(this, MyService.class);
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
startForegroundService(serviceStartIntent);
}
else {
startService(serviceStartIntent);
}
}
}
However, we need to add the following permission to the manifest to be able to start a foreground service:
XML:
<uses-permission android:name="android.permission.FOREGROUND_SERVICE" />
Service Class
Now, let's talk about the service class. We are about to create a service that will keep running even when the application is killed. However, this comes with some restrictions. Since Android Oreo, if an application wants to start a foreground service, it must inform the user with a notification that stays visible during the lifetime of the foreground service. Also, this notification has to be passed to startForeground(). Therefore, our first job in the service class is to create this notification and call the startForeground() method with it:
Java:
Notification notification;
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O)
notification = createCustomNotification();
else
notification = new Notification();
startForeground(1234, notification);
And here is how we create the notification we need for SDK version 26 (Android Oreo) and later:
Java:
@RequiresApi(api = Build.VERSION_CODES.O)
private Notification createCustomNotification() {
    // The channel ID must match the one passed to the NotificationCompat.Builder below.
    NotificationChannel notificationChannel = new NotificationChannel("com.awarenesskit.demo", "name", NotificationManager.IMPORTANCE_HIGH);
    NotificationManager manager = (NotificationManager) getSystemService(Context.NOTIFICATION_SERVICE);
    manager.createNotificationChannel(notificationChannel);
    NotificationCompat.Builder notificationBuilder = new NotificationCompat.Builder(this, "com.awarenesskit.demo");
    return notificationBuilder
            .setSmallIcon(R.drawable.ic_notification)
            .setContentTitle("Observing headset status")
            .setPriority(NotificationCompat.PRIORITY_HIGH)
            .build();
}
Note: You should replace the application id above with the application id of your application.
Now, it is time to prepare the parameters to create a condition to be met, which is called a barrier:
Java:
// Define the broadcast action that the barrier will trigger (the action name here is arbitrary) and wrap it in a PendingIntent.
String barrierReceiverAction = getPackageName() + ".HEADSET_BARRIER_RECEIVER_ACTION";
Intent intent = new Intent(barrierReceiverAction);
PendingIntent pendingIntent = PendingIntent.getBroadcast(this, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT);
headsetBarrierReceiver = new HeadsetBarrierReceiver();
registerReceiver(headsetBarrierReceiver, new IntentFilter(barrierReceiverAction));
AwarenessBarrier headsetBarrier = HeadsetBarrier.connecting();
createBarrier(this, HEADSET_BARRIER_LABEL, headsetBarrier, pendingIntent);
Here we have sent the required parameters to a method which will create a barrier for observing headset status. If you want, you can use other awareness barriers too.
Creating a barrier is a simple and standard process, which will be taken care of by the following method:
Java:
private void createBarrier(Context context, String barrierLabel, AwarenessBarrier barrier, PendingIntent pendingIntent) {
BarrierUpdateRequest.Builder builder = new BarrierUpdateRequest.Builder();
BarrierUpdateRequest request = builder.addBarrier(barrierLabel, barrier, pendingIntent).build();
Awareness.getBarrierClient(context).updateBarriers(request)
.addOnSuccessListener(new OnSuccessListener<Void>() {
@Override
public void onSuccess(Void void1) {
System.out.println("Barrier Create Success");
}
})
.addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
System.out.println("Barrier Create Fail");
}
});
}
We will be done with the service class after adding the following methods:
Java:
@Override
public int onStartCommand(Intent intent, int flags, int startId) {
return START_STICKY;
}
@Override
public void onDestroy() {
unregisterReceiver(headsetBarrierReceiver);
super.onDestroy();
}
@Nullable
@Override
public IBinder onBind(Intent intent) {
return null;
}
And of course, we shouldn’t forget to add our service to the manifest file:
XML:
<service
android:name=".MyService"
android:enabled="true"
android:exported="true" />
Broadcast Receiver Class
Lastly, we need to create a broadcast receiver where we will observe and handle the changes in the headset status:
Java:
public class HeadsetBarrierReceiver extends BroadcastReceiver {
public static final String HEADSET_BARRIER_LABEL = "HEADSET_BARRIER_LABEL";
@Override
public void onReceive(Context context, Intent intent) {
BarrierStatus barrierStatus = BarrierStatus.extract(intent);
String barrierLabel = barrierStatus.getBarrierLabel();
int barrierPresentStatus = barrierStatus.getPresentStatus();
if (HEADSET_BARRIER_LABEL.equals(barrierLabel)) {
if (barrierPresentStatus == BarrierStatus.TRUE) {
System.out.println("The headset is connected.");
createNotification(context);
}
else if (barrierPresentStatus == BarrierStatus.FALSE) {
System.out.println("The headset is disconnected.");
}
}
}
When a change occurs in the headset status, this method will receive the information. Here, the value of barrierPresentStatus determines whether the headset is connected or disconnected.
At this point, we can detect that the headset has just been connected, so it is time to send a notification. The following method will take care of that:
Java:
private void createNotification(Context context) {
// Create PendingIntent to make user open the application when clicking on the notification
PendingIntent pendingIntent = PendingIntent.getActivity(context, 1234, new Intent(context, MainActivity.class), PendingIntent.FLAG_UPDATE_CURRENT);
NotificationCompat.Builder notificationBuilder = new NotificationCompat.Builder(context, "channelId")
.setSmallIcon(R.drawable.ic_headset)
.setContentTitle("Cool Headset!")
.setContentText("Want to listen to some music ?")
.setContentIntent(pendingIntent);
NotificationManager notificationManager = (NotificationManager) context.getSystemService(Context.NOTIFICATION_SERVICE);
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
NotificationChannel mChannel = new NotificationChannel("channelId", "ChannelName", NotificationManager.IMPORTANCE_DEFAULT);
notificationManager.createNotificationChannel(mChannel);
}
notificationManager.notify(1234, notificationBuilder.build());
}
Output
When the headset is connected, the following notification will be created even if the application is not running:
Final Thoughts
We have learned how to use Barrier API of Huawei Awareness Kit with a foreground service to observe the changes in environmental factors even when the application is not running.
As you may notice, the permanent notification indicating that the application is running in the background cannot be dismissed by the user, which can be annoying. Even though these notifications can be dismissed with some third-party applications, not every user has them, so you should think carefully before deciding to build such services.
In this case, observing the headset status was just an example, so it might not be the best scenario for this use case, but Huawei Awareness Kit has many other great features that you can use with foreground services in your projects.
References
You can check the complete project on GitHub.
Note that you will not be able to run this project as-is, because you don't have its agconnect-services.json file. Therefore, you should only take it as a reference when creating your own project.
Does Awareness Kit support wearable devices such as smart watches?
It can be so frustrating to lose track of a workout because the fitness app has stopped running in the background, for example when you turn off the screen or switch to another app to listen to music or watch a video during the workout. Talk about all of your sweat and effort going to waste!
Fitness apps work by recognizing and displaying the user's workout status in real time, using the sensors on the phone or wearable device. They can obtain and display complete workout records only if they keep running in the background. Since most users will turn off the screen or use other apps during a workout, staying alive in the background has become a must-have feature for fitness apps. However, to save battery power, most phones will restrict or even forcibly close apps once they are running in the background, causing the workout data to be incomplete. When building your own fitness app, it's important to keep this limitation in mind.
There are two tried and tested ways to keep fitness apps running in the background:
Instruct the user to manually configure the settings on their phones or wearable devices, for example, to disable battery optimization, or to allow the specific app to run in the background. However, this process can be cumbersome, and not easy to follow.
Or integrate development tools into your app, for example, Health Kit, which provides APIs that allow your app to keep running in the background during workouts, without losing track of any workout data.
The following details the process for integrating this kit.
Integration Procedure
1. Before you get started, apply for Health Kit on HUAWEI Developers, select the required data scopes, and integrate the Health SDK.
2. Obtain users' authorization, and apply for the scopes to read and write workout records.
3. Enable a foreground service to prevent your app from being frozen by the system, and call ActivityRecordsController in the foreground service to create a workout record that can run in the background.
4. Call beginActivityRecord of ActivityRecordsController to start the workout record. By default, an app will be allowed to run in the background for 10 minutes.
Code:
// Note that this refers to an Activity object.
ActivityRecordsController activityRecordsController = HuaweiHiHealth.getActivityRecordsController(this);
// 1. Build the start time of a new workout record.
long startTime = Calendar.getInstance().getTimeInMillis();
// 2. Build the ActivityRecord object and set the start time of the workout record.
ActivityRecord activityRecord = new ActivityRecord.Builder()
.setId("MyBeginActivityRecordId")
.setName("BeginActivityRecord")
.setDesc("This is ActivityRecord begin test!")
.setActivityTypeId(HiHealthActivities.RUNNING)
.setStartTime(startTime, TimeUnit.MILLISECONDS)
.build();
// 3. Construct the screen to be displayed when the workout record is running in the background. Note that you need to replace MyActivity with the Activity class of the screen.
ComponentName componentName = new ComponentName(this, MyActivity.class);
// 4. Construct a listener for the status change of the workout record.
OnActivityRecordListener activityRecordListener = new OnActivityRecordListener() {
@Override
public void onStatusChange(int statusCode) {
Log.i("ActivityRecords", "onStatusChange statusCode:" + statusCode);
}
};
// 5. Call beginActivityRecord to start the workout record.
Task<Void> task1 = activityRecordsController.beginActivityRecord(activityRecord, componentName, activityRecordListener);
// 6. ActivityRecord is successfully started.
task1.addOnSuccessListener(new OnSuccessListener<Void>() {
@Override
public void onSuccess(Void aVoid) {
Log.i("ActivityRecords", "MyActivityRecord begin success");
}
// 7. ActivityRecord fails to be started.
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
String errorCode = e.getMessage();
String errorMsg = HiHealthStatusCodes.getStatusCodeMessage(Integer.parseInt(errorCode));
Log.i("ActivityRecords", errorCode + ": " + errorMsg);
}
});
5. If the workout lasts for more than 10 minutes, call continueActivityRecord of ActivityRecordsController before each 10-minute period ends to request that the workout continue running for another 10 minutes.
Code:
// Note that this refers to an Activity object.
ActivityRecordsController activityRecordsController = HuaweiHiHealth.getActivityRecordsController(this);
// Call continueActivityRecord and pass the workout record ID for the record to continue in the background.
Task<Void> endTask = activityRecordsController.continueActivityRecord("MyBeginActivityRecordId");
endTask.addOnSuccessListener(new OnSuccessListener<Void>() {
@Override
public void onSuccess(Void aVoid) {
Log.i("ActivityRecords", "continue backgroundActivityRecord was successful!");
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
Log.i("ActivityRecords", "continue backgroundActivityRecord error");
}
});
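Note that the kit does not renew the 10-minute window for you; the call above has to be repeated while the workout is in progress. One simple way to do this, sketched below under my own assumptions (activityRecordsController kept as a field of your foreground service, and a 9-minute interval to stay safely inside the 10-minute window), is a Handler posted on the main looper:
Code:
// Renew the background permission slightly before each 10-minute window expires.
private static final long CONTINUE_INTERVAL_MS = 9 * 60 * 1000;
private final Handler continueHandler = new Handler(Looper.getMainLooper());
private final Runnable continueRunnable = new Runnable() {
    @Override
    public void run() {
        activityRecordsController.continueActivityRecord("MyBeginActivityRecordId");
        continueHandler.postDelayed(this, CONTINUE_INTERVAL_MS);
    }
};

// Start renewing once beginActivityRecord succeeds:
// continueHandler.postDelayed(continueRunnable, CONTINUE_INTERVAL_MS);
// Stop renewing when the workout ends:
// continueHandler.removeCallbacks(continueRunnable);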
6. When the user finishes the workout, call endActivityRecord of ActivityRecordsController to stop the record and stop keeping it alive in the background.
Code:
// Note that this refers to an Activity object.
final ActivityRecordsController activityRecordsController = HuaweiHiHealth.getActivityRecordsController(this);
// Call endActivityRecord to stop the workout record. The input parameter is null or the ID string of ActivityRecord.
// Stop a workout record of the current app by specifying the ID string as the input parameter.
// Stop all workout records of the current app by specifying null as the input parameter.
Task<List<ActivityRecord>> endTask = activityRecordsController.endActivityRecord("MyBeginActivityRecordId");
endTask.addOnSuccessListener(new OnSuccessListener<List<ActivityRecord>>() {
@Override
public void onSuccess(List<ActivityRecord> activityRecords) {
Log.i("ActivityRecords","MyActivityRecord End success");
// Return the list of workout records that have stopped.
if (activityRecords.size() > 0) {
for (ActivityRecord activityRecord : activityRecords) {
DateFormat dateFormat = DateFormat.getDateInstance();
DateFormat timeFormat = DateFormat.getTimeInstance();
Log.i("ActivityRecords", "Returned for ActivityRecord: " + activityRecord.getName() + "\n\tActivityRecord Identifier is "
+ activityRecord.getId() + "\n\tActivityRecord created by app is " + activityRecord.getPackageName()
+ "\n\tDescription: " + activityRecord.getDesc() + "\n\tStart: "
+ dateFormat.format(activityRecord.getStartTime(TimeUnit.MILLISECONDS)) + " "
+ timeFormat.format(activityRecord.getStartTime(TimeUnit.MILLISECONDS)) + "\n\tEnd: "
+ dateFormat.format(activityRecord.getEndTime(TimeUnit.MILLISECONDS)) + " "
+ timeFormat.format(activityRecord.getEndTime(TimeUnit.MILLISECONDS)) + "\n\tActivity:"
+ activityRecord.getActivityType());
}
} else {
// null will be returned if the workout record hasn't stopped.
Log.i("ActivityRecords","MyActivityRecord End response is null");
}
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
String errorCode = e.getMessage();
String errorMsg = HiHealthStatusCodes.getStatusCodeMessage(Integer.parseInt(errorCode));
Log.i("ActivityRecords",errorCode + ": " + errorMsg);
}
});
Note that calling the API for keeping your app running in the background is a sensitive operation and requires manual approval. Make sure that your app meets the data security and compliance requirements before applying for releasing it.
Conclusion
Health Kit allows you to build apps that continue tracking workouts in the background, even when the screen has been turned off, or another app has been opened to run in the front. It's a must-have for fitness app developers. Integrate the kit to get started today!
References
HUAWEI Developers
Development Procedure for Keeping Your App Running in the Background
Background
Around half a year ago I decided to start decorating my new house. Before getting started, I did lots of research on a variety of different topics relating to interior decoration, such as how to choose a consistent color scheme, which measurements to make and how to make them, and how to choose the right furniture. However, my preparations made me realize that no matter how well prepared you are, you're always going to run into many unexpected challenges. Before rushing to the furniture store, I listed all the different pieces of furniture that I wanted to place in my living room, including a sofa, tea table, potted plants, dining table, and carpet, and determined the expected dimensions, colors, and styles of these various items of furniture. However, when I finally got to the furniture store, the dizzying variety of choices had me confused, and I found it very difficult to imagine how the different choices of furniture would actually look in my actual living room. At that moment a thought came to my mind: wouldn't it be great if there was an app that allows users to upload images of their home and then freely select different furniture products to see how they'll look in their home? Such an app would surely save users wishing to decorate their home lots of time and unnecessary trouble, and reduce the risk of users being dissatisfied with the final decoration result.
That's when the idea of developing an app by myself came to my mind. My initial idea was to design an app that people could use to help them quickly satisfy their home decoration needs by allowing them to see what furniture would look like in their homes. The basic way the app works is that users first upload one or multiple images of a room they want to decorate, and then set a reference parameter, such as the distance between the floor and the ceiling. Armed with this information, the app would then automatically calculate the parameters of other areas in the room. Then, users can upload images of furniture they like into a virtual shopping cart. When uploading such images, users need to specify the dimensions of the furniture. From the editing screen, users can drag and drop furniture from the shopping cart onto the image of the room to preview the effect. But then a problem arises: images of furniture dragged and dropped into the room look pasted on and do not blend naturally with their surroundings.
By a stroke of luck, I happened to discover HMS Core AR Engine when looking for a solution for the aforementioned problem. This development kit provides the ability to integrate virtual objects realistically into the real world, which is exactly what my app needs. With its plane detection capability, my app will be able to detect the real planes in a home and allow users to place virtual furniture based on these planes; and with its hit test capability, users can interact with virtual furniture to change their position and orientation in a natural manner.
Next, I'd like to briefly introduce the two capabilities this development kit offers.
AR Engine tracks the illumination, planes, images, objects, surfaces, and other environmental information, to allow apps to integrate virtual objects into the physical world and look and behave like they would if they were real. Its plane detection capability identifies feature points in groups on horizontal and vertical planes, as well as the boundaries of the planes, ensuring that your app can place virtual objects on them.
In addition, the kit continuously tracks the location and orientation of devices relative to their surrounding environment, and establishes a unified geometric space between the virtual world and the physical world. The kit uses its hit test capability to map a point of interest that users tap on the screen to a point of interest in the real environment, from where a ray will be emitted pointing to the location of the device camera, and return the intersecting point between the ray and the plane. In this way, users can interact with any virtual object on their device screen.
Functions and Features
Plane detection: Both horizontal and vertical planes are supported.
Accuracy: The margin of error is around 2.5 cm when the target plane is 1 m away from the camera.
Texture recognition delay: < 1s
Supports polygon fitting and plane merging.
Demo
Hit test
As shown in the demo, the app is able to identify the floor plane, so that the virtual suitcase can move over it as if it were real.
Developing Plane Detection
1. Create a WorldActivity object. This example demonstrates how to use the world AR scenario of AR Engine.
Code:
public class WorldActivity extends BaseActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Initialize DisplayRotationManager.
        mDisplayRotationManager = new DisplayRotationManager(this);
        // Initialize WorldRenderManager.
        mWorldRenderManager = new WorldRenderManager(this, this);
    }

    // Create a gesture processor.
    private void initGestureDetector() {
        mGestureDetector = new GestureDetector(this, new GestureDetector.SimpleOnGestureListener() {
            // Override the gesture callbacks you need here.
        });
        mSurfaceView.setOnTouchListener(new View.OnTouchListener() {
            @Override
            public boolean onTouch(View v, MotionEvent event) {
                return mGestureDetector.onTouchEvent(event);
            }
        });
    }

    // Create ARWorldTrackingConfig in the onResume lifecycle.
    @Override
    protected void onResume() {
        super.onResume();
        mArSession = new ARSession(this.getApplicationContext());
        mConfig = new ARWorldTrackingConfig(mArSession);
        ...
    }

    // Refresh the session configuration.
    private void refreshConfig(int lightingMode) {
        // Set the focus mode.
        mConfig.setFocusMode(ARConfigBase.FocusMode.AUTO_FOCUS);
        mArSession.configure(mConfig);
    }
}
2. Initialize the WorldRenderManager class, which manages rendering related to world scenarios, including label rendering and virtual object rendering.
Code:
public class WorldRenderManager implements GLSurfaceView.Renderer {
    // Draw each frame.
    @Override
    public void onDrawFrame(GL10 unused) {
        // Set the OpenGL texture ID used to store the camera preview stream data.
        mSession.setCameraTextureName(mTextureDisplay.getExternalTextureId());
        // Update the calculation result of AR Engine. You are advised to call this API when your app needs to obtain the latest data.
        ARFrame arFrame = mSession.update();
        // Obtain the camera specifications of the current frame.
        ARCamera arCamera = arFrame.getCamera();
        // Return a projection matrix used for coordinate calculation, which can be used for the transformation from the camera coordinate system to the clip coordinate system.
        arCamera.getProjectionMatrix(projectionMatrix, PROJ_MATRIX_OFFSET, PROJ_MATRIX_NEAR, PROJ_MATRIX_FAR);
        mSession.getAllTrackables(ARPlane.class);
        ...
    }
}
3. Initialize the VirtualObject class, which provides properties of the virtual object and the necessary methods for rendering the virtual object.
Code:
public class VirtualObject {
    // Holds the properties of a virtual object and the methods needed to render it.
}
4. Initialize the ObjectDisplay class to draw virtual objects based on specified parameters.
Code:
public class ObjectDisplay {
    // Draws virtual objects based on the specified parameters.
}
Developing Hit Test
1. Initialize the WorldRenderManager class, which manages rendering related to world scenarios, including label rendering and virtual object rendering.
Code:
public class WorldRenderManager implements GLSurfaceView.Renderer {
// Pass the context.
public WorldRenderManager(Activity activity, Context context) {
mActivity = activity;
mContext = context;
…
}
// Set ARSession, which updates and obtains the latest data in OnDrawFrame.
public void setArSession(ARSession arSession) {
if (arSession == null) {
LogUtil.error(TAG, "setSession error, arSession is null!");
return;
}
mSession = arSession;
}
// Set ARWorldTrackingConfig to obtain the configuration mode.
public void setArWorldTrackingConfig(ARWorldTrackingConfig arConfig) {
if (arConfig == null) {
LogUtil.error(TAG, "setArWorldTrackingConfig error, arConfig is null!");
return;
}
mArWorldTrackingConfig = arConfig;
}
// Implement the onDrawFrame() method.
@Override
public void onDrawFrame(GL10 unused) {
mSession.setCameraTextureName(mTextureDisplay.getExternalTextureId());
ARFrame arFrame = mSession.update();
ARCamera arCamera = arFrame.getCamera();
...
}
// Output the hit result.
private ARHitResult hitTest4Result(ARFrame frame, ARCamera camera, MotionEvent event) {
    ARHitResult hitResult = null;
    List<ARHitResult> hitTestResults = frame.hitTest(event);
    // Iterate over all hit results and pick the most suitable one.
    for (int i = 0; i < hitTestResults.size(); i++) {
        ARHitResult hitResultTemp = hitTestResults.get(i);
        if (hitResultTemp == null) {
            continue;
        }
        ARTrackable trackable = hitResultTemp.getTrackable();
        // Determine whether the hit point is within the plane polygon.
        boolean isPlanHitJudge = trackable instanceof ARPlane
            && ((ARPlane) trackable).isPoseInPolygon(hitResultTemp.getHitPose());
        // Determine whether the point cloud is tapped and whether the point faces the camera.
        boolean isPointHitJudge = trackable instanceof ARPoint
            && ((ARPoint) trackable).getOrientationMode() == ARPoint.OrientationMode.ESTIMATED_SURFACE_NORMAL;
        // Select points on the plane preferentially.
        if (isPlanHitJudge || isPointHitJudge) {
            hitResult = hitResultTemp;
            if (trackable instanceof ARPlane) {
                break;
            }
        }
    }
    return hitResult;
}
}
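Once hitTest4Result() returns a hit on a plane, the virtual object is usually attached to that location through an anchor. The fragment below is only a sketch of that step: ARHitResult#createAnchor() follows the kit's API, while the VirtualObject constructor and the mVirtualObject field are hypothetical placeholders for your own rendering code.
Code:
// Called from onDrawFrame() when a queued tap event is available.
private void handleTap(ARFrame arFrame, ARCamera arCamera, MotionEvent tap) {
    ARHitResult hitResult = hitTest4Result(arFrame, arCamera, tap);
    if (hitResult == null) {
        return;
    }
    // Anchor the virtual object to the hit location so that it stays fixed in the real world.
    ARAnchor anchor = hitResult.createAnchor();
    // Hypothetical: wrap the anchor in your own VirtualObject so ObjectDisplay can render it.
    mVirtualObject = new VirtualObject(anchor);
}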
2. Create a WorldActivity object. This example demonstrates how to use the world AR scenario of AR Engine.
Code:
public class WorldActivity extends BaseActivity {
private ARSession mArSession;
private GLSurfaceView mSurfaceView;
private ARWorldTrackingConfig mConfig;
@Override
protected void onCreate(Bundle savedInstanceState) {
LogUtil.info(TAG, "onCreate");
super.onCreate(savedInstanceState);
setContentView(R.layout.world_java_activity_main);
mWorldRenderManager = new WorldRenderManager(this, this);
mWorldRenderManager.setDisplayRotationManage(mDisplayRotationManager);
mWorldRenderManager.setQueuedSingleTaps(mQueuedSingleTaps)
}
@Override
protected void onResume() {
    super.onResume();
if (!PermissionManager.hasPermission(this)) {
this.finish();
}
errorMessage = null;
if (mArSession == null) {
try {
if (!arEngineAbilityCheck()) {
finish();
return;
}
mArSession = new ARSession(this.getApplicationContext());
mConfig = new ARWorldTrackingConfig(mArSession);
refreshConfig(ARConfigBase.LIGHT_MODE_ENVIRONMENT_LIGHTING | ARConfigBase.LIGHT_MODE_ENVIRONMENT_TEXTURE);
} catch (Exception capturedException) {
setMessageWhenError(capturedException);
}
if (errorMessage != null) {
stopArSession();
return;
}
    }
    // Resume the session and the rendering surface so that tracking restarts.
    mArSession.resume();
    mDisplayRotationManager.registerDisplayListener();
    mSurfaceView.onResume();
}
@Override
protected void onPause() {
LogUtil.info(TAG, "onPause start.");
super.onPause();
if (mArSession != null) {
mDisplayRotationManager.unregisterDisplayListener();
mSurfaceView.onPause();
mArSession.pause();
}
LogUtil.info(TAG, "onPause end.");
}
@Override
protected void onDestroy() {
LogUtil.info(TAG, "onDestroy start.");
if (mArSession != null) {
mArSession.stop();
mArSession = null;
}
if (mWorldRenderManager != null) {
mWorldRenderManager.releaseARAnchor();
}
super.onDestroy();
LogUtil.info(TAG, "onDestroy end.");
}
...
}
Summary
If you've ever done any interior decorating, I'm sure you've wanted the ability to see what furniture would look like in your home without having to purchase them first. After all, most furniture isn't cheap and delivery and assembly can be quite a hassle. That's why apps that allow users to place and view virtual furniture in their real homes are truly life-changing technologies. HMS Core AR Engine can help greatly streamline the development of such apps. With its plane detection and hit test capabilities, the development kit enables your app to accurately detect planes in the real world, and then blend virtual objects naturally into the real world. In addition to virtual home decoration, this powerful kit also has a broad range of other applications. For example, you can leverage its capabilities to develop an AR video game, an AR-based teaching app that allows students to view historical artifacts in 3D, or an e-commerce app with a virtual try-on feature. Try AR Engine now and explore the unlimited possibilities it provides.
Reference
AR Engine Development Guide