It can be so frustrating to lose track of a workout because your fitness app stops running in the background the moment you turn off the screen, or switch to another app to listen to music or watch a video. All of that sweat and effort, gone to waste!
Fitness apps work by recognizing and displaying the user's workout status in real time, using sensors on the phone or wearable device. They can obtain and display complete workout records only if they keep running in the background. Since most users turn off the screen or use other apps during a workout, staying alive in the background is a must-have capability for fitness apps. However, to save battery power, most phones restrict or even forcibly close apps running in the background, leaving workout data incomplete. When building your own fitness app, it's important to keep this limitation in mind.
There are two tried and tested ways to keep fitness apps running in the background:
Instruct users to manually configure the settings on their phones or wearable devices, for example, to disable battery optimization or to allow the specific app to run in the background. However, this process can be cumbersome and hard to follow.
Or integrate a development kit into your app, for example, Health Kit, which provides APIs that allow your app to keep running in the background during workouts, without losing any workout data.
The following details the process for integrating this kit.
Integration Procedure
1. Before you get started, apply for Health Kit on HUAWEI Developers, select the required data scopes, and integrate the Health SDK.
2. Obtain users' authorization, and apply for the scopes to read and write workout records.
3. Enable a foreground service to prevent your app from being frozen by the system, and call ActivityRecordsController in the foreground service to create a workout record that can run in the background.
4. Call beginActivityRecord of ActivityRecordsController to start the workout record. By default, an app will be allowed to run in the background for 10 minutes.
Code:
// Note that this refers to an Activity object.
ActivityRecordsController activityRecordsController = HuaweiHiHealth.getActivityRecordsController(this);
// 1. Build the start time of a new workout record.
long startTime = Calendar.getInstance().getTimeInMillis();
// 2. Build the ActivityRecord object and set the start time of the workout record.
ActivityRecord activityRecord = new ActivityRecord.Builder()
.setId("MyBeginActivityRecordId")
.setName("BeginActivityRecord")
.setDesc("This is ActivityRecord begin test!")
.setActivityTypeId(HiHealthActivities.RUNNING)
.setStartTime(startTime, TimeUnit.MILLISECONDS)
.build();
// 3. Construct the screen to be displayed when the workout record is running in the background. Note that you need to replace MyActivity with the Activity class of the screen.
ComponentName componentName = new ComponentName(this, MyActivity.class);
// 4. Construct a listener for the status change of the workout record.
OnActivityRecordListener activityRecordListener = new OnActivityRecordListener() {
@Override
public void onStatusChange(int statusCode) {
Log.i("ActivityRecords", "onStatusChange statusCode:" + statusCode);
}
};
// 5. Call beginActivityRecord to start the workout record.
Task<Void> task1 = activityRecordsController.beginActivityRecord(activityRecord, componentName, activityRecordListener);
// 6. ActivityRecord is successfully started.
task1.addOnSuccessListener(new OnSuccessListener<Void>() {
@Override
public void onSuccess(Void aVoid) {
Log.i("ActivityRecords", "MyActivityRecord begin success");
}
// 7. ActivityRecord fails to be started.
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
String errorCode = e.getMessage();
String errorMsg = HiHealthStatusCodes.getStatusCodeMessage(Integer.parseInt(errorCode));
Log.i("ActivityRecords", errorCode + ": " + errorMsg);
}
});
5. If the workout lasts for more than 10 minutes, call continueActivityRecord of ActivityRecordsController before each 10-minute period ends to extend the background running time by another 10 minutes.
Code:
// Note that this refers to an Activity object.
ActivityRecordsController activityRecordsController = HuaweiHiHealth.getActivityRecordsController(this);
// Call continueActivityRecord and pass the workout record ID for the record to continue in the background.
Task<Void> endTask = activityRecordsController.continueActivityRecord("MyBeginActivityRecordId");
endTask.addOnSuccessListener(new OnSuccessListener<Void>() {
@Override
public void onSuccess(Void aVoid) {
Log.i("ActivityRecords", "continue backgroundActivityRecord was successful!");
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
Log.i("ActivityRecords", "continue backgroundActivityRecord error");
}
});
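Since each continueActivityRecord call only buys another 10 minutes, the renewal must be repeated for as long as the workout lasts. One way to drive those calls is a periodic scheduler. The sketch below uses a plain ScheduledExecutorService with a placeholder Runnable standing in for the continueActivityRecord call; the class name KeepAliveScheduler and the 9-minute period are illustrative choices, not part of the kit:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class KeepAliveScheduler {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Schedule renewAction to run repeatedly, once per periodMillis.
    // In the app, renewAction would wrap the continueActivityRecord call;
    // here it is just a placeholder Runnable.
    public ScheduledFuture<?> start(Runnable renewAction, long periodMillis) {
        return scheduler.scheduleAtFixedRate(
                renewAction, periodMillis, periodMillis, TimeUnit.MILLISECONDS);
    }

    // Stop renewing, e.g. once endActivityRecord has been called.
    public void stop() {
        scheduler.shutdownNow();
    }
}
```

A 9-minute period (540000 ms) leaves a safety margin before each 10-minute window expires.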
6. When the user finishes the workout, call endActivityRecord of ActivityRecordsController to stop the record and stop keeping it alive in the background.
Code:
// Note that this refers to an Activity object.
final ActivityRecordsController activityRecordsController = HuaweiHiHealth.getActivityRecordsController(this);
// Call endActivityRecord to stop the workout record. The input parameter is null or the ID string of ActivityRecord.
// Stop a workout record of the current app by specifying the ID string as the input parameter.
// Stop all workout records of the current app by specifying null as the input parameter.
Task<List<ActivityRecord>> endTask = activityRecordsController.endActivityRecord("MyBeginActivityRecordId");
endTask.addOnSuccessListener(new OnSuccessListener<List<ActivityRecord>>() {
@Override
public void onSuccess(List<ActivityRecord> activityRecords) {
Log.i("ActivityRecords","MyActivityRecord End success");
// Return the list of workout records that have stopped.
if (activityRecords.size() > 0) {
for (ActivityRecord activityRecord : activityRecords) {
DateFormat dateFormat = DateFormat.getDateInstance();
DateFormat timeFormat = DateFormat.getTimeInstance();
Log.i("ActivityRecords", "Returned for ActivityRecord: " + activityRecord.getName() + "\n\tActivityRecord Identifier is "
+ activityRecord.getId() + "\n\tActivityRecord created by app is " + activityRecord.getPackageName()
+ "\n\tDescription: " + activityRecord.getDesc() + "\n\tStart: "
+ dateFormat.format(activityRecord.getStartTime(TimeUnit.MILLISECONDS)) + " "
+ timeFormat.format(activityRecord.getStartTime(TimeUnit.MILLISECONDS)) + "\n\tEnd: "
+ dateFormat.format(activityRecord.getEndTime(TimeUnit.MILLISECONDS)) + " "
+ timeFormat.format(activityRecord.getEndTime(TimeUnit.MILLISECONDS)) + "\n\tActivity:"
+ activityRecord.getActivityType());
}
} else {
// null will be returned if the workout record hasn't stopped.
Log.i("ActivityRecords","MyActivityRecord End response is null");
}
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
String errorCode = e.getMessage();
String errorMsg = HiHealthStatusCodes.getStatusCodeMessage(Integer.parseInt(errorCode));
Log.i("ActivityRecords",errorCode + ": " + errorMsg);
}
});
Note that the API for keeping your app running in the background is a sensitive one, and calling it requires manual approval. Make sure that your app meets the data security and compliance requirements before applying for its release.
Conclusion
Health Kit allows you to build apps that continue tracking workouts in the background, even when the screen is off or another app is running in the foreground. It's a must-have for fitness app developers. Integrate the kit to get started today!
References
HUAWEI Developers
Development Procedure for Keeping Your App Running in the Background
This article originally appeared on the HUAWEI Developer Forum.
Forum link: https://forums.developer.huawei.com/forumPortal/en/home
About GymOut:
GymOut is a simple workout app that uses some features of the HMS Awareness Kit. In it, we set a gym location (here, the current location) and a workout time. On entering the gym, the app prompts us to plug in a headset to listen to workout music. The music stops playing when the workout is completed or when we leave the gym.
An Introduction to Awareness Kit:
HUAWEI Awareness Kit provides your app with the ability to obtain contextual information, including the user's current time, location, behavior, audio device status, ambient light, weather, and nearby beacons. Through these features, the app can gain insight into a user's current situation more efficiently, making it possible to deliver a smarter, more considerate user experience.
· Capture API
The Capture API allows the app to request the current user status, such as time, location, behavior, and whether a headset is connected. For example, it can request the current location to obtain the latitude and longitude.
· Barrier API
The Barrier API allows the app to set a combination of contextual conditions. When the preset conditions are met, the app receives a notification. We can even combine different contextual conditions to support different use cases. For example, we receive a notification once we enter a specified location.
Software Requirements
Android Studio
Java JDK 1.8 or later
Huawei Mobile Services (APK) 4.0.0.300 or later
Integration
1. Create a project in Android Studio and Huawei AGC.
2. Provide the SHA-256 Key in App Information Section.
3. Provide storage location. (Selected Singapore here)
4. Download the agconnect-services.json file from AGC and save it into the app directory.
5. In root build.gradle
Go to allprojects -> repositories and buildscript -> repositories, and add the given line.
Code:
maven { url 'http://developer.huawei.com/repo/' }
In dependencies, add the classpath:
Code:
classpath 'com.huawei.agconnect:agcp:1.2.1.301'
6. In app build.gradle
We have to add signingConfigs and buildTypes; otherwise, we would have to create a signed APK for Awareness testing.
Code:
signingConfigs {
release {
storeFile file('store.jks')
keyAlias 'mykey'
keyPassword 'hmsapp'
storePassword 'hmsapp'
v1SigningEnabled true
v2SigningEnabled true
}
}
buildTypes {
release {
signingConfig signingConfigs.release
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
}
debug {
signingConfig signingConfigs.release
debuggable true
}
}
Add Implementation
Code:
implementation 'com.huawei.hms:awareness:1.0.4.301'
Apply plugin
Code:
apply plugin: 'com.huawei.agconnect'
7. The app needs the following permissions:
Code:
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
<uses-permission android:name="com.huawei.hms.permission.ACTIVITY_RECOGNITION" />
<uses-permission android:name="android.permission.ACTIVITY_RECOGNITION" />
<uses-permission android:name="android.permission.ACCESS_BACKGROUND_LOCATION" />
Code Implementation:
In this application, we are using the following HMS features:
Capture API
Location Capture: For getting the gym latitude and longitude.
Barrier API
Location Barrier: To get a notification when entering or exiting the gym.
Headset Barrier: To get an alert to connect a headset for workout music.
Time Barrier: To send an alert once the specified workout time elapses.
The location is captured using the getLocation() function:
Code:
private void getLocation() {
Awareness.getCaptureClient(this).getLocation()
.addOnSuccessListener(new OnSuccessListener<LocationResponse>() {
@Override
public void onSuccess(LocationResponse locationResponse) {
Location location = locationResponse.getLocation();
latitude=location.getLatitude();
longitude=location.getLongitude();
Toast.makeText(getApplicationContext(),"Longitude:" + longitude
+ ",Latitude:" + latitude,Toast.LENGTH_SHORT).show();
mLogView.printLog("Longitude:" + longitude
+ ",Latitude:" + latitude);
mp = MediaPlayer.create(getApplicationContext(), R.raw.audio);
mp.setLooping(true);
alert_stayTimeDialog();
//add_locationBarrier_enter
AwarenessBarrier enterBarrier = LocationBarrier.enter(latitude, longitude, radius);
Utils.addBarrier(getApplicationContext(), ENTER_BARRIER_LABEL, enterBarrier, mPendingIntent);
AwarenessBarrier exitBarrier = LocationBarrier.exit(latitude, longitude, radius);
Utils.addBarrier(getApplicationContext(), EXIT_BARRIER_LABEL, exitBarrier, mPendingIntent);
}
})
.addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
Toast.makeText(getApplicationContext(),"Failed to get the location.",Toast.LENGTH_SHORT).show();
mLogView.printLog("Failed to get the location.");
Log.e(TAG, "Failed to get the location.", e);
}
});
mScrollView.postDelayed(()-> mScrollView.smoothScrollTo(0,mScrollView.getBottom()),200);
}
GymBarrierReceiver receives the broadcast sent by Awareness Kit when a barrier status changes:
Code:
final class GymBarrierReceiver extends BroadcastReceiver {
@Override
public void onReceive(Context context, Intent intent) {
BarrierStatus barrierStatus = BarrierStatus.extract(intent);
String label = barrierStatus.getBarrierLabel();
int barrierPresentStatus = barrierStatus.getPresentStatus();
switch (label) {
case ENTER_BARRIER_LABEL:
if (barrierPresentStatus == BarrierStatus.TRUE) {
mLogView.printLog("You are in Gym");
alert_startWorkOutDialog();
} else if (barrierPresentStatus == BarrierStatus.FALSE) {
// mLogView.printLog("You are away from Gym");
} else {
mLogView.printLog("The location status is unknown.");
}
break;
case STAY_BARRIER_LABEL:
if (barrierPresentStatus == BarrierStatus.TRUE) {
mLogView.printLog("You have to spend "
+ workOutTime + " minutes in Gym");
} else if (barrierPresentStatus == BarrierStatus.FALSE) {
mLogView.printLog("Your workout time in the Gym is over.");
} else {
mLogView.printLog("The location status is unknown.");
}
break;
case EXIT_BARRIER_LABEL:
if (barrierPresentStatus == BarrierStatus.TRUE) {
mLogView.printLog("You are exiting Gym");
mp.pause();
play_pause.setBackground(getDrawable(android.R.drawable.ic_media_play));
} else if (barrierPresentStatus == BarrierStatus.FALSE) {
// mLogView.printLog("You are in Gym");
} else {
mLogView.printLog("The location status is unknown.");
}
break;
case KEEPING_BARRIER_LABEL:
if (barrierPresentStatus == BarrierStatus.TRUE) {
mLogView.printLog("Audio is playing...");
play_pause.setVisibility(View.VISIBLE);
mp.start();
play_pause.setBackground(getDrawable(android.R.drawable.ic_media_pause));
} else if (barrierPresentStatus == BarrierStatus.FALSE) {
mLogView.printLog("Connect headset to play audio");
mp.pause();
play_pause.setBackground(getDrawable(android.R.drawable.ic_media_play));
} else {
mLogView.printLog("The headset status is unknown.");
}
break;
case CONNECTING_BARRIER_LABEL:
if (barrierPresentStatus == BarrierStatus.TRUE) {
mLogView.printLog("The headset is connecting. Audio is playing");
play_pause.setVisibility(View.VISIBLE);
mp.start();
play_pause.setBackground(getDrawable(android.R.drawable.ic_media_pause));
} else if (barrierPresentStatus == BarrierStatus.FALSE) {
} else {
mLogView.printLog("The headset status is unknown.");
}
break;
case DURING_TIME_PERIOD_BARRIER_LABEL:
if (barrierPresentStatus == BarrierStatus.TRUE) {
mLogView.printLog("Your Work out is "+workOutTime+" minutes.");
} else if (barrierPresentStatus == BarrierStatus.FALSE) {
mLogView.printLog("Work out time reached "+workOutTime+" minutes.");
} else {
mLogView.printLog("The time status is unknown.");
}
break;
default:
break;
}
mScrollView.postDelayed(()-> mScrollView.smoothScrollTo(0,mScrollView.getBottom()*3),200);
}
}
The MediaPlayer is added as follows:
Code:
MediaPlayer mp = MediaPlayer.create(this, R.raw.audio);
mp.setLooping(true);
A CountDownTimer notifies the user of the remaining workout time:
Code:
public void countDown(long workOutInMs){
new CountDownTimer(workOutInMs,1000) {
@Override
public void onTick(long millisUntilFinished) {
int seconds = (int) (millisUntilFinished / 1000);
int minutes = seconds / 60;
seconds = seconds % 60;
workOut_txt.setText(String.format("%d:%02d", minutes, seconds));
// counter++;
}
@Override
public void onFinish() {
workOut_txt.setText("Your work out is over");
mp.pause();
play_pause.setBackground(getDrawable(android.R.drawable.ic_media_play));
play_pause.setVisibility(View.GONE);
}
}.start();
}
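The minutes-and-seconds arithmetic in onTick() is easy to verify in isolation. The helper below (a hypothetical WorkoutTimerFormat class, not part of the app) mirrors that arithmetic exactly:

```java
import java.util.Locale;

public class WorkoutTimerFormat {
    // Same conversion as in onTick(): milliseconds -> "m:ss".
    public static String formatRemaining(long millisUntilFinished) {
        int seconds = (int) (millisUntilFinished / 1000);
        int minutes = seconds / 60;
        seconds = seconds % 60;
        return String.format(Locale.ROOT, "%d:%02d", minutes, seconds);
    }
}
```

For example, 125000 ms remaining is displayed as "2:05".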
The different barriers are added as follows:
Code:
AwarenessBarrier enterBarrier = LocationBarrier.enter(latitude, longitude, radius);
Utils.addBarrier(getApplicationContext(), ENTER_BARRIER_LABEL, enterBarrier, mPendingIntent);
AwarenessBarrier exitBarrier = LocationBarrier.exit(latitude, longitude, radius);
Utils.addBarrier(getApplicationContext(), EXIT_BARRIER_LABEL, exitBarrier, mPendingIntent);
AwarenessBarrier stayBarrier = LocationBarrier.stay(latitude, longitude, radius, workOutInMs);
Utils.addBarrier(getApplicationContext(), STAY_BARRIER_LABEL, stayBarrier, mPendingIntent);
AwarenessBarrier keepingConnectedBarrier = HeadsetBarrier.keeping(HeadsetStatus.CONNECTED);
Utils.addBarrier(getApplicationContext(), KEEPING_BARRIER_LABEL, keepingConnectedBarrier, mPendingIntent);
AwarenessBarrier connectingBarrier = HeadsetBarrier.connecting();
Utils.addBarrier(getApplicationContext(), CONNECTING_BARRIER_LABEL, connectingBarrier, mPendingIntent);
AwarenessBarrier timePeriodBarrier = TimeBarrier.duringTimePeriod(currentTimeStamp,
currentTimeStamp + workOutInMs);
Utils.addBarrier(getApplicationContext(), DURING_TIME_PERIOD_BARRIER_LABEL,
timePeriodBarrier, mPendingIntent);
The alert_stayTimeDialog() function captures the user's workout time.
Code:
void alert_stayTimeDialog(){
final EditText edittext = new EditText(this);
edittext.setText("0");
edittext.setInputType(InputType.TYPE_NUMBER_FLAG_SIGNED | InputType.TYPE_CLASS_NUMBER);
AlertDialog.Builder alert = new AlertDialog.Builder(
this);
alert.setMessage("Your Workout Time");
alert.setTitle("Enter time in minutes ");
alert.setView(edittext);
alert.setPositiveButton("OK", new DialogInterface.OnClickListener() {
public void onClick(DialogInterface dialog, int whichButton) {
// Whatever you want to do with the value
Editable YouEditTextValue = edittext.getText();
// OR
int stayTime = Integer.parseInt(edittext.getText().toString());
//Add the code after success
SharedPreferences.Editor editor = gymOutPref.edit();
editor.putBoolean("SET_WORKOUT", true);
editor.putInt("SET_WORKOUT_TIME", stayTime);
editor.commit();
workOutTime=stayTime;
set_workout.setEnabled(false);
set_workout.setBackground(getDrawable(R.drawable.button_style_disable));
delete_workout.setEnabled(true);
delete_workout.setBackground(getDrawable(R.drawable.button_style));
workOut_txt.setText("WorkOut Time:"+stayTime+" Mnts");
Toast.makeText(getApplicationContext(), "WorkOut Time: " + stayTime, Toast.LENGTH_LONG).show();
}
});
alert.setNegativeButton("Cancel", new DialogInterface.OnClickListener() {
public void onClick(DialogInterface dialog, int whichButton) {
// Whatever you want to do with the Cancel option.
}
});
alert.show();
}
Preface
HUAWEI Map Kit includes a route planning function, which offers a set of HTTPS-based APIs. These APIs are used to plan walking, cycling, and driving routes, as well as to calculate route distances, and they return route data in JSON format.
Route planning APIs are as follows:
Walking route planning API: Provides the function for planning walking routes within distances of 100 kilometers or less.
Cycling route planning API: Provides the function for planning cycling routes within distances of 100 kilometers or less.
Driving route planning API: Provides the function for planning driving routes.
Up to 3 routes can be returned for each request.
Up to 5 waypoints can be specified.
Routes can be planned for future travel.
Routes can be planned based on real-time traffic conditions.
Use Cases
Ride hailing: Real-time route planning and route planning for future travel can provide accurate price estimates for ride-hailing orders. The estimated time of arrival (ETA) can be calculated for multiple routes in batches during order dispatch, for dramatically enhanced efficiency.
Logistics: Driving and cycling route planning provides accurate routes, ETAs, and estimated road tolls for trunk and branch road logistics and logistics delivery.
Tourism: When booking hotels and designing tourism routes, users can determine the distance between hotels, scenic spots, and transport stations with greater ease, thanks to high-level route planning capabilities, and enjoy efficient, hassle-free travel at all times.
Preparations
Before using the route planning function, first obtain the API key in AppGallery Connect.
Note
If the API key contains special characters, you need to encode it using encodeURI. For example, if the original API key is ABC/DFG+, the conversion result is ABC%2FDFG%2B.
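In Java code, the percent-encoding the note describes (encodeURI is a JavaScript function) can be done with java.net.URLEncoder; for the characters in the example, "/" and "+", it produces the same result. The ApiKeyEncoder class below is an illustrative helper, not part of the Map Kit SDK:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class ApiKeyEncoder {
    // Percent-encode the API key before appending it to the request URL:
    // "/" becomes %2F and "+" becomes %2B, matching the note above.
    public static String encodeKey(String rawKey) {
        try {
            return URLEncoder.encode(rawKey, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new IllegalStateException("UTF-8 is always supported", e);
        }
    }
}
```

For example, encodeKey("ABC/DFG+") yields "ABC%2FDFG%2B".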
Declare the network permission in the AndroidManifest.xml file.
Code:
<!-- Network permission -->
<uses-permission android:name="android.permission.INTERNET" />
Development Procedure
1. Initialize a map for displaying planned routes.
Code:
private MapFragment mMapFragment;
private HuaweiMap hMap;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_directions);
mMapFragment = (MapFragment) getFragmentManager().findFragmentById(R.id.mapfragment_mapfragmentdemo);
mMapFragment.getMapAsync(this);
}
2. Obtain the current user location and use it as the start point for route planning.
Code:
private void getMyLocation() {
Task<Location> locationTask = LocationServices.getFusedLocationProviderClient(this).getLastLocation();
locationTask.addOnCompleteListener(param0 -> {
if (param0 != null) {
Location location = param0.getResult();
double Lat = location.getLatitude();
double Lng = location.getLongitude();
myLocation = new LatLng(Lat, Lng);
Log.d(TAG, " Lat is : " + Lat + ", Lng is : " + Lng);
CameraUpdate CameraUpdate = CameraUpdateFactory.newLatLng(myLocation);
hMap.moveCamera(CameraUpdate);
}
}).addOnFailureListener(param0 -> Log.d(TAG, "lastLocation is error"));
}
3. Add a map long-press event listener to capture the end point for route planning.
Code:
hMap.setOnMapLongClickListener(latLng -> {
if (null != mDestinationMarker) {
mDestinationMarker.remove();
}
if (null != mPolylines) {
for (Polyline polyline : mPolylines) {
polyline.remove();
}
}
enableAllBtn();
MarkerOptions options = new MarkerOptions().position(latLng).title("dest");
mDestinationMarker = hMap.addMarker(options);
mDestinationMarker.setAnchor(0.5f,1f);
StringBuilder dest = new StringBuilder(String.format(Locale.getDefault(), "%.6f", latLng.latitude));
dest.append(", ").append(String.format(Locale.getDefault(), "%.6f", latLng.longitude));
((TextInputEditText)findViewById(R.id.dest_input)).setText(dest);
mDest = latLng;
});
4. Generate a route planning request based on the specified start point and end point.
Code:
private JSONObject buildRequest() {
JSONObject request = new JSONObject();
try {
JSONObject origin = new JSONObject();
origin.put("lng", myLocation.longitude);
origin.put("lat", myLocation.latitude);
JSONObject destination = new JSONObject();
destination.put("lng", mDest.longitude);
destination.put("lat", mDest.latitude);
request.put("origin", origin);
request.put("destination", destination);
} catch (JSONException e) {
e.printStackTrace();
}
return request;
}
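The article does not show how the request body is actually submitted; one way is a plain HttpURLConnection POST. In the sketch below, the endpoint URL follows the format published in the Map Kit route planning documentation, but you should verify the host, path, and key parameter against the current docs before relying on them:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.io.UnsupportedEncodingException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class RoutePlanningClient {
    // Driving route planning endpoint (verify against the current Map Kit docs).
    private static final String DRIVING_ENDPOINT =
            "https://mapapi.cloud.huawei.com/mapApi/v1/routeService/driving";

    // Build the full request URL, percent-encoding the API key as required.
    public static String buildUrl(String apiKey) {
        try {
            return DRIVING_ENDPOINT + "?key=" + URLEncoder.encode(apiKey, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new IllegalStateException("UTF-8 is always supported", e);
        }
    }

    // POST the JSON body produced by buildRequest() and return the raw JSON response.
    public static String post(String urlWithKey, String jsonBody) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(urlWithKey).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json; charset=UTF-8");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(jsonBody.getBytes(StandardCharsets.UTF_8));
        }
        try (java.util.Scanner s = new java.util.Scanner(
                conn.getInputStream(), StandardCharsets.UTF_8.name()).useDelimiter("\\A")) {
            return s.hasNext() ? s.next() : "";
        }
    }
}
```

Remember to run the request off the main thread, since Android forbids network calls on it.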
5. Draw planned routes on the map based on the route planning response.
Code:
JSONObject route = new JSONObject(result);
JSONArray routes = route.optJSONArray("routes");
JSONObject route1 = routes.optJSONObject(0);
JSONArray paths = route1.optJSONArray("paths");
JSONObject path1 = paths.optJSONObject(0);
JSONArray steps = path1.optJSONArray("steps");
for (int i = 0; i < steps.length(); i++) {
PolylineOptions options = new PolylineOptions();
JSONObject step = steps.optJSONObject(i);
JSONArray polyline = step.optJSONArray("polyline");
for (int j = 0; j < polyline.length(); j++) {
JSONObject polyline_t = polyline.optJSONObject(j);
options.add(new LatLng(polyline_t.getDouble("lat"), polyline_t.getDouble("lng")));
}
Polyline pl = hMap.addPolyline(options.color(Color.BLUE).width(3));
mPolylines.add(pl);
}
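The nested structure being walked above (routes -> paths -> steps -> polyline) can also be exercised without the Android JSON classes. The PolylineParser below is a hypothetical, regex-based sketch that extracts the lat/lng pairs from a polyline fragment, assuming each point is a flat object with "lat" before "lng" as in the response shown; a real app should keep using a proper JSON parser:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PolylineParser {
    // Matches one point object of the form {"lat":48.1,"lng":11.5}.
    private static final Pattern POINT = Pattern.compile(
            "\\{\\s*\"lat\"\\s*:\\s*(-?[0-9.]+)\\s*,\\s*\"lng\"\\s*:\\s*(-?[0-9.]+)\\s*\\}");

    // Return each point as a double[2] of {lat, lng}.
    public static List<double[]> parse(String polylineJson) {
        List<double[]> points = new ArrayList<>();
        Matcher m = POINT.matcher(polylineJson);
        while (m.find()) {
            points.add(new double[] {
                    Double.parseDouble(m.group(1)),
                    Double.parseDouble(m.group(2))
            });
        }
        return points;
    }
}
```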
Demo Effects
Hello everyone, in this article we will learn how to use Huawei Awareness Kit with a foreground service to send a notification when a certain condition is met.
Huawei Awareness Kit enables us to observe some environmental factors such as time, location, behavior, audio device status, ambient light, weather and nearby beacons. So, why don’t we create our own conditions to be met and observe them even when the application is not running?
First of all, we need to do HMS Core integration to be able to use Awareness Kit. I will not go into the details of that because it is already covered here.
If you are done with the integration, let’s start coding.
Activity Class
We will keep our activity class pretty simple to prevent any confusion. It will only be responsible for starting the service:
Java:
public class MainActivity extends AppCompatActivity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
Intent serviceStartIntent = new Intent(this, MyService.class);
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
startForegroundService(serviceStartIntent);
}
else {
startService(serviceStartIntent);
}
}
}
However, we need to add the following permission to start a service correctly:
XML:
<uses-permission android:name="android.permission.FOREGROUND_SERVICE" />
Service Class
Now, let’s talk about the service class. We are about to create a service which will run even when the application is killed. However, this comes with some restrictions. Since Android Oreo, if an application wants to start a foreground service, it must inform the user with a notification that stays visible during the lifetime of the foreground service, and this notification must be passed to startForeground(). Therefore, our first job in the service class is to create this notification and call the startForeground() method with it:
Java:
Notification notification;
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O)
notification = createCustomNotification();
else
notification = new Notification();
startForeground(1234, notification);
And here is how we create the notification we need for SDK versions later than 25:
Java:
@RequiresApi(api = Build.VERSION_CODES.O)
private Notification createCustomNotification() {
NotificationChannel notificationChannel = new NotificationChannel("1234", "name", NotificationManager.IMPORTANCE_HIGH);
NotificationManager manager = (NotificationManager) getSystemService(Context.NOTIFICATION_SERVICE);
manager.createNotificationChannel(notificationChannel);
NotificationCompat.Builder notificationBuilder = new NotificationCompat.Builder(this, "com.awarenesskit.demo");
return notificationBuilder
.setSmallIcon(R.drawable.ic_notification)
.setContentTitle("Observing headset status")
.setPriority(NotificationManager.IMPORTANCE_HIGH)
.build();
}
Note: You should replace the application id above with the application id of your application.
Now, it is time to prepare the parameters to create a condition to be met, called a barrier:
Java:
// Action string and intent used to deliver barrier notifications to our receiver.
String barrierReceiverAction = getPackageName() + ".HEADSET_BARRIER_RECEIVER_ACTION";
Intent intent = new Intent(barrierReceiverAction);
PendingIntent pendingIntent = PendingIntent.getBroadcast(this, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT);
headsetBarrierReceiver = new HeadsetBarrierReceiver();
registerReceiver(headsetBarrierReceiver, new IntentFilter(barrierReceiverAction));
AwarenessBarrier headsetBarrier = HeadsetBarrier.connecting();
createBarrier(this, HEADSET_BARRIER_LABEL, headsetBarrier, pendingIntent);
Here we have sent the required parameters to a method which will create a barrier for observing headset status. If you want, you can use other awareness barriers too.
Creating a barrier is a simple and standard process, which is taken care of by the following method:
Java:
private void createBarrier(Context context, String barrierLabel, AwarenessBarrier barrier, PendingIntent pendingIntent) {
BarrierUpdateRequest.Builder builder = new BarrierUpdateRequest.Builder();
BarrierUpdateRequest request = builder.addBarrier(barrierLabel, barrier, pendingIntent).build();
Awareness.getBarrierClient(context).updateBarriers(request)
.addOnSuccessListener(new OnSuccessListener<Void>() {
@Override
public void onSuccess(Void void1) {
System.out.println("Barrier Create Success");
}
})
.addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
System.out.println("Barrier Create Fail");
}
});
}
We will be done with the service class after adding the following methods:
Java:
@Override
public int onStartCommand(Intent intent, int flags, int startId) {
return START_STICKY;
}
@Override
public void onDestroy() {
unregisterReceiver(headsetBarrierReceiver);
super.onDestroy();
}
@Nullable
@Override
public IBinder onBind(Intent intent) {
return null;
}
And of course, we shouldn’t forget to add our service to the manifest file:
XML:
<service
android:name=".MyService"
android:enabled="true"
android:exported="true" />
Broadcast Receiver Class
Lastly, we need to create a broadcast receiver where we will observe and handle the changes in the headset status:
Java:
public class HeadsetBarrierReceiver extends BroadcastReceiver {

    public static final String HEADSET_BARRIER_LABEL = "HEADSET_BARRIER_LABEL";

    @Override
    public void onReceive(Context context, Intent intent) {
        BarrierStatus barrierStatus = BarrierStatus.extract(intent);
        String barrierLabel = barrierStatus.getBarrierLabel();
        int barrierPresentStatus = barrierStatus.getPresentStatus();
        if (HEADSET_BARRIER_LABEL.equals(barrierLabel)) {
            if (barrierPresentStatus == BarrierStatus.TRUE) {
                System.out.println("The headset is connected.");
                createNotification(context);
            } else if (barrierPresentStatus == BarrierStatus.FALSE) {
                System.out.println("The headset is disconnected.");
            }
        }
    }
When the headset status changes, this method receives the event. The value of barrierPresentStatus tells us whether the headset has been connected or disconnected.
At this point, we can detect that the headset has just been connected, so it is time to send a notification. The following method takes care of that:
Java:
private void createNotification(Context context) {
    // Create a PendingIntent that opens the application when the user taps the notification.
    PendingIntent pendingIntent = PendingIntent.getActivity(context, 1234, new Intent(context, MainActivity.class), PendingIntent.FLAG_UPDATE_CURRENT);
    NotificationCompat.Builder notificationBuilder = new NotificationCompat.Builder(context, "channelId")
            .setSmallIcon(R.drawable.ic_headset)
            .setContentTitle("Cool Headset!")
            .setContentText("Want to listen to some music?")
            .setContentIntent(pendingIntent);
    NotificationManager notificationManager = (NotificationManager) context.getSystemService(Context.NOTIFICATION_SERVICE);
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
        NotificationChannel mChannel = new NotificationChannel("channelId", "ChannelName", NotificationManager.IMPORTANCE_DEFAULT);
        notificationManager.createNotificationChannel(mChannel);
    }
    notificationManager.notify(1234, notificationBuilder.build());
}
Output
When the headset is connected, the following notification will be created even if the application is not running:
Final Thoughts
We have learned how to use the Barrier API of Huawei Awareness Kit with a foreground service to observe changes in environmental factors even when the application is not running.
As you may have noticed, the permanent notification indicating that the application is running in the background cannot be dismissed by the user, which can be annoying. Although some third-party applications can dismiss such notifications, not every user has them, so think carefully before deciding to build such services.
Observing the headset status was just an example, and perhaps not the most compelling use case, but Huawei Awareness Kit offers many other features that you can combine with foreground services in your projects.
References
You can check the complete project on GitHub.
Note that you will not be able to run this project, because you don't have its agconnect-services.json file. Take it only as a reference for creating your own project.
Does Awareness Kit support wearable devices such as smart watches?
"John, have you seen my glasses?"
Our old friend John, a programmer at Huawei, has a grandpa who, despite his old age, is an avid reader. Leaning back, struggling to make out the newspaper through his glasses, but unable to take his eyes off the text: this was how his grandpa used to read, John explained.
Reading this way was harmful to his grandpa's vision, and it occurred to John that the ears could take over the role of "reading" from the eyes. He soon developed a text-reading app based on this idea: it recognizes text in a picture and then reads it aloud. Thanks to this app, John's grandpa can now "read" from the comfort of his rocking chair, without straining his eyes.
How to Implement
The user takes a picture of a text passage. The app then automatically identifies the location of the text within the picture, and adjusts the shooting angle to an angle directly facing the text.
The app recognizes and extracts the text from the picture.
The app converts the recognized text into audio output by leveraging text-to-speech technology.
These functions are easy to implement by relying on three services in HUAWEI ML Kit: document skew correction, text recognition, and text-to-speech (TTS).
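The three services act as consecutive stages of one pipeline: the corrected image feeds text recognition, and the recognized text feeds TTS. A plain-Java sketch of this data flow, using hypothetical stage functions rather than the ML Kit classes:

```java
import java.util.function.Function;

public class ReadingPipeline {
    // Models the app's flow as three chained stages:
    // captured image -> deskewed image -> recognized text -> audio.
    // The stage functions here are illustrative stand-ins, not ML Kit APIs.
    public static String process(String capturedImage,
                                 Function<String, String> skewCorrection,
                                 Function<String, String> textRecognition,
                                 Function<String, String> textToSpeech) {
        return textToSpeech.apply(textRecognition.apply(skewCorrection.apply(capturedImage)));
    }

    public static void main(String[] args) {
        String result = process("photo",
                img -> img + " [deskewed]",
                img -> "text from " + img,
                txt -> "audio of (" + txt + ")");
        System.out.println(result);
    }
}
```

In the real app, each lambda would be replaced by a call to the corresponding ML Kit service.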
Preparations
1. Configure the Huawei Maven repository address.
2. Add the build dependencies for the HMS Core SDK.
Code:
dependencies {
    // Import the base SDK.
    implementation 'com.huawei.hms:ml-computer-voice-tts:2.1.0.300'
    // Import the bee voice package.
    implementation 'com.huawei.hms:ml-computer-voice-tts-model-bee:2.1.0.300'
    // Import the eagle voice package.
    implementation 'com.huawei.hms:ml-computer-voice-tts-model-eagle:2.1.0.300'
    // Import a PDF file analyzer.
    implementation 'com.itextpdf:itextg:5.5.10'
}
Tap PREVIOUS or NEXT to turn to the previous or next page. Tap speak to start reading; tap it again to pause reading.
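The speak button is a simple start/pause toggle (the sample later implements this with the isPlay flag). A minimal standalone sketch; PlaybackToggle is a hypothetical helper class, not part of ML Kit:

```java
// Models the speak button: the first tap starts reading, the next tap pauses, and so on.
public class PlaybackToggle {
    private boolean playing = false;

    // Returns the action to perform for this tap: "start" or "pause".
    public String tap() {
        playing = !playing;
        return playing ? "start" : "pause";
    }

    public boolean isPlaying() {
        return playing;
    }

    public static void main(String[] args) {
        PlaybackToggle toggle = new PlaybackToggle();
        System.out.println(toggle.tap()); // start
        System.out.println(toggle.tap()); // pause
    }
}
```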
Development process
1. Create a TTS engine by using the custom configuration class MLTtsConfig. Here, on-device TTS is used as an example.
Java:
private void initTts() {
    // Set authentication information for your app to download the model package from Huawei's server.
    MLApplication.getInstance().setApiKey(AGConnectServicesConfig
            .fromContext(getApplicationContext()).getString("client/api_key"));
    // Create a TTS engine by using MLTtsConfig.
    mlTtsConfigs = new MLTtsConfig()
            // Set the language of the synthesized speech to English.
            .setLanguage(MLTtsConstants.TTS_EN_US)
            // Set the speaker to the English male voice (eagle).
            .setPerson(MLTtsConstants.TTS_SPEAKER_OFFLINE_EN_US_MALE_EAGLE)
            // Set the speech speed, whose range is (0, 5.0]. 1.0 indicates a normal speed.
            .setSpeed(0.8f)
            // Set the volume, whose range is (0, 2). 1.0 indicates a normal volume.
            .setVolume(1.0f)
            // Set the TTS mode to on-device.
            .setSynthesizeMode(MLTtsConstants.TTS_OFFLINE_MODE);
    mlTtsEngine = new MLTtsEngine(mlTtsConfigs);
    // Update the configuration while the engine is running.
    mlTtsEngine.updateConfig(mlTtsConfigs);
    // Pass the TTS callback to the engine to perform TTS.
    mlTtsEngine.setTtsCallback(callback);
    // Create an on-device TTS model manager.
    manager = MLLocalModelManager.getInstance();
    isPlay = false;
}
2. Create a TTS callback function for processing the TTS result.
Java:
MLTtsCallback callback = new MLTtsCallback() {
    @Override
    public void onError(String taskId, MLTtsError err) {
        // Processing logic for a TTS failure.
    }

    @Override
    public void onWarn(String taskId, MLTtsWarn warn) {
        // Warning handling that does not affect the service logic.
    }

    // Returns the mapping between the currently played segment and the text.
    // start: start position of the audio segment in the input text.
    // end (excluded): end position of the audio segment in the input text.
    @Override
    public void onRangeStart(String taskId, int start, int end) {
        // Process the mapping between the currently played segment and the text.
    }

    // taskId: ID of the TTS task corresponding to the audio.
    // audioFragment: audio data.
    // offset: offset of the audio segment to be transmitted in the queue. One TTS task corresponds to one TTS queue.
    // range: text area of the audio segment to be transmitted; range.first (included): start position; range.second (excluded): end position.
    @Override
    public void onAudioAvailable(String taskId, MLTtsAudioFragment audioFragment, int offset,
                                 Pair<Integer, Integer> range, Bundle bundle) {
        // Audio stream callback API, which returns the synthesized audio data to the app.
    }

    @Override
    public void onEvent(String taskId, int eventId, Bundle bundle) {
        // Callback method of a TTS event. eventId indicates the event name.
        boolean isInterrupted;
        switch (eventId) {
            case MLTtsConstants.EVENT_PLAY_START:
                // Called when playback starts.
                break;
            case MLTtsConstants.EVENT_PLAY_STOP:
                // Called when playback stops.
                isInterrupted = bundle.getBoolean(MLTtsConstants.EVENT_PLAY_STOP_INTERRUPTED);
                break;
            case MLTtsConstants.EVENT_PLAY_RESUME:
                // Called when playback resumes.
                break;
            case MLTtsConstants.EVENT_PLAY_PAUSE:
                // Called when playback pauses.
                break;
            // Pay attention to the following events when you only need the synthesized audio data and do not use the internal player for playback.
            case MLTtsConstants.EVENT_SYNTHESIS_START:
                // Called when TTS starts.
                break;
            case MLTtsConstants.EVENT_SYNTHESIS_END:
                // Called when TTS ends.
                break;
            case MLTtsConstants.EVENT_SYNTHESIS_COMPLETE:
                // TTS is complete. All synthesized audio streams have been passed to the app.
                isInterrupted = bundle.getBoolean(MLTtsConstants.EVENT_SYNTHESIS_INTERRUPTED);
                break;
            default:
                break;
        }
    }
};
3. Extract text from a PDF file.
Java:
private String loadText(String path) {
    String result = "";
    try {
        PdfReader reader = new PdfReader(path);
        result = result.concat(PdfTextExtractor.getTextFromPage(reader,
                mCurrentPage.getIndex() + 1).trim() + System.lineSeparator());
        reader.close();
    } catch (IOException e) {
        showToast(e.getMessage());
    }
    // Obtain the position of the header.
    int header = result.indexOf(System.lineSeparator());
    // Obtain the position of the footer.
    int footer = result.lastIndexOf(System.lineSeparator());
    if (footer != 0) {
        // Do not display the text in the header and footer.
        return result.substring(header, footer - 5);
    } else {
        return result;
    }
}
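The header and footer handling above relies on the positions of the first and last line separators in the extracted page text. The same idea can be shown standalone, without the PDF dependency; HeaderFooterTrimmer is an illustrative helper, and the exact trimming offsets in the article's code may differ:

```java
public class HeaderFooterTrimmer {
    // Drops the first line (header) and the last line (footer) of a page's text,
    // mirroring the indexOf/lastIndexOf approach used in loadText().
    public static String trim(String pageText) {
        String sep = System.lineSeparator();
        int header = pageText.indexOf(sep);
        int footer = pageText.lastIndexOf(sep);
        if (header >= 0 && footer > header) {
            return pageText.substring(header + sep.length(), footer);
        }
        // Fewer than two separators: nothing to safely strip.
        return pageText;
    }

    public static void main(String[] args) {
        String sep = System.lineSeparator();
        String page = "Chapter 1" + sep + "Actual body text." + sep + "Page 12";
        System.out.println(trim(page)); // Actual body text.
    }
}
```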
4. Perform TTS in on-device mode.
Java:
// Create an MLTtsLocalModel instance and set the speaker, so that the language model corresponding to the speaker can be downloaded through the model manager.
MLTtsLocalModel model = new MLTtsLocalModel.Factory(MLTtsConstants.TTS_SPEAKER_OFFLINE_EN_US_MALE_EAGLE).create();
manager.isModelExist(model).addOnSuccessListener(new OnSuccessListener<Boolean>() {
    @Override
    public void onSuccess(Boolean aBoolean) {
        // If the model is not downloaded, call the download API. Otherwise, call the TTS API of the on-device engine.
        if (aBoolean) {
            String source = loadText(mPdfPath);
            // Call the speak API to perform TTS. source indicates the text to be synthesized.
            mlTtsEngine.speak(source, MLTtsEngine.QUEUE_APPEND);
            if (isPlay) {
                // Pause playback.
                mlTtsEngine.pause();
                tv_speak.setText("speak");
            } else {
                // Resume playback.
                mlTtsEngine.resume();
                tv_speak.setText("pause");
            }
            isPlay = !isPlay;
        } else {
            // Call the API for downloading the on-device TTS model.
            downloadModel(MLTtsConstants.TTS_SPEAKER_OFFLINE_EN_US_MALE_EAGLE);
            showToast("The offline model has not been downloaded!");
        }
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        showToast(e.getMessage());
    }
});
5. Release resources when the current UI is destroyed.
Java:
@Override
protected void onDestroy() {
    super.onDestroy();
    try {
        if (mParcelFileDescriptor != null) {
            mParcelFileDescriptor.close();
        }
        if (mCurrentPage != null) {
            mCurrentPage.close();
        }
        if (mPdfRenderer != null) {
            mPdfRenderer.close();
        }
        if (mlTtsEngine != null) {
            mlTtsEngine.shutdown();
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Other Applicable Scenarios
TTS can be used across a broad range of scenarios. For example, you could integrate it into an education app to read bedtime stories to children, or integrate it into a navigation app, which could read out instructions aloud.
For more details, you can go to:
Reddit to join our developer discussion
GitHub to download demos and sample codes
Stack Overflow to solve any integration problems
Original Source
Well explained! Will it support all languages?
Displaying products as 3D models is too great an opportunity for an e-commerce app to ignore. With them, an app can give users a fresh, striking first impression of its products!
3D models play an important role in boosting user conversion: they let users carefully view a product from every angle before making a purchase. Combined with AR technology, which shows users how the product will look in reality, 3D models deliver an online shopping experience that can rival shopping offline.
Despite these advantages, 3D models have yet to be widely adopted, mainly because current 3D modeling technology is expensive to apply:
Technical requirements: Learning how to build a 3D model is time-consuming.
Time: It takes at least several hours to build a low-polygon model for a simple object, and even longer for a high-polygon one.
Cost: Building even a simple model averages more than one hundred dollars, and a complex one costs far more.
Luckily, 3D object reconstruction, a capability newly launched in HMS Core's 3D Modeling Kit, makes 3D model building straightforward. This capability automatically generates a textured 3D model of an object from images shot at different angles with an ordinary RGB camera, giving an app the ability to build and preview 3D models. For instance, once an e-commerce app has integrated 3D object reconstruction, it can generate and display 3D models of shoes, which users can freely zoom in and out on for a more immersive shopping experience.
Actual Effect
Technical Solutions
3D object reconstruction is implemented on both the device and cloud. RGB images of an object are collected on the device and then uploaded to the cloud. Key technologies involved in the on-cloud modeling process include object detection and segmentation, feature detection and matching, sparse/dense point cloud computing, and texture reconstruction. Finally, the cloud outputs an OBJ file (a commonly used 3D model file format) of the generated 3D model with 40,000 to 200,000 patches.
Preparations
1. Configuring a Dependency on the 3D Modeling SDK
Open the app-level build.gradle file and add a dependency on the 3D Modeling SDK in the dependencies block.
Code:
// Build a dependency on the 3D Modeling SDK.
implementation 'com.huawei.hms:modeling3d-object-reconstruct:1.0.0.300'
2. Configuring AndroidManifest.xml
Open the AndroidManifest.xml file in the main folder. Add the following information before <application> to apply for the storage read and write permissions and camera permission.
Code:
<!-- Permission to read data from and write data into storage. -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<!-- Permission to use the camera. -->
<uses-permission android:name="android.permission.CAMERA" />
Development Procedure
1. Configuring the Storage Permission Application
In the onCreate() method of MainActivity, check whether the storage read and write permissions have been granted; if not, apply for them by using requestPermissions.
Code:
if (EasyPermissions.hasPermissions(MainActivity.this, PERMISSIONS)) {
    Log.i(TAG, "Permissions OK");
} else {
    EasyPermissions.requestPermissions(MainActivity.this, "To use this app, you need to enable the permission.",
            RC_CAMERA_AND_EXTERNAL_STORAGE, PERMISSIONS);
}
Check the application result. If the permissions are not granted, prompt the user to grant them.
Code:
@Override
public void onPermissionsGranted(int requestCode, @NonNull List<String> perms) {
    Log.i(TAG, "permissions = " + perms);
    if (requestCode == RC_CAMERA_AND_EXTERNAL_STORAGE && PERMISSIONS.length == perms.size()) {
        initView();
        initListener();
    }
}

@Override
public void onPermissionsDenied(int requestCode, @NonNull List<String> perms) {
    if (EasyPermissions.somePermissionPermanentlyDenied(this, perms)) {
        new AppSettingsDialog.Builder(this)
                .setRequestCode(RC_CAMERA_AND_EXTERNAL_STORAGE)
                .setRationale("To use this app, you need to enable the permission.")
                .setTitle("Insufficient permissions")
                .build()
                .show();
    }
}
2. Creating a 3D Object Reconstruction Configurator
Code:
// Set the PICTURE mode.
Modeling3dReconstructSetting setting = new Modeling3dReconstructSetting.Factory()
        .setReconstructMode(Modeling3dReconstructConstants.ReconstructMode.PICTURE)
        .create();
3. Creating a 3D Object Reconstruction Engine and Initializing the Task
Call getInstance() of Modeling3dReconstructEngine and pass the current context to create an instance of the 3D object reconstruction engine.
Code:
// Create an engine.
modeling3dReconstructEngine = Modeling3dReconstructEngine.getInstance(mContext);
Use the engine to initialize the task.
Code:
// Initialize the 3D object reconstruction task.
modeling3dReconstructInitResult = modeling3dReconstructEngine.initTask(setting);
// Obtain the task ID.
String taskId = modeling3dReconstructInitResult.getTaskId();
4. Creating a Listener Callback to Process the Image Upload Result
Create a listener callback that allows you to configure the operations triggered upon upload success and failure.
Code:
// Create an upload listener callback.
private final Modeling3dReconstructUploadListener uploadListener = new Modeling3dReconstructUploadListener() {
@Override
public void onUploadProgress(String taskId, double progress, Object ext) {
// Upload progress.
}
@Override
public void onResult(String taskId, Modeling3dReconstructUploadResult result, Object ext) {
if (result.isComplete()) {
isUpload = true;
ScanActivity.this.runOnUiThread(new Runnable() {
@Override
public void run() {
progressCustomDialog.dismiss();
Toast.makeText(ScanActivity.this, getString(R.string.upload_text_success), Toast.LENGTH_SHORT).show();
}
});
TaskInfoAppDbUtils.updateTaskIdAndStatusByPath(new Constants(ScanActivity.this).getCaptureImageFile() + manager.getSurfaceViewCallback().getCreateTime(), taskId, 1);
}
}
@Override
public void onError(String taskId, int errorCode, String message) {
isUpload = false;
runOnUiThread(new Runnable() {
@Override
public void run() {
progressCustomDialog.dismiss();
Toast.makeText(ScanActivity.this, "Upload failed." + message, Toast.LENGTH_SHORT).show();
LogUtil.e("taskid" + taskId + "errorCode: " + errorCode + " errorMessage: " + message);
}
});
}
};
5. Passing the Upload Listener Callback to the Engine to Upload Images
Pass the upload listener callback to the engine, then call uploadFile() with the task ID obtained in step 3 and the path of the images to be uploaded. The images are then uploaded to the cloud server.
Code:
// Pass the listener callback to the engine.
modeling3dReconstructEngine.setReconstructUploadListener(uploadListener);
// Start uploading.
modeling3dReconstructEngine.uploadFile(taskId, filePath);
6. Querying the Task Status
Call getInstance of Modeling3dReconstructTaskUtils, passing the current context, to create a task processing instance.
Code:
// Create a task processing instance.
modeling3dReconstructTaskUtils = Modeling3dReconstructTaskUtils.getInstance(Modeling3dDemo.getApp());
Call queryTask of the task processing instance to query the status of the 3D object reconstruction task.
Code:
// Query the task status, which can be: 0 (images to be uploaded), 1 (image upload completed), 2 (model being generated), 3 (model generation completed), or 4 (model generation failed).
Modeling3dReconstructQueryResult queryResult = modeling3dReconstructTaskUtils.queryTask(task.getTaskId());
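The numeric status codes returned by the query can be mapped to readable labels before being shown to the user. A minimal sketch; the statusLabel helper is hypothetical, while the code values come from the comment above:

```java
public class TaskStatus {
    // Maps the task status codes documented for queryTask() to readable labels.
    public static String statusLabel(int status) {
        switch (status) {
            case 0: return "Images to be uploaded";
            case 1: return "Image upload completed";
            case 2: return "Model being generated";
            case 3: return "Model generation completed";
            case 4: return "Model generation failed";
            default: return "Unknown status: " + status;
        }
    }

    public static void main(String[] args) {
        System.out.println(statusLabel(3)); // Model generation completed
    }
}
```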
7. Creating a Listener Callback to Process the Model File Download Result
Create a listener callback that allows you to configure the operations triggered upon download success and failure.
Code:
// Create a download listener callback.
private Modeling3dReconstructDownloadListener modeling3dReconstructDownloadListener = new Modeling3dReconstructDownloadListener() {
@Override
public void onDownloadProgress(String taskId, double progress, Object ext) {
((Activity) mContext).runOnUiThread(new Runnable() {
@Override
public void run() {
dialog.show();
}
});
}
@Override
public void onResult(String taskId, Modeling3dReconstructDownloadResult result, Object ext) {
((Activity) mContext).runOnUiThread(new Runnable() {
@Override
public void run() {
Toast.makeText(getContext(), "Download complete", Toast.LENGTH_SHORT).show();
TaskInfoAppDbUtils.updateDownloadByTaskId(taskId, 1);
dialog.dismiss();
}
});
}
@Override
public void onError(String taskId, int errorCode, String message) {
LogUtil.e(taskId + " <---> " + errorCode + message);
((Activity) mContext).runOnUiThread(new Runnable() {
@Override
public void run() {
Toast.makeText(getContext(), "Download failed." + message, Toast.LENGTH_SHORT).show();
dialog.dismiss();
}
});
}
};
8. Passing the Download Listener Callback to the Engine to Download the File of the Generated Model
Pass the download listener callback to the engine, then call downloadModel with the task ID obtained in step 3 and the path for saving the model file.
Code:
// Pass the download listener callback to the engine.
modeling3dReconstructEngine.setReconstructDownloadListener(modeling3dReconstructDownloadListener);
// Download the model file.
modeling3dReconstructEngine.downloadModel(appDb.getTaskId(), appDb.getFileSavePath());
More Information
The object should have a rich texture, be medium-sized, and be a rigid body. It should not be reflective, transparent, or semi-transparent. Supported object types include goods (like plush toys, bags, and shoes), furniture (like sofas), and cultural relics (such as bronzes, stone artifacts, and wooden artifacts).
The object dimension should be within the range from 15 x 15 x 15 cm to 150 x 150 x 150 cm. (A larger dimension requires a longer time for modeling.)
3D object reconstruction does not support modeling for the human body and face.
Ensure the following requirements are met during image collection: Place a single object on a stable plane of pure color. The environment should not be dark or dazzling. Keep all images in focus, free from blur caused by motion or shaking. Take images from various angles, including the bottom, flat, and top (it is advised that you upload more than 50 images for an object). Move the camera as slowly as possible, and do not change the shooting angle mid-shot. Lastly, make the object-to-image ratio as large as possible, and ensure all parts of the object are captured.
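Some of these requirements can be checked on the client before uploading, to avoid wasting a modeling task on an unsuitable capture set. A minimal sketch with a hypothetical CaptureCheck helper (not part of the SDK); the thresholds mirror the guidance above:

```java
public class CaptureCheck {
    // Recommended minimum number of images per object, per the collection guidance.
    static final int MIN_IMAGES = 50;
    // Supported object dimension range in centimeters, per side.
    static final double MIN_SIDE_CM = 15.0;
    static final double MAX_SIDE_CM = 150.0;

    // Returns true if the capture set looks acceptable for upload.
    public static boolean looksAcceptable(int imageCount, double widthCm, double heightCm, double depthCm) {
        boolean enoughImages = imageCount >= MIN_IMAGES;
        boolean sizeOk = inRange(widthCm) && inRange(heightCm) && inRange(depthCm);
        return enoughImages && sizeOk;
    }

    private static boolean inRange(double sideCm) {
        return sideCm >= MIN_SIDE_CM && sideCm <= MAX_SIDE_CM;
    }

    public static void main(String[] args) {
        System.out.println(looksAcceptable(60, 30, 30, 30)); // true
        System.out.println(looksAcceptable(20, 30, 30, 30)); // false
    }
}
```

Such a check cannot verify focus or lighting, of course; those still depend on the user following the shooting guidance.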
That's all for the sample code of 3D object reconstruction. Try integrating it into your app and build your own 3D models!
References
For more details, you can go to:
3D Modeling Kit official website
3D Modeling Kit Development Documentation page, to find the documents you need
Reddit to join our developer discussion
GitHub to download 3D Modeling Kit sample codes
Stack Overflow to solve any integration problems