[GUIDE][TUTORIAL][SHARE] Create Floating Window out of your App! - Galaxy Y GT-S5360 Themes and Apps

Note: I'm just sharing the work of XDA member klinkdawg24. I haven't tested this method myself, but I'd ask people here to try implementing it in their own apps and see how it goes. Here's the guide:
klinkdawg24 said:
This is a project that I whipped together to show developers how to implement a floating window type popup for their apps. An example usage would be quick reply to an SMS message, an email, or any other time quick reply would come in handy. It isn't limited to just that, though; using this method, you will be able to completely recreate any activity in windowed form with very few code edits.
My brother and I have used this to create a quick reply for our app, Sliding Messaging Pro. The idea is basically the same as Halo, with almost the same implementation. We do it by extending the MainActivity in a popup class, then using an overridden setUpWindow() function to set up your MainActivity in the window.
The full source is available on my GitHub, found here: Floating Window GitHub
Play Store Link: Floating Window Demo
So now to the actual steps for creating your own "floating window popup" out of your app's MainActivity:
1.) First, we should add the styles that we are going to use to the styles.xml file.
These are the styles for the animation. Of course you can define your own animation if you choose; these will bring the window up from the bottom of the screen and then close it back down to the bottom of the screen.
Code:
@anim/activity_slide_up
@anim/activity_slide_down
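For reference, those two values normally sit inside a style whose window enter animation is the slide-up and whose exit animation is the slide-down. Something like the following (the attribute names here are an assumption rather than a copy of the original styles.xml, so check the GitHub source):
Code:
<!-- Assumed reconstruction; verify against the original styles.xml on GitHub -->
<style name="PopupAnimation">
    <item name="android:windowEnterAnimation">@anim/activity_slide_up</item>
    <item name="android:windowExitAnimation">@anim/activity_slide_down</item>
</style>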
Now, this style is for the actual popup dialog box. You can play with the different values here all you want. We end up overriding some of these when we set up the window anyway, such as the dimming.
Code:
false
stateUnchanged
@null
true
true
@null
@style/PopupAnimation
true
false
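For reference, one plausible shape for this theme, assuming the values above map to the usual dialog window attributes in the order listed (this is a reconstruction, not the original styles.xml, so check the GitHub source for the exact definition):
Code:
<!-- Assumed reconstruction; the parent theme and attribute names are guesses based on the listed values -->
<style name="Theme.FloatingWindow.Popup" parent="android:style/Theme.Holo.Light.Dialog">
    <item name="android:windowIsFloating">false</item>
    <item name="android:windowSoftInputMode">stateUnchanged</item>
    <item name="android:windowBackground">@null</item>
    <item name="android:windowIsTranslucent">true</item>
    <item name="android:windowActionBar">true</item>
    <item name="android:windowContentOverlay">@null</item>
    <item name="android:windowAnimationStyle">@style/PopupAnimation</item>
    <item name="android:windowCloseOnTouchOutside">true</item>
    <item name="android:backgroundDimEnabled">false</item>
</style>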
2.) Now that your styles are set up, let's go ahead and add the new Popup activity to the manifest. You can see that we used the Theme.FloatingWindow.Popup style that we just defined in the last step. You will also have to add
Code:
xmlns:tools="http://schemas.android.com/tools"
to the xmlns declarations at the top of the manifest.
Code:
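For illustration, the activity entry that this step describes would look something along these lines (the exact attributes may differ from the original project, so check the GitHub source; the theme name comes from step 1 and the class name from step 4):
Code:
<!-- Illustrative only; the original declaration may carry additional attributes -->
<activity
    android:name=".PopupMainActivity"
    android:theme="@style/Theme.FloatingWindow.Popup" />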
3.) One more thing has to go in before all the errors are resolved: the animations. Just copy and paste these two animations into separate files under the res/anim folder. Name one activity_slide_down.xml and the other activity_slide_up.xml.
activity_slide_down.xml
Code:
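A minimal slide-down animation matching the description above (window closing toward the bottom of the screen) could look like this, though the original file may use a different duration or interpolator:
Code:
<?xml version="1.0" encoding="utf-8"?>
<!-- Assumed reconstruction: slides the window from its position down off the bottom of the screen -->
<translate xmlns:android="http://schemas.android.com/apk/res/android"
    android:fromYDelta="0%"
    android:toYDelta="100%"
    android:duration="300" />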
activity_slide_up.xml
Code:
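And a matching slide-up animation (again an assumed reconstruction rather than the original file):
Code:
<?xml version="1.0" encoding="utf-8"?>
<!-- Assumed reconstruction: slides the window up from the bottom of the screen into place -->
<translate xmlns:android="http://schemas.android.com/apk/res/android"
    android:fromYDelta="100%"
    android:toYDelta="0%"
    android:duration="300" />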
4.) Now that everything should be set up, we will make the actual popup activity! You can view my full code for this from the GitHub link, but basically, we are going to make a PopupMainActivity.java class and extend the MainActivity class. If you understand inheritance at all in Java, you will know what this is doing. It is going to allow this PopupMainActivity class, when started, to override methods from the MainActivity to produce different outcomes.
The method that actually makes the windowed popup is one I have named setUpWindow(). This means that whenever this activity is called instead of the main activity, it will use this method instead of the main activity's setUpWindow() method. This is very convenient since we don't want to do any more work than we need to in this class. So here is the code for that class:
Code:
/**
 * This method overrides the MainActivity method to set up the actual window for the popup.
 * This is really the only method needed to turn the app into popup form. Any other methods would change the behavior of the UI.
 * Call this method at the beginning of the main activity.
 * You can't call setContentView(...) before calling the window service because it will throw an error every time.
 */
@Override
public void setUpWindow() {
    // Creates the layout for the window and the look of it
    requestWindowFeature(Window.FEATURE_ACTION_BAR);
    getWindow().setFlags(WindowManager.LayoutParams.FLAG_DIM_BEHIND,
            WindowManager.LayoutParams.FLAG_DIM_BEHIND);

    // Params for the window.
    // You can easily set the alpha and the dim behind the window from here
    WindowManager.LayoutParams params = getWindow().getAttributes();
    params.alpha = 1.0f;    // lower than one makes it more transparent
    params.dimAmount = 0f;  // set it higher if you want to dim behind the window
    getWindow().setAttributes(params);

    // Gets the display size so that you can set the window to a percent of that
    Display display = getWindowManager().getDefaultDisplay();
    Point size = new Point();
    display.getSize(size);
    int width = size.x;
    int height = size.y;

    // You could also easily use an integer value from the shared preferences to set the percent
    if (height > width) {
        getWindow().setLayout((int) (width * .9), (int) (height * .7));
    } else {
        getWindow().setLayout((int) (width * .7), (int) (height * .8));
    }
}
You can see what it does from the comments, but basically, it just initializes your main activity in windowed form, which is exactly what we are trying to do!
5.) Now, to actually have this method override something in the MainActivity, you must add the same method there. In my example, the method is just blank and has no implementation because I don't want it to do anything extra for the full app. Add that empty method and the error that "your method doesn't override anything in the super class" will go away. Then you actually have to call this method from your MainActivity's onCreate() method so that it will set up the windowed app when it is supposed to.
Code:
/**
 * Called when the activity is first created to set up all of the features.
 */
@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    context = this;

    // If the user is in the PopupMainActivity, setUpWindow() is called from that class;
    // otherwise it calls the version in this class, which has no implementation.
    setUpWindow();

    // Make sure to set your content view AFTER you have set up the window or it will crash.
    setContentView(R.layout.main);

    // Again, this will call either the function from this class or the PopupMainActivity one,
    // depending on where the user is
    setUpButton();
}
As you can see from the code, I called this method before I set the content view for the activity. This is very important; otherwise you will get a runtime exception and the app will crash every time.
After you are done with that, you have successfully set up a program that will change from a windowed state to a full app state depending on which class is called. For example, if you set up an intent to open the floating window app, it would look something like this:
Code:
Intent window = new Intent(context, com.klinker.android.floating_window.PopupMainActivity.class);
window.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
startActivity(window);
You are still able to call the MainActivity class like you normally would, with an intent like this:
Code:
Intent fullApp = new Intent(context, MainActivity.class);
fullApp.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
startActivity(fullApp);
That is just about it. Obviously my solution and demo are very simple implementations of what this can do. You can see from the demo app that I simply go back and forth between the activities with a button, and I have an EditText at the bottom to demonstrate how the window resizes.
You can do almost anything with this, though. It will work with any layouts you have set up and is a very cool way for your users to experience your app in a different light. If you want to change how something works in the popup part of the app, all you have to do is factor that portion of code out into its own method, then override that method from the popup activity, so the possibilities are endless.
Credit for this goes to the Paranoid Android team, of course, for inspiring the idea, and then to Jacob Klinker (klinkdawg) and myself (Luke Klinker) for the implementation without framework tweaks or custom ROMs. This should work for any user, no matter the phone, Android version, or custom ROM.
I honestly have no clue if it will work on anything below 4.0, but if you test it and find out, sound off in the comments!
Thanks and enjoy! I hope to see lots of dynamic and awesome popups in the future, because it really is just that simple; it won't take you more than half an hour to get this working!
Original Thread : http://forum.xda-developers.com/showthread.php?t=2488094

Related

[Q] Changing a TextView in a separate Activity?

Hey guys,
I am trying to change the properties of a TextView that looks like this:
In a separate Activity that looks like this:
I tried to do this with bundles but I can't get it to work.
This is what my BookActivity looks like:
Code:
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.book_activity);

    // this is where the size property comes in
    Integer size = getIntent().getExtras().getInt("SGRkey");
    TextView test2 = (TextView) findViewById(R.id.booktext);
    test2.setTextSize(size);

    Spinner spinner = (Spinner) findViewById(R.id.kapitelspinner);
    ArrayAdapter<CharSequence> adapter = ArrayAdapter.createFromResource(
            this, R.array.kapitel_array, android.R.layout.simple_spinner_item);
    adapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item);
    spinner.setAdapter(adapter);
    spinner.setOnItemSelectedListener(new MyOnItemSelectedListener());
}

public class MyOnItemSelectedListener implements OnItemSelectedListener {
    public void onItemSelected(AdapterView<?> parent,
            View view, int pos, long id) {
        Toast.makeText(parent.getContext(),
                parent.getItemAtPosition(pos).toString(), Toast.LENGTH_LONG).show();
        final String[] theporn = getResources().getStringArray(R.array.allkapitel);
        TextView text = (TextView) findViewById(R.id.booktext);
        text.setText(theporn[pos]);
    }

    public void onNothingSelected(AdapterView parent) {
        // Do nothing.
    }
}
(I pick the chapter string in the spinner and that works just fine.)
And this is what my SettingsActivity looks like:
Code:
public class SettingsActivity extends Activity {
    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.settings_view);

        Spinner SGRspinner = (Spinner) findViewById(R.id.schriftgroeße_spinner);
        ArrayAdapter<CharSequence> SGRadapter = ArrayAdapter.createFromResource(
                this, R.array.schriftgroesse_list, android.R.layout.simple_spinner_item);
        SGRadapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item);
        SGRspinner.setAdapter(SGRadapter);
    }

    public class SGROnItemSelectedListener implements OnItemSelectedListener {
        public void onItemSelected(AdapterView<?> parent,
                View view, int pos, long id) {
            Intent answer = new Intent();
            Toast.makeText(parent.getContext(),
                    parent.getItemAtPosition(pos).toString(), Toast.LENGTH_LONG).show();
            final String[] SGRstring = getResources().getStringArray(R.array.schriftgroesse_list);
            int SGRint = Integer.parseInt(SGRstring[pos]);
            Bundle size = new Bundle();
            size.putInt("SGRkey", SGRint);
            Intent nextActivity = new Intent(com.asm.reader.SettingsActivity.this, com.asm.reader.BookActivity.class);
            nextActivity.putExtras(size);
            com.asm.reader.SettingsActivity.this.startActivity(nextActivity);
        }

        public void onNothingSelected(AdapterView parent) {
            // Do nothing.
        }
    }
}
I get an error when I try this. All activities are declared in the manifest. I really don't know how to go on. I'm pretty new to this, so sorry if this is something simple, but any help would be greatly appreciated!!
What is the error and where does it occur? Have you tried stepping through with the debugger?
I think it would be better if you did your layout in XML.
Try using a PreferenceActivity instead of an Activity for your settings. Then tie in SharedPreferences so that a value there can be edited and displayed by two activities. One will write the value and the other will read it
Also, your code seems to have a bunch of unused stuff. In your SGROnItemSelectedListener you have an Intent you never use: Intent answer = new Intent();
I also don't see an onReceive() to do anything with the intent you sent.
First of all, thanks for the replies!
When I run the application, it says it has stopped unexpectedly.
The debugger says this:
@iynfynity
I have my layout in XML. I'm trying to change it with the settings activity.
@killersnowman
The Google dev guide says that prior to Honeycomb, the class only allows a single preference. Do you think that would interfere with what I want to do? (I want to make the app available for API level 4 or so.) Thanks for the advice!
I'm trying to figure out the flow of things here.
So I'm assuming that your app launches and you get the tab bar shown and "Home" is the default activity (and its code is the first code block) and it starts normally. Now you hit the "Settings" tab and its activity (and its code is in the second block) starts OK, then you "Select Max Size" (assuming my rusty German is correct), set it to 8, and you want this "size" to be used to change the font size (guessing here) of the textview in the "Home" activity?
If so, then I don't think you are doing this right. The code in the settings tab's activity should not try to relaunch the home activity. You should save the settings value somewhere (like a preference class) then when the Home activity is reactivated (by the user clicking back to that tab) you should read the preferences and adjust your textview using the saved value.
Gene Poole said:
If so, then I don't think you are doing this right. The code in the settings tab's activity should not try to relaunch the home activity. You should save the settings value somewhere (like a preference class) then when the Home activity is reactivated (by the user clicking back to that tab) you should read the preferences and adjust your textview using the saved value.
Yep, that's what I was trying to say so inelegantly on my mobile.
Gene Poole said:
I'm trying to figure out the flow of things here.
So I'm assuming that your app launches and you get the tab bar shown and "Home" is the default activity (and its code is the first code block) and it starts normally. Now you hit the "Settings" tab and its activity (and its code is in the second block) starts OK, then you "Select Max Size" (assuming my rusty German is correct), set it to 8, and you want this "size" to be used to change the font size (guessing here) of the textview in the "Home" activity?
That's exactly how I want it.
If so, then I don't think you are doing this right. The code in the settings tab's activity should not try to relaunch the home activity. You should save the settings value somewhere (like a preference class) then when the Home activity is reactivated (by the user clicking back to that tab) you should read the preferences and adjust your textview using the saved value.
Thank you very much!
I will try it out as soon as I get home from work tonight.
I am really puzzled by the coding part of this.
The dev reference looks convoluted and it gets me really confused. I'm sorry that I am such a newbie at this but I couldn't find any good guides out there that helped me. All I got so far is the XML part of it:
Code:
<?xml version="1.0" encoding="utf-8"?>
<PreferenceScreen xmlns:android="http://schemas.android.com/apk/res/android">
    <PreferenceCategory
        android:title="Einstellungen am Text">
        <ListPreference
            android:title="Schriftgröße"
            android:summary="Passen Sie die Schriftgröße ihres Textes an"
            android:key="schriftgroesse"
            android:defaultValue="8"
            android:entries="@array/schriftgroesse_list"
            android:entryValues="@array/schriftgroesse_values" />
        <ListPreference
            android:title="Schriftfarbe"
            android:summary="Passen Sie die Farbe ihres Textes an"
            android:key="schriftfarbe"
            android:defaultValue="#ffffff"
            android:entries="@array/schriftfarbe_list"
            android:entryValues="@array/schriftfarbe_values" />
        <ListPreference
            android:title="Hintergrundfarbe"
            android:summary="Passen Sie den Hintergrund an"
            android:key="schriftfarbe"
            android:defaultValue="#000000"
            android:entries="@array/hintergrundfarbe_list"
            android:entryValues="@array/hintergrundfarbe_values" />
    </PreferenceCategory>
</PreferenceScreen>
(I know, nothing really..)
It would be very much appreciated if someone could help me!
I am stuck on the color setting
Code:
@Override
protected void onResume() {
    // TODO Auto-generated method stub
    super.onResume();
    TextView text = (TextView) findViewById(R.id.booktext);

    SharedPreferences PRFSize = PreferenceManager.getDefaultSharedPreferences(this);
    String Size = PRFSize.getString("schriftgroesse", "8");
    SharedPreferences PRFColor = PreferenceManager.getDefaultSharedPreferences(this);
    String Color = PRFColor.getString("schriftfarbe", "#ffffff");

    Integer SizeInt = Integer.parseInt(Size);
    text.setTextSize(SizeInt);
    // now what?
    text.setTextColor();

    Context context = getApplicationContext();
    CharSequence hello = Size;
    int duration = Toast.LENGTH_SHORT;
    Toast toast = Toast.makeText(context, hello, duration);
    toast.show();
}
How do I convert the String (Color) to an integer/float to get it to work with the setTextColor method? I always get an error when I try to do
Code:
Integer.parseInt( string );
..
I am puzzled
Why not load/save to SharedPreferences as an integer (use Editor.putInt() and getInt() )?
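For the color value specifically, a hex string such as "#ffffff" will not go through Integer.parseInt (the leading # breaks it); android.graphics.Color.parseColor is the usual route. A minimal sketch, reusing the names from the code above:
Code:
// Sketch: parse the stored preference strings into values a TextView can use.
String colorString = PRFColor.getString("schriftfarbe", "#ffffff");
int colorInt = android.graphics.Color.parseColor(colorString); // returns an ARGB color int
text.setTextColor(colorInt);
// Alternatively, store the values as ints in the first place, e.g. editor.putInt("schriftgroesse", 8).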
Okay, sorry for the quadruple post, but I got it all now. Thanks for all the help!

How I Developed a Smile Filter for My App

I recently read an article that explained how we as human beings are hardwired to enter fight-or-flight mode when we realize that we are being watched. This feeling is especially strong when somebody else is trying to take a picture of us, which is why many of us find it difficult to smile in photos. This effect is so strong that we've all had the experience of looking at a photo right after it was taken and noticing straight away that the photo needs to be retaken because our smile wasn't wide enough or didn't look natural. So, the next time someone criticizes my smile in a photo, I'm just going to tell them, "It's not my fault. It's literally an evolutionary trait!"
Or, instead of making such an excuse, what about turning to technology for help? I have actually tried using some photo editor apps to modify my portrait photos, making my facial expression look nicer by, for example, removing my braces, whitening my teeth, and erasing my smile lines. However, perhaps because of my rusty image editing skills, the modified images often turned out looking strange.
My lack of success with photo editing made me wonder: Wouldn't it be great if there was a function specially designed for people like me, who find it difficult to smile naturally in photos and who aren't good at photo editing, which could automatically give us picture-perfect smiles?
I then suddenly remembered an interesting function called the smile filter that has been going viral across different apps and platforms. A smile filter is an app feature which can automatically add a natural-looking smile to a face detected in an image. I had tried it before and was really amazed by the result. So I decided to create a demo app with a similar function, in order to figure out the principle behind it.
To provide my app with a smile filter, I chose to use the auto-smile capability provided by HMS Core Video Editor Kit. This capability automatically detects people in an image and then lightens up the detected faces with a smile (either closed- or open-mouth) that perfectly blends in with each person's facial structure. With the help of such a capability, a mobile app can create the perfect smile in seconds and save users from the hassle of having to use a professional image editing program.
Check the result out for yourselves:
Looks pretty natural, right? This is the result offered by my demo app integrated with the auto-smile capability. The original image looks like this:
Next, I will explain how I integrated the auto-smile capability into my app and share the relevant source code from my demo app.
Integration Procedure
Preparations
1. Configure necessary app information. This step requires you to register a developer account, create an app, generate a signing certificate fingerprint, configure the fingerprint, and enable required services.
2. Integrate the SDK of the kit.
3. Configure the obfuscation scripts (a typical example follows this list).
4. Declare necessary permissions (also illustrated below).
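Steps 3 and 4 vary little between HMS Core kits; as a rough illustration (these are the typical entries from the HMS integration guides rather than anything specific to Video Editor Kit, so double-check them against the kit's official documentation), the obfuscation script usually looks like this:
Code:
# proguard-rules.pro: typical HMS Core keep rules (verify against the kit's documentation)
-ignorewarnings
-keepattributes *Annotation*
-keepattributes Exceptions
-keepattributes InnerClasses
-keepattributes Signature
-keep class com.huawei.hms.** {*;}
And the permission declarations tend to be along these lines:
Code:
<!-- AndroidManifest.xml: typical permissions for an app that reads and saves media; the exact set depends on the kit version -->
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />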
Project Configuration
1. Set the app authentication information. This can be done via an API key or an access token.
Using an API key: You only need to set the app authentication information once during app initialization.
Code:
MediaApplication.getInstance().setApiKey("your ApiKey");
Or, using an access token: You only need to set the app authentication information once during app initialization.
Code:
MediaApplication.getInstance().setAccessToken("your access token");
2. Set a License ID, which must be unique because it is used to manage the usage quotas of the service.
Code:
MediaApplication.getInstance().setLicenseId("License ID");
3. Initialize the runtime environment for the HuaweiVideoEditor object. Remember to release the HuaweiVideoEditor object when exiting the project.
Create a HuaweiVideoEditor object.
Code:
HuaweiVideoEditor editor = HuaweiVideoEditor.create(getApplicationContext());
Specify the preview area position. This area is used to render video images and is implemented by a SurfaceView created within the SDK. Before creating the area, specify its position in the app.
Code:
<LinearLayout
    android:id="@+id/video_content_layout"
    android:layout_width="0dp"
    android:layout_height="0dp"
    android:background="@color/video_edit_main_bg_color"
    android:gravity="center"
    android:orientation="vertical" />

// Specify the preview area position.
LinearLayout mSdkPreviewContainer = view.findViewById(R.id.video_content_layout);
// Specify the preview area layout.
editor.setDisplay(mSdkPreviewContainer);
Initialize the runtime environment. If license verification fails, LicenseException will be thrown.
After it is created, the HuaweiVideoEditor object will not occupy any system resources. You need to manually set when the runtime environment of the object will be initialized. Once you have done this, necessary threads and timers will be created within the SDK.
Code:
try {
    editor.initEnvironment();
} catch (LicenseException error) {
    SmartLog.e(TAG, "initEnvironment failed: " + error.getErrorMsg());
    finish();
    return;
}
Function Development​
Code:
// Apply the auto-smile effect. Currently, this effect only supports image assets.
imageAsset.addFaceSmileAIEffect(new HVEAIProcessCallback() {
    @Override
    public void onProgress(int progress) {
        // Callback when the handling progress is received.
    }

    @Override
    public void onSuccess() {
        // Callback when the handling is successful.
    }

    @Override
    public void onError(int errorCode, String errorMessage) {
        // Callback when the handling failed.
    }
});

// Stop applying the auto-smile effect.
imageAsset.interruptFaceSmile();

// Remove the auto-smile effect.
imageAsset.removeFaceSmileAIEffect();
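One thing this snippet takes for granted is where imageAsset comes from: it is an HVEImageAsset placed on the editor's timeline. A rough sketch of how it can be obtained is below; the timeline and lane method names are my assumption based on the kit's timeline model, so verify them against the current SDK reference:
Code:
// Sketch only: obtain an HVEImageAsset by appending an image to a video lane on the timeline.
// Method names assumed from the Video Editor Kit timeline APIs; check the SDK reference.
HVETimeLine timeLine = editor.getTimeLine();
HVEVideoLane videoLane = timeLine.appendVideoLane();
HVEImageAsset imageAsset = videoLane.appendImageAsset(imagePath);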
And with that, I successfully integrated the auto-smile capability into my demo app, and now it can automatically add smiles to faces detected in the input image.
Conclusion
Research has demonstrated that it is normal for people to behave unnaturally when being photographed. Such unnaturalness becomes even more obvious when we try to smile. This explains why numerous social media apps and video/image editing apps have introduced smile filter functions, which allow users to easily and quickly add a natural-looking smile to faces in an image.
Among various solutions to such a function, HMS Core Video Editor Kit's auto-smile capability stands out by providing excellent, natural-looking results and featuring straightforward and quick integration.
Better still, the auto-smile capability can be used together with other capabilities from the same kit to further enhance users' image editing experience. For example, when used in conjunction with the kit's AI color capability, you can add color to an old black-and-white photo and then use auto-smile to add smiles to the sullen expressions of the people in the photo. It's a great way to freshen up old and dreary photos from the past.
And that's just one way of using the auto-smile capability in conjunction with other capabilities. What ideas do you have? I look forward to hearing your thoughts in the comments section.
References
How to Overcome Camera Shyness or Phobia
Introduction to Auto-Smile

How to Automatically Create a Scenic Timelapse Video

Have you ever watched a video of the northern lights? Mesmerizing light rays that swirl and dance through the star-encrusted sky. It's even more stunning when they are backdropped by crystal-clear waters that flow smoothly between and under ice crusts. Complementing each other, the moving sky and water compose a dynamic scene that reflects the constant rhythm of mother nature.
Now imagine that the video is frozen into an image: It still looks beautiful, but lacks the dynamism of the video. Such a contrast between still and moving images shows how videos are sometimes better than still images when it comes to capturing majestic scenery, since the former can convey more information and thus be more engaging.
This may be the reason why we sometimes regret just taking photos instead of capturing a video when we encounter beautiful scenery or a memorable moment.
In addition to this, when we try to add a static image to a short video, we will find that the transition between the image and other segments of the video appears very awkward, since the image is the only static segment in the whole video.
If we want to turn a static image into a dynamic video by adding some motion effects to the sky and water, one way to do this is to use a professional PC program to modify the image. However, this process is often very complicated and time-consuming: It requires adjustment of the timeline, frames, and much more, which can be a daunting prospect for amateur image editors.
Luckily, there are now numerous AI-driven capabilities that can automatically create time-lapse videos for users. I chose to use the auto-timelapse capability provided by HMS Core Video Editor Kit. It can automatically detect the sky and water in an image and produce vivid dynamic effects for them, just like this:
The movement speed and angle of the sky and water are customizable.
Now let's take a look at the detailed integration procedure for this capability, to better understand how such a dynamic effect is created.
Integration Procedure
Preparations
1. Configure necessary app information. This step requires you to register a developer account, create an app, generate a signing certificate fingerprint, configure the fingerprint, and enable the required services.
2. Integrate the SDK of the kit.
3. Configure the obfuscation scripts.
4. Declare necessary permissions.
Project Configuration
1. Set the app authentication information. This can be done via an API key or an access token.
Set an API key via the setApiKey method: You only need to set the app authentication information once during app initialization.
Code:
MediaApplication.getInstance().setApiKey("your ApiKey");
Or, set an access token by using the setAccessToken method: You only need to set the app authentication information once during app initialization.
Code:
MediaApplication.getInstance().setAccessToken("your access token");
2. Set a License ID. This ID should be unique because it is used to manage the usage quotas of the service.
Code:
MediaApplication.getInstance().setLicenseId("License ID");
3. Initialize the runtime environment for the HuaweiVideoEditor object. Remember to release the HuaweiVideoEditor object when exiting the project.
Create a HuaweiVideoEditor object.
Code:
HuaweiVideoEditor editor = HuaweiVideoEditor.create(getApplicationContext());
Specify the preview area position. This area is used to render video images and is implemented by a SurfaceView created within the SDK. Before creating the area, specify its position in the app.
Code:
<LinearLayout
    android:id="@+id/video_content_layout"
    android:layout_width="0dp"
    android:layout_height="0dp"
    android:background="@color/video_edit_main_bg_color"
    android:gravity="center"
    android:orientation="vertical" />

// Specify the preview area position.
LinearLayout mSdkPreviewContainer = view.findViewById(R.id.video_content_layout);
// Specify the preview area layout.
editor.setDisplay(mSdkPreviewContainer);
Initialize the runtime environment. If license verification fails, LicenseException will be thrown.
After it is created, the HuaweiVideoEditor object will not occupy any system resources. You need to manually set when the runtime environment of the object will be initialized. Once you have done this, necessary threads and timers will be created within the SDK.
Code:
try {
    editor.initEnvironment();
} catch (LicenseException error) {
    SmartLog.e(TAG, "initEnvironment failed: " + error.getErrorMsg());
    finish();
    return;
}
Function Development​
Code:
// Initialize the auto-timelapse engine.
imageAsset.initTimeLapseEngine(new HVEAIInitialCallback() {
    @Override
    public void onProgress(int progress) {
        // Callback when the initialization progress is received.
    }

    @Override
    public void onSuccess() {
        // Callback when the initialization is successful.
    }

    @Override
    public void onError(int errorCode, String errorMessage) {
        // Callback when the initialization failed.
    }
});

// When the initialization is successful, check whether there is sky or water in the image.
// Note: a plain local int cannot be reassigned from inside the anonymous callback,
// so a one-element array (or a member field) is used to hold the detected state.
final int[] motionType = {-1};
imageAsset.detectTimeLapse(new HVETimeLapseDetectCallback() {
    @Override
    public void onResult(int state) {
        // Record the state parameter, which is used to define a motion effect.
        motionType[0] = state;
    }
});

// skySpeed indicates the speed at which the sky moves; skyAngle indicates the direction in which the sky moves;
// waterSpeed indicates the speed at which the water moves; waterAngle indicates the direction in which the water moves.
HVETimeLapseEffectOptions options =
        new HVETimeLapseEffectOptions.Builder().setMotionType(motionType[0])
                .setSkySpeed(skySpeed)
                .setSkyAngle(skyAngle)
                .setWaterAngle(waterAngle)
                .setWaterSpeed(waterSpeed)
                .build();

// Add the auto-timelapse effect.
imageAsset.addTimeLapseEffect(options, new HVEAIProcessCallback() {
    @Override
    public void onProgress(int progress) {
    }

    @Override
    public void onSuccess() {
    }

    @Override
    public void onError(int errorCode, String errorMessage) {
    }
});

// Stop applying the auto-timelapse effect.
imageAsset.interruptTimeLapse();

// Remove the auto-timelapse effect.
imageAsset.removeTimeLapseEffect();
Now, the auto-timelapse capability has been successfully integrated into an app.
Conclusion
When capturing scenic vistas, videos, which can show the dynamic nature of the world around us, are often a better choice than static images. In addition, when creating videos with multiple shots, dynamic pictures deliver a smoother transition effect than static ones.
However, users who are unfamiliar with the process of animating static images may find the results unsatisfying if they try to do so manually using computer software.
The good news is that there are now mobile apps integrated with capabilities such as Video Editor Kit's auto-timelapse feature that can create time-lapse effects for users. The generated effect appears authentic and natural, the capability is easy to use, and its integration is straightforward. With such capabilities in place, a video/image app can provide users with a more captivating user experience.
In addition to video/image editing apps, I believe the auto-timelapse capability can also be utilized by many other types of apps. What other kinds of apps do you think would benefit from such a feature? Let me know in the comments section.

How to Continuously Locate a Device in the Background

Nowadays, apps use device location to provide a wide variety of services. You may want to see a user's location to show services around them, learn the habits of users in different areas, or perform location-based actions, like a ride-hailing app calculating travel distance and trip fare.
In order to protect user privacy, however, most apps request device location only when they run in the foreground. Once switched to the background, or even with the device screen turned off, apps are usually not allowed to request device location. As a result, they cannot record the GPS track of a device, which negatively affects the user experience of core functions of many apps, such as ride-hailing, trip-sharing, and exercise, all of which need to track location in real time.
To ensure this kind of app is able to function properly, the app developer usually needs to give their app the capability to obtain device location when the app is running in the background. So, is there a service which the app can use to continuously request device location in the background? Fortunately, HMS Core Location Kit offers the background location function necessary to do just that.
In this article, I'll show you how to use the background location function to request device location continuously in the background.
Background Location Implementation Method​
Consider an app running on a non-Huawei phone with the location function enabled using LocationCallback: after the app is switched to the background, it will be unable to request device location.
To enable the app to continuously request device location after it is switched to the background, the app developer can use the enableBackgroundLocation method to create a foreground service. This increases the frequency with which the app requests location updates in the background. The foreground service is the most important thing for enabling successful background location because it is a service with the highest priority and the lowest probability of being ended by the system.
Note that the background location function itself cannot request device location. It needs to be used together with the location function enabled using LocationCallback. Location data needs to be obtained using the onLocationResult(LocationResult locationResult) method in the LocationCallback object.
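To make that relationship concrete, here is a minimal sketch of the ordinary location request that background location builds on; these are the standard Location Kit calls, with placeholder parameter values, and mFusedLocationProviderClient is the client initialized later in this article:
Code:
// Sketch: the normal location request that background location builds on.
LocationRequest locationRequest = new LocationRequest();
locationRequest.setInterval(5000); // request interval in milliseconds (placeholder value)
locationRequest.setPriority(LocationRequest.PRIORITY_HIGH_ACCURACY);

LocationCallback locationCallback = new LocationCallback() {
    @Override
    public void onLocationResult(LocationResult locationResult) {
        if (locationResult != null) {
            // Location data arrives here, both in the foreground and, once
            // enableBackgroundLocation is called, in the background.
            Location location = locationResult.getLastLocation();
        }
    }
};

mFusedLocationProviderClient.requestLocationUpdates(locationRequest, locationCallback, Looper.getMainLooper());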
Background Location Notes​
1. The app needs to be running on a non-Huawei phone.
2. The app must be granted permission to always obtain device location.
3. The app must not be frozen by the power saving app on the device. For example, on a vivo phone, you need to allow the app to consume power when running in the background.
Demo Testing Notes​
1. It is recommended not to charge the device during the test. This is because the app may not be limited for power consumption while the device is charging.
2. To determine if the app is requesting device location, you can check whether the positioning icon is displayed in the device's status bar. On vivo, for example, the phone will show a positioning icon in the status bar when an app requests the phone's location. If background location is disabled for the app, the positioning icon in the status bar will disappear after the app is switched to the background. If background location is enabled for the app, however, the phone will continue to show the positioning icon in the status bar after the app is switched to the background.
Development Preparation​
Before getting started with the development, you will need to integrate the Location SDK into your app. If you are using Android Studio, you can integrate the SDK via the Maven repository.
Here, I won't be describing how to integrate the SDK. You can click here to learn about the details.
Background Location Development Procedure​
After integrating the SDK, perform the following steps to use the background location function:
1. Declare the Location SDK's background location service in the AndroidManifest.xml file.
Code:
<service
android:name="com.huawei.location.service.BackGroundService"
android:foregroundServiceType="location" />
Generally, there are three location permissions in Android:
ACCESS_COARSE_LOCATION: This is the approximate location permission, which allows an app to obtain an approximate location, accurate to city block level.
ACCESS_FINE_LOCATION: This is the precise location permission, which allows an app to obtain a precise location, more accurate than the one obtained based on the ACCESS_COARSE_LOCATION permission.
ACCESS_BACKGROUND_LOCATION: This is the background location permission, which allows an app to obtain device location when running in the background in Android 10. In Android 10 or later, this permission is a dangerous permission and needs to be dynamically applied for. In versions earlier than Android 10, an app can obtain device location regardless of whether it runs in the foreground or background, as long as it is assigned the ACCESS_COARSE_LOCATION permission.
In Android 11 or later, these three permissions cannot be dynamically applied for at the same time. The ACCESS_BACKGROUND_LOCATION permission can only be applied for after the ACCESS_COARSE_LOCATION and ACCESS_FINE_LOCATION permissions are granted.
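For reference, the corresponding declarations in the AndroidManifest.xml file use the standard Android permission names:
Code:
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_BACKGROUND_LOCATION" />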
2. Declare the FOREGROUND_SERVICE permission in the AndroidManifest.xml file to ensure that the background location permission can be used properly.
Note that this step is required only for Android 9 or later. As I mentioned earlier, the foreground service has the highest priority and the lowest probability of being ended by the system. This is a basis for successful background location.
Code:
<uses-permission android:name="android.permission.FOREGROUND_SERVICE" />
3. Create a Notification object.
Code:
public class NotificationUtil {
    public static final int NOTIFICATION_ID = 1;

    public static Notification getNotification(Context context) {
        Notification.Builder builder;
        Notification notification;
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
            NotificationManager notificationManager =
                    (NotificationManager) context.getSystemService(Context.NOTIFICATION_SERVICE);
            String channelId = context.getPackageName();
            NotificationChannel notificationChannel =
                    new NotificationChannel(channelId, "LOCATION", NotificationManager.IMPORTANCE_LOW);
            notificationManager.createNotificationChannel(notificationChannel);
            builder = new Notification.Builder(context, channelId);
        } else {
            builder = new Notification.Builder(context);
        }
        builder.setSmallIcon(R.drawable.ic_launcher)
                .setContentTitle("Location SDK")
                .setContentText("Running in the background")
                .setWhen(System.currentTimeMillis());
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN) {
            notification = builder.build();
        } else {
            notification = builder.getNotification();
        }
        return notification;
    }
}
4. Initialize the FusedLocationProviderClient object.
Code:
FusedLocationProviderClient mFusedLocationProviderClient = LocationServices.getFusedLocationProviderClient(this);
5. Enable the background location function.
Code:
private void enableBackgroundLocation() {
    mFusedLocationProviderClient
            .enableBackgroundLocation(NotificationUtil.NOTIFICATION_ID, NotificationUtil.getNotification(this))
            .addOnSuccessListener(new OnSuccessListener<Void>() {
                @Override
                public void onSuccess(Void aVoid) {
                    LocationLog.i(TAG, "enableBackgroundLocation onSuccess");
                }
            })
            .addOnFailureListener(new OnFailureListener() {
                @Override
                public void onFailure(Exception e) {
                    LocationLog.e(TAG, "enableBackgroundLocation onFailure:" + e.getMessage());
                }
            });
}
Congratulations, your app is now able to request device location when running in the background.
Conclusion​
With the boom of the mobile Internet, mobile apps are playing an increasingly important role in our daily lives. Many apps now offer location-based services to create a better user experience. However, most apps can request device location only when they run in the foreground, in order to protect user privacy. To ensure this kind of app functions properly, the app needs to receive location information while it is running in the background or while the device screen is off.
My app can now do just that easily, thanks to the background location capability in Location Kit. If you want your app to do the same, give this capability a try; you may be surprised.

Streamlining 3D Animation Creation via Rigging

I dare say there are two types of people in this world: people who love Toy Story and people who have not watched it.
Well, this is just the opinion of a huge fan of the animated film. When I was a child, I always dreamed of having toys that could move and play with me, like my own Buzz Lightyear. Thanks to a fancy technique called rigging, I can now bring my toys to life, although I'm probably too old for them now.
What Is Rigging in 3D Animation and Why Do We Need It?
Put simply, rigging is a process whereby a skeleton is created for a 3D model to make it move. In other words, rigging creates a set of connected virtual bones that are used to control a 3D model.
It paves the way for animation because it enables a model to be deformed, making it moveable, which is the very reason that rigging is necessary for 3D animation.
What Is Auto Rigging
3D animation has been adopted by mobile apps in a number of fields (gaming, e-commerce, video, and more), to achieve more realistic animations than 2D.
However, this graphic technique has daunted many developers (like me) because rigging, one of its major prerequisites, is difficult and time-consuming for people who are unfamiliar with modeling. Specifically, most high-performing rigging solutions have many requirements: for example, the input model should be in a standard position, seven or eight key skeletal points need to be added, inverse kinematics must be added to the bones, and more.
Luckily, there are solutions that can automatically complete rigging, such as the auto rigging solution from HMS Core 3D Modeling Kit.
This capability delivers a wholly automated rigging process, requiring just a biped humanoid model that is generated using images taken from a mobile phone camera. After the model is input, auto rigging uses AI algorithms for limb rigging and generates the model skeleton and skin weights (which determine the degree to which a bone can influence a part of the mesh). Then, the capability changes the orientation and position of the skeleton so that the model can perform a range of preset actions, like walking, running, and jumping. Besides, the rigged model can also be moved according to an action generated by using motion capture technology, or be imported into major 3D engines for animation.
Lower requirements do not compromise rigging accuracy. Auto rigging is built upon hundreds of thousands of 3D model rigging data records. Thanks to some fine-tuned data records, the capability delivers ideal algorithm accuracy and generalization.
I know that words alone are no proof, so check out the animated model I've created using the capability.
Movement is smooth, making the cute panda move almost like a real one. Now I'd like to show you how I created this model and how I integrated auto rigging into my demo app.
Integration Procedure
Preparations
Before moving on to the real integration work, make necessary preparations, which include:
Configure app information in AppGallery Connect.
Integrate the HMS Core SDK with the app project, which includes Maven repository address configuration (a sketch of this follows the list).
Configure obfuscation scripts.
Declare necessary permissions.
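As a quick illustration of the Maven repository configuration mentioned in the second item above (the address below is the standard HMS Core repository; the rest of the SDK dependency configuration is described in the official integration guide), the project-level build.gradle would include:
Code:
// Project-level build.gradle: add the HMS Core Maven repository.
buildscript {
    repositories {
        google()
        mavenCentral()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
allprojects {
    repositories {
        google()
        mavenCentral()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}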
Capability Integration
1. Set an access token or API key — which can be found in agconnect-services.json — during app initialization for app authentication.
Using the access token: Call setAccessToken to set an access token. This task is required only once during app initialization.
Code:
ReconstructApplication.getInstance().setAccessToken("your AccessToken");
Using the API key: Call setApiKey to set an API key. This key does not need to be set again.
Code:
ReconstructApplication.getInstance().setApiKey("your api_key");
Using the access token is recommended. If you prefer the API key, note that it is assigned to the app when the app is created in AppGallery Connect.
2. Create a 3D object reconstruction engine and initialize it. Then, create an auto rigging configurator.
Code:
// Create a 3D object reconstruction engine.
Modeling3dReconstructEngine modeling3dReconstructEngine = Modeling3dReconstructEngine.getInstance(context);

// Create an auto rigging configurator.
Modeling3dReconstructSetting setting = new Modeling3dReconstructSetting.Factory()
        // Set the working mode of the engine to PICTURE.
        .setReconstructMode(Modeling3dReconstructConstants.ReconstructMode.PICTURE)
        // Set the task type to auto rigging.
        .setTaskType(Modeling3dReconstructConstants.TaskType.AUTO_RIGGING)
        .create();
3. Create a listener for the result of uploading images of an object.
Code:
private Modeling3dReconstructUploadListener uploadListener = new Modeling3dReconstructUploadListener() {
    @Override
    public void onUploadProgress(String taskId, double progress, Object ext) {
        // Callback when the upload progress is received.
    }

    @Override
    public void onResult(String taskId, Modeling3dReconstructUploadResult result, Object ext) {
        // Callback when the upload is successful.
    }

    @Override
    public void onError(String taskId, int errorCode, String message) {
        // Callback when the upload failed.
    }
};
4. Use the auto rigging configurator to initialize the task, set an upload listener for the engine created in step 2, and upload images.
Code:
// Use the configurator to initialize the task, which should be done in a sub-thread.
Modeling3dReconstructInitResult modeling3dReconstructInitResult = modeling3dReconstructEngine.initTask(setting);
String taskId = modeling3dReconstructInitResult.getTaskId();
// Set an upload listener.
modeling3dReconstructEngine.setReconstructUploadListener(uploadListener);
// Call the uploadFile API of the 3D object reconstruction engine to upload images.
modeling3dReconstructEngine.uploadFile(taskId, filePath);
5. Query the status of the auto rigging task.
Code:
// Initialize the task processing class.
Modeling3dReconstructTaskUtils modeling3dReconstructTaskUtils = Modeling3dReconstructTaskUtils.getInstance(context);
// Call queryTask in a sub-thread to query the task status.
Modeling3dReconstructQueryResult queryResult = modeling3dReconstructTaskUtils.queryTask(taskId);
// Obtain the task status.
int status = queryResult.getStatus();
6. Create a listener for the result of model file download.
Code:
private Modeling3dReconstructDownloadListener modeling3dReconstructDownloadListener = new Modeling3dReconstructDownloadListener() {
    @Override
    public void onDownloadProgress(String taskId, double progress, Object ext) {
        // Callback when download progress is received.
    }

    @Override
    public void onResult(String taskId, Modeling3dReconstructDownloadResult result, Object ext) {
        // Callback when download is successful.
    }

    @Override
    public void onError(String taskId, int errorCode, String message) {
        // Callback when download failed.
    }
};
7. Pass the download listener to the 3D object reconstruction engine, to download the rigged model.
Code:
// Set download configurations.
Modeling3dReconstructDownloadConfig downloadConfig = new Modeling3dReconstructDownloadConfig.Factory()
        // Set the model file format to OBJ or glTF.
        .setModelFormat(Modeling3dReconstructConstants.ModelFormat.OBJ)
        // Set the texture map mode to normal mode or PBR mode.
        .setTextureMode(Modeling3dReconstructConstants.TextureMode.PBR)
        .create();

// Set the download listener.
modeling3dReconstructEngine.setReconstructDownloadListener(modeling3dReconstructDownloadListener);

// Call downloadModelWithConfig, passing the task ID, the path to which the downloaded file will be saved, and the download configurations, to download the rigged model.
modeling3dReconstructEngine.downloadModelWithConfig(taskId, fileSavePath, downloadConfig);
Where to Use
Auto rigging is used in many scenarios, for example:
Gaming. The most direct way of using auto rigging is to create moveable characters in a 3D game. Or, I think we can combine it with AR to create animated models that can appear in the camera display of a mobile device, which will interact with users.
Online education. We can use auto rigging to animate 3D models of popular toys and liven them up with dance moves, voice-overs, and nursery rhymes. These animated models can then be used in educational videos that appeal to kids more.
E-commerce. Anime figurines look rather plain compared to how they behave in animes. To spice up the figurines, we can use auto rigging to animate 3D models that will look more engaging and dynamic.
Conclusion
3D animation is widely used in mobile apps, because it presents objects in a more fun and interactive way.
A key technique for creating great 3D animations is rigging. Conventional rigging requires modeling know-how and expertise, which puts off many amateur modelers.
Auto rigging is the perfect solution to this challenge because its fully automated rigging process can produce highly accurate rigged models that can be easily animated using major engines available on the market. Auto rigging not only lowers the costs and requirements of 3D model generation and animation, but also helps 3D models look more appealing.
