For more information like this, you can visit the HUAWEI Developer Forum.
Original link: https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0201334296322460044&fid=0101187876626530001
Introduction
Do you already know Image Kit? If you haven't had the opportunity to use it yet, Image Kit brings intelligent design and animation production functions into your app, giving us efficient image reproduction while providing image editing for our users.
For this example we will use the Image Render service. This service provides basic animation effects plus nine advanced ones. In this example we will create an animated splash screen that applies the Waterfall effect. Without further ado, let's see the steps to follow.
Steps:
1. Download the Image Render Example code
2. Create an App in AGC
3. Connect our Android project with the App
4. Downloading the necessary repositories
5. Create our Splash Screen
6. Add the necessary assets manifest.xml for our animation
7. Obtain the instance
8. Initialize the Rendering service
9. Run the animation
10. After the execution we launch the Main Activity
Download the Image Render Example code
To implement this functionality we must download the sample source code provided by Huawei; this is the link to the repository.
https://github.com/huaweicodelabs/ImageKit
You can git clone the repository or download the ZIP to your computer. I recommend running the app first. In my case there were some problems with my Gradle version; if the same thing happens to you, switch the project to the Gradle distribution below by modifying the gradle-wrapper.properties file:
Code:
distributionUrl=https\://services.gradle.org/distributions/gradle-4.10.1-all.zip
Run the app so that you know how the Image Render works.
Create an App in AGC
To use this service you need an app in AGC. I am sharing a guide for creating one; once you create an app and a project in AGC, you will have the instructions for adding the necessary elements to your app.
https://developer.huawei.com/consumer/en/codelab/HMSPreparation/index.html#1
Connect our Android project with the App
To connect them we need to add the following lines.
Configure the Maven repository address and AppGallery Connect plug-in in the project's build.gradle file.
Go to allprojects > repositories and configure the Maven repository address for the HMS Core SDK.
Code:
allprojects {
repositories {
maven { url 'https://developer.huawei.com/repo/' }
...
}
}
Go to buildscript > repositories and configure the Maven repository address for the HMS Core SDK.
Code:
buildscript {
repositories {
maven {url 'https://developer.huawei.com/repo/'}
...
}
...
}
Go to buildscript > dependencies and add dependency configurations
Code:
buildscript {
dependencies {
classpath 'com.huawei.agconnect:agcp:1.3.1.300'
}
}
Downloading the necessary repositories
Configure the dependency package in the app's build.gradle file.
Add a dependency package to the dependencies section in the build.gradle file.
Code:
dependencies {
...
implementation 'com.huawei.hms:image-render:1.0.2.302'
...
}
Configure minSdkVersion.
Code:
android {
...
defaultConfig {
...
minSdkVersion 26
...
}
...
}
Add the AppGallery Connect plug-in dependency to the file header.
Code:
apply plugin: 'com.huawei.agconnect'
Create our Splash Screen
Create our style
Code:
<style name="SplashTheme" parent="Theme.AppCompat.NoActionBar">
<item name="android:windowBackground">@drawable/splash_background</item>
</style>
Add our Activity to the manifest
Code:
<activity android:name="SplashActivity"
android:theme="@style/SplashTheme"
android:screenOrientation="portrait">
<intent-filter>
<action android:name="android.intent.action.MAIN"
/>
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
We also need to add these permissions
Code:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
Splash Activity
For the splash screen we schedule a delayed jump to MainActivity on the main thread; we declare the delay in milliseconds.
Code:
private final int DURACION_SPLASH = 8500;

private void onUIThread() {
    new Handler().postDelayed(new Runnable() {
        @Override
        public void run() {
            Intent intent = new Intent(SplashActivity.this, MainActivity.class);
            SplashActivity.this.startActivity(intent);
            finish(); // close the splash screen so it is removed from the back stack
        }
    }, DURACION_SPLASH);
}
Add the necessary assets manifest.xml for our animation
We need to add the manifest.xml file, which defines the animation, to our assets folder.
Obtain the instance
Now we have to get the instance of ImageRender. For this I created a method, initImageRender(), which we call from onCreate().
Code:
private void initImageRender() {
// Obtain an ImageRender object.
ImageRender.getInstance(this, new ImageRender.RenderCallBack() {
@Override
public void onSuccess(ImageRenderImpl imageRender) {
Log.i(TAG, "getImageRenderAPI success");
imageRenderAPI = imageRender;
useImageRender();
}
@Override
public void onFailure(int i) {
Log.e(TAG, "getImageRenderAPI failure, errorCode = " + i);
}
});
}
If everything goes well we should see the following message printed in the console:
Code:
getImageRenderAPI success
Initialize the Rendering service
Once the instance has been obtained, the useImageRender() method called above initializes the service by executing doInit(), passing it the path of the resource file (sourcePath) that describes our splash animation together with the authentication JSON.
Code:
Log.d(TAG, sourcePath);
int initResult = imageRenderAPI.doInit(sourcePath, getAuthJson());
Log.i(TAG, "DoInit result == " + initResult);
if (initResult == 0) {
// Obtain the rendered view.
RenderView renderView = imageRenderAPI.getRenderView();
if (renderView.getResultCode() == ResultCode.SUCCEED) {
View view = renderView.getView();
if (null != view) {
// Add the rendered view to the layout.
contentView.addView(view);
int playResult = imageRenderAPI.playAnimation();
} else {
Log.w(TAG, "GetRenderView fail, view is null");
}
} else if (renderView.getResultCode() == ResultCode.ERROR_GET_RENDER_VIEW_FAILURE) {
Log.w(TAG, "GetRenderView fail");
} else if (renderView.getResultCode() == ResultCode.ERROR_XSD_CHECK_FAILURE) {
Log.w(TAG, "GetRenderView fail, resource file parameter error, please check resource file.");
} else if (renderView.getResultCode() == ResultCode.ERROR_VIEW_PARSE_FAILURE) {
Log.w(TAG, "GetRenderView fail, resource file parsing failed, please check resource file.");
} else if (renderView.getResultCode() == ResultCode.ERROR_REMOTE) {
Log.w(TAG, "GetRenderView fail, remote call failed, please check HMS service");
} else if (renderView.getResultCode() == ResultCode.ERROR_DOINIT) {
Log.w(TAG, "GetRenderView fail, init failed, please init again");
}
} else {
Log.w(TAG, "Do init fail, errorCode == " + initResult);
}
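The doInit() call above also receives an authentication JSON built from our AppGallery Connect credentials; the getAuthJson() helper is not shown in the original post. Below is a minimal sketch, assuming it returns an org.json.JSONObject and that the keys match the usual AGC credentials; verify the exact fields against the Image Kit documentation and your agconnect-services.json.
Code:
// Hypothetical getAuthJson() helper. The keys are assumptions; check the Image Kit
// documentation for the exact fields your SDK version expects.
private JSONObject getAuthJson() {
    JSONObject authJson = new JSONObject();
    try {
        authJson.put("projectId", "your-project-id");       // from agconnect-services.json
        authJson.put("appId", "your-app-id");               // from agconnect-services.json
        authJson.put("authApiKey", "your-api-key");         // from agconnect-services.json
        authJson.put("clientSecret", "your-client-secret");
        authJson.put("clientId", "your-client-id");
    } catch (JSONException e) {
        Log.e(TAG, "Failed to build auth JSON: " + e.getMessage());
    }
    return authJson;
}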
Run the animation
Once the Image Render instance has been obtained and initialized, it is time to play the animation.
With the rendered view already added to our activity, we can now play the animation:
Code:
if (null != imageRenderAPI) {
int playResult = imageRenderAPI.playAnimation();
if (playResult == ResultCode.SUCCEED) {
Log.i(TAG, "Start animation success");
} else {
Log.i(TAG, "Start animation failure");
}
} else {
Log.w(TAG, "Start animation fail, please init first.");
}
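To complete step 10, once the animation starts successfully we can call the onUIThread() helper defined in the Splash Activity section, so that MainActivity is launched after DURACION_SPLASH milliseconds (roughly the duration of the animation). A minimal sketch of the success branch:
Code:
if (playResult == ResultCode.SUCCEED) {
    Log.i(TAG, "Start animation success");
    // Schedule the jump to MainActivity once the splash animation has had time to play.
    onUIThread();
}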
Conclusion
With this small example we have started with the implementation of Image Kit in a project. For further information, here are the docs:
https://developer.huawei.com/consumer/en/hms/huawei-imagekit/
About This Document
In the previous article, we introduced the quick integration method of HMS Scan Kit and compared its competitiveness with other open-source code-scanning tools. If you missed it, click Previous Issue at the bottom of this article. We are now used to scanning QR codes to pay, to follow social accounts, to learn about product information, to shop, and so on. Today, I'd like to introduce the development process of QR code purchase.
Scenario
The shopping app provides an entry for scanning the QR code of an offering. After the QR code is scanned, the offering information and purchase link are displayed, facilitating offering selection for customers.
Development Step
Open the project-level build.gradle file.
Choose allprojects > repositories and configure the Maven repository address of HMS SDK.
Code:
allprojects {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
}
Configure the Maven repository address of the HMS SDK in buildscript > repositories.
Code:
buildscript {
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
}
Adding Compilation Dependencies
Open the application-level build.gradle file.
SDK integration
Code:
dependencies{
implementation 'com.huawei.hms:scan:1.1.3.301'
}
Specifying Permissions and Features
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
The QR code scanning page is declared in the AndroidManifest.xml file because the Default View mode is used.
Code:
<activity android:name="com.huawei.hms.hmsscankit.ScanKitActivity" />
Key Steps for Scan-to-Buy
There are two functions: adding products and querying products. You scan a product's barcode and take a photo to bind them together; after saving, you can look the product up by scanning its barcode again.
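The snippets below also rely on a few request codes and an in-memory map that links a scanned barcode to the saved product image path. These declarations are not shown in the original post; the values below are placeholders:
Code:
// Assumed request codes and product store used by the following snippets (values are placeholders).
private static final int REQUEST_ADD_PRODUCT = 0x01;
private static final int REQUEST_QUERY_PRODUCT = 0x02;
private static final int REQUEST_CODE_SCAN_ALL = 0x03;
private static final int REQUEST_TAKE_PHOTO = 0x04;
// Maps a barcode value to the local path of the photo taken when the product was added.
private final Map<String, String> barcodeToProduct = new HashMap<>();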
Dynamic permission application
Code:
private static final int PERMISSION_REQUESTS = 1;

@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // Check the camera permission and request it at runtime if it has not been granted yet.
    if (!allPermissionsGranted()) {
        getRuntimePermissions();
    }
}
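The onCreate() method above calls two helper methods that the original post does not show. A minimal sketch of how they might look (the method names match the call sites above; the implementation details are my assumption):
Code:
// Hypothetical helpers assumed by onCreate(); they use androidx.core.content.ContextCompat
// and androidx.core.app.ActivityCompat.
private final String[] requiredPermissions = {Manifest.permission.CAMERA};

private boolean allPermissionsGranted() {
    // Returns true only if every required permission has already been granted.
    for (String permission : requiredPermissions) {
        if (ContextCompat.checkSelfPermission(this, permission) != PackageManager.PERMISSION_GRANTED) {
            return false;
        }
    }
    return true;
}

private void getRuntimePermissions() {
    // Ask the user for all permissions that are still missing.
    ActivityCompat.requestPermissions(this, requiredPermissions, PERMISSION_REQUESTS);
}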
Page for adding a product
Clicking the add product button opens the product-adding page.
Code:
public void addProduct(View view) {
Intent intent = new Intent(MainActivity.this, AddProductActivity.class);
startActivityForResult(intent, REQUEST_ADD_PRODUCT);
}
Scan the barcode to enter the product's barcode information.
Invoke the Default View mode to scan the code.
Code:
private void scanBarcode(int requestCode) {
HmsScanAnalyzerOptions options = new HmsScanAnalyzerOptions.Creator().setHmsScanTypes(HmsScan.ALL_SCAN_TYPE).create();
ScanUtil.startScan(this, requestCode, options);
}
Save the QR code scanning result in the callback function.
Code:
@Override
public void onActivityResult(int requestCode, int resultCode, Intent data) {
super.onActivityResult(requestCode, resultCode, data);
if (data == null) {
return;
}
if ((requestCode == this.REQUEST_CODE_SCAN_ALL)
&& (resultCode == Activity.RESULT_OK)) {
HmsScan obj = data.getParcelableExtra(ScanUtil.RESULT);
if (obj != null && obj.getOriginalValue() != null) {
this.barcode = obj.getOriginalValue();
}
} else if ((requestCode == this.REQUEST_TAKE_PHOTO)
&& (resultCode == Activity.RESULT_OK)) {
……
}
}
Searching for an Offering by Scanning the QR Code
The method of scanning the QR code is similar. You can directly perform the query on the home page and display the result in the callback function.
Code:
public void queryProduct(View view) {
HmsScanAnalyzerOptions options = new HmsScanAnalyzerOptions.Creator().setHmsScanTypes(HmsScan.ALL_SCAN_TYPE).create();
ScanUtil.startScan(this, REQUEST_QUERY_PRODUCT, options);
}
@Override
public void onActivityResult(int requestCode, int resultCode, Intent data) {
super.onActivityResult(requestCode, resultCode, data);
if (data == null) {
return;
}
if ((requestCode == this.REQUEST_ADD_PRODUCT) && (resultCode == Activity.RESULT_OK)) {
barcodeToProduct.put(data.getStringExtra(Constant.BARCODE_VALUE), data.getStringExtra(Constant.IMAGE_PATH_VALUE));
} else if ((requestCode == this.REQUEST_QUERY_PRODUCT) && (resultCode == Activity.RESULT_OK)) {
HmsScan obj = data.getParcelableExtra(ScanUtil.RESULT);
String path = "";
if (obj != null && obj.getOriginalValue() != null) {
path = barcodeToProduct.get(obj.getOriginalValue());
}
if (path != null && !path.equals("")) {
loadCameraImage(path);
showPictures();
}
}
}
Demo
In the demo, use add product to enter the product's QR code information and take a photo. Then use query product to scan the product's QR code; if the product has been recorded in the system, its information is returned.
If you have any questions about the process, visit HUAWEI Developer Forum.
Image Kit Vision Service of HMS (Huawei Mobile Services) offers us very stylish filters to build a photo editor app. In this article, we will design a nice filtering screen using Vision Service. Moreover, it will be very easy to develop and it will allow you to make elegant beauty apps.
The Image Vision service provides 24 color filters for image optimization. It renders the images you provide and returns them as filtered bitmap objects.
Requirements :
Huawei Phone (It doesn’t support non-Huawei Phones)
EMUI 8.1 or later (Min Android SDK Version 26)
Restrictions :
When using the filter function of Image Vision, make sure that the size of the image to be parsed is not greater than 15 MB, the image resolution is not greater than 8000 x 8000, and the aspect ratio is between 1:3 and 3:1. If the image resolution is greater than 8000 x 8000 after the image is decompressed by adjusting the compression settings or the aspect ratio is not between 1:3 and 3:1, a result code indicating parameter error will be returned. In addition, a larger image size can lead to longer parsing and response time as well as higher memory and CPU usage and power consumption.
Let’s start to build a nice filtering screen. First of all, please follow these steps to create a regular app on App Gallery.
Then we need to add the dependency to the app-level gradle file. (implementation 'com.huawei.hms:image-vision:1.0.2.301')
Don't forget to add the agconnect plugin. (apply plugin: 'com.huawei.agconnect')
Code:
apply plugin: 'com.android.application'
apply plugin: 'kotlin-android'
apply plugin: 'kotlin-android-extensions'
apply plugin: 'kotlin-kapt'
apply plugin: 'com.huawei.agconnect'
android {
compileSdkVersion 29
buildToolsVersion "29.0.3"
defaultConfig {
applicationId "com.huawei.hmsimagekitdemo"
minSdkVersion 26
targetSdkVersion 29
versionCode 1
versionName "1.0"
testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
}
}
lintOptions {
abortOnError false
}
}
repositories {
flatDir {
dirs 'libs'
}
}
dependencies {
implementation fileTree(dir: 'libs', include: ['*.aar'])
implementation "org.jetbrains.kotlin:kotlin-stdlib:$kotlin_version"
implementation 'androidx.core:core-ktx:1.3.0'
implementation 'androidx.appcompat:appcompat:1.1.0'
implementation 'androidx.constraintlayout:constraintlayout:1.1.3'
implementation "org.jetbrains.kotlin:kotlin-stdlib:$kotlin_version"
implementation 'androidx.lifecycle:lifecycle-viewmodel-ktx:2.1.0'
implementation 'androidx.lifecycle:lifecycle-extensions:2.2.0'
implementation 'com.google.android.material:material:1.0.0'
implementation 'androidx.legacy:legacy-support-v4:1.0.0'
testImplementation 'junit:junit:4.12'
androidTestImplementation 'androidx.test.ext:junit:1.1.1'
androidTestImplementation 'androidx.test.espresso:espresso-core:3.2.0'
implementation 'com.huawei.hms:image-vision:1.0.2.301'
}
Add maven repo url and agconnect dependency to the project level gradle file.
Code:
// Top-level build file where you can add configuration options common to all sub-projects/modules.
buildscript {
ext.kotlin_version = "1.3.72"
repositories {
google()
jcenter()
maven { url 'https://developer.huawei.com/repo/' }
}
dependencies {
classpath "com.android.tools.build:gradle:4.0.0"
classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
classpath 'com.huawei.agconnect:agcp:1.3.1.300'
// NOTE: Do not place your application dependencies here; they belong
// in the individual module build.gradle files
}
}
allprojects {
repositories {
google()
jcenter()
maven { url 'https://developer.huawei.com/repo/' }
}
}
task clean(type: Delete) {
delete rootProject.buildDir
}
After adding the dependencies, we need to create an ImageVision instance to perform related operations such as obtaining the filter effect. After that, you can initialize the service.
Code:
private fun initFilter() {
coroutineScope.launch { // CoroutineScope is used for the async calls
// Create an ImageVision instance and initialize the service.
imageVisionAPI = ImageVision.getInstance(baseContext)
imageVisionAPI.setVisionCallBack(object : VisionCallBack {
override fun onSuccess(successCode: Int) {
val initCode = imageVisionAPI.init(baseContext, authJson)
// initCode must be 0 if the initialization is successful.
if (initCode == 0)
Log.d(TAG, "getImageVisionAPI rendered image successfully")
}
override fun onFailure(errorCode: Int) {
Log.e(TAG, "getImageVisionAPI failure, errorCode = $errorCode")
Toast.makeText(this@MainActivity, "initFailed", Toast.LENGTH_SHORT).show()
}
})
}
}
Our app is allowed to use the Image Vision service only after the successful verification. So we should provide an authJson object with app credentials. The value of initCode must be 0, indicating that the initialization is successful.
Code:
private fun startFilter(filterType: String, intensity: String, compress: String) {
coroutineScope.launch { // CoroutineScope is used for the async calls
val jsonObject = JSONObject()
val taskJson = JSONObject()
try {
taskJson.put("intensity", intensity) //Filter strength. Generally, set this parameter to 1.
taskJson.put("filterType", filterType) // 24 different filterType code
taskJson.put("compressRate", compress) // Compression ratio.
jsonObject.put("requestId", "1")
jsonObject.put("taskJson", taskJson)
jsonObject.put("authJson", authJson) // App can use the service only after it is successfully authenticated.
coroutineScope.launch {
var deferred: Deferred<ImageVisionResult?> = async(Dispatchers.IO) {
imageVisionAPI?.getColorFilter(jsonObject, bitmapFromGallery)
// Obtain the rendering result from visionResult
}
visionResult = deferred.await() // wait till obtain ImageVisionResult object
val image = visionResult?.image
if (image == null)
Log.e(TAG, "FilterException: Couldn't render the image. Check the restrictions while rendering an image by Image Vision Service")
channel.send(image)
// Sending image bitmap with an async channel to make it receive with another channel
}
} catch (e: JSONException) {
Log.e(TAG, "JSONException: " + e.message)
}
}
}
Select an image from the gallery. Call the init filter method and then start filtering, one by one, the images located in the RecyclerView.
Code:
override fun onActivityResult(requestCode: Int, resultCode: Int, intent: Intent?) {
super.onActivityResult(requestCode, resultCode, intent)
if (resultCode == RESULT_OK) {
when (requestCode) {
PICK_REQUEST ->
if (intent != null) {
coroutineScope.launch {
var deferred: Deferred<Uri?> =
async(Dispatchers.IO) {
intent.data
}
imageUri = deferred.await()
imgView.setImageURI(imageUri)
bitmapFromGallery = (imgView.getDrawable() as BitmapDrawable).bitmap
initFilter()
startFilterForSubImages()
}
}
}
}
}
In our scenario, the user taps a filter to render the selected image, and the ImageVisionResult object returns the filtered bitmap. So we implement the onSelected method of the interface in our activity, which receives the FilterItem object of the clicked item from the adapter.
Code:
// Initialize and start a filter operation when a filter item is selected
override fun onSelected(item: BaseInterface) {
if (!channelIsFetching)
{
if (bitmapFromGallery == null)
Toast.makeText(baseContext, getString(R.string.select_photo_from_gallery), Toast.LENGTH_SHORT).show()
else
{
var filterItem = item as FilterItem
initFilter() // initialize the vision service
startFilter(filterItem.filterId, "1", "1") // intensity and compress are 1
coroutineScope.launch {
withContext(Dispatchers.Main)
{
imgView.setImageBitmap(channel.receive()) // receive the filtered bitmap result from another channel
stopFilter() // stop the vision service
}
}
}
}
else
Toast.makeText(baseContext, getString(R.string.wait_to_complete_filtering), Toast.LENGTH_SHORT).show()
}
Each of the 24 filters is selected through its own filterType code; the full list is available in the Image Vision documentation.
When the user opens the gallery and selects an image from a directory, we produce 24 differently filtered versions of that image. I used async coroutine channels to render the images in a first-in, first-out manner, so we obtain the filtered images one by one. Using the Image Vision Service with Kotlin coroutines is fast and performance-friendly.
To turn off hardware acceleration, configure the AndroidManifest.xml file as follows:
Code:
<application
android:allowBackup="true"
android:hardwareAccelerated="false"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity
android:name=".ui.MainActivity">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
If you do not need to use filters any longer, call the imageVisionAPI.stop() API to stop the Image Vision service. If the returned stopCode is 0, the service is successfully stopped.
Code:
private fun stopFilter() {
if(imageVisionAPI != null)
imageVisionAPI.stop() // Stop the service if you don't need anymore
}
We have designed an elegant filtering screen quite easily. Preparing a filter page will no longer eat up your time, and you can develop quickly without having to know OpenGL. You should try the Image Kit Vision Service as soon as possible.
And the result :
For more information about HMS Image Kit Vision Service please refer to :
HMS Image Kit Vision Service Documentation
Introduction
In this article, we will learn how to implement the Huawei HiAI Text Recognition service in an Android application. This service helps us extract text from screenshots and photos.
Nowadays nobody wants to type content by hand, and there are many reasons to integrate this service into our apps. The user can capture an image or pick one from the gallery to retrieve its text, so that the content can be edited easily.
Use case: with this HiAI kit, the user can extract otherwise unreadable image content and make it useful. Let's start.
Requirements
1. Any operating system (MacOS, Linux and Windows).
2. Any IDE with Android SDK installed (IntelliJ, Android Studio).
3. HiAI SDK.
4. Minimum API Level 23 is required.
5. Requires devices running EMUI 9.0.0 or later.
6. Requires a Kirin 990/985/980/970/825Full/820Full/810Full/720Full/710Full processor.
How to integrate HMS Dependencies
1. First of all, we need to create an app on AppGallery Connect and add related details about HMS Core to our project. For more information check this link
2. Download agconnect-services.json file from AGC and add into app’s root directory.
3. Add the required dependencies to the build.gradle file under the root folder.
Code:
maven {url 'https://developer.huawei.com/repo/'}
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
4. Add the App level dependencies to the build.gradle file under app folder.
Code:
apply plugin: 'com.huawei.agconnect'
5. Add the required permissions to the AndroidManifest.xml file.
Code:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.CAMERA"/>
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.hardware.camera"/>
<uses-permission android:name="android.permission.HARDWARE_TEST.camera.autofocus"/>
6. Now, sync your project.
How to apply for HiAI Engine Library
1. Navigate to this URL, choose App Service > Development and click HUAWEI HiAI.
2. Click Apply for HUAWEI HiAI kit.
3. Enter required information like product name and Package name, click Next button.
4. Verify the application details and click Submit button.
5. Click the Download SDK button to open the SDK list.
6. Unzip downloaded SDK and add into your android project under lib folder.
7. Add jar files dependences into app build.gradle file.
Code:
implementation fileTree(include: ['*.aar', '*.jar'], dir: 'libs')
implementation 'com.google.code.gson:gson:2.8.6'
repositories {
    flatDir {
        dirs 'libs'
    }
}
8. After completing this above setup, now Sync your gradle file.
Let’s do code
I have created a project on Android studio with empty activity let’s start coding.
In the MainActivity.java we can create the business logic.
Java:
public class MainActivity extends AppCompatActivity {
private boolean isConnection = false;
private int REQUEST_CODE = 101;
private int REQUEST_PHOTO = 100;
private Bitmap bitmap;
private Bitmap resultBitmap;
private Button btnImage;
private ImageView originalImage;
private ImageView conversionImage;
private TextView textView;
private TextView contentText;
private final String[] permission = {
Manifest.permission.CAMERA,
Manifest.permission.WRITE_EXTERNAL_STORAGE,
Manifest.permission.READ_EXTERNAL_STORAGE};
private ImageSuperResolution resolution;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
requestPermissions(permission, REQUEST_CODE);
initHiAI();
originalImage = findViewById(R.id.super_origin);
conversionImage = findViewById(R.id.super_image);
textView = findViewById(R.id.text);
contentText = findViewById(R.id.content_text);
btnImage = findViewById(R.id.btn_album);
btnImage.setOnClickListener(v -> {
selectImage();
});
}
private void initHiAI() {
VisionBase.init(this, new ConnectionCallback() {
@Override
public void onServiceConnect() {
isConnection = true;
DeviceCompatibility();
}
@Override
public void onServiceDisconnect() {
}
});
}
private void DeviceCompatibility() {
resolution = new ImageSuperResolution(this);
int support = resolution.getAvailability();
if (support == 0) {
Toast.makeText(this, "Device supports HiAI Image super resolution service", Toast.LENGTH_SHORT).show();
} else {
Toast.makeText(this, "Device doesn't supports HiAI Image super resolution service", Toast.LENGTH_SHORT).show();
}
}
public void selectImage() {
Intent intent = new Intent(Intent.ACTION_PICK);
intent.setType("image/*");
startActivityForResult(intent, REQUEST_PHOTO);
}
@Override
protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
super.onActivityResult(requestCode, resultCode, data);
if (resultCode == RESULT_OK) {
if (data != null && requestCode == REQUEST_PHOTO) {
try {
bitmap = MediaStore.Images.Media.getBitmap(getContentResolver(), data.getData());
setBitmap();
} catch (Exception e) {
e.printStackTrace();
}
}
}
}
private void setBitmap() {
int height = bitmap.getHeight();
int width = bitmap.getWidth();
if (width <= 1440 && height <= 15210) {
originalImage.setImageBitmap(bitmap);
setTextHiAI();
} else {
Toast.makeText(this, "Image size should be below 1440*15210 pixels", Toast.LENGTH_SHORT).show();
}
}
private void setTextHiAI() {
textView.setText("Extraction Text");
contentText.setVisibility(View.VISIBLE);
TextDetector detector = new TextDetector(this);
VisionImage image = VisionImage.fromBitmap(bitmap);
TextConfiguration config = new TextConfiguration();
config.setEngineType(TextConfiguration.AUTO);
config.setEngineType(TextDetectType.TYPE_TEXT_DETECT_FOCUS_SHOOT_EF);
detector.setTextConfiguration(config);
Text result = new Text();
int statusCode = detector.detect(image, result, null);
if (statusCode != 0) {
Log.e("TAG", "Failed to start engine, try restart app,");
}
if (result.getValue() != null) {
contentText.setText(result.getValue());
Log.d("TAG", result.getValue());
} else {
Log.e("TAG", "Result test value is null!");
}
}
}
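Note that detect() is called synchronously inside setTextHiAI(), so a large image can block the UI thread. Below is a minimal sketch of running the extraction in a worker thread and posting the result back to the UI thread; this helper is my own addition, not part of the original sample:
Java:
// Hypothetical variant of setTextHiAI(): run the synchronous detection off the UI thread.
private void runTextDetectionAsync() {
    new Thread(() -> {
        TextDetector detector = new TextDetector(MainActivity.this);
        VisionImage image = VisionImage.fromBitmap(bitmap);
        TextConfiguration config = new TextConfiguration();
        config.setEngineType(TextConfiguration.AUTO);
        detector.setTextConfiguration(config);
        Text result = new Text();
        int statusCode = detector.detect(image, result, null);
        // Post the result back to the UI thread before touching any views.
        runOnUiThread(() -> {
            if (statusCode == 0 && result.getValue() != null) {
                contentText.setText(result.getValue());
            } else {
                Log.e("TAG", "Text detection failed, statusCode = " + statusCode);
            }
        });
    }).start();
}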
Demo
Tips and Tricks
1. Download latest Huawei HiAI SDK.
2. Set minSDK version to 23 or later.
3. Do not forget to add jar files into gradle file.
4. The screenshot size should be no more than 1440 x 15210 pixels.
5. The recommended photo size is 720p.
6. Refer to this URL for the list of supported countries/regions.
Conclusion
In this article, we have learned how to implement the HiAI Text Recognition service in an Android application to extract content from screenshots and photos.
Thanks for reading! If you enjoyed this story, please click the Like button and Follow. Feel free to leave a Comment below.
Reference
Huawei HiAI Kit URL
Original Source
Introduction
In this article, we will learn how to implement the Huawei HiAI Table Recognition service in an Android application. This service helps us extract table content from images.
The table recognition algorithm is based on the line structure of the table: clear and detectable lines are necessary for proper identification of the cells.
Use case: imagine you have lots of paperwork and documents containing tables whose data you would like to manipulate. Conventionally you would copy them manually or generate Excel files for third-party apps.
Requirements
1. Any operating system (MacOS, Linux and Windows).
2. Any IDE with Android SDK installed (IntelliJ, Android Studio).
3. HiAI SDK.
4. Minimum API Level 23 is required.
5. Requires devices running EMUI 9.0.0 or later.
6. Requires a Kirin 990/985/980/970/825Full/820Full/810Full/720Full/710Full processor.
Features
1. Restores the table information including text in the cells, and identifies merged cells as well.
2. Fast recognition: returns the text of a table containing 50 lines within 3 seconds.
3. Recognition accuracy level >85%
4. Recall rate >80%
How to integrate HMS Dependencies
1. First of all, we need to create an app on AppGallery Connect and add related details about HMS Core to our project. For more information check this link
2. Download agconnect-services.json file from AGC and add into app’s root directory.
3. Add the required dependencies to the build.gradle file under root folder.
Code:
maven {url 'https://developer.huawei.com/repo/'}
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
4. Add the App level dependencies to the build.gradle file under app folder.
Code:
apply plugin: 'com.huawei.agconnect'
5. Add the required permissions to the AndroidManifest.xml file.
Code:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.CAMERA"/>
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.hardware.camera"/>
<uses-permission android:name="android.permission.HARDWARE_TEST.camera.autofocus"/>
6. Now, sync your project.
How to apply for HiAI Engine Library
1. Navigate to this URL, choose App Service > Development and click HUAWEI HiAI.
2. Click Apply for HUAWEI HiAI kit.
3. Enter required information like Product name and Package name, click Next button.
4. Verify the application details and click Submit button.
5. Click the Download SDK button to open the SDK list.
6. Unzip downloaded SDK and add into your android project under lib folder.
7. Add jar files dependences into app build.gradle file.
Code:
implementation fileTree(include: ['*.aar', '*.jar'], dir: 'libs')
implementation 'com.google.code.gson:gson:2.8.6'
repositories {
    flatDir {
        dirs 'libs'
    }
}
8. After completing this above setup, now Sync your gradle file.
Let’s do code
I have created a project on Android studio with empty activity let’s start coding.
In the MainActivity.java we can create the business logic.
Code:
public class MainActivity extends AppCompatActivity {
private boolean isConnection = false;
private int REQUEST_CODE = 101;
private int REQUEST_PHOTO = 100;
private Bitmap bitmap;
private Bitmap resultBitmap;
private Button btnImage;
private ImageView originalImage;
private ImageView conversionImage;
private TextView textView;
private TextView contentText;
private final String[] permission = {
Manifest.permission.CAMERA,
Manifest.permission.WRITE_EXTERNAL_STORAGE,
Manifest.permission.READ_EXTERNAL_STORAGE};
private ImageSuperResolution resolution;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
requestPermissions(permission, REQUEST_CODE);
initHiAI();
originalImage = findViewById(R.id.super_origin);
conversionImage = findViewById(R.id.super_image);
textView = findViewById(R.id.text);
contentText = findViewById(R.id.content_text);
btnImage = findViewById(R.id.btn_album);
btnImage.setOnClickListener(v -> {
selectImage();
});
}
private void initHiAI() {
VisionBase.init(this, new ConnectionCallback() {
@Override
public void onServiceConnect() {
isConnection = true;
DeviceCompatibility();
}
@Override
public void onServiceDisconnect() {
}
});
}
private void DeviceCompatibility() {
resolution = new ImageSuperResolution(this);
int support = resolution.getAvailability();
if (support == 0) {
Toast.makeText(this, "Device supports HiAI Image super resolution service", Toast.LENGTH_SHORT).show();
} else {
Toast.makeText(this, "Device doesn't supports HiAI Image super resolution service", Toast.LENGTH_SHORT).show();
}
}
public void selectImage() {
Intent intent = new Intent(Intent.ACTION_PICK);
intent.setType("image/*");
startActivityForResult(intent, REQUEST_PHOTO);
}
@Override
protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
super.onActivityResult(requestCode, resultCode, data);
if (resultCode == RESULT_OK) {
if (data != null && requestCode == REQUEST_PHOTO) {
try {
bitmap = MediaStore.Images.Media.getBitmap(getContentResolver(), data.getData());
if (isConnection) {
setTableAI();
}
} catch (Exception e) {
e.printStackTrace();
}
}
}
}
private void setTableAI() {
textView.setText("Extraction Table Text");
contentText.setVisibility(View.VISIBLE);
TableDetector mTableDetector = new TableDetector(this);
VisionImage image = VisionImage.fromBitmap(bitmap);
VisionTableConfiguration mTableConfig = new VisionTableConfiguration.Builder()
.setAppType(VisionTableConfiguration.APP_NORMAL)
.setProcessMode(VisionTableConfiguration.MODE_OUT)
.build();
mTableDetector.setVisionConfiguration(mTableConfig);
mTableDetector.prepare();
Table table = new Table();
int mResult_code = mTableDetector.detect(image, table, null);
if (mResult_code == 0) {
int count = table.getTableCount();
List<TableContent> tc = table.getTableContent();
StringBuilder sbTableCell = new StringBuilder();
List<TableCell> tableCell = tc.get(0).getBody();
for (TableCell c : tableCell) {
List<String> words = c.getWord();
StringBuilder sb = new StringBuilder();
for (String s : words) {
sb.append(s).append(",");
}
String cell = c.getStartRow() + ":" + c.getEndRow() + ": " + c.getStartColumn() + ":" +
c.getEndColumn() + "; " + sb.toString();
sbTableCell.append(cell).append("\n");
}
// Display the parsed cells once the whole table has been processed.
contentText.setText("Count = " + count + "\n\n" + sbTableCell.toString());
}
}
}
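Since the goal of the use case is to reuse the table data, a small follow-up step could turn the parsed cells into CSV text that spreadsheet apps can open. Below is a minimal sketch using only the TableCell getters seen above; it assumes getStartRow()/getStartColumn()/getEndRow()/getEndColumn() return zero-based indices, and it writes each cell only at its start position (merged cells are not expanded):
Code:
// Hypothetical helper: flatten the first parsed table into CSV-like text.
private String tableToCsv(Table table) {
    List<TableCell> cells = table.getTableContent().get(0).getBody();
    // Work out the table dimensions from the parsed cells.
    int rows = 0;
    int cols = 0;
    for (TableCell c : cells) {
        rows = Math.max(rows, c.getEndRow() + 1);
        cols = Math.max(cols, c.getEndColumn() + 1);
    }
    String[][] grid = new String[rows][cols];
    for (TableCell c : cells) {
        grid[c.getStartRow()][c.getStartColumn()] = TextUtils.join(" ", c.getWord());
    }
    StringBuilder csv = new StringBuilder();
    for (String[] row : grid) {
        for (int i = 0; i < row.length; i++) {
            csv.append(row[i] == null ? "" : row[i]);
            if (i < row.length - 1) {
                csv.append(",");
            }
        }
        csv.append("\n");
    }
    return csv.toString();
}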
Demo
Tips and Tricks
1. Download latest Huawei HiAI SDK.
2. Set minSDK version to 23 or later.
3. Do not forget to add jar files into gradle file.
4. It supports slides images.
5. The input resolution should be larger than 720p, with an aspect ratio smaller than 2:1.
6. It supports only printed text; images, formulas, handwritten content, seals, and watermarks cannot be identified.
7. Refer to this URL for the list of supported countries/regions.
Conclusion
That's it! Your table content is now extracted from the image, ready for further analysis or editing. This works for tables with clear and simple structure information.
Thanks for reading! If you enjoyed this story, please click the Like button and Follow. Feel free to leave a Comment below.
Reference
Huawei HiAI Table Recognition Kit URL
Original Source
Let's be honest: We can't function without the Internet. No matter where we go, we're always looking for ways to hook up the net.
Although more and more public places are offering free Wi-Fi networks, connecting to them remains a tiresome process. Many free Wi-Fi networks require users to register on a web page, click on an ad, or download a certain app, before granting Internet access.
As a developer, I have been scratching my head over an easier way to connect to Wi-Fi networks. Then I came across the barcode-scanning feature of Scan Kit, which allows a business owner to create a QR code that customers can scan with their phones to quickly connect to a Wi-Fi network. What's more, customers can even share the QR code with people around them. This speeds up the Wi-Fi connection process while keeping customers' personal data properly protected.
Technical Principles
The barcode-scanning Wi-Fi connection solution requires only two capabilities: barcode generation and barcode scanning.
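The key idea is to encode the Wi-Fi credentials in the widely used WIFI: string format and then generate a QR code from that string. A minimal sketch of building the content string (the network name and password below are placeholders):
Java:
// Standard Wi-Fi QR payload format: WIFI:S:<ssid>;T:<WPA|WEP|nopass>;P:<password>;;
String ssid = "MyShopWiFi";          // placeholder network name
String password = "myPassword123";   // placeholder password
String wifiContent = "WIFI:S:" + ssid + ";T:WPA;P:" + password + ";;";
// This string is later passed to ScanUtil.buildBitmap() as the barcode content.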
Using the Code
Building Scanning Capabilities
1. Configure the Huawei Maven repository address.
Go to buildscript > repositories and configure the Maven repository address for the HMS Core SDK. Repeat this step for allprojects > repositories.
Java:
buildscript {
repositories {
google()
jcenter()
// Configure the Maven repository address for the HMS Core SDK.
maven {url 'https://developer.huawei.com/repo/'}
}
}
allprojects {
repositories {
google()
jcenter()
// Configure the Maven repository address for the HMS Core SDK.
maven {url 'https://developer.huawei.com/repo/'}
}
}
In Gradle 7.0 or later, configuration under allprojects > repositories is migrated to the project-level settings.gradle file. The following is a configuration example of the settings.gradle file:
Java:
dependencyResolutionManagement {
...
repositories {
google()
jcenter()
maven {url 'https://developer.huawei.com/repo/'}
}
}
2. Add build dependencies
Java:
dependencies{
// Scan SDK.
implementation 'com.huawei.hms:scan:2.3.0.300'
}
3. Configure obfuscation scripts
Open the obfuscation configuration file proguard-rules.pro in the app's root directory of the project, and add configurations to exclude the HMS Core SDK from obfuscation.
Java:
-ignorewarnings
-keepattributes *Annotation*
-keepattributes Exceptions
-keepattributes InnerClasses
-keepattributes Signature
-keepattributes SourceFile,LineNumberTable
-keep class com.huawei.hianalytics.**{*;}
-keep class com.huawei.updatesdk.**{*;}
-keep class com.huawei.hms.**{*;}
4. Add permissions to the AndroidManifest.xml file
Java:
<!-- Camera permission -->
<uses-permission android:name="android.permission.CAMERA" />
<!-- File read permission -->
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
5. Dynamically request the permissions
Java:
ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.CAMERA, Manifest.permission.READ_EXTERNAL_STORAGE}, requestCode);
6. Check the permission request result
Java:
@Override
public void onRequestPermissionsResult(int requestCode, String[] permissions, int[] grantResults) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults);
if (permissions == null || grantResults == null) {
return;
}
// The permissions are successfully requested or have been assigned.
if (requestCode == CAMERA_REQ_CODE) {
// Scan barcodes in Default View mode.
// Parameter description:
// activity: activity that requests barcode scanning.
// requestCode: request code, which is used to check whether the scanning result is obtained from Scan Kit.
ScanUtil.startScan(this, REQUEST_CODE_SCAN_ONE, new HmsScanAnalyzerOptions.Creator().create());
}
}
7. Receive the barcode scanning result through the callback API, regardless of whether it is captured by the camera or from an image
Java:
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
super.onActivityResult(requestCode, resultCode, data);
if (resultCode != RESULT_OK || data == null) {
return;
}
if (requestCode == REQUEST_CODE_SCAN_ONE) {
// Input an image for scanning and return the result.
HmsScan hmsScan = data.getParcelableExtra(ScanUtil.RESULT);
if (hmsScan != null) {
// Show the barcode parsing result.
showResult(hmsScan);
}
}
}
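The showResult() method is not shown in the original post. A minimal sketch simply displays the decoded text (for a Wi-Fi QR code this is the WIFI: payload string); the Toast-based display is my assumption:
Java:
// Hypothetical helper: display whatever text the scanned barcode contained.
private void showResult(HmsScan hmsScan) {
    String value = hmsScan.getOriginalValue();
    Toast.makeText(this, "Scan result: " + value, Toast.LENGTH_LONG).show();
}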
QR Code Scanner Demo
Building the Barcode Generation Function
1. Repeat the first three steps for building scanning capabilities.
2. Declare the necessary permission in the AndroidManifest.xml file
Java:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
3. Dynamically request the permission
Java:
ActivityCompat.requestPermissions(this,new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE},requestCode);
4. Check the permission request result
Java:
@Override
public void onRequestPermissionsResult(int requestCode, String[] permissions, int[] grantResults) {
if (permissions == null || grantResults == null) {
return;
}
if (grantResults[0] == PackageManager.PERMISSION_GRANTED && requestCode == GENERATE_CODE) {
Intent intent = new Intent(this, GenerateCodeActivity.class);
this.startActivity(intent);
}
}
5. Generate a QR code
Java:
public void generateCodeBtnClick(View v) {
try {
HmsBuildBitmapOption options = new HmsBuildBitmapOption.Creator()
.setBitmapMargin(margin)
.setBitmapColor(color)
.setBitmapBackgroundColor(background)
.create();
resultImage = ScanUtil.buildBitmap(content, type, width, height, options);
barcodeImage.setImageBitmap(resultImage);
} catch (WriterException e) {
Toast.makeText(this, "Parameter Error!", Toast.LENGTH_SHORT).show();
}
}
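generateCodeBtnClick() uses several fields (content, type, width, height, margin, color, background) that are set elsewhere in the demo. Example values, assuming a QR code is being generated (all values below are placeholders):
Java:
// Example values for the fields used by generateCodeBtnClick().
String content = "WIFI:S:MyShopWiFi;T:WPA;P:myPassword123;;"; // the Wi-Fi payload to encode
int type = HmsScan.QRCODE_SCAN_TYPE;  // generate a QR code
int width = 400;                      // barcode width in pixels
int height = 400;                     // barcode height in pixels
int margin = 1;                       // border width around the barcode
int color = Color.BLACK;              // foreground color
int background = Color.WHITE;         // background color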
6. Save the QR code
Java:
public void saveCodeBtnClick(View v) {
if (resultImage == null) {
Toast.makeText(GenerateCodeActivity.this, "Please generate barcode first!", Toast.LENGTH_LONG).show();
return;
}
try {
String fileName = System.currentTimeMillis() + ".jpg";
String storePath = Environment.getExternalStorageDirectory().getAbsolutePath();
File appDir = new File(storePath);
if (!appDir.exists()) {
appDir.mkdir();
}
File file = new File(appDir, fileName);
FileOutputStream fileOutputStream = new FileOutputStream(file);
boolean isSuccess = resultImage.compress(Bitmap.CompressFormat.JPEG, 70, fileOutputStream);
fileOutputStream.flush();
fileOutputStream.close();
Uri uri = Uri.fromFile(file);
GenerateCodeActivity.this.sendBroadcast(new Intent(Intent.ACTION_MEDIA_SCANNER_SCAN_FILE, uri));
if (isSuccess) {
Toast.makeText(GenerateCodeActivity.this, "Barcode has been saved locally", Toast.LENGTH_LONG).show();
} else {
Toast.makeText(GenerateCodeActivity.this, "Barcode save failed", Toast.LENGTH_SHORT).show();
}
} catch (Exception e) {
Toast.makeText(GenerateCodeActivity.this, "Unknown Error", Toast.LENGTH_SHORT).show();
}
}
QR Code generator Demo
Wi-Fi QR Code Demo
The Wi-Fi QR Code Demo is a test program showing how to generate a QR code that contains Wi-Fi information and how to scan that QR code to connect to the Wi-Fi network.
Click "Default View Mode" button to open the QR Code Scanner
Click "Connect to network" to join
Type in Wifi Name and Wifi Password
Type in Barcode width and height
Click "Generate Barcode" button to get a new QR Code
Click "Save Barcode" button to save the QR Code as an image file.
References
>> HMS Core Scan Kit
>> HMS Core
>> Reddit for discussion with other developers
>> GitHub for demos and sample codes
>> Stack Overflow for solutions to integration issues