Beginner: Building a Custom Model Using Huawei ML Kit - Huawei Developers

Introduction
Are you new to machine learning?
If yes, then let's start from scratch.
What is machine learning?
Definition: "Field of study that gives computers the capability to learn without being explicitly programmed."
In general, machine learning is an application of artificial intelligence (AI): it gives devices the ability to learn from experience and improve themselves without explicit coding. For example, if you search for something, related ads will be shown on the screen.
Machine Learning is a subset of Artificial Intelligence. Machine Learning is the study of making machines more human-like in their behavior and decisions by giving them the ability to learn and develop their own programs. This is done with minimum human intervention, that is, no explicit programming. The learning process is automated and improved based on the experiences of the machines throughout the process. Good quality data is fed to the machines, and different algorithms are used to build ML models to train the machines on this data. The choice of algorithm depends on the type of data at hand, and the type of activity that needs to be automated.
Do you have a question like: what is the difference between machine learning and traditional programming?
Traditional programming
We feed the input data and well-written, tested code into the machine to generate output.
Machine Learning
We feed the input data along with the expected output into the machine during the learning phase, and it works out a program for itself.
Steps of machine learning
1. Gathering Data
2. Preparing that data
3. Choosing a model
4. Training
5. Evaluation
6. Hyperparameter tuning
7. Prediction
How does Machine Learning work?
The three major building blocks of a Machine Learning system are the model, the parameters, and the learner.
The model is the system which makes predictions.
The parameters are the factors which are considered by the model to make predictions.
The learner makes the adjustments in the parameters and the model to align the predictions with the actual results.
Now let's work through a water example and learn how machine learning works. A machine learning model here has to predict whether water is safe to drink or not. The parameters selected are as follows:
Dissolved oxygen
pH
Temperature
Decayed organic materials
Pesticides
Toxic and hazardous substances
Oils, grease, and other chemicals
Detergents
Learning from the training set.
This involves taking a sample data set of water from several places, for which the parameters are specified. We then have to define a description of each class (safe to drink or not) in terms of the parameter values for each type. The model can use this description to decide whether a new sample of water is safe to drink or not.
You can represent the values of the parameters, such as pH, temperature, and dissolved oxygen, as x, y, and z. Then (x, y, z) defines the parameters of each water sample in the training data. This set of data is called a training set. These values, when plotted on a graph, present a hypothesis in the form of a line, a rectangle, or a polynomial that best fits the desired results.
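To make the idea of a training set concrete, here is a minimal, hypothetical Kotlin sketch (not part of ML Kit, and the sample values are made up) that represents each water sample as its (x, y, z) parameters plus a label, and classifies a new sample by its nearest labelled neighbour:
Code:
// Hypothetical, simplified sample: (x, y, z) = (pH, temperature, dissolved oxygen)
data class WaterSample(val x: Double, val y: Double, val z: Double, val drinkable: Boolean)

// Toy training set: parameter values paired with known labels
val trainingSet = listOf(
    WaterSample(7.0, 15.0, 9.0, drinkable = true),
    WaterSample(7.2, 18.0, 8.5, drinkable = true),
    WaterSample(4.5, 30.0, 2.0, drinkable = false),
    WaterSample(9.8, 28.0, 1.5, drinkable = false)
)

// Predict a new sample's label from its nearest neighbour in the training set
fun predict(x: Double, y: Double, z: Double): Boolean =
    trainingSet.minByOrNull { s ->
        (s.x - x) * (s.x - x) + (s.y - y) * (s.y - y) + (s.z - z) * (s.z - z)
    }!!.drinkable
A real model learns a compact hypothesis (a boundary that separates the classes) from many such samples rather than comparing against every stored point.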
Now that we have learnt what machine learning is and how it works, let's understand the Huawei ML Kit.
Huawei ML kit
HUAWEI ML Kit allows your apps to easily leverage Huawei's long-term proven expertise in machine learning to support diverse artificial intelligence (AI) applications throughout a wide range of industries.
Huawei already provides some built-in features in the SDK, which are as follows.
Text-related services.
Text recognition
Document recognition
ID card recognition
Bank card recognition
General card recognition
Form Recognition
Language/voice-related services.
Translation
Language detection
Text to speech
Image-related services.
Image classification
Object detection and Tracking
Landmark recognition
Product visual search
Image super resolution
Document skew correction
Text image super resolution
Scene detection
Face/body-related services.
Face detection
Skeleton detection
Liveness detection
Hand gesture recognition
Face verification
Natural language processing services.
Text embedding
Custom model.
AI Create
Model deployment and Inference
Pre-trained model
In this series of articles, we will learn about the Huawei custom model. MindSpore Lite, the on-device inference framework for custom models provided by ML Kit, facilitates integration and development and runs directly on devices. By introducing this inference framework, you can define your own model and implement model inference at minimal cost.
Advantages of MindSpore Lite
It provides simple and complete APIs for you to integrate the inference framework of an on-device custom model.
It lets you customize models simply and quickly, providing an excellent machine learning experience.
It is compatible with mainstream model inference platforms and frameworks on the market, such as MindSpore, TensorFlow Lite, Caffe, and ONNX. Different models can be converted into the .ms format without any loss and then run through the on-device inference framework.
Custom models occupy small storage space and can be quantized and compressed. Models can be quickly deployed and executed. In addition, models can be hosted on the cloud and downloaded as required, reducing the APK size.
Steps to be followed to Implement Custom model
Step 1: Install HMS Toolkit from Android Studio Marketplace.
Step 2: Transfer learning by using AI Create.
Step 3: Model training
Step 4: Model verification
Step 5: Upload model to AGC
Step 6: Load the remote model
Step 7: Perform inference using model inference engine
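Before walking through these steps one by one, here is a rough Kotlin sketch of what steps 6 and 7 can look like. The class names (MLCustomRemoteModel, MLModelExecutor, and so on) come from the ML Kit custom model SDK, but the model name, package import, input shape, and output size below are assumptions for illustration and may differ for your model and SDK version.
Code:
import com.huawei.hms.mlsdk.custom.*  // assumed package for the custom model classes

fun runCustomModelInference() {
    // Step 6 (sketch): reference the model hosted in AppGallery Connect by its name
    val remoteModel = MLCustomRemoteModel.Factory("my_custom_model").create()

    // Step 7 (sketch): create an executor bound to that model
    val settings = MLModelExecutorSettings.Factory(remoteModel).create()
    val modelExecutor = MLModelExecutor.getInstance(settings)

    // Describe the tensor shapes; the 1x224x224x3 input and 3-class output are assumptions
    val inOutSettings = MLModelInputOutputSettings.Factory()
        .setInputFormat(0, MLModelDataType.FLOAT32, intArrayOf(1, 224, 224, 3))
        .setOutputFormat(0, MLModelDataType.FLOAT32, intArrayOf(1, 3))
        .create()

    // Placeholder for preprocessed image pixels; fill it from your bitmap in a real app
    val imageData = Array(1) { Array(224) { Array(224) { FloatArray(3) } } }
    val inputs = MLModelInputs.Factory().add(imageData).create()

    // Run inference asynchronously and read the class probabilities
    modelExecutor.exec(inputs, inOutSettings)
        .addOnSuccessListener { result ->
            val probabilities = result.getOutput<Array<FloatArray>>(0)
            // probabilities[0] holds one score per class; pick the highest
        }
        .addOnFailureListener { e ->
            // handle inference failure
        }
}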
Let us start one by one.
Step 1: Install HMS Toolkit from Android Studio Marketplace. After the installation, restart Android Studio.
· Choose File > Settings > Plugins
Result
Coming soon in the upcoming article.
Tips and Tricks
Make sure you are already registered as a Huawei developer.
Learn the basics of machine learning.
Install the HMS Toolkit in Android Studio.
Conclusion
In this article, we have learnt what machine learning is and how it works, the difference between traditional programming and machine learning, the steps required to build a custom model, and how to install the HMS Toolkit in Android Studio. In the upcoming article, I'll continue with the remaining steps for building a custom model with machine learning.
Reference
ML Kit Official document
Check out in the forum

Related

All About Maps - Episode 1: Showing Routes from GPX files on Maps

This article is originally from HUAWEI Developer Forum
Forum link: https://forums.developer.huawei.com/forumPortal/en/home​
All About Maps
Let's talk about maps. I started an open source project called All About Maps (https://github.com/ulusoyca/AllAboutMaps). In this project, I aim to demonstrate how we can implement the same map-related use cases with different map providers in one codebase. We will use Mapbox Maps, Google Maps, and Huawei HMS Map Kit. This project uses the following libraries and patterns:
MVVM pattern with Android Jetpack Libraries
Kotlin Coroutines for asynchronous operations
Dagger2 Dependency Injection
Android Clean Architecture
Note: The codebase changes over time. You can always find the latest code in the develop branch. The code at the time this article was written can be seen by choosing the tag episode_1-parse-gpx:
https://github.com/ulusoyca/AllAboutMaps/tree/episode_1-parse-gpx/
Motivation
Why do we need maps in our apps? What are the features a developer would expect from a map SDK? Let's try to list some:
Showing a coordinate on a map with camera options (zoom, tilt, latitude, longitude, bearing)
Adding symbols, photos, polylines, polygons to map
Handle user gestures (click, pinch, move events)
Showing maps with different map styles (Outdoor, Hybrid, Satellite, Winter, Dark, etc.)
Data visualization (heatmaps, charts, clusters, time-lapse effect)
Offline map visualization (providing map tiles without network connectivity)
Generate snapshot image of a bounded region
We can probably add more items, but I believe this is the list of features that all map provider companies would most likely provide. Knowing that we can achieve the same tasks with different map providers, we should not create huge dependencies on any specific provider in our codebase. When a product owner (PO) tells developers to switch from Google Maps to Mapbox Maps or Huawei Maps, developers should never see it as a big deal. It is software development. Business as usual.
One might wonder why a PO would want to switch from one map provider to another. In many cases, the reason is not in the technical details. For example, Google Play Services may not be available on some devices or in some regions, such as China. Another case is when a company X, which has a subscription to Mapbox, acquires a company Y which uses Google Maps. In this case the transition to one provider is more efficient. Changes in the terms of service and pricing might be other motivations.
We need competition in the market! Let's switch easily when needed, but how do dependencies make things worse? Problematic dependencies in the codebase are usually created by developing software like there is no tomorrow. It is not always the developers' fault: tight schedules, anti-refactoring teams, and unorganized planning may cause careless coding and eventually lead to technical debt. In this project, I aim to show how we can encapsulate the import lines below, belonging to three different map providers, into a minimum number of classes with minimum lines:
Code:
import com.huawei.hms.maps.*
import com.google.android.gms.maps.*
import com.mapbox.mapboxsdk.maps.*
It should be noted that the approach in this post is just one proposal. There are always alternative and better ways of implementation. In the end, as software developers, we should deliver our tasks time-efficiently, without over-engineering.
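As an illustration of this encapsulation idea (a minimal sketch only, not the project's actual classes), the rest of the app can depend on a small provider-agnostic interface, so that each implementation class is the only place that imports its provider's SDK:
Code:
// Provider-agnostic types used throughout the app
data class LatLng(val latitude: Double, val longitude: Double)

interface MapProvider {
    fun moveCamera(target: LatLng, zoom: Double)
    fun addPolyline(points: List<LatLng>)
    fun addMarker(position: LatLng, title: String)
}

// Only this class (and its Google/Mapbox siblings) would import com.huawei.hms.maps.*
class HuaweiMapProvider : MapProvider {
    override fun moveCamera(target: LatLng, zoom: Double) { /* delegate to a HuaweiMap instance */ }
    override fun addPolyline(points: List<LatLng>) { /* delegate to a HuaweiMap instance */ }
    override fun addMarker(position: LatLng, title: String) { /* delegate to a HuaweiMap instance */ }
}
Switching providers then means adding one new implementation class instead of touching every screen that shows a map.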
About the project
In the home page of the project you will see the list of tutorials. Since this is the first blog post, there is only one item for now. To make our life easier with RecyclerViews, I use the Epoxy library by Airbnb in the project. Once you click the buttons in the card, it will take you to the detail page. Using a bottom sheet, we can switch between map providers. Note that Huawei Map Kit requires a Huawei mobile phone.
In this first blog post, we will parse the GPX file of the 120 km route of the Cappadocia Ultra Trail race and show the route and checkpoints (food stations) on the map. I finished this race in 23 hours 45 minutes, and you can also read about my experience here (https://link.medium.com/uWmrWLAzR6). GPX is an open standard which contains route points that construct a polyline, and waypoints which mark locations of interest. In this case, the waypoints represent the food and aid stations in the race. We will show the route with a polyline and the waypoints with markers on the map.
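For reference, a GPX file stores track points as <trkpt lat="..." lon="..."> elements and waypoints as <wpt lat="..." lon="..."> elements. Below is a minimal Kotlin sketch of parsing them with Android's XmlPullParser; it is an illustration only, not the project's actual parser (see the repository for that), and it ignores metadata such as names and elevations.
Code:
import org.xmlpull.v1.XmlPullParser
import org.xmlpull.v1.XmlPullParserFactory
import java.io.InputStream

data class GpxPoint(val latitude: Double, val longitude: Double)

// Collects route points (<trkpt>) and waypoints (<wpt>) from a GPX stream
fun parseGpx(input: InputStream): Pair<List<GpxPoint>, List<GpxPoint>> {
    val routePoints = mutableListOf<GpxPoint>()
    val wayPoints = mutableListOf<GpxPoint>()
    val parser = XmlPullParserFactory.newInstance().newPullParser().apply { setInput(input, null) }
    while (parser.eventType != XmlPullParser.END_DOCUMENT) {
        if (parser.eventType == XmlPullParser.START_TAG) {
            val lat = parser.getAttributeValue(null, "lat")?.toDoubleOrNull()
            val lon = parser.getAttributeValue(null, "lon")?.toDoubleOrNull()
            if (lat != null && lon != null) {
                when (parser.name) {
                    "trkpt" -> routePoints.add(GpxPoint(lat, lon))
                    "wpt" -> wayPoints.add(GpxPoint(lat, lon))
                }
            }
        }
        parser.next()
    }
    return routePoints to wayPoints
}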
Architecture
Architecture is definitely not an overrated concept. Since the early days of Android, we have been seeking the best architectural patterns that suit Android development. We have heard of MVC, MVP, MVVM, MVI, and many other patterns will emerge. Change and adaptation to a new pattern is inevitable over time. We should keep in mind some basic and commonly accepted concepts like the SOLID principles, separation of concerns, maintainability, readability, testability, etc., so that we can switch between patterns easily when needed.
Nowadays, the widely accepted architecture in the Android community is modularization with Clean Architecture. If you have time to invest more, I would strongly suggest Joe Birch's clean architecture tutorials. As Joe suggests in his tutorials, we do not have to apply every rule line by line; instead, we take whatever we feel is needed. Here is my take and how I modularized the All About Maps app:
Note that dependency injection with Dagger2 is the core of this implementation. If you are not familiar with the concept, I strongly suggest you read the best Dagger2 tutorial in the wild Dagger2 world, by Nimrod Dayan.
Domain Module
Many of us are excited to start the implementation with the UI to see results immediately, but we should patiently build our blocks. We shall start with the domain module, since we will put our business logic there and define the entities and user interactions.
This is not the end. For full content, you can visit https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0201260729166420263&fid=0101187876626530001

Custom Model Generation with MindSpore Lite | HMS ML Kit

MindSpore is an open-source framework for AI-based application development announced by Huawei. It is a robust alternative to widely used AI frameworks such as TensorFlow and PyTorch.
Let’s start by emphasizing the features and advantages of MindSpore framework:
MindSpore implements AI algorithms for easier model development and provides cutting-edge technologies with Huawei AI processors to improve runtime efficiency and computing performance.
One of its advantages is that it can be used in several environments, such as on devices, in the cloud, and at the edge. It supports operating systems like iOS and Android, and AI applications on various devices such as mobile phones, tablets, and IoT devices.
MindSpore supports parallel training across hardware to reduce training times. It maximizes hardware computing power while minimizing inference latency and power consumption.
It provides dynamic debugging for developers, which makes it easy to find bugs in apps.
According to Huawei, MindSpore does not process data by itself but ingests the gradient and model information that has been processed. This ensures the integrity of sensitive data.
MindSpore Lite is an inference framework for custom models which is provided by HMS ML Kit to simplify the integration and development. The developers can define their own model and implement model inference thanks to MindSpore Lite capabilities.
MindSpore Lite is compatible with commonly used AI platforms like TensorFlow Lite, Caffe, and ONNX. Different models can be converted into the .ms (MindSpore) format and then run perfectly.
Custom models can be deployed and executed easily since they are compressed and occupy small storage.
It provides complete APIs to integrate inference framework of an on-device custom model.
HMS ML Kit enables you to train and generate custom models with deep machine learning. It also offers a pre-trained image classification model. You can develop your own custom model by using the transfer learning feature of ML Kit with a specific dataset.
I will explain how to train your own model using an example which contains three plant categories. We will use a small data set for reference and train the image classification model to identify cactus, pine, and succulent plants. The model will be created by using the HMS Toolkit plug-in and AI Create.
HMS Toolkit: As a lightweight IDE tool plugin, HMS Toolkit implements app creation, coding, conversion, debugging, test, and release. It helps you integrate HMS Core APIs with lower costs and higher efficiency.
AI Create: Provides the transfer learning capabilities of image classification and text classification. Images and texts can be identified thanks to AI Create. It uses MindSpore as a training framework and MindSpore Lite as inference framework.
Note: Use the Android Studio marketplace to install the HMS Toolkit plug-in. Go to File > Settings > Plugins > Marketplace, enter HMS Toolkit into the search box and click Install. After the installation completes, restart Android Studio.
We should first prepare the environment to train our model. AI Create currently only supports the Windows operating system. Open the Coding Assistant by using the new HMS section that comes with the HMS Toolkit plug-in. Go to AI > AI Create in the Coding Assistant, select Image, and click Confirm for image classification.
After this step, HMS Toolkit automatically downloads resources for you. If the Python environment is not configured, a dialog box will be displayed as below.
Note: You should download and install Python 3.7.5 from the link to use AI Create. After the installation completes, do not forget to add the Python installation path to the Path variable in Environment Variables and restart Android Studio.
After the environment is ready, selecting Image and clicking Confirm in AI Create will automatically start installing MindSpore. Make sure the framework has been installed successfully by checking the event logs.
From here, a new model section will open where you select an image folder to train your own model. You should prepare your data set in accordance with the requirements. For our demo, we will train the model to identify cactus, succulent, and pine plants with a small data set.
The folder structure should look like the one below:
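Since the screenshot is not reproduced here, a plausible layout for the three categories in this demo is shown below; each subfolder name becomes a classification label, and the file names are only illustrative.
Code:
plant_dataset/
├── cactus/
│   ├── cactus_01.jpg
│   └── ...
├── pine/
│   ├── pine_01.jpg
│   └── ...
└── succulent/
    ├── succulent_01.jpg
    └── ...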
The following conditions should be met for image resources:
The minimum number of pictures for each category of training data is 10.
The number of categories in the training data set must be at least 2 and at most 1,000.
Supported image formats: .bmp, .jpg, .jpeg, .png or .gif.
After the training image folder is selected, set the output model file path and the training parameters. If you check HMS Custom Model, a complete model will be generated. The training parameters affect the accuracy of the image recognition model; you can modify them if you have experience with deep learning. When you click Create Model, MindSpore will start training your model on the data set.
The training process takes time depending on your data set. As we used a small data set that just meets the minimum requirements, it completed quickly. You can also track the training logs; your model will be created at the specified path at the end of the process.
The training results are shared after model training is completed. AI Create lets you test your model by adding test images before using it in any project. You can also generate a demo project that implements your new model with the Generate Demo option.
You should create a new test image folder with the same structure as the provided data set.
As you can see above, our average test accuracy is calculated as 57.1%. This accuracy can be improved by providing a more comprehensive data set and more training.
You can also use and experience the results of your new model through a demo project created by HMS Toolkit. After the demo is created, you can directly build and run the project and check the results on a real device.
In this article, I wanted to share basic information about MindSpore and how we can use the transfer learning function of HMS for custom models.
You can also develop your own classification model by using this post as a reference. I hope that it will be useful for you!
Please follow our next articles for more details about ML Kit Custom Model and MindSpore.
References
https://developer.huawei.com/consum...ore-Guides/ml-mindspore-lite-0000001055328885
https://www.mindspore.cn/lite/en

Huawei Mobile Services delivers next-level success for Wongnai and Line Man on AppGallery

2020 was a roaring year for the food delivery sector – the already thriving industry was boosted by an increasingly digital population that had no other means to enjoy their favourite dish due to social distancing measures. According to Sensor Tower, these factors contributed to a huge 21% year-on-year growth of 1.7 billion food and drink category apps installs globally in 2020.
However, competition within this sector also intensified last year as new companies looked to stake their claim in the growing market. In order to do this, food apps need to be able to compete with the top dogs on all fronts to ensure they are equally effective at acquiring and retaining users. For many, the answer was evident – develop a more intuitive, powerful, and enjoyable app experience. Given the industry offers little space for differentiation in terms of services, companies are dependent on driving a superior user experience to build their brand loyalty.
This set of challenges holds true for two apps in Thailand – delivery platform Line Man and its lifestyle services review platform partner Wongnai. Line Man was looking to expand in 2020, but the lack of robust map service features and location tracking capabilities severely hampered its ability to achieve that goal. Similarly, Wongnai's lacklustre location tracking capability and platform manipulation issues were affecting its standing among users during a pivotal period.
Step in Huawei. The partners sought out Huawei for guidance to address their respective issues for their apps on AppGallery. The discussion then evolved to a collaboration with Huawei where the two developers integrated the open HMS (Huawei Mobile Services) Core capabilities, and were able to successfully eradicate their roadblocks and unleash better location and tracking capabilities.
Triangulating users’ locations swiftly, accurately and clearly
Wongnai, as a leading lifestyle services review platform, needed to accurately pinpoint users’ location to recommend relevant nearby shops and attractions. By integrating with the HMS Core Location Kit, Wongnai was able to leverage the kit’s Global Navigation Satellite System (GNSS), Wi-Fi, and base station location functionalities to provide flexible location-based services. This allows it to make dynamic, relevant recommendations for its users.
Line Man was facing a similar situation as well – it needed to constantly provide users with the location of their food and delivery riders to provide the most optimal ordering experience. Not only does the integration with Location Kit achieve this, it also enables advanced features such as geofencing and activity identification. These capabilities enable the app to share reliable updates on the status of the order unceasingly, building goodwill and positive word-of-mouth among customers.
Displaying a map that is not only relevant but intuitive
Given that both services are heavily reliant on users' locations, the developers also needed a map service that can be customised according to their needs and is capable of conveying all the relevant information effectively. In consideration of this, Wongnai and Line Man chose to integrate HUAWEI Map Kit with their respective apps as well.
With Map Kit, apps can leverage Huawei’s powerful map service and route planning capabilities to provide an elevated app experience for the users. The two developers were able to introduce an assortment of map functions based on their respective userbase’s unique needs. As a result, the users are able to effortlessly identify the various participating restaurants and businesses in one simple glance, improving the overall user experience.
Huawei and HMS Core drive success for their partners
The partnership with Huawei was a resounding success for the developers, both commercially and operationally. The integration with HMS Core not only propelled both apps past the two million downloads milestone, but it also helped Line Man boost its daily user figures by three-fold in the first two months.
Huawei is acutely aware of the different operational challenges facing developers, including the cost associated with launching apps across multiple distribution platforms. As part of its commitment to developers, Huawei provides developers with free access to certain HMS Core Kits to help alleviate the operational cost. Furthermore, HMS Core offers a unique function that can identify whether a device supports Huawei Mobile Service, allowing the app to call upon the corresponding SDK. This essentially meant that developers only need to maintain one app, reducing the time spent in terms of deployment and testing, as well as the operational costs associated with these activities.
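As a rough illustration of that capability (a minimal sketch; HuaweiApiAvailability and GoogleApiAvailability are the public availability helpers of the two SDKs, though exact package names and constants may vary by SDK version), an app can branch between the two service frameworks at runtime:
Code:
import android.content.Context
import com.google.android.gms.common.ConnectionResult
import com.google.android.gms.common.GoogleApiAvailability
import com.huawei.hms.api.HuaweiApiAvailability

// Check whether Huawei Mobile Services are available on this device
fun isHmsAvailable(context: Context): Boolean =
    HuaweiApiAvailability.getInstance()
        .isHuaweiMobileServicesAvailable(context) == com.huawei.hms.api.ConnectionResult.SUCCESS

// Check whether Google Play services are available on this device
fun isGmsAvailable(context: Context): Boolean =
    GoogleApiAvailability.getInstance()
        .isGooglePlayServicesAvailable(context) == ConnectionResult.SUCCESS

// Pick the corresponding SDK so a single APK can serve both ecosystems
fun chooseMobileServices(context: Context): String = when {
    isHmsAvailable(context) -> "HMS"
    isGmsAvailable(context) -> "GMS"
    else -> "NONE"
}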
In addition, Huawei extends its support for developers across the entire operation management spectrum, including marketing efforts. Wongnai and Line Man collaborated with AppGallery on a week-long campaign last year where users were able to redeem HUAWEI points through purchases in-app. The campaign was particularly effective for Wongnai who saw its new user download figures skyrocket 10-fold. The campaign boosted the apps’ visibility among AppGallery users, leading to a significant increase in daily downloads even after the campaign ended.
Another notable feat from this collaboration is the time it took to complete the entire integration process, especially in the case of Wongnai. The developer implemented three other HMS Core kits – Site Kit, Safety Detect, and Push Kit – on top of the two examples discussed above, and despite the mountainous amount of technical work that comes with it, the team was able to successfully incorporate the five kits into their app in just three weeks. This achievement was made possible through an intimate collaboration with Huawei’s developer support team, who offered extensive support as well as technical insights to facilitate the development process.

Principles Behind HUAWEI Prediction: How We Trained Models for the Service

HUAWEI Prediction utilizes machine learning, based on user behavior and attributes reported by HUAWEI Analytics Kit, to predict target audiences with next-level precision. The service can help you carry out and optimize operations. For example, it can work with A/B Testing to evaluate how effective your promotions have been, and it can also join hands with Remote Configuration to configure dedicated plans for specific audiences. This is likely to result in dramatically improved user retention and conversion.
Integrating the Analytics SDK into your app enables the Prediction service to run preset tasks for predicting lost, paying, and returning users. On the details page of a specific prediction task, you'll find audiences with high, medium, and low probabilities of triggering a specific event, with meticulous profiling. For example, an audience with a high churn probability will include users who are very likely to quit using the app over the next 7 days. The characteristics of these users are displayed on cards, which makes it easy for you to pursue targeted operations.
The following figures give you a sense of how the prediction task list and details page look in practice.
* Data in these figures is for reference only.
How we built these prediction models
First of all, we clarified what our prediction goal was, so that the type of data we collected reflected it. We then cleansed and sampled the collected data based on user characteristics to obtain a data set. This data set was divided into a 20% validation set and an 80% training set; multiple rounds of offline experiments were then conducted to determine the features and the most suitable parameters for forming models. The generated models were later trained online to perform prediction tasks.
This process is outlined in detail below:
Feature and model selection and optimization
Feature exploration
At the early stage of the project, we made sure to analyze user attributes, behavior, and requirements, in order to determine the business-relevant variables, such as user active days over the last 7 days and app use durations, through which we built a feature table.
After the features were identified, we chose a method that best suited our service and optimized parameters by performing multiple rounds of experiments. Common tree boosting methods that can be found across the industry include XGBoost, random forests, and Gradient Boost Decision Tree (GBDT). We trained our data set using these methods, and found that random forests perform best. Then the bagging method was adopted to improve models' fitting and generalization capabilities.
In addition to parameter optimization, the sampling ratio was also considered, especially for the payment prediction scenario, in which the ratio of positive samples to negative samples was highly imbalanced (about 1:100). For such cases, both the accuracy and recall indicators should be ensured. We therefore adjusted the ratio of positive samples to negative samples to 1.5:1 during model training for payment prediction, in order to boost the recall of the model.
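As a minimal sketch of that rebalancing step (illustrative only; the actual pipeline described here runs server-side on far larger data sets), positives can be oversampled until the target ratio to negatives is reached:
Code:
import kotlin.random.Random

// Oversample positives so that positives : negatives ≈ targetRatio (e.g. 1.5)
fun <T> rebalance(
    positives: List<T>,
    negatives: List<T>,
    targetRatio: Double = 1.5,
    random: Random = Random(42)
): List<T> {
    val targetPositiveCount = (negatives.size * targetRatio).toInt()
    val oversampled = List(targetPositiveCount) { positives[random.nextInt(positives.size)] }
    return (oversampled + negatives).shuffled(random)
}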
Hyperparameter and feature determination
Unnecessary features in a model can undermine the efficacy of its predictions, or slow down model training. During experiments at this early stage, features were sorted by weight, and the top features were selected. In the final model, these features and the relevant hyperparameters were configured.
Even after a model is applied for prediction, the data still needs to be observed and analyzed to supplement necessary features. In later iterations, we added a range of features, including event and trend features, bringing the feature count to over 400.
Automatic hyperparameter search
Model training involving full features can be quite time-consuming, and may fail to produce the optimal output. In addition, the optimal hyperparameters and features may vary depending on the app. Therefore, training should be performed on a per-app basis.
To address this issue, we applied the automatic hyperparameter search function to search for optimal parameters in the configured parameter space. Matched parameters are stored in a Hive table.
The following figures show the modeling procedure and relevant external support.
Research emphasis
We will continue optimizing our models, by researching the following:
Neural network
As the number of features continues to grow (400+ currently), and user behaviors become too complex to mine common rules, our prediction models will need to be enhanced to ensure that predictions remain accurate. This will require that we introduce neural networks with strong expressive power, in addition to decision trees to train models based on behavioral features.
Federated learning
Currently, data is isolated between apps and tenants. Horizontal federated learning can be used to train models across apps and tenants on a collaborative basis.
Time series feature
A typical app user's device will report hundreds of events (among 1,000+ event types) and access nearly 100 pages within the app on a weekly basis. These time series can be used to build both short- and long-term user behavioral features, with the goal of improving prediction accuracy across a wide range of scenarios. Page access data can be valuable for research, as such data bears the characteristics of time series data.
Feature mining and processing
The feature set is still being expanded. We will explore additional relevant features, such as the average app use interval, device attributes, download sources, and locations. In addition, we will also undertake such measures as discretization, normalization, square and square root operations, Cartesian product calculation, and Cartesian product calculation for multiple data sets, to build subsequent features that are based on existing features.
For more on HUAWEI Prediction, visit>>
For more details, you can go to:
Our official website
Our Development Documentation page, to find the documents you need
Reddit to join our developer discussion
GitHub to download demos and sample codes
Stack Overflow to solve any integration problems

HiAI Foundation: The Newest, Brightest Way to Develop AI Apps

Now, AI has been rolled out in education, finance, logistics, retail, transportation, and healthcare, to fill niches based on user needs or production demands. As a developer, to stay ahead of the competition, you'll need to efficiently translate genius insights into AI-based apps.
HMS Core HiAI Foundation is designed to streamline development of new apps. It opens the innate hardware capabilities of the HiAI ecosystem and provides 300+ AI operators compatible with major models, allowing you to easily and quickly build AI apps.
HiAI Foundation offers premium computing environments that boast high performance and low power consumption to facilitate development, with solutions including device-cloud synergy, Model Zoo, automatic optimization toolkits, and Intellectual Property Cores (IP cores) that collaborate with each other.
Five Cores of Efficient, Flexible Development
HiAI Foundation is home to cutting-edge tools and features that complement any development strategy. Here are the five cores that facilitate flexible, low-cost AI development:
Device-cloud synergy: support for platform update and performance optimization with operators for new and typical scenarios
Because AI services and algorithm models are constantly evolving, it is difficult for AI computing platforms to keep up. HiAI Foundation is equipped with flexible computing frameworks and adaptive model structures that support synergy across device and cloud. This allows you to build and launch new models and services and enhance user experience.
Model Zoo: optimizes model structures to make better use of NPU (neural processing unit) based AI acceleration
During development, you may need to perform model adjustment on the underlying hardware structure to maximize computing power. Such adjustment can be costly in time and resources. Model Zoo provides NPU-friendly model structures, backbones, and operators to improve model structure and make better use of Kirin chip's NPU for AI acceleration.
Toolkit for lightweight models: make your apps smaller and run faster
32-bit models used for training provide high calculation precision, but they also consume a lot of power and memory. Our toolkit converts original models into smaller, more lightweight ones that are better suited to the NPU, without compromising much on computing precision. This single adjustment helps save phone space and computing resources.
HiAI Foundation toolkit for lightweight models
Network architecture search (NAS) toolkit: simple and effective network design
HiAI Foundation provides a toolkit supporting multiple types of network architecture search, including classification, detection, and segmentation. With specific precision and performance requirements, the toolkit runs optimization algorithms to determine the most appropriate network architecture and performance based on the hardware information. It is compatible with mainstream training frameworks, including PyTorch, TensorFlow, and Caffe, and supports high computing power for multiple mainstream hardware platforms.
HiAI Foundation NAS toolkit
IP core collaboration: improved performance at reduced power with the DDR memory shared among computing units
HiAI Foundation ensures full collaboration between IP cores and open computing power for hardware. Now, IP cores (CPU, NPU, ISP, GPU) share the DDR memory, minimizing data copy and transfers across cores, for better performance at a lower power consumption.
HiAI Foundation connects smart services with underlying hardware capabilities. It is compatible with mainstream frameworks like MNN, TNN, MindSpore Lite, Paddle Lite, and KwaiNN, and can perform AI calculation in NPU, CPU, GPU, or DSP using the inference acceleration platform Foundation DDK and the heterogeneous computing platform Foundation HCL. It is the perfect framework to build the new-generation AI apps that run on devices like mobile phones, tablets, Vision devices, vehicles, and watches.
HiAI Foundation open architecture
Leading AI Standards with Daily Calls Exceeding 60 Billion
Nowadays, AI technologies, such as speech, facial, text, and image recognition, image segmentation, and image super-resolution, are common features of mobile devices. In fact, they have become the norm, with users expecting better and better AI apps. HiAI Foundation streamlines app development and reduces resource wastage, to help you focus on the design and implementation of innovative AI functions.
High performance, low power consumption, and ease of use are reasons why more and more developers are choosing HiAI Foundation. Since it moved to open source in 2018, calls have increased from 1 million per day to 60 billion per day, proving its worth in the global developer community.
Daily calls of HiAI Foundation exceed 60 billion
HiAI Foundation has joined the Artificial Intelligence Industry Technology Innovation Strategy Alliance (AITISA), and takes part in drafting the device-side AI standards (currently under review). Through joint efforts to build industry standards, HiAI Foundation is committed to harnessing new technology for AI industry evolution.
Visit HUAWEI Developers to find out more about HiAI Foundation.
