For more information like this, you can visit the HUAWEI Developer Forum.
Original link: https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0201272771294910088&fid=0101187876626530001
1 Foreword
Two previous articles have introduced the bank card recognition function of Huawei HMS MLKit:
Ultra-simple integration of Huawei HMS MLKit Bank card recognition SDK, one-click bank card binding:
https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0201258006396920241&fid=0101187876626530001
One article to understand Huawei HMS ML Kit text recognition, bank card recognition, general card identification
https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0201253487604250240&fid=0101187876626530001
Through those two articles, you should already know the usage scenarios for bank card recognition and how to use Huawei's bank card recognition SDK. But how good is the SDK, and how competitive is it? In this article I will run an in-depth evaluation to see how effective it really is.
2 Picking a competing product: Card.io
To make the evaluation results more meaningful, we need a competitor. We chose Card.io, a popular open-source project on GitHub, to see which one performs better. I downloaded the card.io demo code from GitHub, compiled it into an APK, and installed it on a phone. Let's start the comparison.
3 Comparison dimensions
As a developer choosing an easy-to-use SDK, you will mainly consider whether the SDK is free, how accurate it is, how fast recognition is, and so on.
4 Is it free? - Cost comparison
4.1 Comparison conclusion: both are free
5 Which devices are supported? - Device type coverage comparison
5.1 Comparison conclusion: both cover all phone types
6 How much storage space is required for the integrated SDK? - SDK package size comparison
card.io provides algorithm libraries for multiple ABIs (x86_64, arm64-v8a, x86, armeabi-v7a, armeabi, mips) to adapt to different CPUs. Since current Android phones are almost all based on the Arm architecture and armeabi is obsolete, for fairness we only count the arm64-v8a and armeabi-v7a libraries.
6.1 Card.io SDK package size:
This is the size of the entire sample APK; after removing the sample code, the SDK itself is about 6.1 MB.
6.2 Huawei MLKit bank card recognition SDK package size:
By analyzing the sample APK, you can see that the SDK contains two parts: the algorithm and assets. Together, the SDK parts total about 3.1 MB.
6.3 Comparison conclusion: Huawei HMS MLKit is clearly better
The comparison is summarized as follows:
7 Which countries and card types can be identified? - Comparison of supported card types
7.1 Analysis of common card types
Common bank cards worldwide include Visa, JCB, MasterCard, American Express, and China UnionPay. We selected sample cards of each type to see how the two SDKs handle them.
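As a side note, once a card number has been recognized, you can sanity-check it yourself: the leading digits (the IIN prefix) identify the network, and the Luhn checksum catches most misreads. The helper below is an illustrative sketch, not part of either SDK, and covers only the common prefix ranges mentioned above.

```java
public class CardCheck {
    // Infer the card network from well-known IIN prefixes (common cases only).
    static String network(String pan) {
        if (pan.startsWith("4")) return "Visa";
        if (pan.matches("5[1-5].*")) return "MasterCard";
        if (pan.matches("3[47].*")) return "American Express";
        if (pan.matches("35(2[89]|[3-8][0-9]).*")) return "JCB";
        if (pan.startsWith("62")) return "UnionPay";
        return "Unknown";
    }

    // Luhn checksum: double every second digit from the right,
    // subtract 9 from any result over 9, then check the sum mod 10.
    static boolean luhnValid(String pan) {
        int sum = 0;
        boolean dbl = false;
        for (int i = pan.length() - 1; i >= 0; i--) {
            int d = pan.charAt(i) - '0';
            if (dbl) { d *= 2; if (d > 9) d -= 9; }
            sum += d;
            dbl = !dbl;
        }
        return sum % 10 == 0;
    }
}
```

For example, the well-known test number 4111 1111 1111 1111 reports as a Luhn-valid Visa number, while changing its last digit makes the checksum fail.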
7.2 Visa card identification comparison
I searched for several Visa card images and ran recognition on them to see whether the card type and card number could be correctly identified.
7.2.1 Card.io: Visa card can be correctly recognized
7.2.2 Huawei HMS MLKit: Visa card can be recognized correctly
It seems that for Visa cards, both SDKs can correctly identify the card type and card number.
7.3 MasterCard identification comparison
7.3.1 Card.io: can be correctly identified
7.3.2 Huawei HMS MLKit: can be correctly identified
7.4 JCB card identification comparison
7.4.1 Card.io: The recognition test was not successful
Since I do not have a real JCB card, I searched for card images online and tested many of them, but none were recognized. If you have a real card, please try it yourself and share your test results.
7.4.2 Huawei HMS MLKit: can be correctly identified
However, since no real card was available, the issuing organization was not identified. If you have a real card, please try it and share your test results.
7.5 American Express Card Comparison
7.5.1 Card.io: The recognition test was not successful
Since I do not have a real American Express card, I searched for card images online and tested many of them, but none were recognized. If you have a real card, please try it yourself and share your test results.
7.5.2 Huawei HMS MLKit: can be correctly identified
Since no real card was available, the issuing organization was not identified. If you have a real card, please try it yourself and share your test results.
This is not the end. For full content, you can visit https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0201272771294910088&fid=0101187876626530001
This article originally appeared on the HUAWEI Developer Forum.
Forum link: https://forums.developer.huawei.com/forumPortal/en/home
All About Maps
Let's talk about maps. I started an open-source project called All About Maps (https://github.com/ulusoyca/AllAboutMaps). In this project I aim to demonstrate how we can implement the same map-related use cases with different map providers in one codebase: Mapbox Maps, Google Maps, and Huawei HMS Map Kit. The project uses the following libraries and patterns:
MVVM pattern with Android Jetpack Libraries
Kotlin Coroutines for asynchronous operations
Dagger2 Dependency Injection
Android Clean Architecture
Note: The codebase changes over time. You can always find the latest code in the develop branch. The code as of this article can be seen by checking out the tag episode_1-parse-gpx:
https://github.com/ulusoyca/AllAboutMaps/tree/episode_1-parse-gpx/
Motivation
Why do we need maps in our apps? What are the features a developer would expect from a map SDK? Let's try to list some:
Showing a coordinate on a map with camera options (zoom, tilt, latitude, longitude, bearing)
Adding symbols, photos, polylines, and polygons to the map
Handling user gestures (click, pinch, move events)
Showing maps with different map styles (Outdoor, Hybrid, Satellite, Winter, Dark, etc.)
Data visualization (heatmaps, charts, clusters, time-lapse effects)
Offline map visualization (providing map tiles without network connectivity)
Generating a snapshot image of a bounded region
We can probably add more items, but I believe this list covers the features that all map providers would most likely offer. Knowing that we can achieve the same tasks with different map providers, we should not create heavy dependencies on any specific provider in our codebase. When a product owner (PO) tells developers to switch from Google Maps to Mapbox Maps or Huawei Maps, developers should never see it as a big deal. It is software development. Business as usual.
One might wonder why a PO would want to switch from one map provider to another. In many cases, the reason is not technical. For example, Google Play Services may not be available on some devices or in some regions, such as China. Another case is when company X, which has a subscription to Mapbox, acquires company Y, which uses Google Maps; consolidating on one provider is more efficient. Changes in the terms of service or pricing might be other motivations.
We need competition in the market! Let's be able to switch easily when needed. But how do dependencies make things worse? Problematic dependencies in a codebase are usually created by developing software as if there were no tomorrow. It is not always the developers' fault: tight schedules, anti-refactoring mindsets, and disorganized planning can lead to careless coding and eventually to technical debt. In this project, I aim to show how we can encapsulate the import lines below, belonging to three different map providers, into a minimum number of classes with minimum lines:
Code:
import com.huawei.hms.maps.*
import com.google.android.gms.maps.*
import com.mapbox.mapboxsdk.maps.*
It should be noted that the approach in this post is just one proposal. There are always alternative and better implementations. In the end, as software developers, we should deliver our tasks time-efficiently, without over-engineering.
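One common way to contain those imports is to define provider-agnostic domain types plus a small map interface, and keep each SDK behind its own implementation module. The sketch below uses hypothetical names (MapLayer, FakeMapLayer) to show the shape of such an abstraction; it is not the project's actual API.

```java
// Provider-agnostic domain type: no Mapbox/Google/Huawei classes leak out of here.
final class LatLng {
    final double latitude, longitude;
    LatLng(double latitude, double longitude) {
        this.latitude = latitude;
        this.longitude = longitude;
    }
}

// The only map surface the rest of the app sees; one implementation per provider.
interface MapLayer {
    void moveCamera(LatLng target, float zoom);
    void addMarker(LatLng position, String title);
    void addPolyline(java.util.List<LatLng> points);
}

// A test double: records calls so app code can be tested without any real map SDK.
final class FakeMapLayer implements MapLayer {
    final java.util.List<String> log = new java.util.ArrayList<>();
    public void moveCamera(LatLng t, float zoom) { log.add("camera@" + t.latitude + "," + t.longitude); }
    public void addMarker(LatLng p, String title) { log.add("marker:" + title); }
    public void addPolyline(java.util.List<LatLng> pts) { log.add("polyline:" + pts.size()); }
}
```

With this shape, only the factory that constructs a MapLayer knows which provider is in use, so switching providers touches one module instead of the whole app.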
About the project
On the home page of the project you will see the list of tutorials. Since this is the first blog post, there is only one item for now. To make life easier with RecyclerViews, I use Airbnb's Epoxy library in the project. Once you click the buttons in the card, it takes you to the detail page, where a bottom sheet lets you switch between map providers. Note that Huawei Map Kit requires a Huawei mobile phone.
In this first blog post, we will parse the GPX file of the 120 km route of the Cappadocia Ultra Trail race and show the route and checkpoints (food stations) on the map. I finished this race in 23 hours 45 minutes, and you can read about my experience here (https://link.medium.com/uWmrWLAzR6). GPX is an open standard that contains route points, which construct a polyline, and waypoints, which mark points of interest. In this case, the waypoints represent the food and aid stations in the race. We will show the route with a polyline and the waypoints with markers on the map.
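As a rough sketch of what the parsing step involves, here is a minimal GPX reader using the JDK's DOM parser. The element names (wpt, trkpt, lat, lon, name) come from the GPX standard; the class names are illustrative and not the project's actual code, which targets Android and has its own parsing layer.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.*;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.*;

final class GpxParser {
    // A waypoint or track point: latitude/longitude plus an optional name.
    record Point(double lat, double lon, String name) {}

    // Collect all elements with the given tag ("wpt" for waypoints, "trkpt" for track points).
    static List<Point> parse(String gpxXml, String tag) {
        try {
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new ByteArrayInputStream(gpxXml.getBytes(StandardCharsets.UTF_8)));
            NodeList nodes = doc.getElementsByTagName(tag);
            List<Point> points = new ArrayList<>();
            for (int i = 0; i < nodes.getLength(); i++) {
                Element e = (Element) nodes.item(i);
                NodeList names = e.getElementsByTagName("name");
                String name = names.getLength() > 0 ? names.item(0).getTextContent() : null;
                points.add(new Point(Double.parseDouble(e.getAttribute("lat")),
                                     Double.parseDouble(e.getAttribute("lon")), name));
            }
            return points;
        } catch (Exception e) {
            throw new RuntimeException("Failed to parse GPX", e);
        }
    }
}
```

The track points feed the polyline, and the named waypoints become the aid-station markers.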
Architecture
Architecture is definitely not an overrated concept. Since the early days of Android, we have been searching for the architectural patterns that best suit Android development. We have heard of MVC, MVP, MVVM, MVI, and many other patterns will emerge. Change and adaptation to new patterns are inevitable over time. We should keep in mind basic and commonly accepted concepts like the SOLID principles, separation of concerns, maintainability, readability, testability, etc., so that we can switch between patterns easily when needed.
Nowadays, the widely accepted architecture in the Android community is modularization with Clean Architecture. If you have time to invest, I strongly suggest Joe Birch's clean architecture tutorials. As Joe suggests, we do not have to apply every rule to the letter; instead we take whatever we feel is needed. Here is my take and how I modularized the All About Maps app:
Note that dependency injection with Dagger2 is at the core of this implementation. If you are not familiar with the concept, I strongly suggest you read Nimrod Dayan's Dagger2 tutorial, the best in the wild Dagger2 world.
Domain Module
Many of us are excited to start implementation with the UI to see results immediately, but we should patiently build our blocks. We start with the domain module, since the business logic, entities, and user interactions are defined there.
This is not the end. For full content, you can visit https://forums.developer.huawei.com/forumPortal/en/topicview?tid=0201260729166420263&fid=0101187876626530001
Tao Xinle is excited about the upcoming HUAWEI DEVELOPER CONFERENCE 2020 (Together), not only because he can share stories and thoughts with developers across the world at this annual event, but also because he will bring his new innovation to the HDC. It is a text recognition app called ScanScan that has been downloaded more than 9 million times in AppGallery.
ScanScan was born out of a romance. Three years ago, Tao quit his job in Beijing and moved to Yunnan Province to live with his girlfriend Baibai. As a book lover who enjoys reading and noting down her favorite sentences, she tried various types of OCR software but was frustrated by the complicated procedures, low precision, and high costs. Therefore, Tao decided to develop a handy OCR tool for her.
Tao used the white cat he raised with his girlfriend as the logo for ScanScan to symbolize their love and togetherness.
Story Behind ScanScan: Huawei Developer Platforms Power Mobile App Accessibility
Baibai giving feedback about ScanScan
As a beta user of ScanScan, Baibai offered a lot of useful feedback, which inspired Tao to add more functions to the original version, including document scanning, chart recognition, batch recognition, and translation.
During app development, Tao used the OCR and document correction capabilities powered by HUAWEI HiAI to improve the accuracy of text recognition and speed of boundary detection, and also integrated HMS Core's ML Kit, all free of charge. In addition to helping Tao save on resources, these two platforms also allowed the OCR feature to be compatible with various mobile phones, from low-end to high-end models, from Huawei brands to non-Huawei brands, even without the need to connect to a network.
The full-coverage capabilities provided by Huawei allow developers to build features and apps compatible with all device models. ScanScan offers offline recognition, which keeps user data safe by storing the recognized results locally on the phone, and lets people use it anywhere, even in remote areas where the network signal is often patchy.
Tao Xinle and Baibai trying out the app
At the very beginning, ScanScan aimed to offer more convenience to users like Baibai. However, it turned out to be a blessing for another unexpected group of users.
"ScanScan really helps me see the world," said Anzhi, a visually impaired user of the app. "I use it to read my schedule, musical notation, user guides for electronic devices, and the labels on medicine packets. Sometimes when I am not sure which floor I am on, ScanScan can help me identify it by taking a picture." Anzhi described her experience with high praise: "If I were only allowed to use one app on my phone, it would be ScanScan, because it really helps me see more in my life."
By integrating HMS Core's AI capabilities and adapting to some accessibility functions on phones, ScanScan can easily recognize text in photos and convert it into audio output, which enables people with visual impairments to read in daily life. It also adds voice alerts to instruct users to adjust the camera angle for a more precise recognition result.
"When I found out that ScanScan can actually help people, it felt like I’ve done something worthwhile," said Tao.
Such powerful technology should be accessible to everyone, though it is sometimes still out of reach for certain groups. Accessibility features are as crucial to apps as tactile paving is to our streets. By creating an app like ScanScan, Tao has shown that he is as much a pioneer as he is a developer, paving the way for newcomers.
MindSpore is an open-source framework for AI-based application development, announced by Huawei. It is a robust alternative to widely used AI frameworks such as TensorFlow and PyTorch.
Let's start by highlighting the features and advantages of the MindSpore framework:
MindSpore implements AI algorithms for easier model development and provides cutting-edge technologies with Huawei AI processors to improve runtime efficiency and computing performance.
One of its advantages is that it can be used in several environments: on devices, in the cloud, and at the edge. It supports operating systems such as iOS and Android, and AI applications on various devices such as mobile phones, tablets, and IoT devices.
MindSpore supports parallel training across hardware to reduce training time. It maximizes hardware computing power while minimizing inference latency and power consumption.
It provides dynamic debugging capabilities, enabling developers to find bugs in their apps easily.
According to Huawei, MindSpore does not process data by itself but ingests the gradient and model information that has been processed. This ensures the integrity of sensitive data.
MindSpore Lite is an on-device inference framework for custom models, provided through HMS ML Kit to simplify integration and development. Developers can define their own models and implement model inference using MindSpore Lite's capabilities.
MindSpore Lite is compatible with commonly used AI platforms such as TensorFlow Lite, Caffe, and ONNX. Different models can be converted into the .ms (MindSpore) format and then run seamlessly.
Custom models can be deployed and executed easily, since they are compressed and occupy little storage.
It provides complete APIs for integrating the inference framework of an on-device custom model.
HMS ML Kit enables you to train and generate custom models with deep learning. It also offers a pre-trained image classification model. You can develop your own custom model by using the Transfer Learning feature of ML Kit with a specific dataset.
I will explain how to train your own model through an example with three plant categories. We will use a small reference dataset and train an image classification model to identify cactus, pine, and succulent plants. The model will be created using the HMS Toolkit plug-in and AI Create.
HMS Toolkit: As a lightweight IDE tool plugin, HMS Toolkit implements app creation, coding, conversion, debugging, test, and release. It helps you integrate HMS Core APIs with lower costs and higher efficiency.
AI Create: Provides the transfer learning capabilities of image classification and text classification; images and texts can be identified thanks to AI Create. It uses MindSpore as the training framework and MindSpore Lite as the inference framework.
Note: Use the Android Studio marketplace to install the HMS Toolkit plug-in. Go to File > Settings > Plugins > Marketplace, enter HMS Toolkit in the search box, and click Install. After the installation completes, restart Android Studio.
First, we should prepare the environment for training our model. AI Create currently supports only the Windows operating system. Open Coding Assistant from the HMS menu added by the HMS Toolkit plug-in, go to AI > AI Create, select Image, and click Confirm for image classification.
After this step, HMS Toolkit automatically downloads the required resources. If the Python environment is not configured, a dialog box will prompt you to set it up.
Note: You should download and install Python 3.7.5 from the link to use AI Create. After the installation completes, do not forget to add the Python installation path to the Path environment variable and restart Android Studio.
Once the environment is ready, selecting Image and clicking Confirm in AI Create automatically starts installing MindSpore. Make sure the framework has been installed successfully by checking the event logs.
A new model section will then open, where you select an image folder to train your own model. You should prepare your dataset in accordance with the requirements. For our demo, we will train the model to identify cactus, succulent, and pine plants with a small dataset.
The folder structure should be as follows:
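Since the original screenshot is not reproduced here, a plausible layout (assuming the usual one-subfolder-per-category convention, with hypothetical file names) looks like this:

```
plants/                  <- training folder selected in HMS Toolkit
├── cactus/
│   ├── cactus_01.jpg
│   └── ... (at least 10 images)
├── pine/
│   └── ...
└── succulent/
    └── ...
```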
The following conditions should be met for image resources:
The minimum number of pictures for each category of training data is 10.
The number of categories in the training dataset must be between 2 and 1000.
Supported image formats: .bmp, .jpg, .jpeg, .png or .gif.
After the training image folder is selected, set the output model file path and the training parameters. If you check HMS Custom Model, a complete model will be generated. The training parameters affect the accuracy of the image recognition model; you can modify them if you have experience with deep learning. When you click Create Model, MindSpore starts training your model on the dataset.
The training process takes time, depending on your dataset. Since we used a small dataset meeting only the minimum requirements, it completed quickly. You can also track the training logs; your model will be created at the specified path at the end of the process.
The training results are shown after model training completes. AI Create lets you test your model with test images before using it in any project. You can also generate a demo project that implements your new model with the Generate Demo option.
You should create a test image folder with the same structure as the training dataset.
As shown above, our average test accuracy is 57.1%. This accuracy can be improved with a more comprehensive dataset and more training.
You can also experience your new model's results in a demo project created by HMS Toolkit. After the demo is created, you can build and run the project and check the results on a real device.
In this article, I wanted to share basic information about MindSpore and how we can use the Transfer Learning function of HMS for custom models.
You can also develop your own classification model by using this post as a reference. I hope it will be useful for you!
Please follow our next articles for more details about ML Kit Custom Model and MindSpore.
References
https://developer.huawei.com/consum...ore-Guides/ml-mindspore-lite-0000001055328885
https://www.mindspore.cn/lite/en
Introduction
Are you new to machine learning?
If yes, then let's start from scratch.
What is machine learning?
Definition: “Field of study that gives computer capability to learn without being explicitly programmed.”
In general: machine learning is an application of artificial intelligence (AI) that gives devices the ability to learn from experience and improve themselves without explicit coding. For example, if you search for something, related ads will be shown on the screen.
Machine Learning is a subset of Artificial Intelligence. Machine Learning is the study of making machines more human-like in their behavior and decisions by giving them the ability to learn and develop their own programs. This is done with minimum human intervention, that is, no explicit programming. The learning process is automated and improved based on the experiences of the machines throughout the process. Good quality data is fed to the machines, and different algorithms are used to build ML models to train the machines on this data. The choice of algorithm depends on the type of data at hand, and the type of activity that needs to be automated.
Do you have a question like: what is the difference between machine learning and traditional programming?
Traditional programming
We feed input data and well-written, tested code into the machine to generate output.
Machine Learning
We feed the input data along with the expected output into the machine during the learning phase, and it works out a program for itself.
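This contrast can be made concrete with a toy example: instead of hand-coding the rule y = 2x + 1, we hand the machine input/output pairs and let it recover the rule's parameters itself. The sketch below uses a one-variable least-squares fit; the data points are made up purely for illustration.

```java
final class TinyLearner {
    // Fit y = a*x + b to data by ordinary least squares: the "program"
    // (the pair a, b) is worked out from examples rather than written by hand.
    static double[] fitLine(double[] x, double[] y) {
        int n = x.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i]; sy += y[i]; sxx += x[i] * x[i]; sxy += x[i] * y[i];
        }
        double a = (n * sxy - sx * sy) / (n * sxx - sx * sx); // slope
        double b = (sy - a * sx) / n;                         // intercept
        return new double[] { a, b };
    }
}
```

Fed the pairs (1, 3), (2, 5), (3, 7), the learner recovers a = 2 and b = 1 without anyone ever writing that rule down.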
Steps of machine learning
1. Gathering Data
2. Preparing that data
3. Choosing a model
4. Training
5. Evaluation
6. Hyperparameter Tuning
7. Prediction
How does Machine Learning work?
The three major building blocks of a Machine Learning system are the model, the parameters, and the learner.
Model is the system which makes predictions.
The parameters are the factors which are considered by the model to make predictions.
The learner makes the adjustments in the parameters and the model to align the predictions with the actual results.
Now let's work through a water quality example to see how machine learning works. Suppose a machine learning model has to predict whether water is safe to drink. The selected parameters are as follows:
Dissolved oxygen
pH
Temperature
Decayed organic materials
Pesticides
Toxic and hazardous substances
Oils, grease, and other chemicals
Detergents
Learning from the training set.
This involves taking a sample dataset of water from several places, for which the parameters are specified. We then define a description of each classification, drinkable or not, in terms of the parameter values for each type. The model can use this description to decide whether a new water sample is drinkable.
You can represent the values of the parameters, such as pH, temperature, and dissolved oxygen, as x, y, z, and so on. Then (x, y, z) defines the parameters of each sample in the training data. This set of data is called a training set. When plotted on a graph, these values suggest a hypothesis in the form of a line, a rectangle, or a polynomial that best fits the desired results.
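One of the simplest hypotheses over such (x, y, z) vectors is a nearest-centroid rule: average the training samples of each class, then label a new sample by whichever class mean it is closest to. The sketch below uses made-up numbers for two features (dissolved oxygen and pH) purely for illustration, not real water-quality data.

```java
import java.util.*;

final class NearestCentroid {
    private final Map<String, double[]> centroids = new HashMap<>();

    // Training: the centroid of each class is the mean of its sample vectors.
    void fit(Map<String, List<double[]>> samplesByClass) {
        samplesByClass.forEach((label, samples) -> {
            double[] mean = new double[samples.get(0).length];
            for (double[] s : samples)
                for (int i = 0; i < s.length; i++) mean[i] += s[i] / samples.size();
            centroids.put(label, mean);
        });
    }

    // Prediction: pick the class whose centroid is nearest in squared Euclidean distance.
    String predict(double[] sample) {
        String best = null;
        double bestDist = Double.MAX_VALUE;
        for (Map.Entry<String, double[]> e : centroids.entrySet()) {
            double d = 0;
            for (int i = 0; i < sample.length; i++) {
                double diff = sample[i] - e.getValue()[i];
                d += diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = e.getKey(); }
        }
        return best;
    }
}
```

Even this tiny learner follows the pattern above: the "parameters" are the class centroids, and the "learner" adjusts them from the training set.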
Now that we have learnt what machine learning is and how it works, let's look at Huawei ML Kit.
Huawei ML kit
HUAWEI ML Kit allows your apps to easily leverage Huawei's long-term proven expertise in machine learning to support diverse artificial intelligence (AI) applications throughout a wide range of industries.
Huawei already provides several built-in features in the SDK, as follows.
Text-related services.
Text recognition
Document recognition
ID card recognition
Bank card recognition
General card recognition
Form Recognition
Language/voice-related services.
Translation
Language detection
Text to speech
Image-related services.
Image classification
Object detection and Tracking
Landmark recognition
Product visual search
Image super resolution
Document skew correction
Text image super resolution
Scene detection
Face/body-related services.
Face detection
Skeleton detection
Liveness detection
Hand gesture recognition
Face verification
Natural language processing services.
Text embedding
Custom model.
AI create
Model deployment and Inference
Pre-trained model
In this series of articles, we will learn about Huawei custom models. As an on-device inference framework for custom models, MindSpore Lite, provided by ML Kit, facilitates integration and development and runs directly on devices. With this inference framework, you can define your own model and implement model inference at minimal cost.
Advantages of MindSpore Lite
It provides simple and complete APIs for you to integrate the inference framework of an on-device custom model.
Customize models simply and quickly, with an excellent machine learning experience.
It is compatible with all mainstream model inference platforms and frameworks on the market, such as TensorFlow Lite, Caffe, and ONNX. Different models can be converted into the .ms format without loss and then run through the on-device inference framework.
Custom models occupy little storage space and can be quantized and compressed, so they can be deployed and executed quickly. In addition, models can be hosted on the cloud and downloaded on demand, reducing the APK size.
Steps to be followed to Implement Custom model
Step 1: Install HMS Toolkit from Android Studio Marketplace.
Step 2: Transfer learning by using AI Create.
Step 3: Model training
Step 4: Model verification
Step 5: Upload model to AGC
Step 6: Load the remote model
Step 7: Perform inference using model inference engine
Let us start one by one.
Step 1: Install HMS Toolkit from the Android Studio Marketplace. After the installation, restart Android Studio.
· Choose File > Settings > Plugins
Result
Coming soon in an upcoming article.
Tips and Tricks
Make sure you are registered as a Huawei developer.
Learn the basics of machine learning.
Install the HMS Toolkit plug-in in Android Studio.
Conclusion
In this article, we have learnt what machine learning is and how it works, the difference between traditional programming and machine learning, the steps required to build a custom model, and how to install the HMS Toolkit in Android Studio. In the next article, I'll continue with the remaining steps for building a custom machine learning model.
Reference
ML Kit Official document
Checkout in forum
The HUAWEI DEVELOPER CONFERENCE 2022 (Together) kicked off on Nov. 4 at Songshan Lake in Dongguan, Guangdong, and showcased HMS Core 3D Modeling Kit, one of the critical services that illustrates HMS Core's 3D tech. At the conference, the kit revealed its latest auto rigging function that is highly automated, is incredibly robust, and delivers great skinning results, helping developers bring their ideas to life.
The auto rigging function of 3D Modeling Kit leverages AI to deliver a range of services such as automatic rigging for developers whose apps cover product display, online learning, AR gaming, animation creation, and more.
This function lets users generate a 3D model of a biped humanoid object simply by taking photos with a standard mobile phone camera, and then lets users simultaneously perform rigging and skin weight generation. In this way, the model can be easily animated.
Auto rigging simplifies the process of generating 3D models, particularly for those who want to create their own animations. Conventional animation methods require a model to be created first, and then a rigger has to make the skeleton of this model. Once the skeleton is created, the rigger needs to manually rig the model using skeleton points, one by one, so that the skeleton can support the model. With auto rigging, all the complexities of manual modeling and rigging can be done automatically.
There are several other automatic rigging solutions available. However, they all require the object to be modeled to be in a standard position. Auto rigging from 3D Modeling Kit is free of this restriction: this AI-driven function supports multiple positions, allowing the object's body to move asymmetrically.
The function's AI algorithms deliver remarkable accuracy and great generalization ability, thanks to a Huawei-developed 3D character data generation framework built upon hundreds of thousands of 3D rigging data samples. Most rigging solutions can recognize and track 17 skeleton points, but auto rigging delivers 23, meaning it can recognize a posture more accurately.
3D Modeling Kit has been working extensively with developers and their partners across a wide range of fields. This year, Bilibili merchandise (the online market provided by the video streaming and sharing platform Bilibili) cooperated with HMS Core to adopt the auto rigging function, allowing products to be displayed virtually. This has created a more immersive shopping experience for Bilibili users through 3D product models that can make movements like dancing.
This is not the first time Bilibili has cooperated with HMS Core: in 2021 it implemented HMS Core AR Engine's capabilities for its tarot card product series. Backed by AR technology, the cards feature 3D effects, and users are able to interact with them, which has been well received.
3D Modeling Kit can play an important role in many other fields.
For example, an education app can use auto rigging to create a 3D version of teaching material and bring it to life, which is fun to watch and helps keep students engaged. A game can use the auto rigging, 3D object reconstruction, and material generation functions of 3D Modeling Kit to streamline the creation of 3D animations and characters.
HMS Core strives to open up more software-hardware and device-cloud capabilities and to lay a solid foundation for the HMS ecosystem with intelligent connectivity. Moving forward, 3D Modeling Kit, along with other HMS Core services, will be committed to offering straightforward coding to help developers create apps that deliver an immersive 3D experience to users.