Washboard abs, buff biceps, or a curvy figure: a body shape that most of us probably desire. However, let's be honest: we're too lazy to work for it.
Hitting the gym is a great way to get ourselves in shape, but paying attention to what and how much we eat requires not only great persistence, but also knowledge of what goes into our food.
A food recognition function can be integrated into fitness apps, letting users capture food with their phone's camera and see on-screen details about its calories, nutrients, and other particulars. This helps health-conscious users keep track of what they eat on a meal-by-meal basis.
The GIF below shows the food recognition function in action.
Technical Principles
This fitness assistant is made possible by image classification, a widely adopted, fundamental branch of the AI field. Traditionally, image classification works by pre-processing images, extracting their features, and then training a classifier. The feature extraction step entails a huge amount of manual labor, meaning such a process can only classify images with limited information, let alone produce detailed labels.
Luckily, in recent years, image classification has developed considerably with the help of deep learning. This method uses an inference framework and neural networks to classify and tag elements in images, in order to better determine image themes and scenarios.
Image classification from HMS Core ML Kit is one service that adopts this method. It works by detecting the input image in static image or camera stream mode, analyzing the image with an on-device or on-cloud algorithm model, and returning the image category (for example, plant, furniture, or mobile phone) together with a confidence score.
The figure below illustrates the whole procedure.
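To make that flow concrete, below is a minimal Kotlin sketch of an on-device classification call, based on the ML Kit image classification API as documented; class names may vary slightly between SDK versions, so treat it as a sketch rather than production code.

```kotlin
import android.graphics.Bitmap
import com.huawei.hms.mlsdk.MLAnalyzerFactory
import com.huawei.hms.mlsdk.classification.MLLocalClassificationAnalyzerSetting
import com.huawei.hms.mlsdk.common.MLFrame

// Classify a static image on-device and log each category with its confidence.
fun classifyFood(bitmap: Bitmap) {
    // Only return categories whose confidence is at least 0.8.
    val setting = MLLocalClassificationAnalyzerSetting.Factory()
        .setMinAcceptablePossibility(0.8f)
        .create()
    val analyzer = MLAnalyzerFactory.getInstance()
        .getLocalImageClassificationAnalyzer(setting)

    val frame = MLFrame.fromBitmap(bitmap)
    analyzer.asyncAnalyseFrame(frame)
        .addOnSuccessListener { classifications ->
            // Each result carries a category name and a confidence score.
            classifications.forEach { item ->
                println("${item.name}: ${item.possibility}")
            }
        }
        .addOnFailureListener { e -> println("Classification failed: $e") }
        .addOnCompleteListener { analyzer.stop() } // release the analyzer
}
```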
Advantages of ML Kit's Image Classification
This service is built upon deep learning. It recognizes image content (such as objects, scenes, and behavior) and returns the corresponding tag information. It delivers accuracy, speed, and more by utilizing the following:
Transfer learning algorithm: The service is equipped with a higher-performance image-tagging model, a stronger knowledge transfer capability, and a regularly refined deep neural network topology, boosting accuracy by 38%.
Semantic network WordNet: The service optimizes the semantic analysis model and analyzes images semantically. It can automatically deduce the image concepts and tags, and supports up to 23,000 tags.
Acceleration based on Huawei GPU cloud services: Huawei GPU cloud services double the cache bandwidth and increase the bit width eightfold compared with the previous generation. Thanks to these improvements, image classification takes only 100 milliseconds to recognize an image.
Sound tempting, right? Here's something even better if you want to use the image classification service in your fitness app: you can either directly use the classification categories offered by the service, or customize your own image classification model. You can train your model with images collected for different foods and import the tag data into your app to build up a database of food calorie details. When a user points the app at some food, the depth of field (DoF) camera on their device (a Huawei phone, for example) measures the distance to the food to estimate its size and weight. The app then matches this estimate against the database to break down the food's calories.
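As a toy illustration of that last matching step, here is a sketch in Kotlin; the FoodRecord type, the sample calorie values, and the weight estimate are all invented for illustration and are not part of any HMS API.

```kotlin
// Hypothetical per-100 g calorie database built from your own tag data.
data class FoodRecord(val tag: String, val kcalPer100g: Double)

val foodDatabase = listOf(
    FoodRecord("apple", 52.0),
    FoodRecord("fried_rice", 163.0),
    FoodRecord("salad", 33.0),
)

// Combine the classifier's tag with the DoF-based weight estimate (in grams)
// to break down the food's calories.
fun estimateCalories(tag: String, estimatedWeightGrams: Double): Double? {
    val record = foodDatabase.find { it.tag == tag } ?: return null
    return record.kcalPer100g * estimatedWeightGrams / 100.0
}

fun main() {
    // E.g. the classifier returned "fried_rice" and the camera estimated 250 g.
    println(estimateCalories("fried_rice", 250.0)) // 407.5 kcal
}
```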
In addition to fitness management, ML Kit's image classification can also be used in a range of other scenarios, such as image gallery management and product image classification for e-commerce apps.
All of these scenarios can be realized with the classification categories of this service. I have already integrated it into my app, so what are you waiting for?
Related
Huawei's kits ensure Huawei smartphone users can unlock enhanced functionality and enjoy new benefits
A fast-growing and innovative photo editing tool is now available to users outside of China through AppGallery.
The innovative photo editing tool Cut Cut gives everyday users access to professional retouching and editing software, enabling them to easily create crisper, cleaner, and prettier images from their Huawei smartphone. It is quickly becoming one of the most popular and highest-rated photography apps in the world, amassing almost 100 million global users in less than 18 months.
Cut Cut integrates some of Huawei's leading technology to give users a unique and seamless photo editing experience. As well as allowing users to remove or edit backgrounds, add filters and stickers to images, access a massive material library, and create new collages and artworks, it also includes features made possible by Huawei's chip, device, and cloud capabilities. For example, Huawei's Machine Learning Kit reduces on-device processing time and improves image segmentation precision even when the user is offline. The image segmentation service also unlocks other enhanced features such as image area optimization, portrait colouring, and sky filter effects: Cut Cut can identify the pixels of different kinds of elements more accurately to enhance images of things like the sky or grass.
The app uses HiAI, Huawei's latest AI technology. This ground-breaking platform provides capabilities at the chip, device, and cloud level, allowing Cut Cut to automatically detect people in photos and intelligently cut out backgrounds and differentiate objects within images.
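As a rough idea of how this kind of background cut-out can be wired up with ML Kit's image segmentation service, here is a minimal Kotlin sketch based on the documented API. Class names may differ across SDK versions, and this is a generic sketch, not Cut Cut's actual implementation.

```kotlin
import android.graphics.Bitmap
import com.huawei.hms.mlsdk.MLAnalyzerFactory
import com.huawei.hms.mlsdk.common.MLFrame
import com.huawei.hms.mlsdk.imgseg.MLImageSegmentationSetting

// Separate a person from the background and hand the cut-out to the editor.
fun cutOutPortrait(bitmap: Bitmap, onResult: (Bitmap) -> Unit) {
    val setting = MLImageSegmentationSetting.Factory()
        .setAnalyzerType(MLImageSegmentationSetting.BODY_SEG) // portrait segmentation
        .setExact(true) // fine-grained edges
        .create()
    val analyzer = MLAnalyzerFactory.getInstance().getImageSegmentationAnalyzer(setting)

    analyzer.asyncAnalyseFrame(MLFrame.fromBitmap(bitmap))
        .addOnSuccessListener { segmentation ->
            // foreground is the input image with the background made transparent.
            onResult(segmentation.foreground)
        }
        .addOnFailureListener { e -> println("Segmentation failed: $e") }
}
```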
Cut Cut also integrates a range of other HMS Core and HMS Capabilities kits to enable a smoother, easier, and more tailored user experience. For example, Analytics Kit feeds developers useful insights into user behaviour, helping them better understand user preferences and constantly improve the app. Other kits integrated into Cut Cut include Account Kit, which lets users easily sign in with their Huawei ID; Push Kit, which notifies users of important promotions and messages; and Ads Kit, which helps keep ads high quality and personalized, and therefore less intrusive. Users also enjoy benefits such as convenient in-app purchases and easy sharing to social media and other platforms.
Furthermore, Cut Cut developer APUS is giving a six-month VIP gift package as an early-bird incentive to Huawei smartphone users who download the app from AppGallery. The package unlocks all paid materials, giving members free access to more than 20,000 resources such as backgrounds, stickers, and filters.
Cut Cut, along with thousands of other quality apps across 18 categories, is available on Huawei's open, innovative, and secure app distribution platform, AppGallery. One of the top three app marketplaces globally, AppGallery connects 400 million monthly active users in more than 170 countries and regions to Huawei's smart and innovative ecosystem.
For more information about AppGallery, please visit consumer.huawei.com/en/mobileservices/appgallery/
Interested in knowing more about HMS kits and capabilities? Please visit: developer.huawei.com/consumer/en/hms
New Kits
AR Engine:
Added a health check function based on facial recognition, which analyzes facial images to determine health indicators and personal attributes such as heart rate, respiration rate, age, and gender, assisting with preventive health management. More health indicators will be made available in the near future.
Added the Native API to meet performance requirements (Chinese mainland only).
Learn more
ML Kit:
Added a pre-trained text classification model, which classifies input text to help identify its application scenarios.
Face detection: Added support for 3D face detection, which obtains information such as face keypoint coordinates, the 3D projection matrix, and the face angle.
On-device text to speech: Added eagle timbres for Chinese and English to meet broad-ranging needs.
Real-time translation and real-time language detection: Supported integration into iOS systems.
Other updates:
(1) Audio file transcription: Supported setting of the user data deletion period.
(2) Real-time translation: Supported seven additional languages.
(3) On-device translation: Supported eight additional languages.
(4) Real-time language detection: Supported two additional languages.
Learn more
Analytics Kit:
Added e-commerce industry analysis reports, which help developers of e-commerce apps with refined operations in two areas: product sales analysis and category analysis.
Added game industry analysis reports, which provide invaluable data such as core revenue indicators and user analysis data for game developers to gain in-depth insight into player attributes.
Enhanced the attribution analysis function, which analyzes the attribution of marketing push services to compare their conversion effect.
Added installation source analysis, which helps developers analyze new users drawn from various marketing channels.
Learn more
Accelerate Kit:
Multithread-lib: Optimized the wakeup overhead, buffer pool, and cache mechanisms to provide enhanced performance.
Added the performance acceleration module PerfGenius, which supports frame rate control, key thread control, and system status monitoring. The module effectively solves problems such as frame freezing and frame loss in some scenarios and avoids performance waste in light-load scenarios, maximizing the energy-efficiency ratio of the entire device.
Learn more
Health Kit:
Added the data sharing function, which enables users to view the list of apps (including app names and icons) with which their health data is shared, as well as the list of authorized data (displayed in groups) that can be shared.
Added the authorization management function, through which users can authorize specific apps to read or write certain data, or revoke the authorization on a more flexible basis.
Added the stress details and stress statistics data types.
Learn more
Other kits:
Made necessary updates to other kits.
Learn more
New Resources
Shopping App:
Sample Code: Added hms-ecommerce-demo, which provides developers with one-stop services related to the e-commerce industry. The app incorporates 13 capabilities, such as ad placement, message pushing, and scan-to-shop QR code. You can quickly build capabilities required for wide-ranging shopping scenarios in apps via the sample code.
Learn more
Account Kit:
Sample Code: Added the function of automatically reading an SMS verification code after user authorization to huawei-account-demo (see the sketch after this item).
Learn more
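For context, here is a minimal Kotlin sketch of how such an SMS auto-read flow is typically wired up with Account Kit's ReadSmsManager; the exact class names and signatures should be verified against huawei-account-demo.

```kotlin
import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import android.content.IntentFilter
import com.huawei.hms.support.sms.ReadSmsManager
import com.huawei.hms.support.sms.common.ReadSmsConstant

// Begin listening for the verification SMS; no SMS read permission is needed,
// because the service hands over only a message matching the app's hash.
fun listenForVerificationCode(context: Context) {
    ReadSmsManager.start(context) // returns a Task<Void>; add listeners as needed

    val receiver = object : BroadcastReceiver() {
        override fun onReceive(ctx: Context, intent: Intent) {
            val sms = intent.extras?.getString(ReadSmsConstant.EXTRA_SMS_MESSAGE)
            if (sms != null) {
                // Extract the numeric code from the message body (format assumed).
                val code = Regex("\\d{6}").find(sms)?.value
                println("Verification code: $code")
            }
        }
    }
    context.registerReceiver(receiver, IntentFilter(ReadSmsConstant.READ_SMS_BROADCAST_ACTION))
}
```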
Map Kit:
Sample Code: Added Kotlin sample code to hms-mapkit-demo-java, which shows how to set a fixed screen center for zooming (see the sketch after this item).
Learn more
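As a quick illustration of what that sample demonstrates, the sketch below calls HuaweiMap.setPointToCenter() to pin the zoom anchor to a fixed screen point; treat it as a minimal sketch and defer to the demo for the authoritative version.

```kotlin
import com.huawei.hms.maps.CameraUpdateFactory
import com.huawei.hms.maps.HuaweiMap
import com.huawei.hms.maps.OnMapReadyCallback

// Zoom around a fixed screen point instead of the view's geometric center.
class FixedCenterMapCallback : OnMapReadyCallback {
    override fun onMapReady(map: HuaweiMap) {
        // Anchor camera zooming to the screen pixel (200, 200).
        map.setPointToCenter(200, 200)
        map.animateCamera(CameraUpdateFactory.zoomTo(10f))
    }
}
```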
Site Kit:
Sample Code: Added the Kotlin sample code to hms-sitekit-demo.
Learn more
Core Value
Real-time overview provides real-time data feedback and analysis, which are vital for improving the efficiency of product operations. For key marketing scenarios related to user acquisition, such as online operations activities, new version releases, and abnormal traffic warnings, its low-latency data feedback supports agile business decision-making.
Application Scenarios
Scenario 1: Real-Time Evaluation of Activity Traffic
In most cases, after a new user acquisition activity is rolled out, traffic is monitored hourly or daily. This makes it difficult for operations personnel to accurately locate the root cause of an exception and make timely adjustments, which may undermine the effectiveness of the activity.
Luckily, real-time overview can analyze traffic minute by minute and accurately present real-time fluctuations in an app's new users, indicating when the activity is most effective and how subsequent activities can be optimized.
* Test environment data is for reference only.
Scenario 2: Optimization of New Versions
In the face of fast product iteration driven by ever-changing user requirements, operations personnel need real-time data to measure the performance and acceptance of new versions.
For example, after a game update is released, how the players respond to the new content directly impacts the game's revenue.
To understand how users respond to an update and mitigate its problems, you can use real-time overview as a reference: it makes abnormal fluctuations easy to spot, so you can quickly optimize your app and adopt the corresponding operations measures.
* Test environment data is for reference only.
Scenario 3: Real-Time View of User Characteristics
Real-time overview helps you understand whether users' in-app journeys match the product design, and whether you have attracted the target users, including which device models they use, where they come from, and how they behave in the app.
The User analysis report clearly displays the real-time distribution of users by attribute, such as channel and country/region, in the form of cards.
* Test environment data is for reference only.
With the Event analysis report, you can learn about users' frequent in-app behaviors, so that you can identify the best time to send push notifications and in-app messages.
* Test environment data is for reference only.
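All of these reports are driven by the events your app reports through Analytics Kit. Here is a minimal Kotlin sketch of reporting a custom event that would then surface in real-time overview; the event name and parameters are made up for illustration.

```kotlin
import android.content.Context
import android.os.Bundle
import com.huawei.hms.analytics.HiAnalytics

// Report a custom in-app behavior event; it then appears in the
// real-time overview and event analysis reports.
fun reportShareEvent(context: Context) {
    val analytics = HiAnalytics.getInstance(context)
    val params = Bundle().apply {
        putString("content_type", "image")   // illustrative parameter
        putString("channel", "social_media") // illustrative parameter
    }
    analytics.onEvent("share_content", params)
}
```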
How to Use Real-Time Overview
Sign in to AppGallery Connect, click My projects, find your project, and go to HUAWEI Analytics > Overview > Real-time overview.
Visit our official website to learn more.
Augmented reality (AR) has been widely deployed in many fields, such as marketing, education, gaming, and exhibitions. 2D image and 3D object tracking technologies allow users to add AR effects to photos or videos taken with their phones, whether of a 2D poster or card, or a 3D cultural relic or garage kit. More and more apps are using AR technologies to provide innovative and fun features, but to stand out from the pack, more resources must be put into app development, which is time-consuming and entails a huge workload.
HMS Core AR Engine makes development easier than ever. With 2D image and 3D object tracking based on device-cloud synergy, you can develop apps that deliver a premium experience.
2D Image Tracking
Real-time 2D image tracking technology is widely employed by online shopping platforms for product demonstration, where shoppers interact with AR effects to view products from different angles. According to the background statistics of one platform, the sales volume of products with AR special effects is much higher than that of other products, with AR-based activities involving twice as much user interaction as standard activities. This is one example of how platforms can deploy AR technologies to turn a profit.
With traditional device-side 2D image tracking solutions, applying AR effects to more images in an app requires releasing a new app version, which can be costly; adding images also inflates the app size. That's why AR Engine adopts device-cloud synergy, which allows you to apply AR effects to new images simply by uploading them to the cloud, without updating your app or occupying extra space.
2D image tracking with device-cloud synergy
This technology consists of the following modules:
Cloud-side image feature extraction
Cloud-side vector retrieval engine
Device-side visual tracking
To ensure a fast round trip to and from the cloud, AR Engine runs a high-performance vector retrieval engine that leverages the platform's hardware acceleration capability, delivering millisecond-level retrieval from massive volumes of feature data.
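For the device side, the sketch below outlines how an image tracking session might be configured and polled. It is an assumption-heavy sketch: the com.huawei.hiar class names reflect the AR Engine SDK as I understand it, and the cloud image library is managed in the console rather than registered in code, so verify everything against the official samples.

```kotlin
import android.graphics.Bitmap
import com.huawei.hiar.ARAugmentedImage
import com.huawei.hiar.ARAugmentedImageDatabase
import com.huawei.hiar.ARImageTrackingConfig
import com.huawei.hiar.ARSession
import com.huawei.hiar.ARTrackable

// Configure 2D image tracking with a locally registered image, then poll
// tracked images each frame. With device-cloud synergy, the image library
// lives in the cloud instead of being registered on-device like this.
fun configureAndPoll(session: ARSession, poster: Bitmap) {
    val database = ARAugmentedImageDatabase(session)
    database.addImage("poster", poster) // register a reference image

    val config = ARImageTrackingConfig(session)
    config.setAugmentedImageDatabase(database)
    session.configure(config)

    // Call once per rendered frame.
    session.update()
    for (image in session.getAllTrackables(ARAugmentedImage::class.java)) {
        if (image.trackingState == ARTrackable.TrackingState.TRACKING) {
            // centerPose is where the AR effect should be anchored.
            println("Tracking image at ${image.centerPose}")
        }
    }
}
```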
3D Object Tracking
AR Engine also allows real-time tracking of 3D objects like cultural relics and products. It presents 3D objects as holograms to supercharge images.
3D objects come in various textures and materials, such as a textureless sculpture or a shiny metal utensil that reflects light. In addition, as the light changes, 3D objects cast shadows. These conditions pose a great challenge to 3D object tracking. AR Engine implements quick, accurate object recognition and tracking with multiple deep neural networks (DNNs) in three major steps: object detection, coarse positioning of object poses, and pose optimization.
3D object tracking with device-cloud synergy
This technology consists of the following modules:
Cloud-side AI-based generation of training samples
Cloud-side automatic training of DNNs
Cloud-side DNN inference
Device-side visual tracking
Training DNNs with manually labeled data is labor- and time-consuming. Drawing on massive offline data and generative adversarial networks (GANs), AR Engine uses an AI-based algorithm for generating training samples, so that it can accurately identify 3D objects in complex scenarios without manual labeling.
Currently, Huawei Cyberverse uses the 3D object tracking capability of AR Engine to create an immersive tour guide for Mogao Caves, to reveal never-before-seen details about the caves to tourists.
These technologies were built and released by the Central Media Technology Institute of 2012 Labs. They are open for you to use to bring users a differentiated AR experience.
Learn more about AR Engine at HMS Core AR Engine.
What do you usually do if you like a particular cartoon character? Buy a figurine of it?
That's what most people would do. Unfortunately, however, a figurine is just for decoration. So I tried to find a way of sending these figurines back to the virtual world; in short, I created a virtual yet movable 3D model of a figurine.
This is made possible by auto rigging, a new capability of HMS Core 3D Modeling Kit. It can animate a biped humanoid model so that it can even interact with users.
Check out what I've created using the capability.
What a cutie.
The auto rigging capability is ideal for many types of apps when used together with other capabilities. Take those from HMS Core as an example:
Audio-visual editing capabilities from Audio Editor Kit and Video Editor Kit: auto rigging can animate 3D models of popular stuffed toys, which can then be livened up with dances, voice-overs, and nursery rhymes to create educational videos for kids. With such adorable models, these videos are better at engaging kids and imparting knowledge.
The motion creation capability from 3D Engine: this capability is loaded with features like real-time skeletal animation, facial expression animation, full-body inverse kinematics (FBIK), animation state machine blending, and more, which help create smooth 3D animations. Combining models animated by auto rigging with these features, plus numerous other 3D Engine features such as HD rendering, visual special effects, and intelligent navigation, is helpful for creating fully functioning games.
AR capabilities from AR Engine, including motion tracking, environment tracking, and human body and face tracking. They allow a model animated by auto rigging to appear in the camera display of a mobile device, so that users can interact with the model. These capabilities are ideal for a mobile game to implement model customization and interaction. This makes games more interactive and fun, which is illustrated perfectly in the image below.
As mentioned earlier, the auto rigging capability supports only biped humanoid objects. However, we can try adding two legs to an object (for example, a candlestick) for auto rigging to animate, recreating the Be Our Guest scene from Beauty and the Beast.
How It Works
After a static model of a biped humanoid is input, auto rigging uses AI algorithms for limb rigging and automatically generates the skeleton and skin weights for the model, finishing the skeleton rigging process. The capability then changes the orientation and position of the model's skeleton so that the model can perform a range of actions such as walking, jumping, and dancing.
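To unpack what "skeleton and skin weights" mean in practice: once every vertex carries a weight for each joint, a pose is applied by blending the joints' transforms per vertex (linear blend skinning). Below is a toy Kotlin sketch of that idea; all types are invented for illustration, and 3D Modeling Kit performs this math internally rather than exposing it.

```kotlin
// Toy linear blend skinning. All types here are invented for illustration.
data class Vec3(val x: Float, val y: Float, val z: Float)

// A joint transform reduced to a pure translation for brevity;
// real skinning blends full rotation + translation matrices.
data class JointTransform(val offset: Vec3)

// Each vertex stores one weight per joint; the weights sum to 1.
fun skinVertex(rest: Vec3, joints: List<JointTransform>, weights: List<Float>): Vec3 {
    var x = 0f; var y = 0f; var z = 0f
    for (i in joints.indices) {
        val t = joints[i].offset
        x += weights[i] * (rest.x + t.x)
        y += weights[i] * (rest.y + t.y)
        z += weights[i] * (rest.z + t.z)
    }
    return Vec3(x, y, z)
}
```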
Advantages
Delivering a wholly automated rigging process
Rigging can be done either manually or automatically. Most highly accurate rigging solutions on the market require the input model to be in a standard position, and seven or eight key skeletal points to be added manually.
Auto rigging from 3D Modeling Kit does not have any of these requirements, yet it is able to accurately rig a model.
Utilizing massive data for high algorithm accuracy and generalization
Accurate auto rigging depends on hundreds of thousands of 3D model rigging data records that were used to train the Huawei-developed algorithms behind the capability. Thanks to fine-tuned data records, auto rigging delivers high algorithm accuracy and generalization. It can even rig an object model created from photos taken with a standard mobile phone camera.
Input Model Specifications
The capability's official document lists the following suggestions for an input model to be used for auto rigging.
Source: a biped humanoid object (like a figurine or plush toy) that is not holding anything.
Appearance: The limbs and trunk of the object model should not be separate, should not overlap, and should not feature any large accessories. The object model should stand on two legs, without its arms overlapping.
Posture: The object model should face forward along the z-axis and point upward along the y-axis. In other words, the model should stand upright, with its front facing forward. None of the model's joints should twist more than 15 degrees, and there is no requirement on symmetry.
Mesh: The model meshes can be triangular or quadrilateral. The number of mesh vertices should not exceed 80,000, and no large part of the mesh should be missing.
Others: The limbs-to-trunk ratio of the object model should comply with that of most toys. The limbs and trunk should not be too thin or short; specifically, the ratio of the arm width to the trunk width and the ratio of the leg width to the trunk width should be no less than 8% of the length of the object's longest edge.
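Some of these constraints lend themselves to a client-side pre-check before a model is submitted. Below is a hypothetical Kotlin sketch of such a check; the MeshSummary type is invented for illustration, while the limits come from the list above.

```kotlin
// Hypothetical mesh summary; 3D Modeling Kit does not define this type.
data class MeshSummary(
    val vertexCount: Int,
    val armWidth: Float,
    val legWidth: Float,
    val longestEdge: Float,
)

// Pre-check the documented limits before uploading the model for rigging.
fun violations(mesh: MeshSummary): List<String> = buildList {
    if (mesh.vertexCount > 80_000) add("more than 80,000 mesh vertices")
    val minWidth = 0.08f * mesh.longestEdge
    if (mesh.armWidth < minWidth) add("arms too thin relative to longest edge")
    if (mesh.legWidth < minWidth) add("legs too thin relative to longest edge")
}
```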
Driven by AI, the auto rigging capability lowers the threshold of 3D modeling and animation creation, opening them up to amateur users.
While learning about this capability, I also came across three other fantastic capabilities of the 3D Modeling Kit. Wanna know what they are? Check them out here. Let me know in the comments section how your auto rigging has come along.