ML Kit:
Added the face verification service, which compares captured faces with existing face records to generate a similarity value, and then determines whether the faces belong to the same person based on the value. This service helps safeguard your financial services and reduce security risks.
Added the capability of recognizing Vietnamese ID cards.
Reduced hand keypoint detection delay and added gesture recognition capabilities, with support for 14 gestures. Gesture recognition is widely used in smart home, interactive entertainment, and online education apps.
Added on-device translation support for 8 new languages, including Hungarian, Dutch, and Persian.
Added support for automatic speech recognition (ASR), text to speech (TTS), and real-time transcription services in Chinese and English on all mobile phones, and in French, Spanish, German, and Italian on Huawei and Honor phones.
Other updates: Optimized image segmentation, document skew correction, sound detection, and the custom model feature; added audio file transcription support for uploading multiple long audio files on devices.
Learn More
Nearby Service:
Added Windows to the list of platforms that Nearby Connection supports, allowing you to receive and transmit data between Android and Windows devices. For example, you can connect a phone as a touch panel to a computer, or use a phone to make a payment after placing an order on the computer.
Added iOS and macOS to the list of systems that Nearby Message supports, allowing you to receive beacon messages on iOS and macOS devices. For example, users can receive marketing messages from a store with beacons deployed after they enter the store.
Learn More
Health Kit:
Added the details and statistical data type for medium- and high-intensity activities.
Learn More
Scene Kit:
Added fine-grained graphics APIs, including those of classes for resources, scenes, nodes, and components, helping you realize more accurate and flexible scene rendering.
Shadow features: Added support for real-time dynamic shadows and the function of enabling or disabling shadow casting and receiving for a single model.
Animation features: Added support for skeletal animation and morphing, playback controller, and playback in forward, reverse, or loop mode.
Added support for asynchronous loading of assets and loading of glTF files from external storage.
Learn More
Computer Graphics Kit:
Added multithreaded rendering capability to significantly increase frame rates in scenes with high CPU usage.
Learn More
Made necessary updates to other kits. Learn More
New Resources
Analytics Kit:
Sample Code: Added the Kotlin sample code to hms-analytics-demo-android and the Swift sample code to hms-analytics-demo-ios.
Learn More
Dynamic Tag Manager:
Sample Code: Added the iOS sample code to hms-dtm-demo-ios.
Learn More
Identity Kit:
Sample Code: Added the Kotlin sample code to hms-identity-demo.
Learn More
Location Kit:
Sample Code: Updated methods for checking whether GNSS is supported and whether the GNSS switch is turned on in hms-location-demo-android-studio; optimized the permission verification process to improve user experience.
Learn More
Map Kit:
Sample Code: Added a guide for adding dependencies on two fallbacks to hms-mapkit-demo, so that Map Kit can be used on non-Huawei Android phones and in other scenarios where HMS Core (APK) is not available.
Learn More
Site Kit:
Sample Code: Added the strictBounds attribute to NearbySearchRequest in hms-sitekit-demo, which indicates whether to strictly restrict place search to the bounds specified by location and radius, and added the attribute to QuerySuggestionRequest and SearchFilter to indicate whether to strictly restrict place search to the bounds specified by Bounds.
Learn More
Related
For more information like this, visit the HUAWEI Developer Forum.
ML Kit
Features:
Added the liveness detection service, which supports silent liveness detection and captures faces in real time. It can determine whether a face is of a real user without requiring the user to follow specific instructions.
Added the image super-resolution service, which removes the compression noise of images to obtain clearer images, with the resolution unchanged.
Added the document skew correction service, which automatically identifies the location of a document in an image and adjusts the shooting angle to the angle facing the document, even if the document is tilted.
Added the hand keypoint detection service, improved the speed and accuracy of bank card recognition, enhanced the translation service, and optimized the text to speech service, among other updates.
Link: https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides/version-changehistory-0000001050040023
Scene Kit
Features:
Added APIs for you to create apps which blend the virtual and the real, with features such as dynamic face stickers, 3D Qmojis, and virtual object placement.
Link: https://developer.huawei.com/consumer/en/hms/huawei-scenekit/
Push Kit
Features:
Added messaging by user time zone, scenario, and geofence, improving user experience.*
*The functions involving user data must be implemented with users' authorization.
Link: https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides/automated-notifications-0000001051072150#EN-US_TOPIC_0000001051072150__section17240145319447
Analytics Kit
Features:
Supported integration of web apps with Analytics Kit to implement data collection and unified analysis.
Supported automatic data collection from mobile phones and tablets, as well as configuration of the app installation source.
Link: https://developer.huawei.com/consumer/en/hms/huawei-analyticskit
Updates of all HMS Core versions
Learn more: https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides-V5/hmssdk-kit-0000001050042513-V5
Scene Kit
Sample Code:
Added the ARView and FaceView development procedures.
Github: https://github.com/HMS-Core/hms-scene-demo
ML Kit
Sample Code:
Added demos for the newly released ML services, such as liveness detection and document skew correction.
Github: https://github.com/HMS-Core/hms-ml-demo
Push Kit
Sample Code:
Added a demo to illustrate messaging by scenario.
Github: https://github.com/HMS-Core/hms-push-clientdemo-android
Analytics Kit
Sample Code:
Added a demo to illustrate how to integrate web apps with Analytics Kit.
Github: https://github.com/HMS-Core/hms-analytics-demo-javascript
New Kits
AR Engine:
Added the function of health check through facial recognition, which analyzes facial images to determine health indicators and personal attributes, such as heart rate, respiration rate, age, and gender, assisting with preventive health management. More health indicators will be made available in the near future.
Added the Native API to meet performance requirements (Chinese mainland only).
Learn more
ML Kit:
Added a pre-trained text classification model, which classifies input text to help define the application scenarios for the text.
Face detection: Supported the 3D face detection capability, which obtains a range of information, such as the face keypoint coordinates, 3D projection matrix, and face angle.
On-device text to speech: Added eagle timbres for Chinese and English to meet broad-ranging needs.
Real-time translation and real-time language detection: Supported integration into iOS systems.
Other updates:
(1) Audio file transcription: Supported setting of the user data deletion period.
(2) Real-time translation: Supported seven additional languages.
(3) On-device translation: Supported eight additional languages.
(4) Real-time language detection: Supported two additional languages.
Learn more
Analytics Kit:
Added e-commerce industry analysis reports, which help developers of e-commerce apps with refined operations in two areas: product sales analysis and category analysis.
Added game industry analysis reports, which provide invaluable data such as core revenue indicators and user analysis data for game developers to gain in-depth insight into player attributes.
Enhanced the attribution analysis function, which analyzes the attribution of marketing push services to compare their conversion effect.
Added installation source analysis, which helps developers analyze new users drawn from various marketing channels.
Learn more
Accelerate Kit:
Multithread-lib: Optimized the wakeup overhead, buffer pool, and cache mechanisms to provide enhanced performance.
Added the performance acceleration module PerfGenius, which supports frame rate control, key thread control, and system status monitoring. The module effectively solves problems such as frame freezing and frame loss in some scenarios and avoids performance waste in light-load scenarios, maximizing the energy-efficiency ratio of the entire device.
Learn more
Health Kit:
Added the data sharing function, which now enables users to view the list of apps (including app names and icons) for which their health data is shared, as well as the list of authorized data (displayed in groups) that can be shared.
Added the authorization management function, through which users can authorize specific apps to read or write certain data, or revoke the authorization on a more flexible basis.
Added the stress details and stress statistics data types.
Learn more
Other kits:
Made necessary updates to other kits.
Learn more
New Resources
Shopping App:
Sample Code: Added hms-ecommerce-demo, which provides developers with one-stop services related to the e-commerce industry. The app incorporates 13 capabilities, such as ad placement, message pushing, and scan-to-shop QR code. You can quickly build capabilities required for wide-ranging shopping scenarios in apps via the sample code.
Learn more
Account Kit:
Sample Code: Added the function of automatically reading an SMS verification code after user authorization to huawei-account-demo.
Learn more
Map Kit:
Sample Code: Added the Kotlin sample code to hms-mapkit-demo-java, which is used to set a fixed screen center for zooming.
Learn more
Site Kit:
Sample Code: Added the Kotlin sample code to hms-sitekit-demo.
Learn more
Nice update.
Augmented reality (AR) has been widely deployed in many fields, such as marketing, education, and gaming, as well as in exhibition halls. 2D image and 3D object tracking technologies allow users to add AR effects to photos or videos taken with their phones, like a 2D poster or card, or a 3D cultural relic or garage kit. More and more apps are using AR technologies to provide innovative and fun features. But to stand out from the pack, more resources must be put into app development, which is time-consuming and entails a huge workload.
HMS Core AR Engine makes development easier than ever. With 2D image and 3D object tracking based on device-cloud synergy, you will be able to develop apps that deliver a premium experience.
2D Image Tracking
Real-time 2D image tracking technology is largely employed by online shopping platforms for product demonstration, where shoppers interact with the AR effects to view products from different angles. According to one platform's statistics, products with AR special effects sell much better than other products, with AR-based activities involving twice as much interaction as common ones. This is one example of how platforms can deploy AR technologies to make a profit.
To apply AR effects to more images on an app using traditional device-side 2D image tracking solutions, you need to release a new app version, which can be costly. In addition, increasing the number of images will stretch the app size. That's why AR Engine adopts device-cloud synergy, which allows you to easily apply AR effects to new images by simply uploading them to the cloud, without updating your app or occupying extra space.
2D image tracking with device-cloud synergy
This technology consists of the following modules:
Cloud-side image feature extraction
Cloud-side vector retrieval engine
Device-side visual tracking
To keep cloud response times low, AR Engine runs a high-performance vector retrieval engine that leverages the platform's hardware acceleration, ensuring millisecond-level retrieval from massive volumes of feature data.
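Conceptually, the cloud-side retrieval step is a nearest-neighbor search over image feature vectors. The sketch below illustrates the idea in plain Python with a brute-force cosine-similarity scan; the function names and the linear search are illustrative assumptions only, not AR Engine's actual hardware-accelerated engine:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve_best_match(query, gallery):
    """Return the (image_id, similarity) pair whose stored feature
    vector is closest to the query vector."""
    best_id, best_score = None, -1.0
    for image_id, features in gallery.items():
        score = cosine_similarity(query, features)
        if score > best_score:
            best_id, best_score = image_id, score
    return best_id, best_score
```

A production engine would replace the linear scan with an approximate index to reach millisecond-level retrieval over millions of vectors, but the matching criterion is the same.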
3D Object Tracking
AR Engine also allows real-time tracking of 3D objects like cultural relics and products. It presents 3D objects as holograms to supercharge images.
3D objects come in various textures and materials, such as a textureless sculpture, or shiny metal utensils that reflect light. In addition, as the light changes, 3D objects can cast shadows. These conditions pose a great challenge to 3D object tracking. AR Engine implements quick, accurate object recognition and tracking with multiple deep neural networks (DNNs) in three major steps: object detection, coarse positioning of object poses, and pose optimization.
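The three-step flow above can be sketched as a chain of stages, each standing in for a separate DNN. Everything here is a hypothetical placeholder to show the control flow, not AR Engine's actual interface:

```python
def track_object(frame, detector, coarse_pose, refine_pose):
    """Three-stage tracking pipeline: detect the object in the frame,
    coarsely estimate its pose, then optimize that pose.
    Each stage is any callable; real systems use separate DNNs."""
    box = detector(frame)          # step 1: object detection
    if box is None:
        return None                # nothing to track in this frame
    pose = coarse_pose(frame, box)  # step 2: coarse pose positioning
    return refine_pose(frame, box, pose)  # step 3: pose optimization
```

Splitting the pipeline this way lets each stage fail fast (no pose estimation when detection finds nothing) and lets each model be retrained independently.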
3D object tracking with device-cloud synergy
This technology consists of the following modules:
Cloud-side AI-based generation of training samples
Cloud-side automatic training of DNNs
Cloud-side DNN inference
Device-side visual tracking
Training DNNs on manually labeled data is labor- and time-consuming. Drawing on massive offline data and generative adversarial networks (GANs), AR Engine uses an AI-based algorithm to generate training samples, so that it can accurately identify 3D objects in complex scenarios without manual labeling.
Currently, Huawei Cyberverse uses the 3D object tracking capability of AR Engine to create an immersive tour guide for Mogao Caves, to reveal never-before-seen details about the caves to tourists.
These premium technologies were developed and released by Central Media Technology Institute, 2012 Labs. They are open for you to bring users differentiated AR experiences.
Learn more about AR Engine at HMS Core AR Engine.
Washboard abs, buff biceps, or a curvy figure — a body shape that most of us probably desire. However, let's be honest: We're too lazy to get it.
Hitting the gym is a great way to get ourselves in shape, but paying attention to what and how much we eat requires not only great persistence, but also knowledge of what goes into food.
The food recognition function can be integrated into fitness apps, letting users capture food with their phone's camera and see on-screen details about the calories, nutrients, and other characteristics of the food in question. This helps health fanatics keep track of what they eat on a meal-by-meal basis.
The GIF below shows the food recognition function in action.
Technical Principles
This fitness assistant is made possible by image classification, a widely adopted basic branch of the AI field. Traditionally, image classification works by pre-processing images, extracting their features, and developing a classifier. The feature extraction step entails a huge amount of manual labor, meaning such a process can only classify images carrying limited information, let alone produce detailed lists of attributes.
Luckily, image classification has developed considerably in recent years with the help of deep learning. This method adopts an inference framework and neural networks to classify and tag elements in images, to better determine image themes and scenarios.
Image classification from HMS Core ML Kit is one service that adopts such a method. It works by: detecting the input image in static image mode or camera stream mode → analyzing the image by using the on-device or on-cloud algorithm model → returning the image category (for example, plant, furniture, or mobile phone) and its corresponding confidence.
The figure below illustrates the whole procedure.
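The detect-analyze-return flow described above can be sketched as follows. The analyzer callable and the category names are illustrative stand-ins for the on-device or on-cloud model, not ML Kit's actual API:

```python
def classify_image(image, analyzer, min_confidence=0.5):
    """Run an analyzer over an image and return (category, confidence)
    pairs at or above the confidence threshold, most confident first."""
    raw = analyzer(image)  # stand-in for on-device / on-cloud inference
    kept = [(cat, conf) for cat, conf in raw if conf >= min_confidence]
    return sorted(kept, key=lambda pair: pair[1], reverse=True)
```

For example, a model that returns ("plant", 0.92), ("furniture", 0.31), and ("mobile phone", 0.77) would yield only the plant and mobile phone tags at the default threshold, with the low-confidence furniture guess filtered out.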
Advantages of ML Kit's Image Classification
This service is built upon deep learning. It recognizes image content (such as objects, scenes, and behavior) and returns the corresponding tag information. It provides accuracy, speed, and more by utilizing:
Transfer learning algorithm: The service is equipped with a higher-performance image-tagging model and a better knowledge transfer capability, as well as a regularly refined deep neural network topology, to boost accuracy by 38%.
Semantic network WordNet: The service optimizes the semantic analysis model and analyzes images semantically. It can automatically deduce the image concepts and tags, and supports up to 23,000 tags.
Acceleration based on Huawei GPU cloud services: Huawei GPU cloud services double the cache bandwidth and increase the bit width by 8 times compared with the predecessor. These improvements mean that image classification requires only 100 milliseconds to recognize an image.
Sound tempting, right? Here's something even better if you want to use the image classification service from ML Kit for your fitness app: You can either directly use the classification categories offered by the service, or customize your image classification model. You can then train your model with the images collected for different foods, and import their tag data into your app to build up a huge database of food calorie details. When your user uses the app, the depth of field (DoF) camera on their device (a Huawei phone, for example) measures the distance between the device and food to estimate the size and weight of the food. Your app then matches the estimation with the information in its database, to break down the food's calories.
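The tag-plus-weight idea above boils down to a simple lookup and a proportional calculation. In this sketch, the calorie table, values, and function names are hypothetical; a real app would populate the table from its own food database built during model training:

```python
# Hypothetical per-100 g calorie table; a real app would build this
# from the tag data imported alongside its custom model.
CALORIES_PER_100G = {"apple": 52, "banana": 89, "fried rice": 163}

def estimate_calories(food_tag, estimated_weight_g):
    """Combine the classifier's food tag with the weight estimated
    from the depth-of-field camera to approximate total calories."""
    if food_tag not in CALORIES_PER_100G:
        raise KeyError(f"no calorie data for {food_tag!r}")
    return CALORIES_PER_100G[food_tag] * estimated_weight_g / 100
```

So a 150 g apple at 52 kcal per 100 g would come out to roughly 78 kcal; the accuracy of the final number depends mainly on how well the DoF camera estimates the food's weight.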
In addition to fitness management, ML Kit's image classification can also be used in a range of other scenarios, for example, image gallery management, product image classification for an e-commerce app, and more.
All of this can be realized with the categories offered by the image classification service described above. I have already integrated it into my app, so what are you waiting for?
New Features
Analytics Kit
Released the function of saving churned users as an audience to the retention analysis function. This enables multi-dimensional examination of churned users, helping you take targeted measures to win them back.
Changed Audience analysis to Audience insight, which has two submenus: User grouping and User profiling. User grouping allows you to segment users into different audiences by dimension, and User profiling provides audience features like profiles and attributes to facilitate in-depth user analysis.
Added the Page access in each time segment report to Page analysis. The report compares the numbers of access times and users in different time segments. Such vital information gives you insight into your users' product usage preferences and thus helps you seize operations opportunities.
Learn more>>
3D Modeling Kit
Debuted the auto rigging capability, which loads a preset motion onto a 3D model of a biped humanoid by using the skeleton points on the model. In this way, the capability automatically rigs and animates such a model, lowering the barrier to 3D animation creation and making 3D models more engaging.
Added the AR-based real-time guide mode. This mode accurately locates an object, provides real-time image collection guide, and detects key frames. Offering a series of steps for modeling, the mode delivers a fresh, interactive modeling experience.
Learn more>>
Video Editor Kit
Offered the auto-smile capability in the fundamental capability SDK. This capability detects faces in the input image and adds a smile (closed-mouth or open-mouth) to them.
Supplemented the fundamental capability SDK with the object segmentation capability. This AI algorithm-dependent capability separates the selected object from a video, to facilitate operations like background removal and replacement.
Learn more>>
ML Kit
Released the interactive biometric verification service. It captures faces in real time and determines whether a face is of a real person or a face attack (like a face recapture image, face recapture video, or a face mask), by checking whether the specified actions are detected on the face. This service delivers a high level of security, making it ideal in face recognition-based payment scenarios.
Improved the on-device translation service by supporting 12 more languages, including Croatian, Macedonian, and Urdu. Note that the following languages are not yet supported by on-device language detection: Maltese, Bosnian, Icelandic, and Georgian.
Learn more>>
Audio Editor Kit
Added the on-cloud REST APIs for the AI dubbing capability, which makes the capability accessible on more types of devices.
Added the asynchronous API for the audio source separation capability, along with a query API for tracking an audio source separation task via taskId. This solves the issue where a user could not find a previous audio source separation task after exiting and re-opening the app, because such tasks take a long time to complete.
Enriched on-device audio source separation with the following newly supported sound types: accompaniment, bass sound, stringed instrument sound, brass stringed instrument sound, drum sound, accompaniment with the backing vocal voice, and lead vocalist voice.
Learn more>>
Health Kit
Added two activity record data types: apnea training and apnea testing in diving, and supported the free diving record data type on the cloud-side service, giving access to the records of more activity types.
Added the sampling data type of the maximum oxygen uptake to the device-side service. Each data record indicates the maximum oxygen uptake in a period. This sampling data type can be used as an indicator of aerobic capacity.
Added the open atomic sampling statistical data type of location to the cloud-side service. This type of data records the GPS location of a user at a certain time point, which is ideal for recording data of an outdoor sport like mountain hiking and running.
Opened the activity record segment statistical data type on the cloud-side service. Activity records now can be collected by time segment, to better satisfy requirements on analysis of activity records.
Added the subscription of scenario-based events and supported the subscription of total step goal events. These fresh features help users set their running/walking goals and receive push messages notifying them of their goals.
Learn more>>
Video Kit
Released the HDR Vivid SDK that provides video processing features like opto-electronic transfer function (OETF), tone mapping, and HDR2SDR. This SDK helps you immerse your users with high-definition videos that get rid of overexposure and have clear details even in dark parts of video frames.
Added the capability for killing the WisePlayer process. This capability frees resources occupied by WisePlayer after the video playback ends, to prevent WisePlayer from occupying resources for too long.
Added the capability to obtain the list of video source thumbnails that cover each frame of the video source — frame by frame, time point by time point — when a user slowly drags the video progress bar, to improve video watching experience.
Added the capability to accurately play video via dragging on the progress bar. This capability can locate the time point on the progress bar, to avoid the inaccurate location issue caused by using the key frame for playback location.
Learn more>>
Scene Kit
Added the 3D fluid simulation component. This component allows you to set the boundaries and volume of fluid (VOF), to create interactive liquid sloshing.
Introduced the dynamic diffuse global illumination (DDGI) plugin. This plugin can create diffuse global illumination in real time when the object position or light source in the scene changes. In this way, the plugin delivers a more natural-looking rendering effect.
Learn more>>
New Resources
Map Kit
For the hms-mapkit-demo sample code: Added the MapsInitializer.initialize API, which initializes the Map SDK before it is used.
Added the public layer (precipitation map) in the enhanced SDK.
Go to GitHub>>
Site Kit
For the hms-sitekit-demo sample code: Updated the Gson version to 2.9.0 and optimized the internal memory.
Go to GitHub >>
Game Service
For the hms-game-demo sample code: Added the configuration of removing the dependency installation boost of HMS Core (APK), and supported HUAWEI Vision.
Go to GitHub >>
Made necessary updates to other kits. Learn more >>