HiAI Foundation: The Newest, Brightest Way to Develop AI Apps

AI has now been rolled out across education, finance, logistics, retail, transportation, and healthcare, filling niches based on user needs and production demands. As a developer, to stay ahead of the competition, you need to efficiently translate your insights into AI-based apps.
HMS Core HiAI Foundation is designed to streamline the development of new apps. It opens up the underlying hardware capabilities of the HiAI ecosystem and provides 300+ AI operators compatible with major models, allowing you to build AI apps quickly and easily.
HiAI Foundation offers premium computing environments that boast high performance and low power consumption to facilitate development, with solutions including device-cloud synergy, Model Zoo, automatic optimization toolkits, and Intellectual Property Cores (IP cores) that collaborate with each other.
Five Cores of Efficient, Flexible Development
HiAI Foundation is home to cutting-edge tools and features that complement any development strategy. Here are the five cores that facilitate flexible, low-cost AI development:
Device-cloud synergy: supports platform updates and performance optimization, with operators for new and typical scenarios
Because AI services and algorithm models are constantly evolving, it is difficult for AI computing platforms to keep up. HiAI Foundation is equipped with flexible computing frameworks and adaptive model structures that support synergy across device and cloud. This allows you to build and launch new models and services and enhance user experience.
Model Zoo: optimized model structures that make better use of NPU (neural processing unit)-based AI acceleration
During development, you may need to adjust a model to the underlying hardware to maximize computing power, and such adjustment can be costly in time and resources. Model Zoo provides NPU-friendly model structures, backbones, and operators that improve model structures and make better use of the Kirin chip's NPU for AI acceleration.
Toolkit for lightweight models: makes your apps smaller and faster
The 32-bit models used for training provide high calculation precision, but they also consume a lot of power and memory. Our toolkit converts original models into smaller, more lightweight ones that are better suited to the NPU, without compromising much on computing precision. This single adjustment helps save phone storage and computing resources.
HiAI Foundation toolkit for lightweight models
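To give a sense of the kind of conversion such a toolkit automates, here is a minimal, illustrative sketch of symmetric 8-bit quantization of 32-bit float weights in plain Java. It shows only the underlying idea; it is not the HiAI toolkit's API, and the class and method names are invented for this example.

```java
// Illustrative only: symmetric int8 quantization of float32 weights,
// the kind of conversion a model-lightweighting toolkit automates.
public class WeightQuantizer {

    /** Result of quantizing one tensor: int8 values plus the scale to recover floats. */
    public static class Quantized {
        public final byte[] values;
        public final float scale; // realValue ≈ quantizedValue * scale

        Quantized(byte[] values, float scale) {
            this.values = values;
            this.scale = scale;
        }
    }

    /** Maps each float weight to the int8 range [-127, 127] using a per-tensor scale. */
    public static Quantized quantize(float[] weights) {
        float maxAbs = 1e-8f;
        for (float w : weights) {
            maxAbs = Math.max(maxAbs, Math.abs(w));
        }
        float scale = maxAbs / 127f;
        byte[] q = new byte[weights.length];
        for (int i = 0; i < weights.length; i++) {
            q[i] = (byte) Math.round(weights[i] / scale);
        }
        return new Quantized(q, scale);
    }

    /** Recovers approximate float weights, showing the (small) precision loss. */
    public static float[] dequantize(Quantized q) {
        float[] out = new float[q.values.length];
        for (int i = 0; i < out.length; i++) {
            out[i] = q.values[i] * q.scale;
        }
        return out;
    }

    public static void main(String[] args) {
        float[] weights = {0.12f, -0.87f, 0.05f, 1.43f, -1.20f};
        Quantized q = quantize(weights);
        float[] restored = dequantize(q);
        for (int i = 0; i < weights.length; i++) {
            System.out.printf("%.4f -> %d -> %.4f%n", weights[i], q.values[i], restored[i]);
        }
    }
}
```

In practice, the toolkit handles this kind of conversion (and more sophisticated variants of it) automatically when preparing a model for the NPU.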
Network architecture search (NAS) toolkit: simple and effective network design
HiAI Foundation provides a toolkit that supports multiple types of network architecture search, including classification, detection, and segmentation. Given specific precision and performance requirements, the toolkit runs optimization algorithms to determine the most appropriate network architecture for the target hardware. It is compatible with mainstream training frameworks, including PyTorch, TensorFlow, and Caffe, and delivers high computing power on multiple mainstream hardware platforms.
HiAI Foundation NAS toolkit
IP core collaboration: improved performance at reduced power, with DDR memory shared among computing units
HiAI Foundation ensures full collaboration between IP cores and opens up hardware computing power. The IP cores (CPU, NPU, ISP, and GPU) now share DDR memory, minimizing data copies and transfers across cores for better performance at lower power consumption.
HiAI Foundation connects smart services with underlying hardware capabilities. It is compatible with mainstream frameworks such as MNN, TNN, MindSpore Lite, Paddle Lite, and KwaiNN, and can perform AI computation on the NPU, CPU, GPU, or DSP through the inference acceleration platform Foundation DDK and the heterogeneous computing platform Foundation HCL. It is an ideal framework for building new-generation AI apps that run on devices such as mobile phones, tablets, Vision devices, vehicles, and watches.
HiAI Foundation open architecture
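As a rough illustration of what this layering means from the app side, the sketch below shows the typical on-device inference flow: load a compiled model, feed it an input tensor, and read back the output. Every interface and method name here (InferenceRuntime, OnDeviceModel, loadCompiledModel, run) is hypothetical and invented for illustration; refer to the Foundation DDK documentation for the actual API.

```java
// Hypothetical sketch of the typical on-device inference flow:
// load a compiled model, feed an input tensor, read the output.
// These interfaces are invented for illustration; consult the
// HiAI Foundation DDK documentation for the real API.
import java.nio.FloatBuffer;

interface OnDeviceModel extends AutoCloseable {
    // Runs one inference pass on the device's NPU/CPU/GPU/DSP.
    FloatBuffer run(FloatBuffer input);
}

interface InferenceRuntime {
    // Loads a model that has already been converted/compiled for the device.
    OnDeviceModel loadCompiledModel(String modelPath);
}

public class InferenceSketch {
    public static void runOnce(InferenceRuntime runtime, float[] preprocessedImage) throws Exception {
        try (OnDeviceModel model = runtime.loadCompiledModel("models/classifier.om")) {
            FloatBuffer input = FloatBuffer.wrap(preprocessedImage);
            FloatBuffer output = model.run(input);          // e.g. class scores
            int best = 0;
            for (int i = 1; i < output.limit(); i++) {      // arg-max over scores
                if (output.get(i) > output.get(best)) {
                    best = i;
                }
            }
            System.out.println("Top class index: " + best);
        }
    }
}
```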
Leading AI Standards with Daily Calls Exceeding 60 Billion
Nowadays, AI technologies such as speech, facial, text, and image recognition, image segmentation, and image super-resolution are common features of mobile devices. In fact, they have become the norm, with users expecting ever better AI apps. HiAI Foundation streamlines app development and reduces wasted resources, helping you focus on the design and implementation of innovative AI functions.
High performance, low power consumption, and ease of use are the reasons why more and more developers are choosing HiAI Foundation. Since it was opened up in 2018, daily calls have increased from 1 million to 60 billion, proving its worth in the global developer community.
Daily calls of HiAI Foundation exceed 60 billion
HiAI Foundation has joined the Artificial Intelligence Industry Technology Innovation Strategy Alliance (AITISA) and is taking part in drafting device-side AI standards (currently under review). Through these joint efforts to build industry standards, HiAI Foundation is committed to harnessing new technology for the evolution of the AI industry.
Visit HUAWEI Developers to find out more about HiAI Foundation.

Related

Distributed AI: Smart, Seamless Living with No Bounds

This article is from HUAWEI Developer Forum
Forum link: https://forums.developer.huawei.com/forumPortal/en/home
AI has reshaped the world around us, and how we interact with our surroundings.
When you take a photo with your phone, the AI camera can identify what's in the frame (for example, the blue sky or spread of food), and apply intelligent enhancements that correspond to the scene, generating detailed and vibrant images. You can remain productive, even when your hands are occupied, thanks to voice assistants like HiVoice or Siri on your phone. Make payments with AI-powered face recognition, in which your phone stores and analyzes your facial profile, even recognizing your behavioral patterns, to determine when it is you making a purchase, or logging in to an account.
In 2017, the release of Huawei's Kirin 970 captured the industry's imagination. As the first-ever mobile chipset to integrate an independent neural processing unit (NPU), the Kirin 970 represented a milestone for mobile AI. The following year, Huawei unveiled its HiAI platform, opening up Huawei's chipset, device, and cloud capabilities to global developers, and providing invaluable assistance by bolstering a wide range of apps with newfound intelligence.
Since its advent, HUAWEI HiAI has focused on applying innovative new technology from the bottom layer of the system on up. Distributed intelligence forms the key to transforming device software and hardware from isolated capabilities into a collaborative, mutually-reinforcing ecosystem. In this way, HUAWEI HiAI enables software and hardware makers to facilitate innovation in their respective areas of expertise, and contribute to the seamless user experience of tomorrow.
AI cares…
Technology seems like magic, in a sense. It helps unlock unforeseen potential, enriching the lives of countless individuals, in particular, those with disabilities. New apps are constantly being introduced, offering life-altering capabilities.
Huawei has teamed up with IIS Aragon and DIVE Medical to jointly launch the TrackAI project, which is dedicated to helping ophthalmologists run visual tests for children, using Huawei smart devices equipped with HiAI. Numerous medical institutes around the world, in China, Spain, Vietnam, Mexico, and Russia, among other countries, have begun amassing the plethora of data required to train the AI algorithm, through their work with over 2,000 visually-impaired children.
TrackAI's complete detection system consists of the Device for an Integral Visual Examination (DIVE), a Huawei P30 phone, and a Huawei MateBook E tablet. The system displays visual stimuli on a screen and uses an eye tracker to detect the child's focus. The system can also learn the differences between children with and without an eye disease. Lastly, the Huawei P30 runs a pre-trained machine learning model, powered by HiAI, to detect whether the child has a visual impairment.
The HiAI-powered TrackAI system can help stop an eye disease in its tracks
Compared with traditional models, in which data is uploaded to the cloud for analysis and the results are transferred back to devices, HUAWEI HiAI's on-device analysis is remarkably efficient. By leveraging the local Kirin chipset for AI processing, users get real-time analysis with lower latency. Even in rural areas where Internet access is spotty, doctors can still use the TrackAI system for testing. This brings tremendous benefits for children's healthcare in developing countries.
On-device processing also provides enhanced privacy safeguards. Users can rest assured that data is stored only on their device, avoiding the risks associated with cloud storage, such as data leaks.
Huawei has drawn on AI to improve the lives of those living with disabilities, in a myriad of other ways.
In collaboration with the Polish Blind Association, Huawei developed Facing Emotions, an app designed to help the blind and visually impaired perceive emotions through the power of sound. The app uses the rear camera and AI on Huawei phones to translate human emotions into unique sounds. For the millions of people who are unable to see faces and read emotional cues, this offers a truly life-altering capability, bringing them closer to friends and loved ones. Imagine the joy of "hearing" a smile for the first time!
There's also StorySign, an app that helps deaf children read by translating the text from selected books into sign language. Huawei partnered with the European Union of the Deaf, Penguin Books, and Aardman Animations, among other organizations, to develop StorySign. When a child opens a selected book, then opens the app and holds the phone over the page, an avatar signs the story while the app highlights each word as it is signed. Thanks to multilingual Optical Character Recognition (OCR) and document adjustment technology, StorySign now supports more than 10 European sign languages. StorySign enriches the lives of deaf children and their parents, opening up a wondrous world of storytelling and literature in which they learn to read and sign together.
HUAWEI HiAI-powered StorySign supports more than 10 European sign languages
Huawei has also provided crucial assistance to the Chinese-developed Qimingtong app, which is designed to help the visually-impaired better interact with the world around them. The app reads text out loud, such as that from newspaper articles, letters, and product user guides. It also leverages HUAWEI HiAI capabilities, including face detection and facial feature detection, and enables the visually-impaired to take pictures following simple voice instructions.
All of the above apps are powered by AI, and it is HUAWEI HiAI's primary mission to make app development effortless.
HUAWEI HiAI provides developers with access to the truly boundless potential from Huawei's chipset, device, and cloud technology. On the chipset side, developers benefit from cutting-edge NPU acceleration. Device capabilities bolster face, image, text, and speech recognition, while cloud technology enables apps to provide timely scenario-based services.
AI empowers...
Developers are dreamers, and HUAWEI HiAI is the platform that helps them fulfill their dreams. Since its debut in 2018, HUAWEI HiAI has connected more than one million developers and 4,000 partners.
SketchAR, a tool for teaching drawing using augmented reality (AR) and AI, offers a prime example of how HiAI has revolutionized user experience. It enables users to project an image from their device onto any surface, such as a sheet of paper or a white wall, and then use that image as a template for manually drawing on the surface of choice. SketchAR utilizes HUAWEI HiAI's NPU acceleration to boost image recognition speeds by up to 40%, for improved accuracy and greater responsiveness.
The Chinese-developed app Lvmuxia (Green Screen Compositor) helps users composite a captured video with a background video. Green screen compositing typically requires powerful computing capabilities and could previously only be done in the cloud. But on-cloud computing poses a number of challenges for developers, including high costs, high latency, and privacy risks, which made it impractical. By working with HUAWEI HiAI, the Lvmuxia app overcame those challenges, gaining on-device AI capabilities and shortening its time to market.
While attending a conference or lecture, you may want to take photos of the PowerPoint slides for future reference, but the quality of the images can be poor, particularly if you were seated in a corner, or there is some sort of visual obstruction. HiAI provides an elegant solution, with its document adjustment feature, in which photos are straightened, clarifying text and removing unwanted corners. With the OCR feature, you can even add notes or correct mistakes in the text.
From single-device AI to distributed AI
In November 2019, Huawei introduced HiAI 3.0, an open AI capability platform that allows smart devices to share AI computing power between them, representing the tremendous leap from single-device AI to distributed AI.
In its infancy, HUAWEI HiAI 1.0 only supported a single type of device. HiAI 2.0 expanded support to devices such as phones, tablets, and smart screens. HUAWEI HiAI 3.0 goes even further, pooling hardware resources to form a super device. Powered by distributed AI, devices mutually reinforce each other, providing users with the best possible experience given the resources at their disposal.
Smart devices are designed to fulfill specific needs, but each category of device comes with drawbacks. For example, smart TVs, watches, and earphones excel in collecting images, videos, and sensor data, but fall short in terms of sheer AI computing power. Smartphones are equipped with increasingly powerful photography features, but still don't compare to dedicated televisions and surveillance cameras, in many regards. Their sound collection capabilities also pale in comparison to a speaker's microphone array.
Huawei developed HiAI 3.0 in response to the ever-present need for enhanced capabilities on smart devices. It works by pooling the hardware resources from different devices to form a super intelligent system. When enriched with shared AI capabilities, all of the participating devices are equipped to provide seamless, cross-device intelligence that is responsive to any and every user whim.
By drawing from such basic distributed technologies as distributed virtual bus and device virtualization, HUAWEI HiAI 3.0 facilitates high-speed connectivity between devices, allowing them to share capabilities and reinforce each other. This has seemingly countless applications in real life.
For example, fitness apps have often been regarded as less effective than professional personal trainers, but HiAI 3.0 helps turn such apps into viable personal trainers in their own right. HiAI 3.0 connects the user's smart TV, phone, and speaker to form a wholly integrated, super intelligent system. The system can use the TV's camera to capture the user's posture, use the phone's AI computing power to analyze the images and determine whether the posture is correct based on skeletal information, and finally have the speaker remind the user by voice to correct their posture.
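A hypothetical sketch of that posture-coaching pipeline is shown below; the TvCamera, PoseAnalyzer, and SmartSpeaker interfaces are invented purely to illustrate how the three devices divide the work, and are not part of any actual HiAI API.

```java
// Hypothetical sketch of the posture-coaching pipeline described above:
// the TV supplies camera frames, the phone runs pose analysis, and the
// speaker voices corrections. All interfaces here are invented for
// illustration and are not part of any actual HiAI API.
interface TvCamera {
    byte[] nextFrame();                       // frame captured by the smart TV's camera
}

interface PoseAnalyzer {
    /** Runs on the phone's NPU; returns true if the posture needs correcting. */
    boolean postureNeedsCorrection(byte[] frame);
}

interface SmartSpeaker {
    void say(String prompt);                  // text-to-speech on the speaker
}

public class PostureCoach {
    private final TvCamera camera;
    private final PoseAnalyzer analyzer;
    private final SmartSpeaker speaker;

    public PostureCoach(TvCamera camera, PoseAnalyzer analyzer, SmartSpeaker speaker) {
        this.camera = camera;
        this.analyzer = analyzer;
        this.speaker = speaker;
    }

    /** One coaching step: capture on the TV, analyze on the phone, prompt on the speaker. */
    public void checkOnce() {
        byte[] frame = camera.nextFrame();
        if (analyzer.postureNeedsCorrection(frame)) {
            speaker.say("Try straightening your back and keeping your knees behind your toes.");
        }
    }
}
```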
When the user gets into their car, their phone can automatically connect to the car and utilize the in-car microphone to pick up sounds, while the in-car camera uses the phone chipset's AI computing for driver monitoring. If any driver fatigue or distractions are detected, an alert will be played.
On-device, distributed AI allows devices to "see", "hear", "sense", and "calculate" with greater precision and sensitivity. Fragmented experiences are merged into one consistent, cross-device experience, and device silos are connected to form a super device.
This basic understanding underpins the new paradigm that is Huawei's Seamless AI Life strategy — unbounded intelligence in all scenarios. Powered by AI, diverse hardware resources, including those from smartphones, are pooled, and mutually reinforcing, providing the seamless flow of information across all usage scenarios, and the connected intelligence that will power innovation.

Source: https://consumer.huawei.com/en/press/news/2020/huawei-hiai-3-0-arrived/

Charting New Territory with Huawei Map Engine

Maps are an integral part of life. Whether you're going on an adventure hike, hailing a ride to get to work, ordering food delivery from a nearby cafe, or exploring a new neighbourhood you've just moved to, I bet the apps you're using will have a map and location function.
With a usage level this high, many Huawei developers need to integrate map-based functions when developing apps. This is exactly why HMS Core has included Map Kit, Site Kit and Location Kit as part of its Core Kits and capabilities.
Map Kit provides developers with basic capabilities such as map presentation, map interaction, and route planning, and supports various local lifestyle and transportation businesses. Site Kit lets you provide users with convenient and secure access to diverse, place-related services. Location Kit combines GPS, Wi-Fi, and base station location functionalities in an app to build global positioning capabilities, allowing you to provide flexible location-based services targeted at users around the globe.
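As a rough sketch of what a basic Location Kit integration looks like, the snippet below requests fused location updates. It follows the kit's published usage pattern, but class and method names should be verified against the current HMS Core Location Kit documentation, and the required permissions and AppGallery Connect setup are omitted for brevity.

```java
// Minimal sketch of requesting fused location updates with HMS Core Location Kit.
// Verify names against the current SDK, and remember to declare and request
// location permissions before starting updates.
import android.content.Context;
import android.os.Looper;

import com.huawei.hms.location.FusedLocationProviderClient;
import com.huawei.hms.location.LocationCallback;
import com.huawei.hms.location.LocationRequest;
import com.huawei.hms.location.LocationResult;
import com.huawei.hms.location.LocationServices;

public class LocationSketch {

    public static void startUpdates(Context context) {
        FusedLocationProviderClient client =
                LocationServices.getFusedLocationProviderClient(context);

        LocationRequest request = new LocationRequest();
        request.setInterval(10_000);                                    // every 10 seconds
        request.setPriority(LocationRequest.PRIORITY_HIGH_ACCURACY);    // GNSS + Wi-Fi + cell

        LocationCallback callback = new LocationCallback() {
            @Override
            public void onLocationResult(LocationResult result) {
                if (result != null && result.getLastLocation() != null) {
                    double lat = result.getLastLocation().getLatitude();
                    double lng = result.getLastLocation().getLongitude();
                    // Use the coordinates, e.g. to recommend nearby places.
                    System.out.println("Location: " + lat + ", " + lng);
                }
            }
        };

        client.requestLocationUpdates(request, callback, Looper.getMainLooper());
    }
}
```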
To launch these services, which bring convenience to users and developers alike, Huawei formed a map team. At the start, the team consisted of only 20 to 30 people, very few of whom had any formal training in the map industry.
Looking back on those early days, we were really flying blind. But as time went by, the team gradually filled out with new blood, including several fantastic industry experts. Each expert who joined the team was provided with a full suite of helpers to allow them to assimilate into the project more quickly and bring more value to the team. Many of these newcomers have grown a great deal since joining and, through a lot of hard work and a pioneering attitude, each has become a key pillar of our team's success in their own right. We always used to joke that "the early bird catches the worm"… but it also has to work the hardest.
Following the advice of our expert team members, we gradually formed a pyramid based on technical ability while maintaining a flat management structure. This enhanced the entire team’s development, deployment, analysis and problem-solving capabilities.
Now, Huawei Map Engine provides comprehensive location and mapping tools in 200 countries and regions. Our map rendering performance has improved by over 30% and key location indicators by over 20%, allowing us to surpass our initial performance goals. The service provides reliable and efficient location and mapping for app developers, supporting the worldwide expansion of the entire HMS Core ecosystem.
Throughout the development process, the team has adopted a variety of practices that make integration easier. For example, by proposing an integrated SDK decoupled from the cloud server, we were able to give ComfortDelGro, one of Singapore's leading taxi apps, complete access to Map Kit and Location Kit in just three weeks.
Mapping and location services are a constantly evolving sector. It might be helpful to think of it like a living organism, with an algorithm engine as the brains, map data as the heart and the map ecosystem as the lifeblood. In the near future, Huawei will be able to perfect this comprehensive mapping ecosystem by combining those kits with a new app and data platform.
The new ecosystem will also introduce new algorithms and business models, such as AR maps, visual location and navigation services, AI-powered data generation, high-precision geo-positioning and other new technologies that will help to determine the future trajectory of the industry. At the same time, machine learning from accumulated data will help improve the accuracy and performance of existing algorithms and ultimately provide users with a better experience.
As always, the future is full of challenges and uncertainties. But watch this space, because our entire mapping team is confident of tackling this challenge head-on and creating a more competitive array of location services for our users.
*The article is written by HUAWEI’s Map expert.
For details about Huawei developers and HMS, visit the website.
HUAWEI Developer Forum: forums.developer.huawei.com

Huawei Mobile Services delivers next-level success for Wongnai and Line Man on AppGallery

2020 was a roaring year for the food delivery sector – the already thriving industry was boosted by an increasingly digital population that had no other way to enjoy their favourite dishes due to social distancing measures. According to Sensor Tower, these factors contributed to a huge 21% year-on-year growth in food and drink app installs, which reached 1.7 billion globally in 2020.
However, competition within this sector also intensified last year as new companies looked to stake their claim in the growing market. In order to do this, food apps need to be able to compete with the top dogs on all fronts to ensure they are equally effective at acquiring and retaining users. For many, the answer was evident – develop a more intuitive, powerful, and enjoyable app experience. Given the industry offers little space for differentiation in terms of services, companies are dependent on driving a superior user experience to build their brand loyalty.
This set of challenges held true for two apps in Thailand – the delivery platform Line Man and its partner Wongnai, a lifestyle services review platform. Line Man was looking to expand in 2020, but the lack of robust map features and location tracking capabilities severely hampered its ability to achieve that goal. Similarly, Wongnai's lacklustre location tracking and platform manipulation issues were affecting its standing among users during a pivotal period.
Enter Huawei. The partners sought out Huawei for guidance on addressing the issues in their apps on AppGallery. The discussion evolved into a collaboration in which the two developers integrated the open HMS (Huawei Mobile Services) Core capabilities, successfully removing their roadblocks and unlocking better location and tracking capabilities.
Triangulating users’ locations swiftly, accurately and clearly
Wongnai, as a leading lifestyle services review platform, needed to accurately pinpoint users’ location to recommend relevant nearby shops and attractions. By integrating with the HMS Core Location Kit, Wongnai was able to leverage the kit’s Global Navigation Satellite System (GNSS), Wi-Fi, and base station location functionalities to provide flexible location-based services. This allows it to make dynamic, relevant recommendations for its users.
Line Man faced a similar situation: it needed to constantly show users the location of their food and delivery riders to provide an optimal ordering experience. The integration with Location Kit not only achieves this, it also enables advanced features such as geofencing and activity identification. These capabilities allow the app to share reliable updates on the status of an order, building goodwill and positive word of mouth among customers.
Displaying a map that is not only relevant but intuitive
Given that both services rely heavily on users' locations, the developers also needed a map service that could be customised to their needs and was capable of conveying all the relevant information effectively. With this in mind, Wongnai and Line Man chose to integrate HUAWEI Map Kit into their respective apps as well.
With Map Kit, apps can leverage Huawei’s powerful map service and route planning capabilities to provide an elevated app experience for the users. The two developers were able to introduce an assortment of map functions based on their respective userbase’s unique needs. As a result, the users are able to effortlessly identify the various participating restaurants and businesses in one simple glance, improving the overall user experience.
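A minimal sketch of that kind of map function is shown below: it centers a HUAWEI Map Kit map and drops a marker for a participating restaurant. The coordinates and labels are placeholders, and class names should be checked against the current Map Kit SDK documentation.

```java
// Minimal sketch of showing a participating restaurant on a HUAWEI Map Kit map.
// Assumes a MapView (or map fragment) has been set up in the layout and
// getMapAsync(...) was called with this callback; verify names against the SDK.
import com.huawei.hms.maps.CameraUpdateFactory;
import com.huawei.hms.maps.HuaweiMap;
import com.huawei.hms.maps.OnMapReadyCallback;
import com.huawei.hms.maps.model.LatLng;
import com.huawei.hms.maps.model.MarkerOptions;

public class RestaurantMap implements OnMapReadyCallback {

    @Override
    public void onMapReady(HuaweiMap map) {
        // Center the map on the user's area (coordinates here are placeholders).
        LatLng center = new LatLng(13.7563, 100.5018);
        map.moveCamera(CameraUpdateFactory.newLatLngZoom(center, 14f));

        // One marker per participating restaurant so users can spot them at a glance.
        map.addMarker(new MarkerOptions()
                .position(new LatLng(13.7570, 100.5040))
                .title("Participating restaurant")
                .snippet("Delivery available"));
    }
}
```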
Huawei and HMS Core drive success for their partners
The partnership with Huawei was a resounding success for the developers, both commercially and operationally. The integration with HMS Core not only propelled both apps past the two-million-download milestone, it also helped Line Man boost its daily user figures threefold in the first two months.
Huawei is acutely aware of the operational challenges facing developers, including the cost of launching apps across multiple distribution platforms. As part of its commitment to developers, Huawei provides free access to certain HMS Core kits to help alleviate operational costs. Furthermore, HMS Core offers a function that can identify whether a device supports Huawei Mobile Services, allowing the app to call upon the corresponding SDK. This essentially means developers only need to maintain one app, reducing the time spent on deployment and testing, as well as the associated operational costs.
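A minimal sketch of that device check is shown below, assuming the standard HuaweiApiAvailability API; the routing logic around it is illustrative only.

```java
// Minimal sketch of detecting at runtime whether HMS is available on the device,
// so one app package can route to the HMS SDK or an alternative accordingly.
// Verify class names against the current HMS Core SDK.
import android.content.Context;

import com.huawei.hms.api.ConnectionResult;
import com.huawei.hms.api.HuaweiApiAvailability;

public class MobileServicesCheck {

    public static boolean isHmsAvailable(Context context) {
        int result = HuaweiApiAvailability.getInstance()
                .isHuaweiMobileServicesAvailable(context);
        return result == ConnectionResult.SUCCESS;
    }

    /** Example routing: prefer HMS kits when available, otherwise fall back. */
    public static void initLocationServices(Context context) {
        if (isHmsAvailable(context)) {
            // Initialize HMS Core Location Kit / Map Kit here.
        } else {
            // Initialize the alternative (e.g. GMS) implementation here.
        }
    }
}
```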
In addition, Huawei extends its support for developers across the entire operations management spectrum, including marketing efforts. Wongnai and Line Man collaborated with AppGallery on a week-long campaign last year in which users could redeem HUAWEI points through in-app purchases. The campaign was particularly effective for Wongnai, whose new user download figures skyrocketed 10-fold. It also boosted the apps' visibility among AppGallery users, leading to a significant increase in daily downloads even after the campaign ended.
Another notable feat from this collaboration is the time it took to complete the entire integration process, especially in the case of Wongnai. The developer implemented three other HMS Core kits – Site Kit, Safety Detect, and Push Kit – on top of the two discussed above, and despite the mountain of technical work involved, the team was able to incorporate all five kits into its app in just three weeks. This achievement was made possible through close collaboration with Huawei's developer support team, who offered extensive support and technical insights to facilitate the development process.

Principles Behind HUAWEI Prediction: How We Trained Models for the Service

HUAWEI Prediction uses machine learning, based on user behavior and attributes reported by HUAWEI Analytics Kit, to predict target audiences with next-level precision. The service can help you carry out and optimize operations. For example, it can work with A/B Testing to evaluate how effective your promotions have been, and it can join hands with Remote Configuration to deliver dedicated plans to specific audiences. This is likely to result in dramatically improved user retention and conversion.
Integrating the Analytics SDK into your app enables the Prediction service to run preset tasks for predicting lost, paying, and returning users. On the details page of a specific prediction task, you'll find audiences with high, medium, and low probabilities of triggering a specific event, along with detailed profiles. For example, an audience with a high churn probability will include users who are very likely to stop using the app over the next 7 days. The characteristics of these users are displayed on cards, making it easy for you to pursue targeted operations.
The following figures give you a sense of how the prediction task list and details page look in practice.
* Data in these figures is for reference only.
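Feeding the Prediction service starts with reporting events through the Analytics SDK. Below is a minimal sketch of reporting a custom event with Analytics Kit; the event and parameter names are illustrative, and the class names should be checked against the current SDK documentation.

```java
// Minimal sketch of reporting a custom behavior event with HMS Core Analytics Kit,
// which supplies the data the Prediction service's preset tasks are trained on.
// Event and parameter names below are illustrative.
import android.content.Context;
import android.os.Bundle;

import com.huawei.hms.analytics.HiAnalytics;
import com.huawei.hms.analytics.HiAnalyticsInstance;

public class AnalyticsSketch {

    public static void reportPurchase(Context context, String productId, double price) {
        HiAnalyticsInstance analytics = HiAnalytics.getInstance(context);

        Bundle params = new Bundle();
        params.putString("product_id", productId);
        params.putDouble("price", price);

        // Reported events and user attributes feed the churn/payment/return predictions.
        analytics.onEvent("purchase_completed", params);
    }
}
```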
How we built these prediction models
First of all, we clarified what our prediction goal was, so that the type of data we collected would reflect it. We then cleansed and sampled the collected data based on user characteristics to obtain a data set, which was divided into a 20% validation set and an 80% training set. Multiple rounds of offline experiments were then conducted to determine the features and the most suitable parameters for building models. The generated models were later trained online to perform prediction tasks.
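As a minimal illustration of the split step, the plain-Java sketch below shuffles a cleansed sample list once and carves off 20% as the validation set; it is a simplification of the actual pipeline, which runs at data-warehouse scale.

```java
// Minimal sketch of the 80/20 split described above: shuffle the cleansed
// samples once, then carve off 20% as a validation set for offline experiments.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class DataSplit<T> {
    public final List<T> training = new ArrayList<>();
    public final List<T> validation = new ArrayList<>();

    public static <T> DataSplit<T> split(List<T> samples, double validationFraction, long seed) {
        List<T> shuffled = new ArrayList<>(samples);
        Collections.shuffle(shuffled, new Random(seed));   // fixed seed keeps experiments reproducible

        int validationSize = (int) Math.round(shuffled.size() * validationFraction);
        DataSplit<T> result = new DataSplit<>();
        result.validation.addAll(shuffled.subList(0, validationSize));
        result.training.addAll(shuffled.subList(validationSize, shuffled.size()));
        return result;
    }
}
```

Calling DataSplit.split(samples, 0.2, 42) reproduces an 80/20 partition with a fixed seed, so offline experiments stay comparable across rounds.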
This process is outlined in detail below:
Feature and model selection and optimization
Feature exploration
At the early stage of the project, we analyzed user attributes, behavior, and requirements to determine the business-relevant variables, such as the number of active days over the last 7 days and app use duration, from which we built a feature table.
After the features were identified, we chose a method that best suited our service and optimized its parameters over multiple rounds of experiments. Common tree-based ensemble methods in the industry include XGBoost, random forests, and Gradient Boosted Decision Trees (GBDT). We trained our data set using these methods and found that random forests performed best. We then adopted bagging to improve the models' fitting and generalization capabilities.
In addition to parameter optimization, the sampling ratio was also considered, especially for the payment prediction scenario, in which the imbalance between positive and negative samples was large (about 1:100). In such cases, both the accuracy and recall indicators need to be ensured, so we adjusted the ratio of positive to negative samples to 1.5:1 during model training for payment prediction, in order to boost the model's recall.
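The sketch below illustrates that rebalancing step in plain Java: all positive samples are kept and negatives are downsampled until the ratio reaches roughly 1.5:1. It is a simplified stand-in for the actual sampling logic.

```java
// Sketch of rebalancing an ~1:100 positive:negative training set to roughly
// 1.5:1 by downsampling negatives, as described for the payment model.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class Rebalancer {

    public static <T> List<T> rebalance(List<T> positives, List<T> negatives,
                                        double targetPosToNegRatio, long seed) {
        // Keep all positives; keep only enough negatives to hit the target ratio.
        int negativesToKeep = (int) Math.round(positives.size() / targetPosToNegRatio);
        negativesToKeep = Math.min(negativesToKeep, negatives.size());

        List<T> sampledNegatives = new ArrayList<>(negatives);
        Collections.shuffle(sampledNegatives, new Random(seed));
        sampledNegatives = sampledNegatives.subList(0, negativesToKeep);

        List<T> balanced = new ArrayList<>(positives);
        balanced.addAll(sampledNegatives);
        Collections.shuffle(balanced, new Random(seed));   // mix classes before training
        return balanced;
    }
}
```

With targetPosToNegRatio set to 1.5, this yields the 1.5:1 ratio mentioned above.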
Hyperparameter and feature determination
Unnecessary features in a model can undermine the efficacy of its predictions or slow down model training. During experiments at this early stage, features were sorted by weight and the top features were selected. These features and the relevant hyperparameters were then configured in the final model.
Even after a model is applied for prediction, the data still needs to be observed and analyzed to supplement necessary features. In later iterations, we added a range of features, including event and trend features, bringing the feature count to over 400.
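Conceptually, the selection step boils down to ranking features by their learned weight and keeping the top K, as in the simplified sketch below.

```java
// Sketch of the feature-selection step: sort features by learned weight
// (importance) and keep only the top K for the final model.
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class FeatureSelector {

    /** @param importances feature name -> weight from the trained model */
    public static List<String> topFeatures(Map<String, Double> importances, int k) {
        return importances.entrySet().stream()
                .sorted((a, b) -> Double.compare(b.getValue(), a.getValue()))  // heaviest first
                .limit(k)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }
}
```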
Automatic hyperparameter search
Model training on the full feature set can be quite time-consuming and may fail to produce optimal output. In addition, the optimal hyperparameters and features may vary from app to app, so training should be performed per app.
To address this issue, we applied the automatic hyperparameter search function to search for optimal parameters in the configured parameter space. Matched parameters are stored in a Hive table.
The following figures show the modeling procedure and relevant external support.
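In spirit, the automatic search works like the skeleton below: enumerate the configured parameter space, score each candidate on the validation set, and keep the winner for that app. The parameter names (numTrees, maxDepth) and the grid-search strategy are illustrative simplifications; the real pipeline's search space, algorithm, and Hive persistence are not shown.

```java
// Skeleton of an automatic hyperparameter search over a configured parameter
// space: every combination is scored on the validation set and the best is kept
// (per app). Persisting the result, e.g. to a Hive table, is left abstract.
import java.util.HashMap;
import java.util.Map;
import java.util.function.ToDoubleFunction;

public class HyperparameterSearch {

    public static Map<String, Object> search(int[] treeCounts, int[] maxDepths,
                                              ToDoubleFunction<Map<String, Object>> validationScore) {
        Map<String, Object> best = null;
        double bestScore = Double.NEGATIVE_INFINITY;

        for (int trees : treeCounts) {
            for (int depth : maxDepths) {
                Map<String, Object> candidate = new HashMap<>();
                candidate.put("numTrees", trees);
                candidate.put("maxDepth", depth);

                double score = validationScore.applyAsDouble(candidate); // train + evaluate
                if (score > bestScore) {
                    bestScore = score;
                    best = candidate;
                }
            }
        }
        return best; // caller stores the winning parameters, e.g. in a Hive table
    }
}
```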
Research emphasis
We will continue optimizing our models, by researching the following:
Neural networks
As the number of features continues to grow (400+ currently), and user behaviors become too complex to mine common rules, our prediction models will need to be enhanced to ensure that predictions remain accurate. This will require that we introduce neural networks with strong expressive power, in addition to decision trees to train models based on behavioral features.
Federated learning
Currently, data is isolated between apps and tenants. Horizontal federated learning can be used to train models across apps and tenants on a collaborative basis.
Time series features
A typical app user's device reports hundreds of events (out of 1,000+ event types) and accesses nearly 100 in-app pages every week. These time series can be used to build both short- and long-term user behavioral features, with the goal of improving prediction accuracy across a wide range of scenarios. Page-access behavioral data is particularly valuable for research, as it bears the characteristics of time series data.
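A simplified sketch of turning such an event stream into short- and long-term behavioral features is shown below; the one-day and seven-day windows and the trend ratio are illustrative choices, not the service's actual feature definitions.

```java
// Sketch of turning raw event timestamps into short- and long-term behavioral
// features, e.g. event counts over the last 1 day vs. the last 7 days.
import java.time.Duration;
import java.time.Instant;
import java.util.List;

public class TimeSeriesFeatures {

    public static long countEventsSince(List<Instant> eventTimes, Instant now, Duration window) {
        Instant cutoff = now.minus(window);
        return eventTimes.stream().filter(t -> !t.isBefore(cutoff)).count();
    }

    /** Builds a tiny feature vector: [events in last 1 day, events in last 7 days, trend]. */
    public static double[] build(List<Instant> eventTimes, Instant now) {
        double shortTerm = countEventsSince(eventTimes, now, Duration.ofDays(1));
        double longTerm = countEventsSince(eventTimes, now, Duration.ofDays(7));
        double trend = shortTerm / Math.max(longTerm / 7.0, 1e-9);   // >1 means activity is rising
        return new double[] {shortTerm, longTerm, trend};
    }
}
```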
Feature mining and processing
The feature set is still being expanded. We will explore additional relevant features, such as the average app use interval, device attributes, download sources, and locations. We will also apply measures such as discretization, normalization, square and square root operations, and Cartesian product calculation (including across multiple data sets) to build new features from existing ones.
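The sketch below illustrates a few of those processing steps (min-max normalization, equal-width discretization, square/square-root transforms, and a simple Cartesian cross feature); the exact transforms used in production may differ.

```java
// Sketch of common feature-processing steps: min-max normalization, equal-width
// discretization, square/square-root transforms, and a Cartesian (cross) feature
// built from two existing categorical features.
public class FeatureTransforms {

    public static double minMaxNormalize(double x, double min, double max) {
        return (max - min) == 0 ? 0.0 : (x - min) / (max - min);
    }

    /** Equal-width discretization into `bins` buckets over [min, max]. */
    public static int discretize(double x, double min, double max, int bins) {
        double normalized = minMaxNormalize(x, min, max);
        return Math.min((int) (normalized * bins), bins - 1);
    }

    /** Returns {x^2, sqrt(x)} as two derived features. */
    public static double[] squareAndSqrt(double x) {
        return new double[] {x * x, Math.sqrt(Math.max(x, 0.0))};
    }

    /** Cartesian-product (cross) feature of two categorical values. */
    public static String cross(String featureA, String featureB) {
        return featureA + "_x_" + featureB;
    }
}
```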
For more on HUAWEI Prediction, visit>>
For more details, you can go to:
- Our official website
- Our Development Documentation page, to find the documents you need
- Reddit to join our developer discussion
- GitHub to download demos and sample code
- Stack Overflow to solve any integration problems
