Facial recognition technology is being rapidly adopted in fields such as finance and healthcare. This has raised concerns about cyber security and information leakage, and users increasingly expect apps to be both stable and secure.
HMS Core ML Kit strives to help professionals from various industries work more efficiently and detect and handle potential risks in advance. To this end, ML Kit has been steadily improving its liveness detection capability. Trained on an abundant sample set, the capability now offers stronger defense against presentation attacks, a higher pass rate when the recognized face belongs to a real person, and an SDK with heightened security. Recently, its algorithm became the first on-device, RGB image-based liveness detection algorithm to pass the comprehensive security assessments of the China Financial Certification Authority (CFCA).
CFCA is a national authority for security authentication and a critical part of China's financial information security infrastructure, approved by the People's Bank of China (PBOC) and the State Information Security Administration. By passing CFCA's algorithm assessment and software security assessment, ML Kit's liveness detection obtained the enhanced level certification for facial recognition in financial payment, a level established by the PBOC.
The PBOC published trial regulations governing the secure implementation of facial recognition technology in offline payment in January 2019. These regulations impose higher requirements on the performance indicators of liveness detection, as described in the table below. To obtain the enhanced level certification, a liveness detection algorithm must keep its LPFRR at or below 1% when its LDAFAR is 0.1%.
Level | Defense Against Presentation Attacks
Basic | When LDAFAR is 1%, LPFRR is less than or equal to 1%.
Enhanced | When LDAFAR is 0.1%, LPFRR is less than or equal to 1%.
Requirements on the performance indicators of a liveness detection algorithm
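For reference, the two indicators are the attack false acceptance rate and the live-person false rejection rate; the definitions below are the commonly used readings (LDAFAR as the liveness detection attack false acceptance rate, LPFRR as the live person false rejection rate) rather than quotations from the PBOC specification:

```latex
\mathrm{LDAFAR} = \frac{\text{presentation attacks accepted as live}}{\text{total presentation attack attempts}},
\qquad
\mathrm{LPFRR} = \frac{\text{live persons rejected}}{\text{total live-person attempts}}
```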
The liveness detection capability enables an app to provide facial recognition. Specifically, the capability asks a user to perform different actions, such as blinking, staring at the camera, opening their mouth, turning their head to the left or right, and nodding. It then uses technologies such as facial keypoint recognition and face tracking to compare consecutive frames and determine, in real time, whether the user is a real person. This effectively defends against common attack types such as printed photos, video replay, face masks, and image recapture, helping identify fraud and protect users.
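The capability is exposed through a ready-made capture flow in the ML Kit SDK. The following Java sketch shows the typical integration pattern; the class and method names (MLLivenessCapture, MLLivenessCaptureResult, isLive()) and import paths follow the general shape of the ML Kit liveness detection API and should be verified against the current SDK documentation before use.

```java
import android.app.Activity;
// Import paths below are assumptions; check the ML Kit liveness detection docs.
import com.huawei.hms.mlsdk.livenessdetection.MLLivenessCapture;
import com.huawei.hms.mlsdk.livenessdetection.MLLivenessCaptureResult;

public class LivenessCheck {

    // Launches ML Kit's built-in liveness capture UI from the given Activity
    // and reports whether the captured face belongs to a real person.
    public static void run(Activity activity) {
        MLLivenessCapture capture = MLLivenessCapture.getInstance();
        capture.startDetect(activity, new MLLivenessCapture.Callback() {
            @Override
            public void onSuccess(MLLivenessCaptureResult result) {
                // true when the SDK judges the face to be a live person
                boolean isLive = result.isLive();
                // Continue with face comparison / identity verification only if isLive is true.
            }

            @Override
            public void onFailure(int errorCode) {
                // Camera errors, user cancellation, or detection failures end up here.
            }
        });
    }
}
```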
Liveness detection from ML Kit also delivers a user-friendly interactive experience: during face detection, the capability shows prompts (for example, when the lighting is too dark, the face is blurred, a mask or pair of sunglasses is blocking the view, or the face is too close to or too far from the camera) to help users complete face detection smoothly.
To strictly comply with the mentioned regulations, CFCA has come up with an extensive assessment system. The assessments that liveness detection has passed cover many items, including but not limited to data and communication security, interaction security, code and component security, software runtime security, and service function security.
Face samples used for assessing the capability are highly diverse, originating from a range of source types, such as images, videos, masks, head phantoms, and real people. The samples also account for factors like the collection device type, sample texture, lighting, facial expression, and skin tone. The assessments cover more than 4,000 scenarios that mirror real ones in different fields, such as remote registration for a financial service, hotel check-in, facial recognition-based access control, identity authentication on an e-commerce platform, live streaming on a social media platform, and online examination.
In over 50,000 tests, ML Kit's liveness detection demonstrated its certified defense against different attack types, including a person wearing a face mask, a face picture with keypoint areas (such as the eyes and mouth) hollowed out, frames containing a face extracted from an HD video, a silicone facial mask, a 3D head phantom, and adversarial examples. The capability accurately recognized and quickly intercepted every presentation attack, whether in 2D or 3D form.
Successfully passing the CFCA assessments is proof that the capability meets the standards of a national authority and of its compliance with security regulations.
The capability has already been widely adopted by Huawei's internal core services and by the services of its external customers in various fields, including account security, identity verification, and financial risk control, where liveness detection helps ensure both user experience and information security.
Moving forward, ML Kit will remain committed to exploring cutting-edge AI technology that improves the security, pass rate, and usability of its liveness detection, and to helping developers efficiently create tailored facial recognition apps.
Get more information at:
Home page of HMS Core ML Kit
Development Guide of HMS Core ML Kit
Related
Introduction:
HMS ML Kit provides diversified, industry-leading machine learning capabilities that are easy to use, helping you develop various AI apps. This article introduces all of the ML Kit services for developers in detail.
Text-related Services
1. Text Recognition, can extract text from images of receipts, business cards, and documents (see the sketch after this list). This service is widely used in office, education, transit, and other apps.
2. Document Recognition, can recognize text with paragraph formats in document images. It can extract text from document images to convert paper documents into electronic copies, greatly improving the information input efficiency and reducing labor costs.
3. Bank Card Recognition, can quickly recognize information such as the bank card number, covering mainstream bank cards around the world, such as China UnionPay, American Express, MasterCard, Visa, and JCB. It is widely used in finance and payment scenarios that require bank card binding, extracting card information so that users can enter it quickly.
4. General Card Recognition, provides a universal development framework based on the text recognition technology. It allows you to customize the post-processing logic to extract required information from any fixed-format cards, such as Exit-Entry Permit for Traveling to and from Hong Kong and Macao, Hong Kong identity card, and Mainland Travel Permit for Hong Kong and Macao Residents.
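As a taste of how these text services are called, here is a minimal Java sketch of on-device text recognition. It assumes a Bitmap is already available; the class names (MLAnalyzerFactory, MLTextAnalyzer, MLFrame, MLText) and import paths reflect the usual ML Kit text recognition flow and should be checked against the current SDK reference.

```java
import android.graphics.Bitmap;
import android.util.Log;
// Import paths are assumptions; verify them against the ML Kit text recognition docs.
import com.huawei.hms.mlsdk.MLAnalyzerFactory;
import com.huawei.hms.mlsdk.common.MLFrame;
import com.huawei.hms.mlsdk.text.MLText;
import com.huawei.hms.mlsdk.text.MLTextAnalyzer;

public class OcrSample {

    // Runs on-device OCR on a bitmap and logs the recognized text.
    public static void recognize(Bitmap bitmap) {
        MLTextAnalyzer analyzer = MLAnalyzerFactory.getInstance().getLocalTextAnalyzer();
        MLFrame frame = MLFrame.fromBitmap(bitmap);

        analyzer.asyncAnalyseFrame(frame)
                .addOnSuccessListener((MLText text) ->
                        Log.d("OcrSample", "Recognized text: " + text.getStringValue()))
                .addOnFailureListener(e ->
                        Log.e("OcrSample", "Text recognition failed", e));
    }
}
```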
Language/Voice-related Services
1. Translation, can detect the language of text and translate the text into different languages (a call sketch follows this list). Currently, this service can translate text online between 21 languages and translate text offline between 17 languages.
2. Language Detection, supports both online and offline modes. Currently, 52 languages can be detected on the cloud and 51 languages can be detected on the device.
3. Automatic Speech Recognition (ASR), can convert speech (no more than 60 seconds) into text in real time. Currently, Mandarin Chinese (including Chinese-English bilingual speech), English, French, German, Spanish, and Italian are supported.
4. Text to Speech (TTS), can convert text information into audio output. Real-time audio data can be output from the on-device API (offline models can be downloaded). Rich timbres, and volume and speed options are supported to produce more natural sounds.
5. Audio File Transcription, can convert an audio file into text, output punctuation, and generate text information with timestamps. Currently, the service supports Chinese and English.
6. Video Course Creator, can automatically create video courses based on courseware and commentaries, reducing video creation costs and improving efficiency.
7. Real-Time Transcription, enables your app to convert long speech (no longer than 5 hours) into text in real time. The generated text contains punctuation marks and timestamps.
8. Sound Detection, can detect sound events in online (real-time recording) mode. The detected sound events can help you perform subsequent actions.
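To illustrate the language services, here is a hedged Java sketch of the cloud translation call mentioned above. The class names (MLTranslatorFactory, MLRemoteTranslateSetting, MLRemoteTranslator), the asyncTranslate method, and the import paths follow the typical ML Kit translation flow; confirm the exact names and supported language codes in the SDK documentation.

```java
import android.util.Log;
// Import paths are assumptions; check the ML Kit translation documentation.
import com.huawei.hms.mlsdk.translate.MLTranslatorFactory;
import com.huawei.hms.mlsdk.translate.cloud.MLRemoteTranslateSetting;
import com.huawei.hms.mlsdk.translate.cloud.MLRemoteTranslator;

public class TranslateSample {

    // Translates a Chinese string into English using the cloud translator.
    public static void translate(String sourceText) {
        MLRemoteTranslateSetting setting = new MLRemoteTranslateSetting.Factory()
                .setSourceLangCode("zh")   // source language
                .setTargetLangCode("en")   // target language
                .create();
        MLRemoteTranslator translator =
                MLTranslatorFactory.getInstance().getRemoteTranslator(setting);

        translator.asyncTranslate(sourceText)
                .addOnSuccessListener(translated ->
                        Log.d("TranslateSample", "Result: " + translated))
                .addOnFailureListener(e ->
                        Log.e("TranslateSample", "Translation failed", e));
    }
}
```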
Image-related Services
1. Image Classification, classifies elements in images into intuitive categories, such as people, objects, environments, activities, or artwork, to define image themes and application scenarios (a call sketch follows this list).
2. Object Detection and Tracking, can detect and track multiple objects in an image, so they can be located and classified in real time. This is useful for examining and recognizing images.
3. Landmark Recognition, can identify the names and latitude and longitude of landmarks in an image. You can use this information to create individualized experiences for users.
4. Image Segmentation, can differentiate elements in an image. For example, you can use this service to create photo editing apps that replace certain parts of photos, such as the background.
5. Product Visual Search, searches a pre-established product image library for products that are the same as or similar to the one in a photo taken by the user, and returns the IDs of those products and related information.
6. Image Super-Resolution, provides the 1x and 3x super-resolution capabilities. 1x super-resolution removes the compression noise, and 3x super-resolution not only effectively suppresses the compression noise, but also provides a 3x enlargement capability.
7. Document Skew Correction, can automatically identify the location of a document in an image and adjust the shooting angle to the angle facing the document.
8. Text Image Super-Resolution, can zoom in on an image that contains text and significantly improve the definition of the text in the image.
9. Scene Detection, classifies the scene content of images and adds annotation information, such as outdoor scenery, indoor places, and buildings, to help understand the image content.
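As an example of the image services, here is a minimal Java sketch of on-device image classification. The class names (MLImageClassificationAnalyzer, MLImageClassification), the getLocalImageClassificationAnalyzer factory method, and the import paths mirror the usual ML Kit pattern and should be verified against the SDK reference.

```java
import android.graphics.Bitmap;
import android.util.Log;
import java.util.List;
// Import paths are assumptions; verify them against the ML Kit image classification docs.
import com.huawei.hms.mlsdk.MLAnalyzerFactory;
import com.huawei.hms.mlsdk.classification.MLImageClassification;
import com.huawei.hms.mlsdk.classification.MLImageClassificationAnalyzer;
import com.huawei.hms.mlsdk.common.MLFrame;

public class ClassifySample {

    // Classifies the content of a bitmap on device and logs the detected categories.
    public static void classify(Bitmap bitmap) {
        MLImageClassificationAnalyzer analyzer =
                MLAnalyzerFactory.getInstance().getLocalImageClassificationAnalyzer();
        MLFrame frame = MLFrame.fromBitmap(bitmap);

        analyzer.asyncAnalyseFrame(frame)
                .addOnSuccessListener((List<MLImageClassification> items) -> {
                    for (MLImageClassification item : items) {
                        Log.d("ClassifySample", "Category: " + item.getName());
                    }
                })
                .addOnFailureListener(e ->
                        Log.e("ClassifySample", "Classification failed", e));
    }
}
```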
Face/Body-related Services
1. Face Detection, can detect the shapes and features of your user's face, including their facial expression, age, gender, and accessories they are wearing (a call sketch follows this list). You can use the service to develop apps that dynamically beautify users' faces during video calls.
2. Skeleton Detection, detects and locates key points of the human body, such as the top of the head, neck, shoulder, elbow, wrist, hip, knee, and ankle. For example, when taking a photo, the user can pose a posture similar to a preset one.
3. Liveness Detection, can detect whether a user in a service scenario is a real person. This service is useful in various scenarios.
4. Hand Keypoint Detection, can detect 21 hand keypoints (including fingertips, knuckles, and wrists) and return positions of the keypoints. Currently, static image detection and real-time video stream detection are supported.
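For the face/body services, the entry point is again an analyzer obtained from a factory. The following Java sketch shows basic face detection on a still image; the class names (MLFaceAnalyzer, MLFace) and import paths follow the usual ML Kit convention and should be confirmed in the SDK documentation.

```java
import android.graphics.Bitmap;
import android.util.Log;
import java.util.List;
// Import paths are assumptions; verify them against the ML Kit face detection docs.
import com.huawei.hms.mlsdk.MLAnalyzerFactory;
import com.huawei.hms.mlsdk.common.MLFrame;
import com.huawei.hms.mlsdk.face.MLFace;
import com.huawei.hms.mlsdk.face.MLFaceAnalyzer;

public class FaceSample {

    // Detects faces in a bitmap and logs how many were found.
    public static void detectFaces(Bitmap bitmap) {
        MLFaceAnalyzer analyzer = MLAnalyzerFactory.getInstance().getFaceAnalyzer();
        MLFrame frame = MLFrame.fromBitmap(bitmap);

        analyzer.asyncAnalyseFrame(frame)
                .addOnSuccessListener((List<MLFace> faces) ->
                        Log.d("FaceSample", "Detected " + faces.size() + " face(s)"))
                .addOnFailureListener(e ->
                        Log.e("FaceSample", "Face detection failed", e));
    }
}
```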
Conclusion
In addition to ML Kit, HMS also provides Awareness Kit, which gives your app the ability to obtain contextual information including users' current time, location, behavior, audio device status, ambient light, weather, and nearby beacons; Scan Kit, which scans and parses all major 1D and 2D barcodes and generates QR codes, helping you quickly build barcode scanning functions into your apps; and Nearby Service, which allows apps to easily discover nearby devices and set up communication with them using technologies such as Bluetooth and Wi-Fi, through its Nearby Connection and Nearby Message APIs.
A series of Codelabs challenge competitions in the Codelabs exhibition area at the HUAWEI DEVELOPER CONFERENCE 2020 (Together), which took place from September 10 to 12 at Songshan Lake in Dongguan, proved to be a hit among developers in attendance. Nearly 1,000 developers participated in the Codelabs activities and came away highly impressed with the programming experience.
Developers in the Codelabs exhibition area
Social Coding: Difficult, but Fun and Rewarding
Huawei Codelabs serve as training camps for developers who hope to hone their coding skills. This year's sessions drew hundreds of developers, from novices to veteran coders, providing them with a unique opportunity to experience the full range of open capabilities offered by Huawei Mobile Services (HMS), with technical experts from Huawei on hand to offer assistance.
In the exhibition area, developers were presented with three separate competition types (related to AI, HMS Core basic capabilities, and HarmonyOS respectively), each of which challenged and engaged them in distinct ways. During the competitions, on-site staff and technical experts provided participants with professional guidance for a hands-on coding experience, helping them quickly grasp how to access the capabilities opened up to devices.
Huawei staff in the Codelabs exhibition area
The competition testing the HMS Core basic capabilities was open to junior and intermediate developers, and ultimately attracted the largest crowd. The security capability, one of the key capabilities provided by HMS Core, has seen some of its kits integrated into a range of apps that have proven indispensable in daily life, such as those related to financing, news and reading, and online shopping.
Safety Detect Cracks the Top 10 Most Attractive Open Capability List
Safety Detect, one of the main open security capabilities offered by HMS Core, detects five common security issues, which are: threats to system integrity, fake users, app security, malicious URLs, and malicious Wi-Fi networks.
System integrity detection determines whether the device environment is secure; fake user detection judges whether the current interactive user of the app is genuine; app security detection enables developers to obtain a comprehensive list of malicious apps; malicious URL detection clarifies the threat type corresponding to specific URLs; and malicious Wi-Fi detection checks the security of the Wi-Fi network that the device is connected to.
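As an example of how one of these checks is invoked, the Java sketch below requests a system integrity verdict. It assumes you already have an app ID from AppGallery Connect and a per-request nonce; the SafetyDetect.getClient() and sysIntegrity() calls follow the Safety Detect API's general shape and should be verified against the kit's documentation, and the returned JWS string is meant to be validated on your own server.

```java
import android.app.Activity;
import android.util.Log;
import java.nio.charset.StandardCharsets;
// Import paths are assumptions; check the Safety Detect documentation.
import com.huawei.hms.support.api.safetydetect.SafetyDetect;
import com.huawei.hms.support.api.safetydetect.SafetyDetectClient;

public class SysIntegritySample {

    // Requests a system integrity verdict for the current device.
    // appId comes from AppGallery Connect; the nonce should be unique per request,
    // ideally generated on your server and checked again when the JWS is verified there.
    public static void check(Activity activity, String appId, String nonceSeed) {
        SafetyDetectClient client = SafetyDetect.getClient(activity);
        byte[] nonce = nonceSeed.getBytes(StandardCharsets.UTF_8);

        client.sysIntegrity(nonce, appId)
                .addOnSuccessListener(response -> {
                    // JWS-signed verdict; send it to your server for signature verification.
                    String jws = response.getResult();
                    Log.d("SysIntegritySample", "Verdict JWS: " + jws);
                })
                .addOnFailureListener(e ->
                        Log.e("SysIntegritySample", "sysIntegrity call failed", e));
    }
}
```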
Developers integrating Safety Detect in the Codelabs exhibition area
According to a post-event survey, a significant number of participants had already integrated HMS kits, and Safety Detect ranked among the top 10 most integrated kits. It also made the top 10 kits that developers expressed the greatest willingness to integrate into their apps.
HMS Core security services exhibition area
Following the conclusion of the competitions in the Codelabs area, developers moved to the adjacent HMS Core security services exhibition area, where security service vendors and financial payment app vendors spoke with Huawei technical experts. Interactive demonstrations, such as immersive one-minute animations, provided attendees with a firsthand look at the HMS security ecosystem. The informal, face-to-face format of the event also gave them a chance to exchange ideas about app security technologies.
HMS Core security capabilities showcase in the security services exhibition area
During the event, an exhibitor opened foreign apps installed on the test phones to offer a direct side-by-side comparison of the effects of Huawei security services on secure and insecure devices, showcasing the robust safeguards in place. Enterprise developers were among those who came away impressed, noting that they planned to work more closely with Huawei in the near future to bolster the security of their apps released outside the Chinese mainland.
For more details, you can go to:
Official website
Development Documentation page, to find the documents you need
Reddit to join developer discussion
GitHub to download demos and sample codes
Stack Overflow to solve any integration problems
At the Security and Privacy Session of the HUAWEI Developer Conference, held at the Songshan Lake Research Center on September 11, Liu Deqian, a security technology expert from the Huawei Consumer Cloud Service Dept, provided a detailed overview of the security architecture and data safeguards in HMS. Mr. Liu detailed the security mechanisms in HMS Core that apply during HMS Core integration and every subsequent step, as well as the security and data protection technology built into HMS Core open capabilities.
HMS Core — Core Mobile Services for Global Developers
HMS Core further opens up standout chip-device-cloud capabilities in 1+8+N scenarios for global developers, making app development easier, more cost-effective and efficient. With these capabilities, developers can focus their attention on pursuing innovation and providing first-rate app content, services and experiences for users. In Mr. Liu's view, HMS Core serves a key role in connecting apps with the operating system, helping developers attract more users, improve user activity, and achieve unprecedented success on the market.
How Does HMS Core Enhance Privacy and Security While Opening Up Capabilities? — Five Security Technology Pillars
As Mr. Liu noted, only three steps are required to integrate HMS Core open capabilities: registering with HUAWEI Developers; applying for integration and obtaining authentication credentials; and finally, integrating the HMS SDK (software development kit). Following that, HMS Core implements access authentication.
Security technology pillars:
Ø Authentication: authenticates users, accesses, and devices
Ø Data security and privacy protection: bolsters the security of data storage, data usage, data transmission, keys, and privacy safeguards
Ø Content protection: protects copyrighted content, generates digital watermarks, and provides anti-leeching functions
Ø App security: ensures app security with four-layer detection prior to app release, download and installation protections, and runtime defense mechanisms
Ø Business risk control: provides risk controls for accounts, transactions, and content, and prevents fraud in advertisements
These five technologies help defend the privacy and security of apps accessing HMS Core, and continue to do so following app release, or as Mr. Liu explained it: "They are the most powerful lines of defense safeguarding your privacy and security".
Access Authentication
Mr. Liu further noted that the access authentication process is secured by authentication credentials, access constraints, and permission controls. Before an app can access HMS Core's open capabilities, you need to create authentication credentials on HUAWEI Developers. Currently, the credentials supported by HMS Core include the API key, OAuth 2.0 client ID, and service account key. These credentials are generated with secure random numbers and encrypted with the AES-GCM algorithm before being stored on the server, limiting the impact of any credential leakage.
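The pattern described here, a randomly generated secret encrypted at rest with AES-GCM, can be illustrated with standard Java cryptography APIs. The sketch below is only a generic illustration of that pattern, not Huawei's actual server-side implementation.

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class CredentialAtRest {

    // Encrypts a credential string with AES-256-GCM before it is persisted.
    // Returns the IV concatenated with the ciphertext; in practice the key would
    // live in a KMS or hardware-backed keystore rather than being generated here.
    public static byte[] seal(String credential) throws Exception {
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        SecretKey key = keyGen.generateKey();

        byte[] iv = new byte[12];                      // 96-bit nonce, unique per encryption
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(credential.getBytes(StandardCharsets.UTF_8));

        byte[] sealed = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, sealed, 0, iv.length);
        System.arraycopy(ciphertext, 0, sealed, iv.length, ciphertext.length);
        return sealed;
    }
}
```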
In addition to protective measures on the developer side, Liu explained that app security is also heightened by AppGallery with a unique four-layer detection mechanism prior to app release, as well as protections during app downloads and installations, and safeguards when the app is running.
Data Protection for HMS Core Open Capabilities
With Account Kit integrated, a user can sign in to your app in an optimally convenient manner. During the sign-in process, the user's account data is secured by FIDO. Furthermore, potential risks are automatically detected when the user signs in to the app or resets the password, to prevent the occurrence of account theft. Risk identification methods, such as those used for identifying abnormal operations, are used in conjunction with expert rules and machine learning to identify fake accounts and prevent malicious registrations. The risk control platform of Huawei device cloud thus has proven capable of identifying risks quickly and accurately, to protect users' legitimate rights and interests more effectively.
Mr. Liu also explained how IAP provides secure and convenient payment services for apps, offering two prominent examples: payment with faces or fingerprints (powered by PKI, public key infrastructure), and liveness detection. With PKI, the security of online payment is further enhanced. The keys and certificates are properly stored, to build a secure runtime environment for apps.
Another example is Push Kit. During the push process, each app on a device is provided with a unique token, which is used to authenticate the app, and push messages are encrypted and cached, with sensitive information contained in the message reviewed automatically. This solution ensures that cross-platform messaging is more reliable and precise. As Mr. Liu noted, for enhanced transmission security, push messages are encrypted using session keys, and secured with integrity safeguards.
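On the client side, the per-app token mentioned above is requested from the Push Kit SDK. The Java sketch below shows the usual pattern; the HmsInstanceId.getToken() call, the "HCM" token scope, and the import path follow Push Kit's typical flow but should be double-checked against the current documentation, and the app ID parameter is only a placeholder.

```java
import android.content.Context;
import android.util.Log;
// Import path is an assumption; check the Push Kit documentation.
import com.huawei.hms.aaid.HmsInstanceId;

public class PushTokenSample {

    // Requests the app's push token. getToken() performs network I/O,
    // so it must be called off the main thread.
    public static void fetchToken(Context context, String appId) {
        new Thread(() -> {
            try {
                String token = HmsInstanceId.getInstance(context).getToken(appId, "HCM");
                Log.d("PushTokenSample", "Push token: " + token);
                // Send the token to your app server so it can address this device.
            } catch (Exception e) {
                Log.e("PushTokenSample", "Failed to get push token", e);
            }
        }).start();
    }
}
```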
Detail-Oriented Approach: End-to-End Security and Privacy Protections
Security is prioritized above all else throughout the whole process, from requirement analysis and design to code development and testing, in a highly detail-oriented approach. For instance, Huawei provides secure coding practices through HMS project development guides and sandboxes covering 43 device-cloud security vulnerabilities, and supports security testing with the Huawei Consumer Cloud Service Dept and more than 40 test engineers from the Huawei Internal Cyber Security Lab (ICSL).
SilverNeedle Lab: On the Move
Toward the end of the session, Liu introduced the guardian plan for HMS Core. Faced with attacks from external blue teams, HMS Core established its own blue team, SilverNeedle Lab. Convened by the lab, 60 frontline security researchers with abundant experience in attack defense and penetration testing gathered at the Songshan Lake Research Center recently to discuss such hot topics as penetration tools, biometric authentication, OAuth2, and security attack defenses offered by HMS.
In addition to building an independent blue team, HMS Core has also worked closely with industry-leading security testing companies, such as NCC Group. The first phase of Huawei's security test project, security crowdtesting, was recently completed. Huawei invited 13 prominent companies, including Tencent, 360, Chaitin Tech, and DBAPP Security, as well as more than 60 white hat hackers (from countries and regions including the Chinese mainland, Europe, Singapore, and Russia) recruited through bug bounty programs, to conduct security penetration tests on HMS Core and HMS apps.
The second phase, dedicated testing for Europe, was conducted from January to August, 2020. During this process, Huawei invited NCC Group, an expert security service provider in Europe, to implement continuous online penetration tests for 24 kits, 11 of which had been tested in the first phase.
The third phase, HMS Security Competition, is still under planning. For the competition, Huawei plans to invite global developers and security researchers to conduct penetration tests on HMS products, with lucrative bonuses worth millions of RMB in total awarded to those attendees who identify high-value vulnerabilities. The maximum reward for a single vulnerability is set at a staggering RMB 420,000.
Huawei has an unrelenting resolve to pursue more effective security measures, and will spare no effort to protect the HMS Core ecosystem against any and all challenges presented by the global market.
For more details, you can go to:
Official website
Development Documentation page, to find the documents you need
Reddit to join our developer discussion
GitHub to download demos and sample codes
Stack Overflow to solve any integration problems
Capabilities of Huawei’s enhanced artificial intelligence system — Huawei HiAI
In today’s world, technology is evolving into smarter systems that ease our lives, and mobile technologies are a growing part of these systems. As one of the biggest players in the industry, Huawei offers developers an open development platform to create smart apps quickly, easily use the powerful AI capabilities on Huawei devices, and deliver a next-level smart app user experience.
Huawei has been developing its machine learning and artificial intelligence services since July 2019, and they have grown increasingly powerful. Huawei HiAI is an open AI capability platform for smart devices. It offers a three-layer AI ecosystem that provides capabilities at the chip, device, and cloud levels. These three layers are:
Huawei HiAI Foundation
Huawei HiAI Engine
Huawei HiAI Service
Each of them has a different purpose and structure: HiAI Foundation works at the chip level, HiAI Engine at the device level, and HiAI Service at the cloud level.
HiAI Foundation
HiAI Foundation works at the chip level and serves as a basic platform that provides a high-performance computing environment, supports a wide variety of scenarios, and achieves low power consumption through optimization. Its capabilities include:
a dedicated set of AI instructions for neural network model operations;
offline compilation of a wide range of neural network operators, including convolution, pooling, activation, and fully connected layers, into dedicated AI instruction sequences for the NPU, with data and weights rearranged for optimal performance. The instructions and data are then integrated to generate an offline execution model. During offline compilation, cross-layer fusion of operators (convolution, ReLU, and pooling) also reduces DDR read/write bandwidth and thus improves performance;
rearrangement of relevant data (batch, channel, height, and width) in the neural network model in an optimally efficient manner;
identification of the computing capability of the running environment, with adaptive subgraph splitting and device-collaborative scheduling for the neural network;
automatic optimization from pre-trained models to device-side inference models. Lightweight models are oriented toward widely varying application scenarios; they provide a broad range of algorithms and automatically produce smaller and faster models via calibration or retraining, to meet stringent precision requirements.
HiAI Engine
HUAWEI HiAI Engine provides apps with a diversity of AI capabilities using device capabilities. These capabilities are as follows:
Computer Vision (CV) Engine
The Computer Vision Engine focuses on sensing the ambient environment to determine, recognize, and understand space. Its capabilities are:
image recognition,
facial recognition,
text recognition.
Automatic Speech Recognition (ASR) Engine
Automatic Speech Recognition Engine converts human voice into text to facilitate speech recognition.
Natural Language Understanding (NLU) Engine
Natural Language Understanding Engine works with the ASR engine to enable apps to understand human voice or text to achieve word segmentation and text entity recognition.
HiAI Service
Huawei HiAI Service is a cloud platform designed for developers to enhance their projects through HUAWEI Ability Gallery. Abilities are integrated to provide services like Smart Service, Instant Access, AI Voice, and AI Lens. It is Huawei’s unified platform for ability integration and distribution.
To wrap it up…
Huawei’s AI is a strong development system that gives developers a great number of capabilities to integrate into their apps. Using such easy-to-implement AI features is advisable, as they greatly enhance the user experience. In an evolving technology environment, our apps should adapt as well.
References
HiAI - HiAI IDE - HUAWEI Developer
HiAI is a mobile AI open platform and a three-layer AI ecosystem: service open platform, application open platform…
developer.huawei.com
Event tracking is crucial, as it is a prerequisite for effective data analysis and pursuing precise data-driven operations. From tracking design to coding, verification, and management, event tracking involves various roles and encompasses a number of complex steps like ensuring the design of the event tracking system is reasonable, the logic for triggering event tracking is clear, and the data type and naming of fields meet the required standards.
All of these steps have an enormous impact on data quality, analysis accuracy, and decision-making. A bug in any one of them is difficult to locate and fix.
What Is Intelligent Event Tracking?
Intelligent data access is available in HUAWEI Analytics Kit 5.3.1. This function was developed to enhance data quality and facilitate event tracking. With SDK integration verification, industry-specific templates, and tracking management, this one-stop solution helps data achieve its potential and helps our partners realize digital and intelligent transformation, by reducing the workload on their technical teams and boosting both the quality of data collection and the efficiency of event tracking.
What Can Intelligent Event Tracking Do?
SDK integration verification: After the Analytics SDK is integrated, you can view the initialization result in real time.
E2E management: Intelligent data access is capable of intelligently recommending data collection schemes and visual event tracking, helping you manage the event tracking process from start to finish.
Preset industry-specific templates: Intelligent data access leverages extensive industry experience to offer templates that consist of abundant events and sample code for greater efficiency.
Intelligent configuration and verification: Anomalies can be detected, ensuring a high level of accuracy throughout the entire event tracking configuration process.
Easy management: Event tracking has been made easier with one-click event registration and unregistration.
Intelligent data collection is used in conjunction with industry analysis. You can select an industry-specific template (templates for MMO and trading card games are available) under Intelligent data access. After configuring event tracking, you'll be able to view the relevant data in the industry analysis report.
How Is Intelligent Event Tracking Performed?
1. Configuring Event Tracking
Analytics Kit provides apps related to gaming and education with out-of-the-box templates and sample code.
Analytics Kit classifies tracking scenarios by industry and abstracts them into templates, which are available for trading card games, MMO games, and exam preparation apps.
* Trading card game template
* MMO game template
Analytics Kit supports performing event tracking either by coding or through visual event tracking.
Tracking by coding can be implemented by copying the sample code, downloading the report, or using the tool provided by Analytics Kit. Tracking by coding is more stable and supports the collection and reporting of complex data, while visual event tracking comes with lower costs and technical requirements, and allows events to be added or modified after the app is released (a coding-based sketch follows these paragraphs).
To use visual event tracking, you need to integrate Dynamic Tag Manager (DTM) first. You can then synchronize the app screen to a web-based UI and click relevant components to add events or event parameters.
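For the coding approach, reporting an event from the app typically boils down to one call on the Analytics instance. The Java sketch below reports a hypothetical purchase event with two parameters; HiAnalytics.getInstance(), onEvent(String, Bundle), and the import paths follow the Analytics Kit API's usual shape, and the event and parameter names here are illustrative rather than taken from a template.

```java
import android.content.Context;
import android.os.Bundle;
// Import paths are assumptions; check the Analytics Kit documentation.
import com.huawei.hms.analytics.HiAnalytics;
import com.huawei.hms.analytics.HiAnalyticsInstance;

public class TrackingSample {

    // Reports a custom event with parameters; Analytics Kit batches and uploads it.
    public static void reportCardPackPurchase(Context context) {
        HiAnalyticsInstance analytics = HiAnalytics.getInstance(context);

        Bundle params = new Bundle();
        params.putString("item_id", "card_pack_01");  // illustrative parameter
        params.putDouble("price", 6.0);               // illustrative parameter

        analytics.onEvent("purchase_card_pack", params);
    }
}
```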
2. Verifying the Tracking Configuration
You can use the verification function to identify incorrect and incomplete configurations, as well as other exceptions in events and parameters in a timely manner after configuring event tracking for a specific template. With this function, you can configure event tracking more accurately and mitigate risks for your business.
3. Managing Event Tracking
The management page presents event verifications and registrations, the proportion of verified events relative to the upper limit, and the proportion of registered parameters relative to the upper limit. This information serves as a one-stop management view, giving you a clear understanding of the progress of event tracking and the structure of tracking configurations.
Analysis Report Intelligently Generated After Event Tracking Configuration
Analysis Report for Trading Card Games
Based on the characteristics of the trading card game, we tailored the event tracking system and analysis report for it.
To check the analysis report, under Intelligent data access, select a template for the trading card game and complete required configurations.
The report provides payment analysis, player analysis, virtual consumption, battle analysis, and card analysis. Together, they reveal a wide range of indicators, including numbers of players, churned users, and won-back users, real-time payment rate, ARPU, ARPPU, distribution of active users (by vendor, device model, location, channel, and role level), average usage duration, virtual coin consumption, battles, and card drawings. With all these, the analysis report allows you to dig deeper into the behavior of your users and identify ways to improve your product and revenue.
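Two of the indicators mentioned here are simple ratios worth spelling out; the definitions below are the commonly used ones rather than anything specific to Analytics Kit:

```latex
\mathrm{ARPU} = \frac{\text{revenue in the period}}{\text{active users in the period}},
\qquad
\mathrm{ARPPU} = \frac{\text{revenue in the period}}{\text{paying users in the period}}
```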
Analysis Report for MMO Games
Analytics Kit also offers an analysis report for MMO games. It sheds light on user behavior through data related to payments, players, virtual consumption, battles, the guild system, the life simulation system, and dungeons. With such data, you can develop informed operations strategies and product optimization plans to improve users' gaming experience, attract more users, and boost revenue.
With its user-centric approach, Analytics Kit will continue to explore new methods for extracting the most value from data, and empowering enterprises with new capabilities. To learn more, click here to get the free trial for the demo, or visit our official website to access the development documents for Android, iOS, Web, and Quick App.