What is Huawei HiAI? - Huawei Developers

Capabilities of Huawei’s enhanced artificial intelligence system — Huawei HiAI
In today's world, technology is evolving into smarter systems that ease our lives, and mobile technologies are part of these systems as well. As one of the biggest players in the industry, Huawei offers developers an open development platform to create smart apps quickly, easily use the powerful AI capabilities on Huawei devices, and deliver a next-level smart app user experience.
Huawei has been developing its machine learning and artificial intelligence services since July 2019, and they have grown into one of its most capable offerings. Huawei HiAI is an open AI capability platform for smart devices. It offers a three-layer AI ecosystem that provides capabilities at the chip, device, and cloud levels. These three layers are:
Huawei HiAI Foundation
Huawei HiAI Engine
Huawei HiAI Service
Each of them has a different purpose and structure: HiAI Foundation works at the chip level, HiAI Engine at the device level, and HiAI Service at the cloud level.
HiAI Foundation
HiAI Foundation works at the chip level and serves as a basic platform that provides a high-performance computing environment, supports a wide variety of scenarios, and achieves low power consumption through optimization. Its capabilities include:
a dedicated set of AI instructions for neural network model operations,
compiling a wide range of neural network operators, including convolution, pooling, activation, and full connection, into dedicated AI instruction sequences for the NPU offline, with data and weights rearranged to ensure optimal performance. The instructions and data are then integrated to generate an offline execution model. Furthermore, during offline compilation, cross-layer fusion of operators (convolution, ReLU, and pooling) reduces the DDR read/write bandwidth and thus improves performance,
rearranging the relevant data (batch, channel, height, and width) in the neural network model in the most efficient manner,
identifying the computing capability of the running environment and performing adaptive subgraph splitting and device-collaborative scheduling for the neural network,
automatic optimization from pre-trained models to device-side inference models. Lightweight models are oriented toward widely varying application scenarios: they provide a broad range of algorithms and automatically produce smaller, faster models via calibration or retraining to meet stringent requirements for high-precision solutions.
HiAI Engine
HUAWEI HiAI Engine provides apps with a diverse range of AI capabilities at the device level. These capabilities are as follows:
Computer Vision (CV) Engine
The Computer Vision Engine focuses on sensing the ambient environment to determine, recognize, and understand the space. Its capabilities include:
image recognition,
facial recognition,
text recognition.
Automatic Speech Recognition (ASR) Engine
The Automatic Speech Recognition Engine converts human speech into text so that apps can process what users say.
Natural Language Understanding (NLU) Engine
The Natural Language Understanding Engine works with the ASR Engine to enable apps to understand human speech or text, performing tasks such as word segmentation and text entity recognition.
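As a rough illustration of how an app calls one of these device-side engines (here, image label detection from the CV Engine), the Kotlin sketch below binds to the HiAI Engine service and runs a detector on a bitmap. The class and package names follow the HiAI Engine vision SDK documentation at the time these capabilities were published; treat the exact names and signatures as assumptions and verify them against the current DDK.

```kotlin
import android.content.Context
import android.graphics.Bitmap
import com.huawei.hiai.vision.common.ConnectionCallback
import com.huawei.hiai.vision.common.VisionBase
import com.huawei.hiai.vision.image.detector.LabelDetector
import com.huawei.hiai.vision.visionkit.common.Frame

fun detectLabels(context: Context, bitmap: Bitmap) {
    // Bind to the on-device HiAI Engine service before using any detector.
    VisionBase.init(context, object : ConnectionCallback {
        override fun onServiceConnect() {
            val detector = LabelDetector(context)
            val frame = Frame()
            frame.setBitmap(bitmap)
            // Synchronous detection; run this off the main thread in a real app.
            val json = detector.detect(frame, null)
            val label = detector.convertResult(json)
            // label now carries the recognized categories and their confidence values.
        }

        override fun onServiceDisconnect() {
            // The HiAI Engine service was disconnected; release resources if needed.
        }
    })
}
```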
HiAI Service
Huawei HiAI Service is a cloud platform designed for developers to enhance their projects through HUAWEI Ability Gallery. Abilities are integrated to provide services like Smart Service, Instant Access, AI Voice, and AI Lens. It is Huawei’s unified platform for ability integration and distribution.
To wrap it up…
Huawei's AI stack is a strong development system that gives developers a wide range of abilities to integrate into their apps. Using such easy-to-implement AI features is also advisable, as they greatly enhance the user experience. As technology evolves, our apps should adapt as well.
References
HiAI - HiAI IDE - HUAWEI Developer
HiAI is a mobile AI open platform and a three-layer AI ecosystem: Service open platform, Application open platform…
developer.huawei.com

AI is very interesting and it can be useful in many applications.

Artificial intelligence allows us to do many things.

Related

Huawei HMS Core Partners with Cocos to Help Developers Create Exciting Games

For more information like this, you can visit the HUAWEI Developer Forum.
On July 23, 2020, Huawei officially announced a cooperative partnership with Cocos, a leading game engine company. The latest version of Cocos Creator has integrated many of HMS Core's open capabilities, including Account Kit, In-App Purchases, Ads Kit, and Game Service. HMS Core will provide even more services in the future, including Computer Graphics Kit (CG Kit), Push Kit, Analytics Kit, and Location Kit.
Open capabilities which can be quickly integrated
HMS Core provides a wide array of open device and cloud capabilities which bring efficient development, fast growth, and flexible monetization to developers all over the world. It gives game creators the freedom to innovate, create next-level user experiences, and provide premium content and services to a wide audience. Developers can quickly integrate HMS Core's open capabilities when developing games using Cocos Creator, and then release these games onto HUAWEI AppGallery.
Next-generation image quality
By deeply integrating the HUAWEI CG Kit, Cocos can utilize the power and efficiency of Vulkan's ultimate rendering capabilities, which are based on underlying algorithms. This collaboration both improves the Cocos engine's 3D game rendering capabilities on mobile terminals, and helps developers produce great games with crystal clear frames on Cocos platforms.
Developers can utilize the lifelike materials and lighting effects delivered by the optimized CG Kit to craft true-to-life games with incredible ease.
CG Kit is a new open capability released with HMS Core 5.0. It provides Android mobile clients with a Vulkan-based rendering framework, rendering plug-ins, and extensions.
CG Kit features:
A high-performance rendering framework with atom-level capabilities, covering materials, models, lighting, and post effects, which helps developers apply Vulkan more easily when developing games.
Rendering plug-ins that leverage the anti-aliasing algorithm and multi-thread rendering components.
Extensions, including Smart Cache and Pre-rotation, which improve graphics rendering performance.
In the future, the kit will bring developers more cutting-edge AI-powered rendering technologies on mobile platforms, including super-resolution, automatic 3D face modeling, and automatic animation generation, opening up new revenue opportunities.
Building a collaborative game ecosystem
From the time Huawei announced it was fully opening the HMS ecosystem in August 2019 through to the end of Q1, 2020, the number of registered HMS developers reached 1.4 million globally, and the number of apps integrated into HMS Core jumped to 60,000, a 115% and 67% increase respectively. Access to many of the open services also greatly increased, indicating an early success in the construction of the game ecosystem.
With the release of HMS Core 5.0 on June 30, 2020, Huawei launched CG Kit for computer graphics, Scene Kit for graphics rendering, AR Engine for AR game development, and hQUIC Kit, which provides faster and more stable game downloads and updates. Developers can use the new HMS Core to design more innovative gaming experiences.
The 2020 HDC will take place between September 10 and 12 in Dongguan, Guangdong. At the conference, Huawei will be demonstrating more HMS open capabilities, as well as sharing best practices and discussing development possibilities with developers.
For more information, please visit: https://developer.huawei.com/consumer/en/hms

Super Growth Logs of HMS Core

From 2019 to now, in little more than a year, Huawei Mobile Services (HMS) Core has helped millions of mobile developers offer high-quality, intelligent digital life experiences in nearly every scenario.
HMS Core provides global developers with chip-device-cloud software and hardware capabilities across 7 key fields: App Services, Media, AI, Graphics, System, Smart Device, and Security. These capabilities contribute to building a technically competitive HMS ecosystem, enable app innovation, boost development efficiency, and provide smart services that meet a wide range of user needs.
Superpower in AI: A World Without Barriers
As AI technologies continue to develop at an unprecedented pace, smart capabilities are becoming a must-have element in a developer's arsenal. Therefore, Huawei intends to open up all device-cloud capabilities to developers, including Machine Learning Kit (ML Kit), which provides services such as translation, text recognition, speech recognition, and face/body detection.
ML Kit of HMS Core helps developers easily build AI-powered apps. For apps integrated with ML Kit, the speed of face/person detection increases by 70% and the accuracy of text recognition reaches 99%, delivering a smarter and more convenient experience for the user.
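To give a sense of how little code a capability like on-device text recognition needs, here is a minimal Kotlin sketch using the ML Kit text analyzer; the class names come from the HMS ML Kit SDK, but check the method names and constants against the version you integrate.

```kotlin
import android.graphics.Bitmap
import com.huawei.hms.mlsdk.MLAnalyzerFactory
import com.huawei.hms.mlsdk.common.MLFrame
import com.huawei.hms.mlsdk.text.MLLocalTextSetting

// Runs on-device text recognition on a bitmap and passes the recognized string to onResult.
fun recognizeText(bitmap: Bitmap, onResult: (String) -> Unit) {
    val setting = MLLocalTextSetting.Factory()
        .setOCRMode(MLLocalTextSetting.OCR_DETECT_MODE)
        .setLanguage("en")
        .create()
    val analyzer = MLAnalyzerFactory.getInstance().getLocalTextAnalyzer(setting)

    analyzer.asyncAnalyseFrame(MLFrame.fromBitmap(bitmap))
        .addOnSuccessListener { text -> onResult(text.stringValue) }
        .addOnFailureListener { e -> e.printStackTrace() }
}
```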
Superpower in Graphics: A More Vivid Graphics Experience
In the graphics domain, Computer Graphics Kit of HMS Core provides a high-performance rendering framework and a series of rendering plug-ins. It is an open capability system based on GPU technologies. After integrating Computer Graphics Kit, game apps can improve their average frame rate by 13–16% and reduce their average power consumption by 11–13%, offering users a better overall experience.
Superpower in App Services: Allowing Developers to Focus on Innovation
Turning to the App Services domain, the Map and Site Kits of HMS Core are dedicated to providing personalized map display and interaction, as well as improving the LBS experience. They provide multiple route-planning options, drawing on more than 56.8 million kilometers of navigation information. Equipped with 180 million POIs and 10 million AOIs, HMS Core can also handle massive volumes of location searches. By covering more than 200 countries/regions and supporting more than 40 languages, the Map and Site Kits help developers enrich the practical functions of their apps.
No pain, no gain: the growth of HMS Core is quite encouraging. Today, the number of registered developers within the HMS ecosystem has increased to 1.8 million, and the number of apps integrated with HMS Core has increased to 96,000. HMS Core has also seen remarkable growth in integrations in overseas markets.
So which "superpower" of HMS Core would you like to pick up for your app? Please leave your comment below.
Being more inclined towards gaming, I find the Computer Graphics Kit of HMS Core the most fascinating.

Huawei Developers & SUSS School Series Talk Review

To learn more about student voices in university development communities and to improve the overall influence of HMS Core, the HMS Core ecosystem expansion support team and online operation team have, since November, worked with frontline colleagues in Singapore to identify activity themes that meet local development requirements and to plan a series of events. The first university event was held with Republic Polytechnic at the end of November and received a good response. On December 16, the second event in the series took place at the Singapore University of Social Sciences. It focused on AI & Media, attracting over 100 students from the university.
The activity consisted of three parts, covering the full layout of the Huawei ecosystem, an overview of Huawei AppGallery, and HMS Core capabilities. At the beginning, Jimmy Kang from Huawei Singapore presented the Huawei all-scenario intelligent ecosystem, driven by mutually reinforcing software and hardware systems, from the perspective of global mobile application development.
Then, Huawei Singapore DTSE Godwin Wong introduced the principles and logic of mobile app development from the perspective of Android fundamentals, and briefly demonstrated the development process. Finally, the core capabilities of HMS Core across 7 vertical domains (50+ kits) were introduced, helping students learn more about HMS Core.
Nando Li & Hannah Xu, the product manager of HMS Core at Huawei, delivered the final session. The HMS Core Analytics Kit, HMS Core ML Kit, and HMS Core solutions in Media & Entertainment fields are mentioned. Firstly, “Tata Empire” ,one of HUAWEI developer partner Improves Product Experience and Achieves the User Growth Target Through Funnel Analysis.
This case showed that HMS Core Analytics Kit can enrich the indicator system with intelligent data, customize overviews, and provide industry reports to support lean operations. Then, the integration story of Huawei's partner developer "Banggood" demonstrated that HMS Core ML Kit supports developers in different fields with text, image, body, and custom-model capabilities, delivering wide device coverage and global coverage along with core values such as data security and customizable training models, to bring AI to their apps.
In addition, based on its media capabilities, HMS Core can provide the following functions: make audio/video splendid, make playout fluent, make content interactive, and make monetization efficient. For example, Video Kit provides fluent and adaptive video playback capabilities; Wireless Kit provides an optimal viewing experience for users; ML Kit helps implement multi-language real-time translation and interaction in live-streaming rooms; and AR Engine enables multiple interactions during photo shooting through facial recognition and automatically generates 3D stickers that closely match facial expressions.
As part of a series of cooperative events with universities in Singapore, this event focused on the popular AI and media fields and gave more students a comprehensive understanding of HMS Core capabilities. More seminars and integration activities will be held in the future.
For more details, you can check https://forums.developer.huawei.com/forumPortal/en/topic/0203440219037710014

Are you wearing a face mask? Let's detect it using the HUAWEI Face Detection ML Kit and the AI engine MindSpore

Article Introduction
In this article, we will show how to integrate Huawei ML Kit (Face Detection) and the powerful AI engine MindSpore Lite in an Android application to detect in real time whether users are wearing masks or not. Due to COVID-19, face masks are mandatory in many parts of the world. Considering this fact, the use case has been created with an option to remind users with audio prompts.
Huawei ML Kit (Face Detection)
The Huawei Face Detection service (offered by ML Kit) detects 2D and 3D face contours. The 2D face detection capability can detect features of your user's face, including their facial expression, age, gender, and what they are wearing. The 3D face detection capability can obtain information such as the face keypoint coordinates, 3D projection matrix, and face angle. The face detection service supports static image detection, camera stream detection, and cross-frame face tracking. Multiple faces can be detected at a time.
MindSpore Lite
MindSpore Lite is an ultra-fast, intelligent, and simplified AI engine that enables intelligent applications in all scenarios, provides end-to-end solutions for users, and helps users enable AI capabilities.
For this article, we implemented image classification. The camera stream yields frames, which we process to detect faces using ML Kit (Face Detection). Once we have the faces, we run our trained MindSpore Lite model to determine whether each face is with or without a mask.
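The sketch below illustrates the first half of this pipeline in Kotlin: it runs ML Kit face detection on a frame and hands each cropped face to a hypothetical classifyWithMindSpore() helper (the MindSpore Lite inference itself is not shown). The ML Kit class names follow the HMS face detection SDK, but fields such as the face bounding box should be verified against your SDK version.

```kotlin
import android.graphics.Bitmap
import com.huawei.hms.mlsdk.MLAnalyzerFactory
import com.huawei.hms.mlsdk.common.MLFrame

// Hypothetical helper that runs the trained .ms model on a cropped face
// and returns "WithMask" or "WithoutMask".
fun classifyWithMindSpore(face: Bitmap): String = TODO("MindSpore Lite inference")

fun detectMasks(frameBitmap: Bitmap, onResult: (List<String>) -> Unit) {
    // Face analyzer with default settings; ML Kit also offers a Factory for custom settings.
    val analyzer = MLAnalyzerFactory.getInstance().faceAnalyzer

    analyzer.asyncAnalyseFrame(MLFrame.fromBitmap(frameBitmap))
        .addOnSuccessListener { faces ->
            val labels = faces.map { face ->
                // Clamp the detected bounding box to the frame before cropping.
                val box = face.border
                val left = box.left.coerceAtLeast(0)
                val top = box.top.coerceAtLeast(0)
                val width = box.width().coerceAtMost(frameBitmap.width - left)
                val height = box.height().coerceAtMost(frameBitmap.height - top)
                val cropped = Bitmap.createBitmap(frameBitmap, left, top, width, height)
                classifyWithMindSpore(cropped)
            }
            onResult(labels)
        }
        .addOnFailureListener { e -> e.printStackTrace() }
}
```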
Pre-Requisites
Before getting started, we need to train our model and generate a .ms file. For that, I used the HMS Toolkit plugin for Android Studio. If you are migrating from TensorFlow, you can convert your model from .tflite to .ms using the same plugin.
The dataset used for this article is from Kaggle (the link is provided in the references). It provides 5,000 images for each case. It also provides testing and validation images to evaluate the model after training.
Step 1: Importing the images
To start the training, please select HMS > Coding Assistance > AI > AI Create > Image Classification. Import both folders (WithMask and WithoutMask) in the Train Data description. Select the output folder and train parameters based on your requirements. You can read more about this in the official documentation (link is provided in the references).
Step 2: Creating the Model
When you are ready, click the Create Model button. The process will take some time, depending on your machine. You can check the progress of training and validation throughout the process.
Once the process is complete, you will see a summary of the training and validation.
Step 3: Testing the Model
It is always recommended to test your model before using it in practice. We used the test images provided in the dataset to complete the testing manually.
After testing, add the generated .ms file along with labels.txt to the assets folder of your project. You can also generate a demo project from the HMS Toolkit plugin.
Development
Since this is an on-device capability, we don't need to integrate HMS Core or import agconnect-services.json into our project. The major development steps are covered in the full article:
Read full article.
Conclusion
Building smart solutions with AI capabilities is much easier with HUAWEI Mobile Services (HMS) ML Kit and the AI engine MindSpore Lite. Depending on the situation, use cases can be developed for all industries, including but not limited to transportation, manufacturing, agriculture, and construction.
In this article, we used the ML Kit Face Detection capability and the AI engine MindSpore to develop a face mask detection feature. The on-device open capabilities of HMS gave us highly efficient and optimized results. Individual or multiple users without masks can be detected from a distance in real time. This can be used in public places, offices, malls, or at any entrance.
Tips & Tricks
Make sure to add all the permissions like WRITE_EXTERNAL_STORAGE, READ_EXTERNAL_STORAGE, CAMERA, ACCESS_NETWORK_STATE, ACCESS_WIFI_STATE.
Make sure to add aaptOptions in the app-level build.gradle file after adding the .ms and labels.txt files to the assets folder (see the snippet after this list). If you miss this, you might get a "Load model failed" error.
Always use animation libraries like Lottie to enhance UI/UX in your application. We also used OwlBottomSheet for the help bottom sheet.
The performance of the model is directly proportional to the number of training inputs: the higher the number of inputs, the higher the accuracy. In this article, we used 5,000 images for each case. You can add as many as possible to improve accuracy.
MindSpore Lite provides its output via a callback. Make sure to design your use case with this in mind.
If you have Tensorflow Lite Model file (.tflite), you can convert it to .ms using the HMS Toolkit plugin.
HMS Toolkit plugin is very powerful. It supports converting MindSpore Lite and HiAI models. MindSpore Lite supports TensorFlow Lite and Caffe and HiAI supports TensorFlow, Caffe, CoreML, PaddlePaddle, ONNX, MxNet and Keras.
If you want to use Tensorflow with HMS ML Kit, you can also implement that. I have created another demo where I put the processing engine as dynamic. You can check the link in the references section.
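For reference, here is a minimal sketch of the noCompress setting mentioned above, written for a Kotlin DSL build script (build.gradle.kts); the Groovy build.gradle form is analogous, and on newer Android Gradle Plugin versions the option may live under androidResources instead of aaptOptions.

```kotlin
// Module-level build.gradle.kts: keep the MindSpore model uncompressed in the APK,
// otherwise loading the .ms file from assets can fail with "Load model failed".
android {
    aaptOptions {
        noCompress("ms")
    }
}
```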
References
HUAWEI ML Kit (Face Detection) Official Documentation:
https://developer.huawei.com/consum...-Guides-V5/face-detection-0000001050038170-V5
HUAWEI HMS Toolkit AI Create Official Documentation:
https://developer.huawei.com/consumer/en/doc/development/Tools-Guides/ai-create-0000001055252424
HUAWEI Model Integration Official Documentation:
https://developer.huawei.com/consum...ols-Guides/model-integration-0000001054933838
MindSpore Lite Documentation:
Using MindSpore on Mobile and IoT — MindSpore Lite r1.1 documentation
MindSpore Lite Code Repo:
MindSpore/mindspore
MindSpore is a new open source deep learning training/inference framework that could be used for mobile, edge and cloud scenarios.
gitee.com
Kaggle Dataset Link:
Face Mask Detection ~12K Images Dataset
12K Images divided in training testing and validation directories.
www.kaggle.com
Lottie Android Documentation:
Lottie Docs
Lottie is a library for Android, iOS, Web, and Windows that parses Adobe After Effects animations exported as json with Bodymovin and renders them natively on mobile and on the web
airbnb.io
Tensorflow as a processor with HMS ML Kit:
https://github.com/yasirtahir/Huawe...icodelabs/fragments/mlkit/facemask/tensorflow
Github Code Link:
https://github.com/yasirtahir/DetectFaceMask
Read full article.
Nice and useful to know in the time of COVID-19.
How much accuracy does it provide?

HMS Core ML Kit's Capability Certified by CFCA

Facial recognition technology is being rapidly adopted in fields such as finance and healthcare, which has in turn raised issues involving cyber security and information leakage, along with growing user expectations for improved app stability and security.
HMS Core ML Kit strives to help professionals from various industries work more efficiently, while also helping them detect and handle potential risks in advance. To this end, ML Kit has been working on improving its liveness detection capability. Using a training set with abundant samples, this capability has obtained an improved defense feature against presentation attacks, a higher pass rate when the recognized face is of a real person, and an SDK with heightened security. Recently, the algorithm of this capability has become the first on-device, RGB image-based liveness detection algorithm that has passed the comprehensive security assessments of China Financial Certification Authority (CFCA).
CFCA is a national authority of security authentication and a critical national infrastructure of financial information security, which is approved by the People's Bank of China (PBOC) and State Information Security Administration. After passing the algorithm assessment and software security assessment of CFCA, ML Kit's liveness detection has obtained the enhanced level certification of facial recognition in financial payment, a level that is established by the PBOC.
The trial regulations governing the secure implementation of facial recognition technology in offline payment were published by the PBOC in January 2019. Such regulations impose higher requirements on the performance indicators of liveness detection, as described in the table below. To obtain the enhanced level certification, a liveness detection algorithm must have an FAR less than 0.1% and an FRR less than 1%.
Basic level: when LDAFAR is 1%, LPFRR is less than or equal to 1%.
Enhanced level: when LDAFAR is 0.1%, LPFRR is less than or equal to 1%.
Requirements on the performance indicators of a liveness detection algorithm
The liveness detection capability enables an app to perform facial recognition. Specifically, the capability requires a user to perform different actions, such as blinking, staring at the camera, opening their mouth, turning their head to the left or right, and nodding. The capability then uses technologies such as facial keypoint recognition and face tracking to compare two continuous frames and determine whether the user is a real person in real time. Such a capability effectively defends against common attack types like photo printing, video replay, face masks, and image recapture. This helps distinguish fraud and protect users.
Liveness detection from ML Kit can deliver a user-friendly interactive experience: during face detection, the capability provides prompts (indicating that the lighting is too dark, the face is blurred, a mask or pair of sunglasses is blocking the view, or the face is too close to or too far away from the camera) to help users complete face detection smoothly.
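As a rough illustration of how an app would invoke this capability, the Kotlin sketch below starts the interactive liveness detection flow from HMS Core ML Kit; the package and class names are taken from the ML Kit liveness detection SDK and should be verified against the current documentation.

```kotlin
import android.app.Activity
import com.huawei.hms.mlsdk.livenessdetection.MLLivenessCapture
import com.huawei.hms.mlsdk.livenessdetection.MLLivenessCaptureResult

fun startLivenessCheck(activity: Activity) {
    val callback = object : MLLivenessCapture.Callback {
        override fun onSuccess(result: MLLivenessCaptureResult) {
            // result.isLive indicates whether the detected face belongs to a real person;
            // result.score and result.bitmap can be passed on to a face verification step.
            if (result.isLive) { /* proceed with face comparison or sign-in */ }
        }

        override fun onFailure(errorCode: Int) {
            // Handle camera or permission errors, or a detection that did not complete.
        }
    }
    // Launches the capture UI, which guides the user with on-screen prompts.
    MLLivenessCapture.getInstance().startDetect(activity, callback)
}
```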
To strictly comply with the mentioned regulations, CFCA has come up with an extensive assessment system. The assessments that liveness detection has passed cover many items, including but not limited to data and communication security, interaction security, code and component security, software runtime security, and service function security.
The face samples used for assessing the capability are very diverse, originating from a range of source types, such as images, videos, masks, head phantoms, and real people. The samples also take into consideration factors like the collection device type, sample material, lighting, facial expression, and skin tone. The assessments cover more than 4,000 scenarios, which echo real ones in different fields, for example, remote registration for a financial service, hotel check-in, facial recognition-based access control, identity authentication on an e-commerce platform, live-streaming on a social media platform, and online examinations.
In over 50,000 tests, ML Kit's liveness detection presented its certified defense capability that delivers protection against different attack types, such as people with a face mask, a face picture whose keypoint parts (like the eyes and mouth) are hollowed out, a frame or frames containing a face extracted from an HD video, a silicone facial mask, a 3D head phantom, and an adversarial example. The capability can accurately recognize and quickly intercept all the presentation attacks, regardless of whether the form is 2D or 3D.
Successfully passing the CFCA assessments is proof that the capability meets the standards of a national authority and of its compliance with security regulations.
The capability has so far been widely adopted by Huawei's internal core services and by the services (account security, identity verification, financial risk control, and more) of its external customers in various fields, where liveness detection plays its role in ensuring user experience and information security in an all-round way.
Moving forward, ML Kit will remain committed to exploring cutting-edge AI technology that improves its liveness detection's security, pass rate, and usability, and to helping developers efficiently create tailored facial recognition apps.
Get more information at:
Home page of HMS Core ML Kit
Development Guide of HMS Core ML Kit
