A Review of the 3D Audio Creation Solution - Huawei Developers

Creating authentic sound experiences is becoming increasingly easy, thanks to technological developments such as monophonic reproduction, stereo, surround sound, and 3D audio. Of these technologies, 3D audio stands out for its remarkable ability to process audio waves so that they mimic real-life sounds, for a more immersive user experience.
Specifically, 3D audio (also known as spatial audio) improves the sense of immersion by simulating a surround-sound setup. It is sometimes confused with surround sound, but 3D audio goes further: whereas surround sound moves sounds around the listener within a single plane, 3D audio can also make sounds approach the listener from any direction in 3D space. 3D audio processing typically places sound sources at different positions in a virtual 3D space to produce natural-sounding audio.
3D audio is usually created manually, using raw audio tracks (such as the voice track and piano track), a digital audio workstation (DAW), and a 3D reverb plugin. This process is slow and costly, and has a steep learning curve. It can also be daunting for mobile app developers, because obtaining raw audio tracks is a challenge in itself.
Fortunately, Audio Editor Kit from HMS Core can resolve all of these issues through two capabilities that facilitate 3D audio generation: audio source separation for obtaining raw audio tracks and spatial audio for converting 2D audio to 3D audio.
Audio source separation and spatial audio
Next, I would like to show you the basics I have learned about these two capabilities and how they can help with creating 3D audio.
Introduction to Audio Source Separation
Most audio that we are exposed to is stereophonic. Stereo audio mixes all audio objects (such as voice, piano, and guitar) into two channels, making it difficult to separate the sounds, let alone reposition the objects in 3D space. This means that mixed audio objects must be separated before 2D-to-3D audio conversion can be performed.
The audio source separation capability is a valuable tool here, combining deep learning models trained on a colossal amount of music data with classic signal processing methods. The capability uses the short-time Fourier transform (STFT) to convert the 1D audio signal into a 2D spectrogram, and then feeds the 1D signal and the 2D spectrogram into the network as two separate input streams. Through multi-layer residual coding trained on a large amount of data, it obtains a latent-space representation of the specified audio object. Finally, a set of transformation matrices restores this latent-space representation to the stereo signal of that object.
The matrices and network structure in this process were developed specifically for the audio source separation capability and are designed according to the features of different audio sources. In this way, the capability ensures that each supported sound can be separated wholly and distinctly, providing high-quality raw audio tracks for creating 3D audio.
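The STFT step mentioned above can be sketched in a few lines of Python. This is a minimal, framework-free illustration rather than the kit's implementation; the frame length, hop size, and Hann window here are arbitrary illustrative choices:

```python
import cmath
import math

def stft(signal, frame_len=8, hop=4):
    """Split a 1D signal into overlapping frames and DFT each one,
    producing a 2D (time x frequency) magnitude spectrogram."""
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        # A Hann window reduces spectral leakage at the frame edges.
        windowed = [x * 0.5 * (1 - math.cos(2 * math.pi * n / (frame_len - 1)))
                    for n, x in enumerate(frame)]
        spectrum = []
        for k in range(frame_len // 2 + 1):  # keep non-negative frequencies only
            coeff = sum(x * cmath.exp(-2j * math.pi * k * n / frame_len)
                        for n, x in enumerate(windowed))
            spectrum.append(abs(coeff))
        frames.append(spectrum)
    return frames  # rows: time frames, columns: frequency bins

# A pure tone at 2 cycles per frame should concentrate energy in bin 2.
tone = [math.sin(2 * math.pi * 2 * n / 8) for n in range(32)]
spec = stft(tone)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
```

The resulting 2D spectrogram is what gets fed to the separation network alongside the raw 1D samples.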
Audio source separation utilizes a set of advanced technologies. Here I will name a few:
Audio feature extraction. Features are extracted directly from the time-domain signal by an encoder, and spectrogram features are extracted from the time-domain signal by using the STFT.
Deep learning modeling. Residual modules and attention mechanisms are introduced to enhance harmonic modeling and capture the time-sequence correlations of different audio sources.
Multistage Wiener filter (MWF). This technique combines traditional signal processing with deep learning modeling: the network predicts the power spectrum relationship between the target audio object and the non-target sources, and the MWF builds the filter coefficients from that prediction.
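The Wiener filtering idea can be illustrated with a single-stage, per-bin sketch. A real MWF is multistage and driven by the network's power predictions; the power values below are made-up numbers for illustration:

```python
def wiener_gain(source_power, residual_power):
    """Classic Wiener filter gain per frequency bin: the fraction of the
    mixture's power attributed to the target source in that bin."""
    return [s / (s + n) if (s + n) > 0 else 0.0
            for s, n in zip(source_power, residual_power)]

def apply_gain(mixture_spectrum, gain):
    """Scale each bin of the mixture by its gain to estimate the source."""
    return [m * g for m, g in zip(mixture_spectrum, gain)]

# Bins where the model predicts the target dominates are mostly kept;
# bins dominated by other sources are suppressed.
target = [9.0, 1.0, 4.0]   # predicted power of the target source per bin
others = [1.0, 9.0, 4.0]   # predicted power of all other sources per bin
gains = wiener_gain(target, others)
```

The kit's deep models supply those per-bin power estimates; the filter itself is classical signal processing.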
How audio source separation works
To pave the way for 3D audio creation, audio source separation now supports 12 sound types, which are: voice, accompaniment, drum sound, violin sound, bass sound, piano sound, acoustic guitar sound, electric guitar sound, lead vocalist, accompaniment with the backing vocal voice, stringed instrument sound, and brass stringed instrument sound.
Introduction to Spatial Audio
It's incredible that our ears can tell where a sound comes from just by hearing it. This is possible because a sound reaches each ear at a slightly different time and intensity, allowing us to perceive its direction almost instantly.
In the digital world, these differences are modeled by a series of transformation functions known as head-related transfer functions (HRTFs). Applying an HRTF to a point audio source simulates the direct sound as it arrives at the listener's ears, because HRTFs account for physical differences between listeners, such as head shape and shoulder width.
To achieve a high level of audio immersion and ensure that 3D audio can be enjoyed by as many users as possible, the spatial audio capability is loaded with a set of relatively universal HRTFs. The capability also implements the reverb effect (the reflections heard after a sound is produced). It constructs an authentic sense of space by using room impulse responses (RIRs) to simulate acoustic phenomena in the physical world, such as reflection, diffusion, and interference. By filtering the audio waves with HRTFs and RIRs, the spatial audio capability can convert a sound (such as one obtained with audio source separation) into 3D audio.
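Conceptually, applying an HRTF or RIR is a convolution of the audio with an impulse response. The toy sketch below shows the idea; the two-tap "HRIRs" are invented for illustration and bear no resemblance to measured ones:

```python
def convolve(signal, impulse_response):
    """Direct-form convolution: filtering audio with an HRIR (the
    time-domain form of an HRTF) or a room impulse response."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, x in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += x * h
    return out

def binauralize(mono, hrir_left, hrir_right):
    """Render a mono source at one virtual position as a stereo pair."""
    return convolve(mono, hrir_left), convolve(mono, hrir_right)

# Toy HRIRs: the right ear hears the source one sample later and quieter,
# which is the kind of cue the brain uses to localize a source to the left.
mono = [1.0, 0.5, 0.25]
left, right = binauralize(mono, [1.0, 0.0], [0.0, 0.6])
```

A production renderer would use measured HRIR sets, per-position interpolation, and FFT-based convolution, but the filtering operation is the same.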
How spatial audio works
HUAWEI Music uses these two capabilities so that users can enjoy 3D audio simply by opening the app and tapping Sci-Fi Audio or Focus on the Sound effects > Featured screen.
Sci-Fi Audio and Focus
The following audio sample compares the original audio with the 3D audio generated using these two capabilities. Sit back, listen, and enjoy.
Original stereo audio
Edited 3D audio
Audio source separation and spatial audio help streamline 3D audio creation. Next time you want to generate 3D audio effects for your app, try out audio source separation to get the raw audio tracks, then import them into spatial audio and let it do the rest of the work for you. 3D audio is ideal for games and entertainment apps, but I'm curious to know what other fields you think it can be used in. Let me know in the comments section below.

Related

Dolby Atmos on the Le Max 2: a joke?

There's some discussion of Dolby Atmos on the Le Max 2. Is this a joke?
I've always thought of Dolby as a commercial enterprise driven compression scheme for multi-channel audio, and nothing more, in home and commercial theatre environments. Like Dolby compresses audio to then be uncompressed for playback using an amp that has Dolby's blessing ($cu-ching$ $cu-ching$), similar to the way Zip and Rar compress files on a PC for archiving, or even mp3 and flac are compression schemes for audio. In a way I consider Dolby ransomware: "Thanks for buying the Dolby disk. By the way, we screwed you. You'll need a licensed amplifier to decrypt the audio we encrypted for you."
The way the tech in home cinema has gone has, for me, been anything but easy to follow: Dolby Pro Logic, TrueHD, 5.1, then 7.1 Dolby Digital (and that's just Dolby, not to mention the DTS stuff). Now Atmos is big and being thrown around a lot, and I roll my eyes, but for some reason I feel like I can't ignore it anymore.
Now I'm supposed to believe Dolby can work some magic and make me feel like I'm in a Cinemark XD theatre on the Le Max 2 with cell phone stereo speakers? I just don't get it. Can someone explain why I should have Dolby Atmos?
Well, like most smartphones on the market, the Le Max 2 has a single bottom-firing speaker that's just there to provide basic functionality. If you're going to consume media for any extended amount of time, then you of course pull out your headphones, and that's where Dolby Atmos comes in. It's a surround sound technology, but it also provides a system-wide audio equalizer and other audio settings and presets. Features like these were fairly common in portable audio and video players back in the day, and are now poorly implemented, if at all, on smartphones.
I guess I need to try it on headphones. The only headphones I use are Bluetooth, but I still don't think I'll be calling it the best thing since sliced bread. How do you get a surround sound experience with headphones? There are two speakers, one on the right ear and one on the left. As for a difference in sound quality between cheap and quality headphones? I'm all in for that.
Sent from my Le X829 using Tapatalk
It's basically a complex set of algorithms that attempt to virtualize various sound "objects" spatially - by the way, on a stereo device the processing is not as advanced as on theater grade equipment with 5.1+ outputs.
I've tried it once or twice on my Le Max 2, and while I thought it had an interesting effect on the sound field, it's not something I want in the audio processing chain full-time. Oh, and it works best with earphones or headphones.
Well, I tried it and it makes music sound much much better. I cannot express in what ways it makes it better, but my ears fell in love with Atmos alterations.
Nothing, it's pure marketing: surround on stereo headphones is fake. I have it turned off because it sounds like garbage.
Dolby Atmos and CDLA mode are a joke; I disable them.
I installed Audio Beats customized sound with EQ, 3D reverb, and bass boost. Clear and amazing.
If you want, try DFX audio enhancer from the Play Store. I bought this app and always use it on all my devices; it makes sound pretty good. If the volume is not loud enough, try the app "Volume booster GOODDEV". If the speaker is loud enough, I just boost it 20-30% and it's plenty loud.
I forgot.... What was the name? .....
Yaa, it's Viper .....
Have you heard the name before?
Sorry for bad English.
Viper4android is the best!!
Turn it off
Sent from my Le X820 using Tapatalk
orangepowerpokes said:
There's some discussion of Dolby Atmos on the Le Max 2. Is this a joke?
Dolby can only develop within the limitations of Android. With Android O it should be better, but remember that programming audio is probably one of the most difficult things to do, especially on Android.

Building More Audio and Video Service Scenarios with AV Pipeline Kit

AV Pipeline Kit is a new product released in HMS Core 6.0 that embodies Huawei's effort to open its technologies in the media field. With a framework that enables you to design your own service scenarios, it equips your app with rich and customizable audio and video processing capabilities. The preset plugins and pipelines for audio and video collection, editing, and playback have simplified the development of audio and video apps, social apps, e-commerce apps, and more.
AV Pipeline Kit provides three major capabilities: pipeline customization, video super-resolution, and sound event detection.
With pipeline customization, you can build brand-new media collection, editing, and playback capabilities that deliver an excellent user experience. Customized plugins are automatically parsed by the framework of AV Pipeline Kit, and plugins are selected intelligently according to the device's hardware capabilities. On top of this, third-party plugins can also be integrated, as long as they comply with the kit's requirements, freeing you from complicated development work and allowing you to prioritize media service innovation.
Video super-resolution offers users an excellent experience even when they watch low-resolution videos. It performs frame-by-frame super-resolution during the playback of a low-resolution video to reduce noise and enhance color in real time, while high-resolution videos are processed to deliver even better images. Currently, resolutions from 270p to 720p and a maximum scale factor of 3 are supported. This capability also enables your app to flexibly adjust the super-resolution effect according to the video's resolution and bit rate.
Another capability of AV Pipeline Kit is sound event detection, which detects sound events during audio playback quickly and accurately. Currently, 13 types of sound events can be detected, such as knocking on a door, vehicle horns, a baby crying, and sounds made by cats and dogs. With this capability integrated, your app can make users' homes safer by detecting potential accidents around the home.
AV Pipeline Kit is easy to use, high performing, and consumes low power.
It provides preset pipelines that support basic media collection, editing, and playback capabilities. You can quickly integrate these pipelines into your app or tailor them to your specific service needs.
In terms of performance, AV Pipeline Kit ensures that data is uniformly encapsulated regardless of its format, so that data is not copied between modules. With hardware detection, the kit achieves a performance increase of more than 10% compared with traditional pipelines by preferentially calling nodes that support hardware acceleration.
A major difficulty in media app development has been cutting power consumption without degrading user experience. AV Pipeline Kit cuts power consumption by more than 20% by preferentially loading hardware capabilities, balancing performance against power consumption. This is realized by a multi-modal media framework that avoids unnecessary data copying and format conversion.
With the open capabilities of AV Pipeline Kit, developing audio and video apps that deliver excellent user experience will be much easier.
What are the supported video file formats?
useful sharing.
Thanks for sharing..

[HMS Core 6.0 Global Release]Effortlessly Develop Audio Editing Functions with Audio Editor Kit

Audio Editor Kit consolidates Huawei's cutting-edge technologies for music and speech, as well as the wider audio field. This versatile toolkit is ideal for all kinds of scenarios, thanks to its rich audio-processing capabilities that include audio editing, audio source separation, spatial audio, voice changer, and noise reduction. With its powerful, open, yet easy-to-use APIs, the kit helps integrate audio editing functions into your app effortlessly and efficiently.
The sound field, music style, and equalizer capabilities of Audio Editor Kit provide various sound effects that can be applied to audio. On top of this, the kit allows customized sound effects to be added. One of the kit's quirkier features is the voice changer, which lets users make their voice sound like a monster, or other funny or scary characters. The audio source separation capability utilizes AI and has been trained on a huge amount of data. It parses the voice track and accompaniment track in a song and isolates the latter into a separate track, which is useful for developers of karaoke and music editing apps. Another capability is spatial audio, which intuitively specifies the locations of different audio tracks in 3D space. And last but not least is the scene effect capability, which lets users enjoy audio as if in different scenes, such as underwater, broadcast, earpiece, and gramophone. These capabilities help satisfy audio editing requirements in scenarios such as editing surround sound and adding BGM.
Boasting such diverse capabilities, Audio Editor Kit enables audio/video editors to handle multi-track audio in a manageable way and helps live-streamers efficiently optimize the sound of their voices. In the future, Audio Editor Kit will provide even more capabilities, such as AI dubbing. It can convert text into emotionally expressive narration with a lifelike timbre that is created with the help of deep learning. Meanwhile, audio source separation will allow users to isolate a specific instrument sound (like piano, guitar, and violin) into a separate audio track. In short, Audio Editor Kit will remain fully dedicated to meeting the diverse needs of the audio field, helping develop useful audio editing apps.
Can we change the audio of a particular video file?
very useful sharing, thanks!!

Audio Editor Kit, a Library of Special Effects

Audio is a fundamental way of communicating. It transcends the limitations of space, is easy to grasp, and comes in many forms, which is why many mobile apps, covering short videos, online education, e-books, games, and more, are integrating audio capabilities. Adding special effects is a good way of freshening up audio.
Rather than compiling different effects myself, I turned to Audio Editor Kit from HMS Core, which boasts a range of versatile special effects: voice changer, equalizer, sound effect, scene effect, sound field, style, and fade-in/out.
Voice Changer
This function alters a user's voice, protecting their privacy while spicing it up. Available effects include Seasoned, Cute, Male, Female, and Monster. What's more, this function supports all languages and can process audio in real time.
Equalizer
An equalizer adjusts the tone of audio by increasing or decreasing the volume of one or more frequency bands, letting users customize how audio sounds during playback.
The equalizer function of Audio Editor Kit is preloaded with 9 effects: Pop, Classical, Rock, Bass, Jazz, R&B, Folk, Dance, and Chinese style. It also supports customized levels across 10 bands.
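Under the hood, each equalizer slider maps a gain in dB to a linear amplitude multiplier for its band. Here is a sketch with a hypothetical 10-band layout; the band center frequencies and preset values are illustrative assumptions, not the kit's documented ones:

```python
# Hypothetical 10-band layout: center frequencies in Hz (illustrative only).
BANDS_HZ = [31, 62, 125, 250, 500, 1000, 2000, 4000, 8000, 16000]

def db_to_linear(db):
    """Convert a slider value in dB to a linear amplitude multiplier."""
    return 10 ** (db / 20)

def band_multipliers(gains_db):
    """One amplitude multiplier per band, applied to that band's signal."""
    return [db_to_linear(g) for g in gains_db]

# A "Bass"-style preset: boost the low bands, leave the rest flat (0 dB).
bass_preset = [6, 6, 4, 2, 0, 0, 0, 0, 0, 0]
assert len(bass_preset) == len(BANDS_HZ)
mult = band_multipliers(bass_preset)
# +6 dB is roughly a 2x amplitude boost; 0 dB leaves the band untouched.
```

A real equalizer then applies each multiplier through a per-band filter (e.g. a peaking biquad) rather than a plain scalar, but the dB-to-amplitude mapping is the same.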
Sound Effect
A sound effect is a sound, or sound process, that is artificially created or enhanced. Sound effects can be applied to improve the experience of films, video games, music, and other media.
Used effectively, sound effects deliver greater immersion: they change with the plot and stimulate the listener's emotions.
Audio Editor Kit provides over 100 effects (all free-to-use), which are broken down into 10 types, including Animals, Automobile, Ringing, Futuristic, and Fighting. They, at least for me, are comprehensive enough.
Scene Effect
Audio Editor Kit offers this function to simulate, using different algorithms, how audio sounds in different environments. It currently has four effects: Underwater, Broadcast, Earpiece, and Gramophone, which deliver a high level of authenticity to immerse users of music apps, games, and e-book reading apps.
Sound Field
A sound field is a region of a medium in which sound waves propagate; positioning the field differently produces different effects.
The sound field function of Audio Editor Kit offers 4 options: Near, Grand, Front-facing, and Wide, each of which incorporates preset reverb and panning attributes.
Each option suits a different kind of music: Near for soft folk songs, Front-facing for absolute music, Grand for bass-heavy, highly immersive music (such as rock and rap), and Wide for symphonies. They can be applied during audio/video creation or music playback to make different genres sound more appealing.
Style
A music style, or genre, is a category that groups pieces of music sharing common elements of tune, rhythm, tone, and beat.
The style function of Audio Editor Kit offers the bass boost effect, which makes audio sound more rhythmic and expressive.
Fade-in/out
The fade-in effect gradually increases the volume from zero to a specified value, whereas fade-out does the opposite. Both deliver smooth transitions, which makes the fade-in/out function of Audio Editor Kit ideal for creating remixes of songs or videos.
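A linear fade-in/out can be sketched as a per-sample volume ramp. This is the simplest possible version; a production implementation (the kit's included) would likely use smoother curves such as logarithmic or equal-power ramps:

```python
def fade(samples, fade_in_len, fade_out_len):
    """Linearly ramp volume from 0 at the start and back to 0 at the end."""
    out = list(samples)
    for i in range(min(fade_in_len, len(out))):
        out[i] *= i / fade_in_len          # 0.0 rising toward 1.0
    for i in range(min(fade_out_len, len(out))):
        out[-1 - i] *= i / fade_out_len    # 1.0 falling toward 0.0
    return out

# Constant full-scale audio, 4-sample fade at each end.
audio = [1.0] * 8
faded = fade(audio, 4, 4)
# faded == [0.0, 0.25, 0.5, 0.75, 0.75, 0.5, 0.25, 0.0]
```

Cross-fading two clips for a remix is then just overlapping one clip's fade-out with the next clip's fade-in and summing the samples.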
Stunning effects, aren't they?
Audio Editor Kit offers a range of other services for developing a mighty audiovisual app, including basic audio processing functions (like import, splitting, copying, deleting, and audio extraction), 3D audio rendering (audio source separation and spatial audio), and AI dubbing.
Check out the development guide of Audio Editor Kit and don't forget to give it a try!

HMS Core 6.6.0 Release News

New Features
Analytics Kit
Released the function of saving churned users as an audience in the retention analysis function. This enables multi-dimensional examination of churned users, helping you devise targeted measures for winning them back.
Changed Audience analysis to Audience insight, which has two submenus: User grouping and User profiling. User grouping segments users into audiences according to different dimensions, while user profiling provides audience features such as profiles and attributes to facilitate in-depth user analysis.
Added the Page access in each time segment report to Page analysis. The report compares the number of access times and users across time segments. This vital information reveals your users' product usage preferences and helps you seize operations opportunities.
Learn more>>
3D Modeling Kit​
Debuted the auto rigging capability. Auto rigging can apply a preset motion to a 3D model of a biped humanoid by using the skeleton points on the model. In this way, the capability automatically rigs and animates such a model, lowering the barrier to 3D animation creation and making 3D models more engaging.
Added the AR-based real-time guide mode. This mode accurately locates an object, provides real-time image collection guide, and detects key frames. Offering a series of steps for modeling, the mode delivers a fresh, interactive modeling experience.
Learn more>>
Video Editor Kit​
Offered the auto-smile capability in the fundamental capability SDK. This capability detects faces in the input image and then lights them up with a smile (closed-mouth or open-mouth).
Supplemented the fundamental capability SDK with the object segmentation capability. This AI algorithm-dependent capability separates the selected object from a video, to facilitate operations like background removal and replacement.
Learn more>>
ML Kit​
Released the interactive biometric verification service. It captures faces in real time and determines whether a face belongs to a real person or is a face attack (such as a recaptured face image, a recaptured face video, or a face mask), by checking whether the specified actions are detected on the face. This service delivers a high level of security, making it ideal for face recognition-based payment scenarios.
Improved the on-device translation service by supporting 12 more languages, including Croatian, Macedonian, and Urdu. Note that the following languages are not yet supported by on-device language detection: Maltese, Bosnian, Icelandic, and Georgian.
Learn more>>
Audio Editor Kit​
Added the on-cloud REST APIs for the AI dubbing capability, which makes the capability accessible on more types of devices.
Added an asynchronous API for the audio source separation capability, along with a query API for tracking an audio source separation task via its taskId. This solves the issue where, because separation can take a long time to complete, a user who exited and re-opened the app could not find their previous separation task.
Enriched on-device audio source separation with the following newly supported sound types: accompaniment, bass sound, stringed instrument sound, brass stringed instrument sound, drum sound, accompaniment with the backing vocal voice, and lead vocalist voice.
Learn more>>
Health Kit​
Added two activity record data types: apnea training and apnea testing in diving, and supported the free diving record data type on the cloud-side service, giving access to the records of more activity types.
Added the sampling data type of the maximum oxygen uptake to the device-side service. Each data record indicates the maximum oxygen uptake in a period. This sampling data type can be used as an indicator of aerobic capacity.
Added the open atomic sampling statistical data type of location to the cloud-side service. This type of data records the GPS location of a user at a certain time point, which is ideal for recording data of an outdoor sport like mountain hiking and running.
Opened the activity record segment statistical data type on the cloud-side service. Activity records now can be collected by time segment, to better satisfy requirements on analysis of activity records.
Added the subscription of scenario-based events and supported the subscription of total step goal events. These fresh features help users set their running/walking goals and receive push messages notifying them of their goals.
Learn more>>
Video Kit​
Released the HDR Vivid SDK that provides video processing features like opto-electronic transfer function (OETF), tone mapping, and HDR2SDR. This SDK helps you immerse your users with high-definition videos that get rid of overexposure and have clear details even in dark parts of video frames.
Added the capability for killing the WisePlayer process. This capability frees resources occupied by WisePlayer after the video playback ends, to prevent WisePlayer from occupying resources for too long.
Added the capability to obtain a list of video source thumbnails covering each frame of the video source, so that a thumbnail is shown for each time point when a user slowly drags the progress bar, improving the video watching experience.
Added the capability to play video accurately from a position selected on the progress bar. This avoids the inaccuracy caused by seeking to the nearest key frame instead of the selected time point.
Learn more>>
Scene Kit​
Added the 3D fluid simulation component. This component allows you to set the boundaries and volume of fluid (VOF), to create interactive liquid sloshing.
Introduced the dynamic diffuse global illumination (DDGI) plugin. This plugin can create diffuse global illumination in real time when the object position or light source in the scene changes. In this way, the plugin delivers a more natural-looking rendering effect.
Learn more>>
New Resources
Map Kit
For the hms-mapkit-demo sample code: Added the MapsInitializer.initialize API, which initializes the Map SDK before it is used.
Added the public layer (precipitation map) in the enhanced SDK.
Go to GitHub>>
Site Kit​
For the hms-sitekit-demo sample code: Updated the Gson version to 2.9.0 and optimized the internal memory.
Go to GitHub >>
Game Service​
For the hms-game-demo sample code: Added the configuration of removing the dependency installation boost of HMS Core (APK), and supported HUAWEI Vision.
Go to GitHub >>
Made necessary updates to other kits. Learn more >>

Categories

Resources