I want to know, at the low-level framework layer (AudioTrack or AudioFlinger), which app currently holds the AudioFocus and which app has lost it, especially when an app gains or loses AudioFocus momentarily (a transient gain or loss).
How do I pass that information from AudioManager or MediaFocusControl down to AudioTrack or AudioFlinger?
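For context: AudioFocus is arbitrated entirely in the Java framework (AudioService/MediaFocusControl). AudioFlinger only mixes tracks and, as far as current AOSP goes, is never told who holds focus, so passing that information down would mean adding your own plumbing (for example a new binder call on IAudioFlinger). At the SDK level each app learns only its own focus state via a listener; a minimal sketch using the standard pre-Oreo AudioManager API (the class name is my own):

```java
import android.content.Context;
import android.media.AudioManager;

public class FocusWatcher {
    // Focus is arbitrated in AudioService/MediaFocusControl; each app is
    // only notified about its *own* focus transitions through this listener.
    private final AudioManager.OnAudioFocusChangeListener listener =
            new AudioManager.OnAudioFocusChangeListener() {
        @Override
        public void onAudioFocusChange(int change) {
            switch (change) {
                case AudioManager.AUDIOFOCUS_GAIN:
                    // (Re)gained focus: resume playback at full volume.
                    break;
                case AudioManager.AUDIOFOCUS_LOSS_TRANSIENT:
                    // Momentary loss (e.g. a navigation prompt): pause briefly.
                    break;
                case AudioManager.AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK:
                    // Momentary loss where ducking is allowed: lower the volume.
                    break;
                case AudioManager.AUDIOFOCUS_LOSS:
                    // Permanent loss: stop playback.
                    break;
            }
        }
    };

    public boolean requestFocus(Context ctx) {
        AudioManager am = (AudioManager) ctx.getSystemService(Context.AUDIO_SERVICE);
        int result = am.requestAudioFocus(listener,
                AudioManager.STREAM_MUSIC, AudioManager.AUDIOFOCUS_GAIN);
        return result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED;
    }
}
```

A transient interruption arrives as AUDIOFOCUS_LOSS_TRANSIENT (or its _CAN_DUCK variant) followed later by AUDIOFOCUS_GAIN; no app, and no native-layer service, is notified about other apps' focus transitions.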
For educational purposes I'm trying to understand how audio is routed to and from the radio during a voice call. By running strace on my N1 I was able to see this:
- Audio is played back through AudioFlinger (I see write() calls to /dev/msm_pcm_out)
- Recording is not done using msm_pcm_in - maybe DMA directly to the modem?
- Most of the work is done in the proprietary libhtc_ril.so
I couldn't find anywhere in the source code where an AudioTrack is created for the voice call.
If anyone could shed some light on this I'd be grateful.
Thanks.
I'm working on a program that procedurally generates sound. Using ALSA on Linux, I can register a callback that gets called when the buffer needs more data.
I can't seem to figure out when the AudioTrack needs to be written to... all the examples I've found just have a thread looping around a write() call. Is there a cleaner way to accomplish this that I'm just not seeing?
Thanks
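For what it's worth, the closest SDK analogue to the ALSA callback is AudioTrack's periodic position listener; a sketch under my own assumptions (the rate, the period, and the nextChunk() generator are placeholders):

```java
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

public class Synth {
    private static final int RATE = 44100;          // Hz, example value
    private static final int PERIOD_FRAMES = 1024;  // callback granularity

    public AudioTrack start() {
        int minBufBytes = AudioTrack.getMinBufferSize(RATE,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
        final AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, RATE,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
                minBufBytes, AudioTrack.MODE_STREAM);

        // Fire a callback every PERIOD_FRAMES frames of playback instead of
        // spinning in a dedicated write() loop.
        track.setPositionNotificationPeriod(PERIOD_FRAMES);
        track.setPlaybackPositionUpdateListener(
                new AudioTrack.OnPlaybackPositionUpdateListener() {
            @Override public void onMarkerReached(AudioTrack t) { }
            @Override public void onPeriodicNotification(AudioTrack t) {
                short[] chunk = nextChunk(PERIOD_FRAMES); // your generator
                t.write(chunk, 0, chunk.length);
            }
        });

        // Prime the buffer once so playback can start (2 bytes per short).
        short[] prime = nextChunk(minBufBytes / 2);
        track.write(prime, 0, prime.length);
        track.play();
        return track;
    }

    private short[] nextChunk(int frames) {
        return new short[frames]; // silence; replace with your synthesis
    }
}
```

Note also that in MODE_STREAM a write() simply blocks while the buffer is full, so even the "thread looping around write()" pattern is self-throttling rather than busy-waiting.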
I am working on an Android BLE project in which I fetch the RSSI value from the SensorTag and use it to estimate range. When the phone moves away from the tag it should give a beep sound, and no beep should occur while it is in range. I can't get this working. Where should that particular code be added?
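A minimal sketch of the decision logic, with the Android wiring described below; the -90 dBm threshold and the class name are assumptions of mine, so calibrate against your own SensorTag:

```java
// Core decision logic for the beep: pure Java, so it can live anywhere
// (e.g. next to your BluetoothGattCallback).
public class RangeBeeper {
    // RSSI is negative dBm; values closer to 0 mean nearer.
    // -90 dBm is an assumed "out of range" threshold.
    public static final int OUT_OF_RANGE_RSSI = -90;

    // Beep only when the reading drops below the threshold,
    // i.e. the phone has moved away from the tag.
    public static boolean shouldBeep(int rssi) {
        return rssi < OUT_OF_RANGE_RSSI;
    }
}
```

To wire it up: call gatt.readRemoteRssi() on a timer; the result arrives in BluetoothGattCallback.onReadRemoteRssi(gatt, rssi, status), and that callback is the place for the beep code, e.g. if (RangeBeeper.shouldBeep(rssi)) new ToneGenerator(AudioManager.STREAM_MUSIC, 100).startTone(ToneGenerator.TONE_PROP_BEEP, 200);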
I am wondering if there is an app that can intercept the audio focus requests from multiple apps, keep the OS-level audio focus for itself, and merge the streams coming from those apps.
A specific use case is Pandora and Pokemon Go. I want to listen to Pandora while still getting the sound effects from Pokemon Go while I am out walking/running/biking. I can then stop and hit the Pokestop or catch the Pokemon and continue on my way. By default, whichever app was opened most recently is the one holding audio focus, so only sounds from one app can be heard.
What I envision is an app that would require root. It would request and receive audio focus from the OS, and then when any other media source sends an audio focus request, this app would intercept that request, spoof a response telling the media source that it now has audio focus, and add that source's stream to the mixed stream being sent to the OS. This would also allow the app to control the volume of individual media sources before sending them on in the mixed stream, so one or more sources could be louder or quieter than the others.
I have looked around but can't find anything that offers this service. Most sound mixers (including Viper4Android) are just equalizers, not mixing streams. Some offer the option to change the way notifications interrupt media when a notification comes in, or when a phone call is ringing.
I do know that the app RunKeeper has a feature called "Audio Ducking" which will lower the volume of the other media source (like Pandora) while it announces the mileage/time/speed of your activity. This is similar to but not exactly what I am looking for, and it is not universal to all apps regardless of source.
Thoughts?
By the way, I am currently using Android N Developer Preview 5 on a Nexus 6. I assume that will matter when determining how to intercept the requests for audio focus.
Regards,
Russ
After digging into the kernel code for the Qualcomm WCD audio codecs on a few phones (Hammerhead, Pixel, Nexus 6P) I'm having some trouble figuring out how to access the DC blocking/high-pass filter being applied to a signal coming in through the headset mic line (ADC2 on some phones). Looking at the spectrum on all of these phones shows a definite high-pass filter shape with a cutoff between 50-100 Hz depending on the phone. What is more problematic is the phase distortion that the DC blocker is applying.
Originally I tried modifying the TX HPF Cutoff settings as well as turning it on in cases where it was still off during recording, just to see if it would have an effect. These values don't seem to affect the signal at all, whether it's set to 3 Hz, the default of 75 Hz, or the max of 150 Hz. I believe the TX HPF is only part of the signal chain during telephony modes and not during normal recording. There are also IIR filters available with these Qualcomm codecs, but the datasheet for the WCD9311 (the only one publicly available) shows that these are for sidetone processing and not the direct ADC headset mic path, so again it seems these are only applicable in telephony modes.
For my application the raw signal is desired - so minimal HPF effects, no ANC, no Compander, and no automatic gain control. Basically what the new UNPROCESSED config in Nougat is supposed to provide, but testing so far on the Pixel has shown that this HPF+phase distortion are still happening even when running a clean square wave or similar signal in through the headset mic line. I'm using TinyMixer to check exposed mixer path values over adb in real-time while recording (and comparing the values to what they are before recording).
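For reference, this is how the UNPROCESSED source is requested at the SDK level (API 24+); a sketch with placeholder values, and a support check the platform recommends before trusting the source:

```java
import android.content.Context;
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioRecord;
import android.media.MediaRecorder;

public class RawRecorder {
    public static AudioRecord open(Context ctx) {
        // The device should advertise support before UNPROCESSED actually
        // guarantees a clean path; otherwise it may silently fall back.
        AudioManager am = (AudioManager) ctx.getSystemService(Context.AUDIO_SERVICE);
        String supported = am.getProperty(
                AudioManager.PROPERTY_SUPPORT_AUDIO_SOURCE_UNPROCESSED);
        boolean unprocessedOk = "true".equals(supported);

        int rate = 48000; // example value
        int minBuf = AudioRecord.getMinBufferSize(rate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        int source = unprocessedOk
                ? MediaRecorder.AudioSource.UNPROCESSED
                : MediaRecorder.AudioSource.VOICE_RECOGNITION; // least-processed fallback
        return new AudioRecord(source, rate, AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT, minBuf * 2);
    }
}
```

Note that UNPROCESSED governs software pre-processing (AGC, noise suppression, echo cancellation); a DC blocker sitting in the codec's front end ahead of the ADC output can still color the signal, which would match what you're seeing on the Pixel.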
In the past on the Nexus S this problem could be overcome with root by modifying a register (the enum was something like ADC High Pass Filter Switch), but the mixer paths for these WCD9xxx codecs are a little more complex, with no sign of a single HPF switch for this ADC line outside of the TX one. This doesn't seem to be a problem that many people face, but I was hoping for any insight anyone here may have, and also hoping that anyone who needs to use this line-in without distorting their signals will be able to find a solution in the future, at least before USB-C takes over and the 3.5mm jack becomes obsolete.
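For anyone trying to reproduce this, the tinyalsa workflow over adb looks roughly like the following; the control names below are hypothetical and vary per WCD93xx codec, so diff a full dump first:

```shell
adb root
# Dump all exposed mixer controls and their current values.
adb shell tinymix > before.txt
# Start a recording, then dump again and diff to see which controls
# the audio HAL touched for the capture path.
adb shell tinymix > during.txt
diff before.txt during.txt
# Look for anything HPF-related (names differ per codec):
grep -i hpf during.txt
# Set a control while recording (control name here is hypothetical):
adb shell "tinymix 'TX1 HPF Switch' 0"
```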