I'm working on a program that procedurally generates sound. Using ALSA on Linux, I can register a callback that gets called when the buffer needs more data.
I can't seem to figure out when the AudioTrack needs to be written to... all the examples I've found just have a thread looping around a write call. Is there a cleaner way to accomplish this that I'm just not seeing?
Thanks
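For what it's worth, a streaming-mode AudioTrack does offer a callback: setPlaybackPositionUpdateListener(...) combined with setPositionNotificationPeriod(frames) fires onPeriodicNotification roughly once per period, and you can write the next chunk from there instead of spinning a loop. The Android calls themselves can't run off-device, so this sketch keeps them in comments; the runnable part is a plain-Java tone generator of the kind you'd call from that callback (ToneSource and generateSine are my own hypothetical names, not framework API):

```java
// Plain-Java chunk generator you could drive from AudioTrack's
// onPeriodicNotification callback. Hypothetical helper, not framework API.
public class ToneSource {
    private final double sampleRate;
    private double phase; // carried across calls so chunks join seamlessly

    public ToneSource(double sampleRate) {
        this.sampleRate = sampleRate;
    }

    // Fill 'out' with 16-bit PCM samples of a sine wave at 'freqHz'.
    public void generateSine(short[] out, double freqHz) {
        double step = 2 * Math.PI * freqHz / sampleRate;
        for (int i = 0; i < out.length; i++) {
            out[i] = (short) Math.round(Math.sin(phase) * Short.MAX_VALUE);
            phase += step;
        }
    }

    public static void main(String[] args) {
        ToneSource src = new ToneSource(44100);
        short[] chunk = new short[4410]; // 100 ms at 44.1 kHz
        src.generateSine(chunk, 440);    // A4
        System.out.println("first sample: " + chunk[0]);
        // On Android (sketch, not executed here):
        //   track.setPositionNotificationPeriod(4410);
        //   track.setPlaybackPositionUpdateListener(new
        //       AudioTrack.OnPlaybackPositionUpdateListener() {
        //     public void onPeriodicNotification(AudioTrack t) {
        //       src.generateSine(chunk, 440);
        //       t.write(chunk, 0, chunk.length);
        //     }
        //     public void onMarkerReached(AudioTrack t) {}
        //   });
    }
}
```

One caveat: the notification is delivered on a handler thread and can jitter, so in practice you still keep the track's buffer a couple of periods deep.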
For educational purposes I'm trying to understand how audio is routed from/to the radio during a voice call. By running strace on my N1 I was able to see this:
- Audio is played back to the AudioFlinger (i see write() to /dev/msm_pcm_out)
- Recording is not done using msm_pcm_in - maybe DMA directly to the modem?
- Most of the work is done in the proprietary libhtc_ril.so
Couldn't find anywhere in the source code where an AudioTrack is created for the voice call.
If anyone could shed some light on this I'd be grateful.
Thanks.
Hello,
I am trying to programmatically play an audio file (WAV) on an Android phone and inject that audio into a phone call so that the remote party on the other end can hear it. Is there a way to inject audio into a voice call? Likewise, I also need to record downlink voice call audio on the Android phone. This is for an internal project and not for commercial/consumer use.
I read in blogs that some modifications to the kernel would allow such audio injection and recording, and in another blog I found that commands such as aplay, arec and amix could help. Does anyone know how to do this?
I am open to pay up to $10K for a working solution on Samsung S3.
Thanks.
Jay
Have you had any success? I found your posting after googling a similar query and hoping you've had better luck than I.
Hello everyone, this is my first thread on xda; I hope this is the right place to post.
My problem is this:
I would like to create an app that, for the moment, only does two things: read from the device's audio input and write to the audio output.
I created the app using the AudioRecord and AudioTrack classes provided by Android, but I get a latency of about 500 ms, which makes the app unusable.
I would therefore like to read and write at the lowest level possible, even using JNI. From what I understand, Android has an abstraction layer called the HAL that allows generic access to the hardware without worrying about the platform.
So my questions are: can I call HAL-level functions from my app? If so, how? And how do I find out which HAL functions I can call?
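For context on where that half second comes from: each buffer in the chain adds roughly (frames / sample rate) of delay, on both the record and the playback side, so large default buffers dominate the round trip. A quick sanity check (pure arithmetic, nothing Android-specific; the 8192-frame buffers are illustrative assumptions, not measured from any device):

```java
// Back-of-envelope audio latency: each buffer adds frames/sampleRate
// of delay. Buffer sizes below are illustrative assumptions only.
public class LatencyMath {
    // Latency in milliseconds contributed by one buffer of 'frames' frames.
    static double bufferLatencyMs(int frames, int sampleRateHz) {
        return 1000.0 * frames / sampleRateHz;
    }

    public static void main(String[] args) {
        int rate = 44100;
        // AudioRecord.getMinBufferSize() / AudioTrack.getMinBufferSize()
        // often return several thousand frames on older devices.
        int recordFrames = 8192, playFrames = 8192;
        double total = bufferLatencyMs(recordFrames, rate)
                     + bufferLatencyMs(playFrames, rate);
        System.out.printf("round trip ~ %.1f ms%n", total);
    }
}
```

As for the HAL itself: an unprivileged app process can't call it directly; the lower-level path that is actually supported for apps is native audio through the NDK (OpenSL ES), which mainly helps by letting you run with smaller buffers.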
I would be very grateful if someone could clear up my doubts, or point me to a document that explains this in a way a beginner can understand.
Thank you all.
Hi,
First time posting here, but not new to rooting and flashing, all thanks to xda and its users.
So, I want to route the audio coming out of the phone's speaker/headphone (or whatever output device) to the microphone internally.
For example,
If my friend is listening to a song and he calls me and asks which song it is, I want to be able to route the audio to Shazam or SoundHound without the use of another device (although that's not the reason for the problem).
Another example is WhatsApp. Let's say I am listening to a song (locally on the device or on a radio app), and I want to use the quick audio message button in WhatsApp to record that song internally, so if there are a bunch of people in the car, their voices don't get recorded.
And I can think of tonnes of situations where this could be used. When you are talking to someone on the phone, you could use songs/sounds/audio phrases directly from the phone, like a famous quote, but instead of you quoting it, playing the actual person's voice from your local storage/YouTube or any other audio app's storage. Using sound effects, all internally. You can get so creative.
And this is something that music producers do all the time, routing audio and playing it alongside recorded audio, but all done with hardware. It could easily be done with software or hardware on a computer.
I thought of using a 3.5 mm audio jack with a mic to route the audio over a wire by connecting them, but I would need that special hardware (a modded 3.5 mm cable) all the time, and it wouldn't really help other people much. A software solution would be a lot more helpful.
I did try using this app, called SoundAbout to fix my problem but it didn't help much, or maybe I was doing something wrong.
ps. This is the 2nd time I am writing this, lol. The first time xda logged me out and I lost the whole thing. It would be nice if there were an app (for Windows) that automatically copied any text written in any dialog box (just like the autofill feature, but for larger text fields) and kept updating it on the fly. For example, text in the Title box gets copied into the app after each character typed, with 5 histories; same for the Message box. In case of a Firefox crash, an accidental refresh, or, as in my case, getting logged out, the text is stored in another app and can be retrieved. I am sure there are solutions used by devs, as they do tonnes of coding. Please share your thoughts.
Also please feel free to give any advice regarding right category, title, tags etc. so that this thread is organized and easily searchable.
I am searching for a solution too. Some professionals are needed to check on these:
1. https://github.com/jurihock/voicesmith - aims to step in between the mic and the audio feed for processing
2. Can the `system/etc/mixer_paths.xml` file be hacked (it may differ on some devices)?
3. This is just for more understanding: https://developer.android.com/guide/topics/media/sharing-audio-input
Did you figure it out? Are there not any Chinese or Russian apps that will bypass this restriction?
I am wondering if there is an app that can intercept the audio focus requests from multiple apps, keep the audio focus to the Android OS for itself, and merge the streams coming from those apps.
A specific use case is Pandora and Pokemon Go. I want to listen to Pandora while still getting the sound effects from Pokemon while I am out walking/running/biking. I can then stop and hit the Pokestop or catch the Pokemon and continue on my way. By default, whichever app was opened most recently is the one that holds audio focus, so only sounds from one app can be heard.
What I envision is an app that would require root, would request and receive audio focus from the OS, and then, when any other media source sends an audio focus request, would intercept that request, spoof a grant back to the media source so it believes it now has focus, and add that source's stream to the mixed stream being sent to the OS. This would also allow the app to control the volume of individual media sources before mixing, so one or more sources could be louder or quieter than the others.
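The mixing half of this is straightforward in principle: apply each source's gain, sum the 16-bit PCM samples, and clamp to the 16-bit range so loud passages clip instead of wrapping around. A minimal sketch of that step in plain Java (mixStreams and the gain values are my own illustration, not an existing API; the hard part, intercepting focus requests system-wide, would still need a root or Xposed-style hook that this sketch does not attempt):

```java
public class StreamMixer {
    // Mix two 16-bit PCM buffers with per-source gains, clamping to
    // the 16-bit range so overloads clip rather than wrap around.
    static short[] mixStreams(short[] a, double gainA, short[] b, double gainB) {
        int n = Math.min(a.length, b.length);
        short[] out = new short[n];
        for (int i = 0; i < n; i++) {
            long sum = Math.round(a[i] * gainA + b[i] * gainB);
            if (sum > Short.MAX_VALUE) sum = Short.MAX_VALUE;
            if (sum < Short.MIN_VALUE) sum = Short.MIN_VALUE;
            out[i] = (short) sum;
        }
        return out;
    }

    public static void main(String[] args) {
        short[] music = { 10000, -10000, 30000 };
        short[] game  = {  5000,   5000, 30000 };
        // Duck the music to 50% while the game effect plays at full volume.
        short[] mixed = mixStreams(music, 0.5, game, 1.0);
        System.out.println(java.util.Arrays.toString(mixed));
        // Expected: [10000, 0, 32767] - the last sample clamps at 16-bit max.
    }
}
```

The per-source gain parameter is exactly the "duck one stream under another" knob described above; a real implementation would also need to resample and align buffers from sources running at different rates.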
I have looked around but can't find anything that offers this. Most sound mixers (including Viper4Android) are just equalizers; they don't mix streams. Some offer the option to change how notifications interrupt media when a notification comes in or a phone call is ringing.
I do know that the RunKeeper app has a feature called "Audio Ducking" which lowers the volume of the other media source (like Pandora) while it announces the mileage/time/speed of your activity. This is similar to, but not exactly, what I am looking for, and it is not universal to all apps regardless of source.
Thoughts?
By the way, I am currently using Android N Developer Preview 5 on a Nexus 6. I assume that will matter when determining how to intercept the requests for audio focus.
Regards,
Russ