I read the instructions here about the WP7 background audio player agent. I thought there is only one Unknown event and one Playing event on the agent side.
However, when I log the event in OnPlayStateChanged of the agent, using
System.Diagnostics.Debug.WriteLine(player.PlayerState.ToString());
I receive two Unknown and three Playing events when a new track is played.
It's weird. Why is that?
P.S.: I'm using the code sample from MSDN's "How to: Play Background Audio for Windows Phone".
Thanks to Peter Torr, I found the reason:
Due to the asynchronous nature of media playback, you should use the
arguments to the OnPlayStateChanged callback to drive your logic. You
shouldn’t need to query the player (that is mostly for the foreground
app to display UI).
We have developed geofencing to capture entry and exit times at specific locations in the background. On some phones, entry and exit are captured with a delay of 2-4 minutes.
We would appreciate your help in fixing this.
I have used a BroadcastReceiver to receive the geofence alerts, and I register the receiver in the Android manifest file. But the Android documentation says that alerts can be delayed on Android O and above because of the Background Location Limits.
Questions:
If I register the receiver dynamically using mContext.registerReceiver(), will the delay in receiving geofence alerts go away?
If we register the receiver in the Android manifest or programmatically (mContext.registerReceiver()), will there be a delay in receiving geofence alerts?
My application needs geofence alerts immediately, without any delay, whether the application is in the foreground or the background. What should I do to achieve that?
It is mainly not working on Samsung and Moto phones.
Recently, I wanted to get my hands dirty with Core Audio, so I started working on a simple desktop app that applies effects (e.g. echo) to the microphone data in real time, so that the processed data can then be used in communication apps (e.g. Skype, Zoom, etc.).
To do that, I figured that I have to create a virtual microphone in order to send the processed (with the applied effects) data to communication apps. For example, the user would need to select this new (virtual) microphone device as the Input Device in a Zoom call so that the other users in the call can hear her with her voice processed.
My main concern is that I need to find a way to "route" the voice data captured from the physical microphone (e.g. the built-in mic) to the virtual microphone. I've spent some time reading the book "Learning Core Audio" by Adamson and Avila, and in Chapter 8 the author explains how to write an app that a) uses an AUHAL in order to capture data from the system's default input device and b) then sends the data to the system's default output using an AUGraph. So, following this example, I figured that I also need to create an app that captures the microphone data only while it's running.
So, what I've done so far:
I've created the virtual microphone, for which I followed the NullAudio driver example from Apple.
I've created the app that captures the microphone data.
For both of the above "modules" I'm certain that they work as expected independently, since I've tested them in various ways. The only missing piece now is how to "connect" the physical mic with the virtual mic: I need to connect the output of the physical microphone to the input of the virtual microphone.
So, my questions are:
Is this something trivial that can be achieved using the AUGraph approach, as described in the book? Should I just find the correct way to configure the graph in order to achieve this connection between the two devices?
The only related thread I found is this, where the author states that the routing is done by
sending this audio data to the driver via a socket connection, so other apps that request audio from our virtual mic in fact get this audio from a user-space application that listens to the mic at the same time (so it should be active)
but I'm not quite sure how to even start implementing something like that.
The whole process I went through for capturing data from the microphone seems quite long, and I was wondering if there's a more optimal way to do it. The book seems to be from 2012, with some corrections made in 2014. Has Core Audio changed dramatically since then, so that this process can be achieved more easily with just a few lines of code?
I think you'll get more results by searching for the term "play through" instead of "routing".
The Adamson/Avila book has an ideal play through example that, unfortunately for you, only works when both input and output are handled by the same device (e.g. the built-in hardware on most Mac laptops and iPhone/iPad devices).
Note that there is another audio device concept called "playthru" (see kAudioDevicePropertyPlayThru and related properties) which seems to be a form of routing internal to a single device. I wish it were a property that let you set a forwarding device, but alas, no.
Some informal doco on this: https://lists.apple.com/archives/coreaudio-api/2005/Aug/msg00250.html
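If you want to poke at it anyway, toggling that property is an ordinary HAL property call. Here's a rough, untested C++ sketch; treat the scope and element constants as assumptions to verify against AudioHardware.h for your device:

    #include <CoreAudio/CoreAudio.h>

    // Untested sketch: turn hardware playthrough on or off for a device that supports it.
    static OSStatus SetPlayThru(AudioObjectID deviceID, bool enable) {
        UInt32 value = enable ? 1 : 0;
        AudioObjectPropertyAddress addr = {
            kAudioDevicePropertyPlayThru,
            kAudioObjectPropertyScopePlayThrough,  // may need a different scope on some devices
            kAudioObjectPropertyElementMaster
        };
        if (!AudioObjectHasProperty(deviceID, &addr))
            return kAudioHardwareUnknownPropertyError;  // device has no playthrough support
        return AudioObjectSetPropertyData(deviceID, &addr, 0, NULL, sizeof(value), &value);
    }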
I've never tried it, but you should be able to connect input to output on an AUGraph like this. AUGraph is, however, deprecated in favour of AVAudioEngine, which, last time I checked, did not handle non-default input/output devices well.
I instead manually copy buffers from the input device to the output device via a ring buffer (TPCircularBuffer works well). The devil is in the detail, and much of the work is deciding on what properties you want and their consequences. Some common and conflicting example properties:
minimal lag
minimal dropouts
no time distortion
In my case, if output is lagging too much behind input, I brutally dump everything bar 1 or 2 buffers. There is some dated Apple sample code called CAPlayThrough which elegantly speeds up the output stream. You should definitely check this out.
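To make the ring-buffer idea a bit more concrete, here's a stripped-down C++ sketch. It assumes two AUHAL units (one attached to the physical mic, one to the output/virtual device) have already been configured the way the book's Chapter 8 and the CAPlayThrough sample do it; the RingBuffer type is a naive stand-in for TPCircularBuffer, and all the names are mine, not from any Apple API:

    // Sketch only: AUHAL setup omitted. Mono float samples, no proper
    // underrun/overrun handling; don't use this ring buffer in production.
    #include <AudioToolbox/AudioToolbox.h>
    #include <atomic>
    #include <vector>

    struct RingBuffer {
        std::vector<float> data;
        std::atomic<size_t> head{0};   // write position (advanced by the input callback)
        std::atomic<size_t> tail{0};   // read position (advanced by the output callback)

        explicit RingBuffer(size_t capacity) : data(capacity) {}

        void write(const float *src, size_t n) {
            for (size_t i = 0; i < n; ++i)
                data[(head + i) % data.size()] = src[i];
            head = (head + n) % data.size();
        }
        void read(float *dst, size_t n) {
            for (size_t i = 0; i < n; ++i)
                dst[i] = data[(tail + i) % data.size()];
            tail = (tail + n) % data.size();
        }
    };

    struct PlayThroughState {
        AudioUnit inputUnit = nullptr;           // AUHAL attached to the physical microphone
        AudioBufferList *captureList = nullptr;  // pre-allocated to match the input format
        RingBuffer ring{16384};
    };

    // Installed on the *input* AUHAL via kAudioOutputUnitProperty_SetInputCallback.
    static OSStatus InputCallback(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber,
                                  UInt32 inNumberFrames, AudioBufferList * /* ioData is NULL here */) {
        auto *st = static_cast<PlayThroughState *>(inRefCon);
        // Pull the freshly captured frames out of the AUHAL (bus 1 is the input bus).
        OSStatus err = AudioUnitRender(st->inputUnit, ioActionFlags, inTimeStamp,
                                       inBusNumber, inNumberFrames, st->captureList);
        if (err != noErr) return err;
        st->ring.write(static_cast<const float *>(st->captureList->mBuffers[0].mData), inNumberFrames);
        return noErr;
    }

    // Installed on the *output* AUHAL via kAudioUnitProperty_SetRenderCallback.
    static OSStatus OutputCallback(void *inRefCon, AudioUnitRenderActionFlags *,
                                   const AudioTimeStamp *, UInt32,
                                   UInt32 inNumberFrames, AudioBufferList *ioData) {
        auto *st = static_cast<PlayThroughState *>(inRefCon);
        // Hand whatever is buffered to the output device; a real implementation has to
        // detect underruns, and drop data when the output lags too far behind the input.
        st->ring.read(static_cast<float *>(ioData->mBuffers[0].mData), inNumberFrames);
        return noErr;
    }

A real implementation also has to cope with the two devices running off slightly different clocks, which is a large part of what CAPlayThrough's code deals with.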
And if you find a simpler way, please tell me!
Update
I found a simpler way:
create an AVCaptureSession that captures from your mic
add an AVCaptureAudioPreviewOutput that references your virtual device
When routing from microphone to headphones, it sounded like it had a few hundred milliseconds' lag, but if AVCaptureAudioPreviewOutput and your virtual device handle timestamps properly, that lag may not matter.
Situation: in the Windows Control Panel, you can open the Sound applet and switch to the Communications tab. There, you can configure by what percentage the OS should reduce all other sounds when an incoming VoIP call is ringing (so you don't miss the call).
Question: is there any API that allows a developer to subscribe to and react to such events too? (Say, auto-pause your game, set an automatic "do not disturb" status in your messenger app for the duration of the call, or any other smart thing you can do for a better user experience.)
Note: I'm looking for an OS-wide API, not an "SDK for VoIP app X only".
It turns out that the Microsoft term for this is Custom Ducking Behavior. The seemingly-odd name is explained by the Wikipedia page on ducking:
Ducking is an audio effect commonly used in radio and pop music,
especially dance music. In ducking, the level of one audio signal is
reduced by the presence of another signal. In radio this can typically
be achieved by lowering (ducking) the volume of a secondary audio
track when the primary track starts, and lifting the volume again when
the primary track is finished. A typical use of this effect in a daily
radio production routine is for creating a voice-over: a foreign
language original sound is dubbed (and ducked) by a professional
speaker reading the translation. Ducking becomes active as soon as the
translation starts.
According to MSDN, the APIs you need to implement custom ducking behavior are COM-based. In summary:
MMDevice API for multimedia device enumeration and selection.
WASAPI for accessing the communications capture and render device, stream management operations, and handling ducking events.
WAVE APIs for accessing the communications device and capturing audio input.
Code samples to implement the functionality you want are available at the respective MSDN pages.
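To make that more concrete, here's a rough C++ sketch of the notification part: implementing IAudioVolumeDuckNotification and registering it with the default render device's IAudioSessionManager2. Error handling is stripped, the class name is mine, and you should verify the registration details (in particular the NULL session ID) against the MSDN "Custom Ducking Behavior" pages:

    // Sketch: subscribe to ducking events for the default render device.
    #include <windows.h>
    #include <mmdeviceapi.h>
    #include <audiopolicy.h>
    #include <atomic>
    #include <cstdio>

    // Minimal COM implementation of the ducking callback interface.
    class MyDuckListener : public IAudioVolumeDuckNotification {
        std::atomic<ULONG> refs{1};
    public:
        // Called when the OS starts ducking (e.g. a VoIP call starts ringing).
        HRESULT STDMETHODCALLTYPE OnVolumeDuckNotification(LPCWSTR sessionID, UINT32 countCommunicationSessions) override {
            wprintf(L"Ducking started (%u communication session(s)) - pause the game here\n", countCommunicationSessions);
            return S_OK;
        }
        // Called when the communication stream ends and volumes are restored.
        HRESULT STDMETHODCALLTYPE OnVolumeUnduckNotification(LPCWSTR sessionID) override {
            wprintf(L"Ducking ended - resume\n");
            return S_OK;
        }
        // Boilerplate IUnknown.
        HRESULT STDMETHODCALLTYPE QueryInterface(REFIID riid, void **ppv) override {
            if (riid == __uuidof(IUnknown) || riid == __uuidof(IAudioVolumeDuckNotification)) {
                *ppv = this; AddRef(); return S_OK;
            }
            *ppv = nullptr; return E_NOINTERFACE;
        }
        ULONG STDMETHODCALLTYPE AddRef() override  { return ++refs; }
        ULONG STDMETHODCALLTYPE Release() override { ULONG r = --refs; if (!r) delete this; return r; }
    };

    int main() {
        CoInitializeEx(nullptr, COINIT_MULTITHREADED);

        IMMDeviceEnumerator *enumerator = nullptr;
        CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                         __uuidof(IMMDeviceEnumerator), reinterpret_cast<void **>(&enumerator));

        // The render device whose sessions get ducked (the default playback device).
        IMMDevice *device = nullptr;
        enumerator->GetDefaultAudioEndpoint(eRender, eConsole, &device);

        IAudioSessionManager2 *sessionManager = nullptr;
        device->Activate(__uuidof(IAudioSessionManager2), CLSCTX_ALL, nullptr,
                         reinterpret_cast<void **>(&sessionManager));

        MyDuckListener *listener = new MyDuckListener();
        // Passing nullptr for the session ID requests notifications for all sessions
        // (check the RegisterDuckNotification documentation for the exact semantics).
        sessionManager->RegisterDuckNotification(nullptr, listener);

        wprintf(L"Listening for ducking events, press Enter to quit...\n");
        getwchar();

        sessionManager->UnregisterDuckNotification(listener);
        listener->Release();
        sessionManager->Release();
        device->Release();
        enumerator->Release();
        CoUninitialize();
        return 0;
    }

In OnVolumeDuckNotification you'd do whatever the question suggests (pause the game, set the messenger status, and so on), and undo it in OnVolumeUnduckNotification.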
We are experimenting with FreeSWITCH recording for outbound dialling. We are currently stuck on two issues after reading many posts and issue lists:
Not able to play the beep tone to the receiver:
If I use , only the beep tone is heard by the receiving party.
If I use , the beep tone is not heard by the receiving party, but it is present in the recording.
In the case of a call transfer, two recording files are generated. Is there any way to ask FreeSWITCH to merge them into one file?
Would RECORD_APPEND or "Post Processing Recordings in the Dialplan" be an option to try out?
I'm using a BackgroundAudioPlayer agent in my Windows Phone 7 application. When the track ends, the agent side receives the TrackEnded event, but the UI side does not receive any events.
Also, when I intentionally set the audio track's position to its end and then call Play(), the agent side receives the TrackEnded event (because the track has come to an end), but the UI side does receive Stopped in its PlayStateChanged handler. So weird!
How do I let the UI know that a track has come to an end? Communicating through isolated storage is not my favorite!
From research and a little testing, using Isolated Storage as a middle-man between Background and Foreground instances of the BackgroundAudioPlayer is still the only route for Windows Phone 7. The options are mentioned here (which I know you're aware of)...
http://blogs.msdn.com/b/wpukcoe/archive/2012/02/10/background-audio-in-windows-phone-7-5-part-2.aspx
http://msdn.microsoft.com/en-us/library/windowsphone/develop/hh202944(v=vs.105).aspx
https://stackoverflow.com/a/11419680/247257
This was also confirmed by Peter Torr who said:
For example, the agent may need to tell the foreground “I just started pre-downloading the next track,” or “I updated a database table and you should refresh your state”. Such notifications are impossible to create with Windows Phone OS 7.1; at best you can model them by using polling techniques, but this approach is inefficient and prone to errors.
The only good news is that in the same post, he gives a solution (using named events for IPC) for Windows Phone 8 which is a lot more reliable...
http://blogs.windows.com/windows_phone/b/wpdev/archive/2013/03/27/using-named-events-to-coordinate-foreground-apps-and-background-agents.aspx