Play background sound for a fixed custom interval? - windows-phone-7

I am developing a WP7 game which should notify the user with a sound once every minute as long as the game is running.
I would like this to happen even when the app is closed. I have tried some different schedulers (e.g. PeriodicTask) but they are all very restricted and do not allow such a small interval - the minimum is often 30 min.
I have tried the AudioPlayerAgent as well, which allows background audio playback, but I don't know how to make an AudioPlayerAgent play the sound once every minute.
Is it possible for me to have a background thread which keeps track of the game and plays a sound every minute?

You could play a track via the background audio agent that has a minute (or so) of silence between each sound and then loop it.
It seems a hacky way to go, and I'm not 100% certain of your exact requirements or why you would want to do this anyway.

Related

How to synchronize HLS and/or MPEG-DASH videos on multiple clients using ExoPlayer?

I'm trying to guarantee synchronization between multiple clients using DASH and/or HLS. Synchronization between clients must fall within 40 milliseconds.
Live streaming seems the obvious choice. However, the only way to really get within a small synchronization window would be to lower the segment durations. Is this the only viable solution? Are there any tags that would help me keep clients within 40 milliseconds of the live time?
Currently, I'm using FFMPEG to encode video and audio to live content.
There are a couple of separate issues here:
'Live time' - assuming this is the real time at which the broadcast event actually happens, for example the actual moment a football is kicked in a game, then achieving full end-to-end delivery to an end screen within 40 milliseconds is pushing the boundaries of any possible delivery technology. Certainly HLS and DASH streams won't give you that.
Your target may instead be that each end user is no more than 40ms apart from every other end user - e.g. every user receives the broadcast with a 10 second delay, but that delay is the same, plus or minus 40ms, for each user. This is still quite a tricky problem: unless all the devices are synced to a common clock, you will be relying on some mechanism to signal each device's position in the stream to a central or distributed control mechanism, and 40ms is not a lot of time to allow for even small messages to travel back and forth, along with any processing required to calculate the time difference and adjust.
Synchronising internet-delivered media streams is not an easy problem, but there is at least some existing work you can look at for ideas - see here for some examples: https://stackoverflow.com/a/51819066/334402
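The position-signalling idea above reduces to simple arithmetic once the devices share a synced wall clock (e.g. via NTP): each client's drift from a reference client is the difference of (playback position minus wall time), and a small playback-rate nudge closes the gap gradually instead of producing an audible seek. A sketch - the struct, function names and the `horizon_ms` parameter are illustrative, not part of ExoPlayer or any player API:

```c
#include <assert.h>

/* Each client reports (wall_clock_ms, playback_position_ms) to a controller.
   With synced clocks, drift between two clients is the difference of their
   (position - wall time) values: positive means the client is ahead. */
typedef struct { double wall_ms, pos_ms; } report_t;

double drift_ms(report_t ref, report_t client) {
    return (client.pos_ms - client.wall_ms) - (ref.pos_ms - ref.wall_ms);
}

/* Close the drift over `horizon_ms` of playback by slightly changing the
   playback rate, e.g. +40ms of drift over a 4s horizon -> play at 0.99x. */
double rate_adjust(double drift, double horizon_ms) {
    return 1.0 - drift / horizon_ms;
}
```

In ExoPlayer terms, the resulting factor would be applied via the player's playback-speed control; the hard part remains keeping the clocks and the position reports accurate enough that the residual error stays inside the 40ms budget.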

Is it possible to delay OSX's system sound?

Let's say, for instance, that you are watching a stream whose sound is two seconds too early. No matter how many times you refresh, you still hear the sound 2 seconds before the image appears.
Would it be possible to programmatically delay the whole system's sound to match the image? After searching for a while I did not find anything on the topic, at least not for free.

Play sound at an exact moment

I want to play a sound sample (about 2 minute long) and record keystrokes while the sample is playing. I need to know the exact time a key was pressed relative to the play start time, with a resolution of 1 millisecond or less.
First I tried NAudio and a Winforms application. I found it quite impossible to synchronize the playback start time and the keystroke time. I know when I start transferring the sample bytes to NAudio, but not the time it takes between passing the sample and the actual playback. I also know the time my application receives the KeyDown event, but not the time it's actually taken for the event to go all the way from the keyboard hardware to my C# event handler.
This is more or less accurate - I get a 270ms (±5ms) difference between the delay reported by the application and the actual delay. (I know the actual delay by recording the session and examining the sample file; the recording is done on a different device, not the computer running the application, of course.)
This isn't good enough because of the ±5ms, which happens even with generation 2 GC disabled during playback. I need to be more accurate than this.
I don't mind switching to C++ and unmanaged code, but I'd like to know which sound playback API to use, and whether there are more accurate ways to get the keyboard input than waiting for the WM_KEYDOWN message.
With NAudio, all output devices that implement the IWavePosition interface can accurately report exactly where the playback position is. It can take a bit of trial and error to learn how to convert the position into time, but this is your best approach to solving this problem.
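The conversion itself is just dividing the byte position by the output format's average bytes per second. A sketch of the arithmetic (in C rather than NAudio's C#; the helper name is illustrative):

```c
#include <assert.h>

/* IWavePosition.GetPosition() reports a byte count in the output format.
   Seconds elapsed = bytes / (sample rate * channels * bytes per sample). */
double position_to_seconds(long long position_bytes, int sample_rate,
                           int channels, int bits_per_sample) {
    double bytes_per_second =
        (double)sample_rate * channels * (bits_per_sample / 8);
    return position_bytes / bytes_per_second;
}
```

For 44.1 kHz stereo 16-bit output that divisor is 176,400 bytes/second; sampling the position in a tight loop around the keystroke timestamp is what gives the sub-millisecond correlation the question asks for.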

Solving latency between AVPlayer and NSSound

I am getting a latency between AVFoundation and a simple [NSSound play] that seems to depend on the computer.
My program plays one video track and 3 audio tracks arranged inside an AVPlayer. This works nicely. Independently, the program generates a metronome click for each beat of the measure, following information from the music score. The two metronome sounds are very short files that I load into an NSSound and play with [NSSound play].
I noticed that I had to shift the metronome playback by about 90 milliseconds for it to be perfectly synchronized. Part of that may be where the impact of the click sits within the metronome file, but if that were the only reason, the delay would be the same on all Mac computers; in practice it must be adjusted per machine. As the metronome beat is synchronized with the music, this is quite critical - a slight shift makes it sound off beat.
Is there any way to calculate this delay directly from the AVFoundation API? Or to compensate for it, or to play the metronome in another way so that there is no delay between the AVPlayer and [NSSound play]? I would appreciate any link or idea about this.
Thanks!
Dominique
Arpege Music, Belgium
I suggest looking into a low-level audio library to manage and instantly play your sounds. BASS is a low-level library built upon Audio Units which allows refined, precise and fast control over your stream. By manipulating your buffer, and possibly creating a mixer stream (refer to the docs), you should be able to play the sound instantly on any device. Specifically, look into buffering the sound beforehand and keeping it in memory, since it's a short sound.

Why does the game I ported to Mac destroy all sound on Mac until reboot?

The setup
The game in question is using CoreAudio and single AudioGraph to play sounds.
The graph looks like this:
input callbacks -> 3DMixer -> DefaultOutputDevice
3DMixer's BusCount is set to 50 sounds max.
All samples are converted to default output device's stream format before being fed to input callbacks. Unused callbacks aren't set (NULL). Most sounds are 3D, so azimuth, pan, distance and gain are usually set for each mixer input, not left alone. They're checked to make sure only valid values are set. Mixer input's playback rate is also sometimes modified slightly to simulate pitch, but for most sounds it's kept at default setting.
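The value checks described above could also be extended to the sample data itself. A sketch of a defensive per-buffer check (an illustrative addition assuming Float32 samples, not something the setup above necessarily does - invalid float samples reaching the output device are simply one thing worth ruling out):

```c
#include <assert.h>
#include <math.h>

/* Defensively sanitise a Float32 buffer before it is handed to the mixer:
   replace NaN/Inf with silence and clamp to the nominal [-1, 1] range. */
void sanitize_buffer(float *samples, unsigned count) {
    for (unsigned i = 0; i < count; i++) {
        if (!isfinite(samples[i]))
            samples[i] = 0.0f;        /* NaN or Inf -> silence */
        else if (samples[i] > 1.0f)
            samples[i] = 1.0f;
        else if (samples[i] < -1.0f)
            samples[i] = -1.0f;
    }
}
```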
The problem
Let's say I run the game and start a level populated with many sounds, lot of action.
I'm running HALLab -> IO Cycle Telemetry window to see how much time each sound cycle takes to process - it never takes more than 4ms of the 10+ ms available in each cycle, and I can't spot a single peak that would push it over the allotted time.
At some point while playing the game, when many sounds are playing at once (fewer than 50, but no fewer than 20), I hear a pop, and from then on only silence. No sound can be produced by any application on the Mac. The IO Telemetry window shows my audio ticks still running, still taking time, still providing samples to the output device.
This state persists even when fewer, and then no, sounds are playing in my game.
Even if I quit the game entirely, Mac sounds generated by other applications don't come back.
Putting Mac to sleep and waking it up doesn't help anything either.
Only rebooting it fully results in sounds coming back. After it's back, first few sounds have crackling in them.
What can I do to avoid the problem? The game is big and complicated, and I can't modify what it plays - but since it doesn't seem to overload the IO thread, I'm not sure I should have to. The problem can't be caused by any specific sound asset, because every sample is played many times before the problem occurs. I'd have thought any sound headed for the physical speakers would be screened to avoid physically overloading them, and the sound doesn't have to be loud at all to trigger the bug.
I'm out of ideas.
