Android wear detect sound off switch - wear-os

Is there any way to detect whether the system sounds are muted via the sound switch that can be pulled down from the top, other than checking whether all sound volumes are zero?
From my research, the switch just sets all sound volumes to zero, and "unmuting" restores the volumes from before the mute. This is not a very reliable way of detecting mute: if some app changes a system sound volume, the watch appears "unmuted" again, and what is even funnier, if an app sets the volume to zero while the watch is muted (volume already zero), unmuting leaves it at zero!
I need a reliable way of detecting this. Or is the heuristic of checking whether all sound volumes are zero the best we have?
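For what it's worth, on API 23+ the `AudioManager.isStreamMute(int)` call and `getRingerMode() == RINGER_MODE_SILENT` are more direct checks than comparing volumes. The all-volumes-zero heuristic itself is trivial to express; here is a minimal sketch of just that decision logic, with the volume values assumed to come from `AudioManager.getStreamVolume(...)` on a real device (the class and method names here are hypothetical):

```java
// Sketch of the "all volumes are zero" mute heuristic from the question.
// On a device, each entry would be AudioManager.getStreamVolume(streamType)
// for the streams you care about (MUSIC, NOTIFICATION, RING, ...).
public class MuteHeuristic {

    /** Returns true if every queried stream volume is zero. */
    public static boolean looksMuted(int[] streamVolumes) {
        for (int v : streamVolumes) {
            if (v > 0) {
                return false; // at least one stream is audible
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(looksMuted(new int[] {0, 0, 0}));
        System.out.println(looksMuted(new int[] {0, 4, 0}));
    }
}
```

As the question points out, this heuristic gives a false "unmuted" as soon as any app touches a volume, so treat it as a fallback, not an answer.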

Related

How to do Volume Boost in Windows?

I am trying to find some information on how it's possible to boost the system volume above 100%. I know there are applications that can do it, but I'm not able to work out how they do it. How can I do it programmatically, and which DLLs or system calls are needed?
There is, of course, an upper limit on the system volume. Software that keeps amplifying past that maximum does it at decode time, by scaling the samples itself; if you don't have your own decoder, you can't do that.
There are really two ways to go about it:
Directly amplify the waveform ----- the effect of this is limited
Write a sound card driver and use it to modify the sound card's output power directly.
Software never escapes the hardware's limits. Pushing the volume past them only makes the sound quality worse.
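The first option, directly amplifying the waveform, is just multiplying each PCM sample by a gain factor and clamping it to the sample range; the clamping is exactly where the distortion mentioned above comes from. A minimal sketch for 16-bit PCM (the gain value and names are illustrative, not from any particular API):

```java
public class Amplify {

    /** Scales 16-bit PCM samples by `gain`, clamping to the valid range. */
    public static short[] amplify(short[] samples, double gain) {
        short[] out = new short[samples.length];
        for (int i = 0; i < samples.length; i++) {
            double v = samples[i] * gain;
            // Hard clipping: anything past full scale is flattened,
            // which is the audible degradation the answer warns about.
            if (v > Short.MAX_VALUE) v = Short.MAX_VALUE;
            if (v < Short.MIN_VALUE) v = Short.MIN_VALUE;
            out[i] = (short) v;
        }
        return out;
    }
}
```

Loud material clips almost immediately at gains above 1.0, which is why this approach only "works" on quiet sources.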

Synchronizing with monitor refreshes without vsync

What is the preferred way of synchronizing with monitor refreshes, when vsync is not an option? We enable vsync, however, some users disable it in driver settings, and those override app preferences. We need reliable predictable frame lengths to simulate the world correctly, do some visual effects, and synchronize audio (more precisely, we need to estimate how long a frame is going to be on screen, and when it will be on screen).
Is there any way to force drivers to enable vsync despite what the user set in the driver? Or to ask Windows when a monitor refresh is going to happen? We have issues with manual sleeping when our frame boundaries line up closely with vblank. It causes occasional missed frames, and up to 1 extra frame of input latency.
We mainly use OpenGL, but Direct3D advice is also appreciated.
You should not build your application's timing on the basis of vsync and exact timings of frame presentation. Games don't do that these days and have not done so for quite some time. This is what allows them to keep a consistent speed even if they start dropping frames: their timing, physics computations, AI, etc. aren't based on when a frame gets displayed, but on actual elapsed time.
Game frame timings are typically sufficiently small (less than 50ms) that human beings cannot detect any audio/video synchronization issues. So if you want to display an image that should have a sound played alongside it, as long as the sound starts within about 30ms or so of the image, you're fine.
Oh and don't bother trying to switch to Vulkan/D3D12 to resolve this problem. They don't. Vulkan in particular decouples presentation from other tasks, making it basically impossible to know the exact time when an image starts appearing on the screen. You give Vulkan an image, and it presents it... at whatever is the next most opportune moment. You get some control over how that moment gets chosen, but even those choices can be restricted based on factors outside of your control.
Design your program to avoid the need for rigid vsync. Use internal timings instead.
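The internal-timing approach recommended above is usually a fixed-timestep loop: accumulate real elapsed time each frame, then advance the simulation in constant-size steps, independent of when frames actually hit the screen. A minimal sketch of the accumulator logic (the step size and names are illustrative):

```java
public class FixedTimestep {
    // Simulate at a fixed 60 Hz regardless of render/present rate.
    public static final double STEP = 1.0 / 60.0;

    public double accumulator = 0.0;

    /** Feed in real elapsed seconds; runs as many fixed steps as fit. */
    public int advance(double frameSeconds) {
        accumulator += frameSeconds;
        int steps = 0;
        while (accumulator >= STEP) {
            accumulator -= STEP;
            // Update physics/AI here, always by exactly STEP seconds.
            steps++;
        }
        return steps;
    }
}
```

A long frame simply runs more simulation steps; a short frame may run none and carry the remainder over, so the world advances at the same rate whether or not vsync is in effect. The leftover fraction in the accumulator is also what you would use to interpolate rendering between simulation states.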

Solving latency between AVPlayer and NSSound

I am getting a latency between AVFoundation playback and a simple [NSSound play] that seems to depend on the computer.
My program plays one video track and 3 audio tracks arranged inside an AVPlayer. This works nicely. Independently, the program generates a metronome click for each beat of the measure, following information from the music score. The two metronome sounds are very short files that I load into an NSSound and play with [NSSound play].
I noticed that I had to shift the metronome playback by about 90 milliseconds for it to be perfectly synchronized. Part of it may be where exactly the impact of the metronome sits within the metronome file, but if that were the only reason, the delay would be the same on all Mac computers. However, on different Macs this delay must be adjusted. As it is a metronome beat synchronized with the music, this is quite critical: a slight shift makes it sound off the beat.
Is there any way to calculate this delay directly from the AVFoundation API? Or to compensate for it, or to play the metronome in another way so that there is no delay between the AVPlayer and the NSSound play? I would appreciate any link or idea about this.
Thanks !
Dominique
Arpege Music, Belgium
I suggest looking into a low-level audio library to manage and instantly play your sounds. BASS is a low-level library built upon Audio Units which allows refined, precise and fast control over your stream. By manipulating your buffer, and possibly creating a mixer stream (refer to the docs), you should be able to play the sound instantly on any device. Specifically, look into buffering the sound beforehand and keeping it in memory, since it's a short sound.

Play background sound for a fixed custom interval?

I am developing a WP7 game which should notify the user with a sound once every minute as long as the game is running.
I would like this to happen even when the app is closed. I have tried some different schedulers (e.g. PeriodicTask) but they are all very restricted and do not allow such a small interval - the minimum is often 30 min.
I have tried the AudioPlayerAgent as well, which allows background audio playback, but I don't know how to make an AudioPlayerAgent play the sound once every minute.
Is it possible for me to have a background thread which keeps track of the game and plays a sound every minute?
You could play a track via the background audio agent that has a minute (or so) of silence between each sound and then loop it.
It seems a hacky way to go, and I'm not 100% certain of your exact requirements or why you really want to do this anyway.

Why does the game I ported to Mac destroy all sound on Mac until reboot?

The setup
The game in question is using CoreAudio and single AudioGraph to play sounds.
The graph looks like this:
input callbacks -> 3DMixer -> DefaultOutputDevice
3DMixer's BusCount is set to 50 sounds max.
All samples are converted to default output device's stream format before being fed to input callbacks. Unused callbacks aren't set (NULL). Most sounds are 3D, so azimuth, pan, distance and gain are usually set for each mixer input, not left alone. They're checked to make sure only valid values are set. Mixer input's playback rate is also sometimes modified slightly to simulate pitch, but for most sounds it's kept at default setting.
The problem
Let's say I run the game and start a level populated with many sounds, lot of action.
I'm running the HALLab -> IO Cycle Telemetry window to see how much time it takes to process each audio cycle. It never takes more than 4ms of the over 10ms available in each cycle, and I can't spot a single peak that would push it over the allotted time.
At some point when playing the game, when many sounds are playing at the same time (less than 50, but not less than 20), I hear a poof, and from then on only silence. No Mac sounds can be generated from any application on Mac. The IO Telemetry window shows my audio ticks still running, still taking time, still providing samples to output device.
This state persists even when fewer, and then no, sounds are playing in my game.
Even if I quit the game entirely, Mac sounds generated by other applications don't come back.
Putting Mac to sleep and waking it up doesn't help anything either.
Only rebooting it fully results in sounds coming back. After it's back, first few sounds have crackling in them.
What can I do to avoid the problem? The game is big, complicated, and I can't modify what's playing - but it doesn't seem to overload the IO thread, so I'm not sure if I should. The problem can't be caused by any specific sound data, because all sound samples are played many times before the problem occurs. I'd think any sound going to physical speakers would be screened to avoid overloading them physically, and the sound doesn't have to be loud at all to cause the bug.
I'm out of ideas.