I'm writing an Objective-C iOS app which will react to specific sounds.
What is the minimum sound power that the iPhone (5,5c,5s) microphone must sense in order to do audio
acquisition?
There is no minimum sound power - you can initiate recording at any time. There are multiple ways to do this depending on exactly what you want to do, but the simplest is through the AV Foundation Framework. If you need more control you can use the Core Audio Framework, which provides a very powerful set of tools but is more complicated. See https://developer.apple.com/library/ios/documentation/Miscellaneous/Conceptual/iPhoneOSTechOverview/MediaLayer/MediaLayer.html#//apple_ref/doc/uid/TP40007898-CH9-SW2.
But maybe this isn't exactly what you're asking? If you're asking whether there is some way to have your app "come alive" and start recording when a certain dB level is reached, there is no way to do that: you need to keep the app running and recording, and monitor the sound level yourself.
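For the "monitor it yourself" part, here is a rough sketch of the idea using the Audio Queue C API from the Core Audio layer, which has built-in level metering. Error handling, AVAudioSession setup and microphone-permission handling are omitted, and the format, buffer size and 100 ms polling interval are arbitrary choices; AVAudioRecorder's metering methods in AVFoundation will give you the same numbers with less code.

    #include <AudioToolbox/AudioToolbox.h>
    #include <cstdio>
    #include <unistd.h>

    // Input callback: we only care about the meters, so just hand the buffer back to the queue.
    static void InputCallback(void *inUserData, AudioQueueRef inQueue, AudioQueueBufferRef inBuffer,
                              const AudioTimeStamp *, UInt32, const AudioStreamPacketDescription *) {
        AudioQueueEnqueueBuffer(inQueue, inBuffer, 0, NULL);
    }

    int main() {
        // 16-bit mono linear PCM at 44.1 kHz (an arbitrary but common recording format).
        AudioStreamBasicDescription fmt = {};
        fmt.mSampleRate       = 44100;
        fmt.mFormatID         = kAudioFormatLinearPCM;
        fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
        fmt.mChannelsPerFrame = 1;
        fmt.mBitsPerChannel   = 16;
        fmt.mBytesPerFrame    = 2;
        fmt.mBytesPerPacket   = 2;
        fmt.mFramesPerPacket  = 1;

        AudioQueueRef queue;
        // NULL run loop: callbacks arrive on an internal audio queue thread.
        AudioQueueNewInput(&fmt, InputCallback, NULL, NULL, NULL, 0, &queue);

        // Turn on the queue's built-in level metering.
        UInt32 meteringOn = 1;
        AudioQueueSetProperty(queue, kAudioQueueProperty_EnableLevelMetering,
                              &meteringOn, sizeof(meteringOn));

        // Enqueue a few buffers so the queue keeps capturing continuously.
        for (int i = 0; i < 3; ++i) {
            AudioQueueBufferRef buffer;
            AudioQueueAllocateBuffer(queue, 4096, &buffer);
            AudioQueueEnqueueBuffer(queue, buffer, 0, NULL);
        }
        AudioQueueStart(queue, NULL);

        // Poll the meter and react when it crosses whatever threshold you choose.
        for (;;) {
            AudioQueueLevelMeterState level;             // one element per channel; mono here
            UInt32 size = sizeof(level);
            AudioQueueGetProperty(queue, kAudioQueueProperty_CurrentLevelMeterDB, &level, &size);
            printf("average %.1f dB, peak %.1f dB\n", level.mAveragePower, level.mPeakPower);
            usleep(100 * 1000);                          // check roughly every 100 ms
        }
    }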
I need to check if an app is making a certain sound. This app only produces a single specific sound, so a solution that simply checks if there's any sound whatsoever from the app will also work.
I don't need to find out which app makes a sound or anything like that. I know the app that should produce a sound and I know what sound it's going to be, I simply need to detect the exact time this sound is played.
The only solution I know of is to listen to the audio output for the whole OS and then detect my specific sound with some audio recognition software, but that won't work properly if there's music or a movie playing in the background, so it's not an option.
I need a solution that does it via WinAPI methods. The language isn't very important here - I can use C#, JavaScript, Python or another language. I just need a general approach for extracting the sound produced by a specific application in Windows 7.
The general approach here is to trace the calls a given process makes to the OS to play audio. These calls are more commonly known as "system calls".
This will show only direct attempts by the process to produce sound.
The hardest part is identifying all of the system calls that play sound in Windows.
This question has some answers on how to trace system calls on Windows.
Have you looked at the SO answer on a similar topic, which has a bunch of useful .NET wrappers for IAudioSessionManager2 and the related APIs: Controlling Application's Volume: By Process-ID?
I think the general approach of
Finding IAudioSession by process name
Subscribing to its events via IAudioSessionEvents
Listening to the OnStateChanged event
should do it for you.
And don't forget that you need to pump Windows messages, which might require some explicit code in non-UI applications. In UI applications this is what Application.Run does internally anyway.
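To make those steps concrete, here is a rough C++ sketch against the raw Core Audio COM interfaces (the .NET wrappers in the linked answer expose essentially the same calls). Error handling and reference management are omitted, and the target process ID is a placeholder you would resolve yourself (for example from the process name).

    #include <windows.h>
    #include <mmdeviceapi.h>
    #include <audiopolicy.h>
    #include <cstdio>
    #pragma comment(lib, "ole32.lib")

    // Minimal IAudioSessionEvents implementation: we only care about OnStateChanged.
    class SessionEvents : public IAudioSessionEvents {
        LONG ref = 1;
    public:
        // IUnknown boilerplate.
        STDMETHODIMP QueryInterface(REFIID riid, void **ppv) {
            if (riid == __uuidof(IUnknown) || riid == __uuidof(IAudioSessionEvents)) {
                *ppv = this; AddRef(); return S_OK;
            }
            *ppv = nullptr; return E_NOINTERFACE;
        }
        STDMETHODIMP_(ULONG) AddRef()  { return InterlockedIncrement(&ref); }
        STDMETHODIMP_(ULONG) Release() { LONG r = InterlockedDecrement(&ref); if (!r) delete this; return r; }

        // Fires when the session becomes active (starts playing), inactive or expired.
        STDMETHODIMP OnStateChanged(AudioSessionState state) {
            printf("session state changed: %d (1 = active, i.e. the app is playing sound)\n", (int)state);
            return S_OK;
        }

        // The remaining notifications are not needed for this purpose.
        STDMETHODIMP OnDisplayNameChanged(LPCWSTR, LPCGUID)                { return S_OK; }
        STDMETHODIMP OnIconPathChanged(LPCWSTR, LPCGUID)                   { return S_OK; }
        STDMETHODIMP OnSimpleVolumeChanged(float, BOOL, LPCGUID)           { return S_OK; }
        STDMETHODIMP OnChannelVolumeChanged(DWORD, float*, DWORD, LPCGUID) { return S_OK; }
        STDMETHODIMP OnGroupingParamChanged(LPCGUID, LPCGUID)              { return S_OK; }
        STDMETHODIMP OnSessionDisconnected(AudioSessionDisconnectReason)   { return S_OK; }
    };

    int main() {
        const DWORD targetPid = 1234;   // placeholder: PID of the application you want to watch

        CoInitialize(nullptr);

        IMMDeviceEnumerator *devEnum = nullptr;
        CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                         __uuidof(IMMDeviceEnumerator), (void **)&devEnum);

        IMMDevice *device = nullptr;
        devEnum->GetDefaultAudioEndpoint(eRender, eMultimedia, &device);

        IAudioSessionManager2 *manager = nullptr;
        device->Activate(__uuidof(IAudioSessionManager2), CLSCTX_ALL, nullptr, (void **)&manager);

        IAudioSessionEnumerator *sessions = nullptr;
        manager->GetSessionEnumerator(&sessions);

        int count = 0;
        sessions->GetCount(&count);
        for (int i = 0; i < count; ++i) {
            IAudioSessionControl *control = nullptr;
            sessions->GetSession(i, &control);

            IAudioSessionControl2 *control2 = nullptr;
            control->QueryInterface(__uuidof(IAudioSessionControl2), (void **)&control2);

            DWORD pid = 0;
            control2->GetProcessId(&pid);
            if (pid == targetPid) {
                // Subscribe to state changes on the session belonging to our target process.
                control->RegisterAudioSessionNotification(new SessionEvents());
                // Keep 'control' alive for as long as you want notifications.
            }
            control2->Release();
        }

        // Pump Windows messages so the notifications can be delivered (see the note above).
        MSG msg;
        while (GetMessage(&msg, nullptr, 0, 0)) { TranslateMessage(&msg); DispatchMessage(&msg); }
        return 0;
    }

Also note that the target app's session may only appear after it plays something for the first time; IAudioSessionManager2 additionally lets you register an IAudioSessionNotification to catch sessions created after your listener starts.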
My question is pretty simple. As a somewhat new person to the whole coding world, I don't know a whole lot about computers, so I want to ask you guys if it's possible to keep a webpage exchanging data with me while keeping my computer's energy cost to a minimum.
The closest thing I can think of is sleep mode, but that doesn't seem to work for what I need to do. I need a way to keep my computer connected to video streams or YouTube videos (as if I were watching them) while I sleep or go to school.
The reason I need to do this might seem a bit weird, but it's basically for a website that gives you "coins" for every 780 seconds you watch a stream. Those coins can then be used to enter giveaways of gaming-related hardware and software; the more coins you bet on an item, the better your chances of winning it.
I use Windows 7.
Your requirement is unclear, but I suggest using a spare Android phone, or a Windows phone (it can be made to act like a small PC with software available online).
Other than that, I don't think there is any other option that keeps the website live and is simple for you to implement.
For a somewhat small (hopefully, at least) project, I am hoping to gain access to the audio currently being played through the "main line" (i.e. what is heard through the speakers). Specifically, I'd like to create a visual equalizer for the audio currently being played. I do not wish to capture or "tamper" with the audio in any way, just run a little analysis on it. That being said, I'd imagine access to such information is not handed out nicely in a high-level API.
I noticed a similar question concerned with looking at system sound. The accepted answer points to looking into Soundflower's source code. I am not completely averse to doing this, but I'd like to make sure there isn't a simpler way before I get into it (especially because I have no real audio programming experience at the system level).
Any input is very much appreciated,
--Sam
There is no simple way to do this on OS X. You really have to do this from a kext, unfortunately.
I'm developing a Windows application which needs to get the sound output level of the current audio device. I'm currently doing this using the Windows Core Audio APIs - specifically the EndpointVolume API (IAudioMeterInformation). The application checks the sound output level every 10 ms and runs its own logic according to the level.
The key point of the app is to manipulate the sound before it reaches the speakers (so by the time you hear it, it has already been processed). The current solution (using EndpointVolume) sort of does this, but it processes sound that was already played; I would like to process the sound just before it is played.
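For reference, the polling I have now looks roughly like this (error handling omitted; the 10 ms interval is simply what I chose):

    #include <windows.h>
    #include <mmdeviceapi.h>
    #include <endpointvolume.h>
    #pragma comment(lib, "ole32.lib")

    int main() {
        CoInitialize(nullptr);

        IMMDeviceEnumerator *devEnum = nullptr;
        CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                         __uuidof(IMMDeviceEnumerator), (void **)&devEnum);

        IMMDevice *device = nullptr;
        devEnum->GetDefaultAudioEndpoint(eRender, eMultimedia, &device);

        // EndpointVolume-level peak meter on the endpoint device.
        IAudioMeterInformation *meter = nullptr;
        device->Activate(__uuidof(IAudioMeterInformation), CLSCTX_ALL, nullptr, (void **)&meter);

        for (;;) {
            float peak = 0.0f;
            meter->GetPeakValue(&peak);   // peak sample value, 0.0 .. 1.0
            // ...my logic reacting to 'peak' goes here...
            Sleep(10);                    // the 10 ms polling interval mentioned above
        }
    }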
Would it be better to use the peak meter from the DeviceTopology API instead of the peak meter in the EndpointVolume API?
I am asking because the application needs to react to the sound output level as fast as possible, so that the manipulation won't be noticeable. My thinking was that if I used the DeviceTopology API (which sits before the endpoint device), the app would be more responsive and the manipulation less noticeable.
Is my assumption correct, or am I barking up the wrong tree?
As I understand it, Vista introduced a completely re-architected sound input/output system to the OS. In particular, before Vista there was a single system-wide sound mixer, to which output devices could be connected. For recording, it was possible to retrieve data directly from a recording device or from this mixer.
In Vista and later, as I understand it, there is no longer a system-wide mixer. It is possible, in theory, to route some sounds to one output device and other sounds to a different output device [1], and this requires separate mixers for each output device.
Now, I have a simple recording application that I would like to update to take advantage of this new API. In particular, I was hoping it would be possible to let the user select one of the output devices as an audio data source. My reasoning is that the OS probably mixes all the inputs into each sound device anyway, and hopefully provides a way to tap into the mixed data.
Is it possible to select an output device as an input into my recording application, and if so, how?
[1] Although I have yet to find any UI that actually lets one do this.
Loopback Recording. In WASAPI you can open a render endpoint (an output device) in loopback mode and capture exactly the stream that is being sent to that device - the mixed output you are after.
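A rough sketch of what that looks like, capturing the mix of the default render device (error handling, format handling and proper buffer management omitted):

    #include <windows.h>
    #include <mmdeviceapi.h>
    #include <audioclient.h>
    #include <cstdio>
    #pragma comment(lib, "ole32.lib")

    int main() {
        CoInitialize(nullptr);

        IMMDeviceEnumerator *devEnum = nullptr;
        CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                         __uuidof(IMMDeviceEnumerator), (void **)&devEnum);

        // Note: the *render* device is opened, not a capture device.
        IMMDevice *device = nullptr;
        devEnum->GetDefaultAudioEndpoint(eRender, eConsole, &device);

        IAudioClient *client = nullptr;
        device->Activate(__uuidof(IAudioClient), CLSCTX_ALL, nullptr, (void **)&client);

        WAVEFORMATEX *format = nullptr;
        client->GetMixFormat(&format);                  // the format of the device's mix

        // AUDCLNT_STREAMFLAGS_LOOPBACK is what turns the render endpoint into a capture source.
        client->Initialize(AUDCLNT_SHAREMODE_SHARED, AUDCLNT_STREAMFLAGS_LOOPBACK,
                           10 * 1000 * 1000 /* 1 second buffer, in 100 ns units */, 0, format, nullptr);

        IAudioCaptureClient *capture = nullptr;
        client->GetService(__uuidof(IAudioCaptureClient), (void **)&capture);
        client->Start();

        for (;;) {
            UINT32 packetFrames = 0;
            capture->GetNextPacketSize(&packetFrames);
            while (packetFrames != 0) {
                BYTE *data = nullptr;
                UINT32 frames = 0;
                DWORD flags = 0;
                capture->GetBuffer(&data, &frames, &flags, nullptr, nullptr);
                // 'data' holds 'frames' frames of whatever is currently being rendered to the device.
                printf("captured %u frames\n", frames);
                capture->ReleaseBuffer(frames);
                capture->GetNextPacketSize(&packetFrames);
            }
            Sleep(10);   // simple polling; an event-driven client is also possible
        }
    }

This hands you the post-mix data the OS is sending to that device, which is the "mixed data" you were hoping to tap into.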