Is it possible to get a list of audio cards (not endpoints) in Win32?
This information would be really useful when constructing full-duplex audio streams, to be sure both input and output share the same hardware clock.
So far, I have found PKEY_DeviceInterface_FriendlyName, which comes close, but it probably cannot be used to disambiguate when two identical audio cards are plugged in.
I also found Enumerating audio adapters in WinAPI, and while the WMI query in the accepted answer retrieves the results I'm looking for, I see no easy way to correlate those to a WASAPI endpoint device id.
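For reference, a minimal sketch of walking the endpoints and reading that property (error handling omitted; each endpoint's WASAPI device id is printed next to its adapter-level friendly name, which is exactly the name that collides when two identical cards are installed):

    #include <windows.h>
    #include <mmdeviceapi.h>
    #include <functiondiscoverykeys_devpkey.h>
    #include <cstdio>

    int main() {
        CoInitializeEx(nullptr, COINIT_MULTITHREADED);

        IMMDeviceEnumerator* devEnum = nullptr;
        CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                         __uuidof(IMMDeviceEnumerator), (void**)&devEnum);

        IMMDeviceCollection* endpoints = nullptr;
        devEnum->EnumAudioEndpoints(eAll, DEVICE_STATE_ACTIVE, &endpoints);

        UINT count = 0;
        endpoints->GetCount(&count);
        for (UINT i = 0; i < count; ++i) {
            IMMDevice* dev = nullptr;
            endpoints->Item(i, &dev);

            LPWSTR id = nullptr;
            dev->GetId(&id); // the WASAPI endpoint device id

            IPropertyStore* props = nullptr;
            dev->OpenPropertyStore(STGM_READ, &props);

            PROPVARIANT name;
            PropVariantInit(&name);
            props->GetValue(PKEY_DeviceInterface_FriendlyName, &name);

            // Endpoints on the same adapter report the same friendly name,
            // which is why identical cards are ambiguous.
            wprintf(L"%s -> %s\n", id, name.pwszVal);

            PropVariantClear(&name);
            CoTaskMemFree(id);
            props->Release();
            dev->Release();
        }
        endpoints->Release();
        devEnum->Release();
        CoUninitialize();
        return 0;
    }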
It turns out my premise was wrong. Apparently, just because multiple endpoints reside on the same physical device does not mean they share the same clock (although they might). See here: https://portaudio.music.columbia.narkive.com/0qYpAMkP/understanding-multiple-streams and here: https://audiophilestyle.com/forums/topic/19715-hq-player/page/584/, so that basically defeats the purpose of my question. Thanks for the help anyway, everyone.
I have a Cyclone II FPGA with a camera attached to it. First, I want to take a capture from the camera, pass it through the FPGA, and then read the capture out over the serial port. Can you give me ideas on how to do this, or point me to any example code? I am working with Verilog.
Thanks for the help...
You need to research how exactly the camera is hooked up to your FPGA and how exactly you need to communicate with it. Once you understand this protocol and its connections, you'll be 90% of the way towards understanding what you need to do, or at least will be better able to ask more specific, intelligent questions. =)
For example, there is a chance that you are working with a camera over a bus called Camera Link. If so, you will need to read up on the protocol (e.g. starting with https://en.wikipedia.org/wiki/Camera_Link), and then determine if you are going to write a custom interface yourself in an HDL (like Verilog) or if you are going to try to obtain and use an existing design from a 3rd party.
Once you know how to get images, then you need to do the same discovery to figure out how you are going to send these images to your PC. Do you already know all about RS-232 and UARTs (or USB, or whatever you are using)? If not, you will need to learn enough to either implement these interfaces, or at least to interface with existing designs that you have obtained elsewhere.
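As a taste of the PC side, here is a minimal Win32 sketch that reads raw bytes from a serial port and dumps them to a file. The port name "COM3" and 115200 baud are placeholders for whatever your board actually uses:

    #include <windows.h>
    #include <stdio.h>

    int main() {
        // Open the serial port the FPGA's UART is connected to.
        HANDLE port = CreateFileA("COM3", GENERIC_READ, 0, NULL,
                                  OPEN_EXISTING, 0, NULL);
        if (port == INVALID_HANDLE_VALUE) return 1;

        // 115200 8N1; must match what your UART transmitter sends.
        DCB dcb = {};
        dcb.DCBlength = sizeof(dcb);
        GetCommState(port, &dcb);
        dcb.BaudRate = CBR_115200;
        dcb.ByteSize = 8;
        dcb.Parity   = NOPARITY;
        dcb.StopBits = ONESTOPBIT;
        SetCommState(port, &dcb);

        // Dump everything received to a file (blocks until data arrives;
        // a real tool would set COMMTIMEOUTS and know the frame size).
        unsigned char buf[4096];
        DWORD got = 0;
        FILE* out = fopen("frame.raw", "wb");
        while (ReadFile(port, buf, sizeof(buf), &got, NULL) && got > 0)
            fwrite(buf, 1, got, out);
        fclose(out);
        CloseHandle(port);
        return 0;
    }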
In short -- you really need to do more research before anyone can help you with any specific advice. =)
I am trying to play sounds on Windows XP in a multi-channel (parallel) manner.
I had read somewhere that playing parallel sounds with WinMM may not be possible, but here is what I observe:
When I call WaveOutOpen() once and then call WaveOutWrite() many times, the sounds are not parallel - they are queued.
But when I call WaveOutOpen(), say, nine times (storing the nine handles) and then call WaveOutWrite() nine times with nine different sounds, they are played in parallel (multi-channel) - that is, they are mixed.
It seems to work, but I am not sure it is okay, because I have not found this stated clearly in any tutorial or documentation. Is it okay to play sound in this 'many WaveOutOpen' way?
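For concreteness, a minimal sketch of the pattern I mean (one handle per sound; two generated tones, simplified cleanup):

    #include <windows.h>
    #include <mmsystem.h>
    #include <cmath>
    #pragma comment(lib, "winmm.lib")

    // Fill a buffer with one second of a 16-bit mono sine tone.
    static void FillTone(short* buf, int samples, int rate, double hz) {
        for (int i = 0; i < samples; ++i)
            buf[i] = (short)(10000.0 * sin(2.0 * 3.14159265 * hz * i / rate));
    }

    int main() {
        const int rate = 44100, samples = rate; // one second each
        static short tones[2][44100];
        FillTone(tones[0], samples, rate, 440.0);
        FillTone(tones[1], samples, rate, 660.0);

        WAVEFORMATEX fmt = {};
        fmt.wFormatTag      = WAVE_FORMAT_PCM;
        fmt.nChannels       = 1;
        fmt.nSamplesPerSec  = rate;
        fmt.wBitsPerSample  = 16;
        fmt.nBlockAlign     = fmt.nChannels * fmt.wBitsPerSample / 8;
        fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

        HWAVEOUT h[2];
        WAVEHDR hdr[2] = {};
        for (int i = 0; i < 2; ++i) {
            // One waveOutOpen per sound; the two writes then play mixed.
            waveOutOpen(&h[i], WAVE_MAPPER, &fmt, 0, 0, CALLBACK_NULL);
            hdr[i].lpData = (LPSTR)tones[i];
            hdr[i].dwBufferLength = samples * sizeof(short);
            waveOutPrepareHeader(h[i], &hdr[i], sizeof(WAVEHDR));
            waveOutWrite(h[i], &hdr[i], sizeof(WAVEHDR));
        }

        Sleep(1100); // let both tones finish
        for (int i = 0; i < 2; ++i) {
            waveOutUnprepareHeader(h[i], &hdr[i], sizeof(WAVEHDR));
            waveOutClose(h[i]);
        }
        return 0;
    }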
When I call WaveOutOpen() once and then call WaveOutWrite() many times, the sounds are not parallel - they are queued.
That's exactly what is supposed to happen. WaveOutWrite queues the next buffer. It allows you to send the audio you want to play in small chunks.
But when I call WaveOutOpen(), say, nine times (storing the nine handles) and then call WaveOutWrite() nine times with nine different sounds, they are played in parallel (multi-channel) - that is, they are mixed.
Again, this is correct and expected. This is the simplest way to play back many simultaneous sounds. If you want sample-accurate mixing, however, you should mix the audio samples yourself into one stream of samples and play that through a single WaveOut device.
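For example, a minimal sketch of that manual mix for 16-bit PCM: sum corresponding samples into a wider type and clamp, so loud passages don't wrap around:

    #include <cstdint>
    #include <cstddef>

    // Mix two 16-bit PCM streams sample-by-sample into one output stream,
    // saturating to the 16-bit range instead of letting the sum overflow.
    void MixPcm16(const int16_t* a, const int16_t* b, int16_t* out, size_t n) {
        for (size_t i = 0; i < n; ++i) {
            int32_t acc = (int32_t)a[i] + (int32_t)b[i];
            if (acc > 32767)  acc = 32767;
            if (acc < -32768) acc = -32768;
            out[i] = (int16_t)acc;
        }
    }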
I stand corrected about the ability of the waveOut* API to play sounds simultaneously and have them mixed.
Here is test code for the curious: http://www.alax.info/trac/public/browser/trunk/Utilities/WaveOutMultiPlay An application started with arguments abc plays sounds at 1, 5, and 15 kHz on different threads, and they mix well.
At the same time, the DirectShow Audio Renderer (WaveOut) Filter, built on top of the same API, is unable to play anything more than a single stream, for no visible reason.
FYI, the waveOutOpen API was retired long ago and is currently a wrapper on top of newer APIs. waveOutOpen assumes that the audio output device is opened for exclusive use, so there is no guarantee that multiple devices opened simultaneously would produce mixed audio output. To achieve such behavior, you would be better off with a newer audio API: DirectSound, DirectShow on top of DirectSound, or WASAPI.
I suggest going with DirectSound if your product is for consumers.
From DirectX 8 onwards the API is at a point where it is actually quite painless, and most consumer machines will have it installed.
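For a flavour of that route, here is a rough fragment (not a complete program; it assumes a device already created with DirectSoundCreate8 and given a cooperative level via SetCooperativeLevel, plus a filled-in WAVEFORMATEX). Each sound gets its own secondary buffer, and DirectSound mixes all playing buffers for you:

    #include <windows.h>
    #include <dsound.h>
    #include <cstring>
    #pragma comment(lib, "dsound.lib")

    // Create a secondary buffer for one sound, fill it, and start it.
    // DirectSound mixes every playing secondary buffer into the output.
    IDirectSoundBuffer* PlayPcm(IDirectSound8* ds, WAVEFORMATEX* fmt,
                                const void* pcm, DWORD bytes) {
        DSBUFFERDESC desc = {};
        desc.dwSize        = sizeof(desc);
        desc.dwFlags       = DSBCAPS_GLOBALFOCUS;
        desc.dwBufferBytes = bytes;
        desc.lpwfxFormat   = fmt;

        IDirectSoundBuffer* buf = nullptr;
        if (FAILED(ds->CreateSoundBuffer(&desc, &buf, nullptr)))
            return nullptr;

        // Copy the PCM data into the buffer.
        void* p1 = nullptr; DWORD n1 = 0;
        if (SUCCEEDED(buf->Lock(0, bytes, &p1, &n1, nullptr, nullptr, 0))) {
            memcpy(p1, pcm, n1);
            buf->Unlock(p1, n1, nullptr, 0);
        }

        buf->Play(0, 0, 0); // buffers started concurrently play mixed
        return buf;
    }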
I'm decoding a video format that has an accompanying audio track in a separate file. Per the specs, I render a frame of video every 1/75th second. And the length of the video file is the same as the length of the audio track.
I'm playing the audio with Audio Queue Services (which I chose because I figured there would be situations where I needed precise timing control -- just the sort of situation I'm encountering!). It's a big API and I haven't progressed much past the sample code in Apple's programming guide (though I have wrapped things up in a nicer ObjC API).
In ideal situations, things work fine with the basic playback setup. The video and audio stay synced and both end at the same time (within my own ability to tell the difference). However, if performance hiccups (or I attach the Leaks instrument or something), they quickly get out of sync.
This is the first time I've ever written something of this nature: I have no prior experience with sound or video. I certainly have no experience with Audio Queue Services. So I'm not sure where to go from here.
Have you done something like this? Do you have some advice or tips or tricks to offer? Is there some fundamental piece of documentation I need to read? Any help would be greatly appreciated.
First off, I've never actually coded anything like this so I'm shooting from the hip. Also, I've done a decent amount of programming with the HAL and AUHAL but never with AudioQueue so my approach might not be the best way to use AQ.
Obviously the first thing to decide is whether to sync the audio to the video or the video to the audio. From the question it seems you've decided the video will be the master and the audio should sync to it.
I would approach this by keeping track of the number of video frames rendered, along with the frame rate. Then, when enqueuing your audio buffers, rather than passing a monotonically increasing value for the startTime, adjust the buffer's start time to match any discontinuities observed in the video. This is a bit vague because I don't know exactly where your audio is coming from or how you are enqueuing it, but hopefully the principle is clear.
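To make the principle concrete, here is a rough, untested sketch of that adjustment (framesRendered and sampleRate are hypothetical bookkeeping kept by the caller, and constant-bitrate PCM is assumed). It uses AudioQueueEnqueueBufferWithParameters, which, unlike plain AudioQueueEnqueueBuffer, lets you supply a start time:

    #include <AudioToolbox/AudioToolbox.h>

    // Hypothetical bookkeeping owned by the caller:
    //   framesRendered - video frames drawn so far (one every 1/75 s per spec)
    //   sampleRate     - sample rate of the audio queue's format
    OSStatus EnqueueSyncedToVideo(AudioQueueRef queue, AudioQueueBufferRef buffer,
                                  int64_t framesRendered, double sampleRate) {
        // Where the audio "should" be if it tracked the video exactly.
        AudioTimeStamp start = {};
        start.mSampleTime = (double)framesRendered * (sampleRate / 75.0);
        start.mFlags = kAudioTimeStampSampleTimeValid;

        AudioTimeStamp actual = {};
        return AudioQueueEnqueueBufferWithParameters(
            queue, buffer,
            0, NULL,   // no packet descriptions (CBR PCM assumed)
            0, 0,      // no leading/trailing frames trimmed
            0, NULL,   // no parameter events
            &start,    // requested start time, in queue sample time
            &actual);  // what the queue actually schedules
    }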
I am creating a simple intrusion detection system for an Information Security course using jpcap.
One of the features will be remote OS detection, in which I must implement an algorithm that detects when a host sends 5 packets within 20 seconds that have different ACK, SYN, and FIN combinations.
What would be a good method of detecting these different "combinations"? A brute-force algorithm would be time-consuming to implement, but I can't think of a better method.
Notes: jpcap's API allows one to know if the packet is ACK, SYN, and/or FIN. Also note that one doesn't need to know what ACK, SYN, and FIN are in order to understand the problem.
Thanks!
I built my own data structure based on vectors that hold "records" about the type of packet.
You need to keep state for each session, using hashtables. Keep each SYN, ACK, and FIN/FIN-ACK. I wrote an open-source IDS sniffer a few years ago that does this; feel free to look at the code. It should be very easy to write an algorithm to do passive OS detection (google it). My open-source code is here: dnasystem
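Language aside (jpcap itself is Java), the bookkeeping can be sketched as follows: encode ACK/SYN/FIN as a 3-bit mask, keep a sliding 20-second window of observations per host, and count distinct masks with an 8-bit "seen" bitmap, since only 8 combinations exist. No brute force needed. The struct names and ring-buffer capacity below are arbitrary choices, not anything from jpcap:

    #include <cstdint>
    #include <cstddef>

    // One observed packet: arrival time (seconds) and its 3-bit flag combo.
    struct Obs { double t; uint8_t combo; };

    const double kWindowSecs = 20.0;
    const size_t kMaxObs = 64; // per-host ring-buffer capacity (arbitrary)

    struct HostState {
        Obs obs[kMaxObs];
        size_t head = 0, count = 0;
    };

    // Encode the ACK/SYN/FIN flags into a value 0..7.
    uint8_t FlagCombo(bool ack, bool syn, bool fin) {
        return (uint8_t)((ack ? 4 : 0) | (syn ? 2 : 0) | (fin ? 1 : 0));
    }

    // Record a packet for this host; returns true once 5 or more packets
    // with *different* flag combinations arrived within the last 20 s.
    bool OnPacket(HostState& h, double now, bool ack, bool syn, bool fin) {
        // Expire observations that fell out of the window.
        while (h.count > 0 && now - h.obs[h.head].t > kWindowSecs) {
            h.head = (h.head + 1) % kMaxObs;
            h.count--;
        }
        if (h.count < kMaxObs) {
            size_t tail = (h.head + h.count) % kMaxObs;
            h.obs[tail] = Obs{now, FlagCombo(ack, syn, fin)};
            h.count++;
        }
        // Count distinct combinations: 8 possible, so one byte suffices.
        uint8_t seen = 0;
        int distinct = 0;
        for (size_t i = 0; i < h.count; ++i) {
            uint8_t bit = (uint8_t)(1u << h.obs[(h.head + i) % kMaxObs].combo);
            if (!(seen & bit)) { seen |= bit; distinct++; }
        }
        return distinct >= 5;
    }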
I have raw data grabbed from a spectrometer that was monitoring Wi-Fi (802.11b) channel 6 (two laptops in ad-hoc mode pinging each other).
I would like to decode this data in Matlab.
I have it as a complex vector of 4.6 million complex samples, and its spectrum looks quite nice.
I am looking for a document a bit less complicated than the IEEE 802.11 standard (which I have).
I can share the measurement data with other people.
There are now a few solutions around for decoding 802.11 using Software Defined Radio (SDR) techniques. As mentioned in a previous answer, there is software based on gnuradio - specifically, there's gr-ieee802-11 and also 802.11n+. Plus, the higher-end SDR boards like WARP utilise FPGA-based implementations of 802.11. There are also a bunch of implementations of 802.11 for Matlab available, e.g. 802.11a.
If your data is really raw, then you basically have to build every piece of the signal processing chain in software, which is possible but not really straightforward. Have you checked the relevant Wikipedia page? You might use gnuradio instead of starting from scratch.
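To give a flavour of how low-level that chain gets: the customary first step for the 1 Mbps 802.11b rate is despreading with the 11-chip Barker sequence, with peaks in the correlator output marking symbol boundaries. The sketch below is C++ for concreteness, though it translates directly to a Matlab conv/filter call; it assumes one complex sample per chip, and note that references differ on the sequence's sign convention:

    #include <complex>
    #include <vector>

    // 11-chip Barker spreading sequence used by 802.11b DSSS.
    // (Flip the signs if your correlation peaks come out inverted.)
    static const double kBarker[11] = {+1, -1, +1, +1, -1, +1, +1, +1, -1, -1, -1};

    // Slide the Barker code over complex baseband sampled at one sample
    // per chip (11 MHz); peaks in |result| occur once per 1 Mbps symbol.
    std::vector<std::complex<double>>
    BarkerCorrelate(const std::vector<std::complex<double>>& in) {
        std::vector<std::complex<double>> out;
        if (in.size() < 11) return out;
        out.reserve(in.size() - 10);
        for (size_t i = 0; i + 11 <= in.size(); ++i) {
            std::complex<double> acc(0.0, 0.0);
            for (int k = 0; k < 11; ++k)
                acc += in[i + k] * kBarker[k];
            out.push_back(acc);
        }
        return out;
    }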
I have used the IEEE 802.11 standard to encode and decode data in Matlab.
Encoding data is an easy task; decoding is a bit more sophisticated.
I agree with Stan, it is going to be tough doing everything yourself. You may get some ideas from the projects on CGRAN, like:
https://www.cgran.org/wiki/WifiLocalization