When I play music with PlaySound, for example PlaySound("XX.wav", NULL, SND_ASYNC | SND_LOOP), and I then play another .wav file, it stops the first one (XX.wav).
I think this problem can be avoided with threads, but how do I do that in WinAPI? What would the thread function look like?
And how do I stop the music when I press SPACE, but not stop it when I call another PlaySound?
PlaySound() can only play one sound at a time. Threading will not change that. To play multiple sounds at the same time, you need to either:
1) mix the audio frames together yourself and then push your audio buffers to waveOutWrite().
2) use the playback capabilities of the DirectSound API. It allows you to mix multiple sounds together during playback.
I would suggest #2.
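The answer recommends #2, but the mixing step at the heart of option 1 is easy to show compactly. A minimal sketch, assuming signed 16-bit mono PCM: add the two buffers sample by sample and clamp the sum, then hand the result to waveOutWrite(). The helper name is mine, not part of any API.

#include <cstdint>
#include <vector>
#include <algorithm>

// Mix two 16-bit PCM buffers sample by sample, clamping the sum so it
// stays inside the signed 16-bit range. The mixed buffer is what you
// would queue with waveOutWrite().
std::vector<int16_t> mixPcm(const std::vector<int16_t> &a,
                            const std::vector<int16_t> &b)
{
    std::vector<int16_t> mixed(std::max(a.size(), b.size()), 0);
    for (size_t i = 0; i < mixed.size(); ++i) {
        int32_t sum = (i < a.size() ? a[i] : 0) + (i < b.size() ? b[i] : 0);
        sum = std::min<int32_t>(32767, std::max<int32_t>(-32768, sum));
        mixed[i] = static_cast<int16_t>(sum);
    }
    return mixed;
}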
I would like to make an app (targeting Windows PCs) that lets you modify the microphone input in real time, for example by adding sound effects or even modulating your voice.
I searched the internet and only found people saying that it would not be possible without using a virtual audio cable.
However, I know some apps with similar behavior (voicemod, resonance) that do not use a virtual audio cable, so I would like some help on how this can be done (just the name of a capable library would be enough) or where to start.
Firstly, you can use professional ready-made software for this: a digital audio workstation (DAW) in combination with one of the huge number of plugins available for it.
See 5 steps to real-time process your instrument in the DAW.
And What is (audio) direct monitoring?
If you are sure you have to write your own, you can use libraries for real-time audio processing (as far as I know, C++ is better for this than C#).
These libraries really work; they are specifically designed for real-time processing.
https://github.com/thestk/rtaudio
http://www.portaudio.com/
See also https://en.wikipedia.org/wiki/Csound
If you don't have a professional audio interface yet but want to minimize latency, read about Asio4All.
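As a concrete starting point with one of those libraries, here is a minimal duplex pass-through sketch using RtAudio: the microphone signal is copied to the output with a simple gain applied, which is where a real effect would go. The choice of default devices, mono float samples at 44.1 kHz, and the gain value are assumptions for the example, and error handling is omitted.

#include "RtAudio.h"
#include <iostream>

// Duplex callback: copy the microphone input straight to the output,
// applying a simple gain as a stand-in for a real effect.
int passThrough(void *outputBuffer, void *inputBuffer, unsigned int nFrames,
                double /*streamTime*/, RtAudioStreamStatus status, void * /*userData*/)
{
    if (status) std::cerr << "Stream over/underflow detected." << std::endl;
    float *out = static_cast<float *>(outputBuffer);
    float *in  = static_cast<float *>(inputBuffer);
    for (unsigned int i = 0; i < nFrames; ++i)
        out[i] = in[i] * 0.8f;   // the "effect": attenuate by 20%
    return 0;
}

int main()
{
    RtAudio audio;
    RtAudio::StreamParameters inParams, outParams;
    inParams.deviceId  = audio.getDefaultInputDevice();
    outParams.deviceId = audio.getDefaultOutputDevice();
    inParams.nChannels = outParams.nChannels = 1;   // mono in, mono out

    unsigned int bufferFrames = 256;                // small buffer = low latency
    audio.openStream(&outParams, &inParams, RTAUDIO_FLOAT32, 44100,
                     &bufferFrames, &passThrough);
    audio.startStream();
    std::cout << "Processing... press <enter> to quit." << std::endl;
    std::cin.get();
    audio.stopStream();
    return 0;
}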
The linked tutorial worked for me. In it, a sound is recorded and saved to a .wav.
The key to having this stream to a speaker would be to open a SourceDataLine and output to that instead of writing to a .wav file. So, instead of outputting on line 59 to AudioSystem.write, output to a SourceDataLine's write method.
I don't know if there will be a feedback issue. It's probably a good idea to output to headphones and not your speakers!
To add an effect, the microphone's TargetDataLine has to be accessed and processed in segments (see the sketch below for the arithmetic). In each segment the following needs to happen:
obtain the byte array from the TargetDataLine
convert the audio bytes to PCM
apply your audio effect to the PCM (if the effect is a volume change over time, this could be done by progressively altering a volume factor between 0 to 1, multiplying the factor against the PCM)
convert back to audio bytes
write to the SourceDataLine
All these steps have been covered in StackOverflow posts.
The linked tutorial does some simplification in how file locations, threads, and the stopping and starting are handled. But most importantly, it shows a working, live audio line from the microphone.
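The steps above refer to Java's javax.sound.sampled, but the per-buffer arithmetic is the same in any language. A minimal C++ sketch, assuming signed 16-bit little-endian mono PCM: decode the bytes to samples, apply a volume factor as the effect, and encode them back before writing to the output line.

#include <cstdint>
#include <cstddef>

// Process one segment of audio in place: bytes -> PCM samples,
// apply a 0..1 volume factor, PCM samples -> bytes.
void applyVolume(uint8_t *buffer, size_t numBytes, float volume)
{
    for (size_t i = 0; i + 1 < numBytes; i += 2) {
        // bytes -> PCM sample (little-endian, signed 16-bit)
        int16_t sample = static_cast<int16_t>(buffer[i] | (buffer[i + 1] << 8));

        // the "effect": scale by the volume factor, with clamping
        float processed = sample * volume;
        if (processed >  32767.0f) processed =  32767.0f;
        if (processed < -32768.0f) processed = -32768.0f;

        // PCM sample -> bytes
        int16_t out = static_cast<int16_t>(processed);
        buffer[i]     = static_cast<uint8_t>(out & 0xFF);
        buffer[i + 1] = static_cast<uint8_t>((out >> 8) & 0xFF);
    }
}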
I'm trying to obtain playback video streams from some Axis and Hikvision cameras, using Onvif.
I'm doing this in a C# application, and the resulted stream is played in VLC.
Using the FindRecordings/GetRecordingSearchResult calls and then GetReplayUri I can obtain the playback stream (RTSP/H264), but here I have this problem: this behaves like a live stream - I can only use play and pause. I cannot use the time cursor to seek, cannot play in reverse.
So I find this unusable for a playback application - you have to watch the entire recording (days or hours of recording!) in order to see a specific event in time. And once you play it, you cannot go back 1 minute to see it again.
This seems quite stupid to me, so I believe that I'm doing something wrong in my code. Maybe I'm missing some configuration in order to obtain a 'true' playback stream.
My question is: is this playback stream behavior the 'standard' one, so I cannot expect more than this? Or do some of you have this working properly (seek, reverse, frame-by-frame stepping), so that I know it can be done?
Thank you.
Reverse playback is possible, but it is not easy. First, the reverse replay is initiated using the Scale header field with a negative value. As an example:
PLAY rtsp://192.168.0.1/path/to/recording RTSP/1.0
CSeq: 123
Session: 12345678
Require: onvif-replay
Range: clock=20090615T114900.440Z-
Rate-Control: no
Scale: -1.0
After the stream is initialized, you will get GOPs in reverse order, not just reversed frames. I don't know if VLC supports this way of operating.
Be aware that only devices with the ReversePlayback capability support reverse playback.
Please refer to the streaming specification for further details.
This is not a real solution to the problem above, but maybe it would help others to deal with this situation.
Some cameras I worked with were continuously recording to the same video file (so the time range was not known), and they reported (via RTSP) the available time interval like this:
range:npt=0-
Because of this, VLC was not displaying any time interval in the time slider, so it did not allow seeking. In my case it was a requirement to use VLC, so I had to find a workaround to the problem.
The workaround was a module acting as a proxy, sitting between VLC and the RTSP source (the camera). All RTSP traffic between VLC and the camera went through this module, which I controlled, so I could easily change the camera's responses in a way that suited VLC, and that way I got the seek capability available in VLC.
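The poster does not show the proxy's code, so the following is only a guess at the kind of rewriting such a proxy might do: a hypothetical helper that replaces the open-ended range reported by the camera with a bounded one, so that VLC has a duration to put on its slider. The function name and the assumption that the proxy knows (or estimates) the recording duration are mine.

#include <string>

// Hypothetical helper used inside such a proxy: rewrite the open-ended
// "Range: npt=0-" reported by the camera into a bounded range so that
// VLC shows a seekable time slider. The duration is something the proxy
// has to know or estimate on its own (assumption).
std::string rewriteRange(std::string rtspMessage, double knownDurationSeconds)
{
    const std::string openEnded = "Range: npt=0-\r\n";
    auto pos = rtspMessage.find(openEnded);
    if (pos != std::string::npos) {
        std::string bounded = "Range: npt=0-" +
                              std::to_string(knownDurationSeconds) + "\r\n";
        rtspMessage.replace(pos, openEnded.size(), bounded);
    }
    return rtspMessage;
}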
I am looking for a way to modify the output stream from the microphone.
The idea is to modify the stream by merging two audio streams into a single one.
My use case is the following: when a person makes a Skype call, a background song is added to the outgoing stream.
Is there any way to do this for Windows ?
If you are talking about manipulating the input that other programs see, this would be fairly difficult to implement: you would have to create a virtual audio device and then have the target program use that. There are existing packages that already provide that functionality, however; perhaps a search for "virtual audio cable" or "virtual mixer" would come up with something that would work.
I'd like to capture multiple real-time video streams arriving over the RTP protocol, using ffmpeg. When I initiate the recording by issuing the ffmpeg <command line parameters> command, it always takes a while for the connection to be built up and the actual recording to begin. This can be more than 2 seconds in certain cases, which causes a constant time difference at replay.
How can I extract the information containing the time of the first actually recorded frame from ffmpeg? If it's not possible with ffmpeg without editing the source (which I did, and would like to avoid for other reasons), is there any similar multi-platform open-source tool which could be used?
Not possible without effort on your side. Use something like live555 to capture your streams. All your sources must synchronize to a single clock using NTP, and then the RTP timestamps can be used at the receiver end to synchronize the various streams. This is not trivial and is what video conferencing systems do. I am not aware of any free implementation of it.
If you do not have control over the sources, then you are out of luck, because there is no such thing as a common base time across the streams. If you do, you still need to modify live555 and your player to synchronize using the timestamps on the streams and the NTP clock. Like I said, not trivial.
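To make that concrete, here is a small sketch of the mapping being described: an RTCP sender report pairs an RTP timestamp with an absolute NTP time, so any later RTP timestamp can be converted back to wall-clock time. The function and parameter names are illustrative, not part of live555's API.

#include <cstdint>

// Map an RTP timestamp to wall-clock time using the RTP/NTP pair carried
// in the most recent RTCP Sender Report for that stream.
double rtpToWallClock(uint32_t rtpTimestamp,
                      uint32_t srRtpTimestamp,   // RTP timestamp in the last RTCP SR
                      double   srNtpSeconds,     // NTP time in the same SR (seconds)
                      double   clockRateHz)      // e.g. 90000 for video
{
    // Signed difference handles 32-bit wrap-around between the SR and this packet.
    int32_t delta = static_cast<int32_t>(rtpTimestamp - srRtpTimestamp);
    return srNtpSeconds + delta / clockRateHz;
}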
Perhaps gstreamer might already have plugins for it; it's been a while since I used it, so I am not sure. You could take a look there (gstreamer.net).
I have a link to a video stream (a web cam that is always recording some place). I would like to be able to take a screenshot of whatever is on that video stream at the moment a user opens my app.
Can it be done and how?
I have looked but all I could find was for taking screenshots out of a movie/video, not out of a streaming video.
I suspect ffmpeg connected to the streaming service as an input could probably extract thumbnails for you. You could either leave it running and pick up latest thumbnails, or fire it up with a system command and make it connect and emit a single screenshot. The latter would be more efficient and easier to code if you have a low number of hits, but would have a high latency on each request.
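A sketch of the "fire it up with a system command" variant: invoke ffmpeg, let it connect to the stream and emit a single frame as a JPEG. The stream URL and output file name are placeholders; -frames:v 1 asks ffmpeg for exactly one video frame and -y overwrites an existing file.

#include <cstdlib>

int main()
{
    // Connect to the stream, grab one frame, write it out, then exit.
    return std::system(
        "ffmpeg -y -i rtsp://example.com/live/stream -frames:v 1 snapshot.jpg");
}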
I did a quick search for you, but the most common uses of ffmpeg with streaming input are to re-format and re-stream, or to use it in a personal video recorder setup. ffmpeg is quite complex, so I could not complete the search in the time I have had so far.