I'd like to make a real-time streaming program that takes input from a webcam. FFmpeg looks like a good library for encoding a stream of images, but there is no documentation or community tutorial content (there is just a Doxygen API reference). Where should I start if there's no official documentation?
I am working on a V4L2 camera driver. The webcam captures a sequence of image files. Now I want to convert them into a video (MP4) file. How can this be done using FFmpeg/GStreamer from pure C source code instead of an Ubuntu terminal command?
GStreamer can be used to write applications in C. There are application guides available on their documentation site, but you will also need to learn a few GObject-related topics to start using the framework.
I would recommend going through their documentation site.
https://gstreamer.freedesktop.org/documentation/tutorials/index.html?gi-language=c
It is not that hard if you read through the documentation, and they have lots of plugins available to achieve quite a lot of things related to audio/video processing.
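As a rough illustration of the C route, here is a minimal sketch (not production code) that muxes a numbered JPEG sequence into an MP4 file via gst_parse_launch. The file names, frame rate, and the choice of jpegdec/x264enc/mp4mux elements are assumptions for the example; build it with something like gcc demo.c $(pkg-config --cflags --libs gstreamer-1.0).

    #include <gst/gst.h>

    int main(int argc, char *argv[]) {
        gst_init(&argc, &argv);

        /* Assumed input: frame0000.jpg, frame0001.jpg, ... at 25 fps. */
        GError *error = NULL;
        GstElement *pipeline = gst_parse_launch(
            "multifilesrc location=frame%04d.jpg index=0 "
            "caps=image/jpeg,framerate=25/1 "
            "! jpegdec ! videoconvert ! x264enc ! mp4mux "
            "! filesink location=out.mp4",
            &error);
        if (!pipeline) {
            g_printerr("Failed to build pipeline: %s\n", error->message);
            g_clear_error(&error);
            return 1;
        }

        gst_element_set_state(pipeline, GST_STATE_PLAYING);

        /* Block until the stream ends (or an error occurs). */
        GstBus *bus = gst_element_get_bus(pipeline);
        GstMessage *msg = gst_bus_timed_pop_filtered(
            bus, GST_CLOCK_TIME_NONE, GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
        if (msg)
            gst_message_unref(msg);
        gst_object_unref(bus);

        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_object_unref(pipeline);
        return 0;
    }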
I really like the look and idea of ncurses, and I was wondering if there is a way to embed a video stream in an ncurses GUI. If not, is there any other GUI method, available for Python or C++, that I can make look like ncurses?
Although I doubt you would find it useful for any practical purpose, you can embed a video stream in an ncurses window using libcaca (a short, basic tutorial is available, although it doesn't discuss video).
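For what it's worth, here is a minimal sketch of the libcaca side only. It assumes you already grab decoded frames elsewhere (the pixels buffer below is a hypothetical 32-bit RGB frame); it dithers one frame into a caca canvas and renders it. Link against libcaca.

    #include <caca.h>
    #include <stdint.h>
    #include <stdlib.h>

    int main(void) {
        /* Hypothetical 320x240 32-bit RGB frame; a real program would
         * refill this buffer from its video decoder for every frame. */
        int w = 320, h = 240;
        uint32_t *pixels = calloc(w * h, sizeof(uint32_t));

        caca_canvas_t *cv = caca_create_canvas(80, 24);
        caca_display_t *dp = caca_create_display(cv);
        if (!dp) return 1;

        /* Describe the pixel layout so libcaca can dither it to text. */
        caca_dither_t *dither = caca_create_dither(
            32, w, h, w * 4, 0x00ff0000, 0x0000ff00, 0x000000ff, 0);

        /* Draw the frame across the whole canvas; in a video loop you
         * would repeat these two calls once per decoded frame. */
        caca_dither_bitmap(cv, 0, 0, caca_get_canvas_width(cv),
                           caca_get_canvas_height(cv), dither, pixels);
        caca_refresh_display(dp);

        caca_get_event(dp, CACA_EVENT_KEY_PRESS, NULL, -1); /* wait */

        caca_free_dither(dither);
        caca_free_display(dp);
        caca_free_canvas(cv);
        free(pixels);
        return 0;
    }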
I use Qt & OpenCV to record video and QAudioInput to record audio in WAV format. I want to combine them into one video file. How can I accomplish this? I have researched a lot but can't seem to find a command to do it.
I use both Windows and Mac.
FYI, this operation seems to be accomplished through the command line in this thread. This approach may turn into an easy hack, since you can call that command using system().
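Concretely, the quick hack looks something like this; the file names and codec flags are placeholders, and it assumes the ffmpeg binary is on the PATH:

    #include <stdlib.h>

    int main(void) {
        /* Mux the recorded video and WAV audio into one file by
         * shelling out to the ffmpeg command-line tool. */
        return system("ffmpeg -i video.avi -i audio.wav "
                      "-c:v copy -c:a aac output.mp4");
    }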
But if you still want to do it programmatically, I suggest you take a look at Dranger's FFmpeg tutorials. The series provides 8 interesting tutorials that show how to do simple stuff, from taking snapshots of a video, up to more complex things like writing a simple video player with audio/video sync.
These tutorials teach how to work independently with audio and video streams, which is what you need to do: read the audio stream from the WAV file and then insert it as the audio stream of a video file.
Maybe not directly related to what you are aiming for, but this answer demonstrates how to use FFmpeg to retrieve the audio stream of one file and play it with SDL, while simultaneously using OpenCV to retrieve video frames and display them in an SDL window.
In one of the projects we have undertaken, we are looking for a video capture & recording library. Our groundwork (based on a Google search) shows that VLC (libvlc), FFmpeg (libavcodec), and GStreamer are the three popular free and open-source libraries/multimedia frameworks available for this purpose. How do these libraries compare on the following parameters:
Licensing policy to allow use within a commercial product without the need to open source any of the components of the product that is using the library
Ability to be used effectively in a multi-threaded environment (library should be inherently thread-safe)
Easy to use and maintain
Documentation: API should be well documented...this is relative...:)
Our primary intention is to capture RTSP video streams (H.264/MPEG-2/MJPEG encoded), convert these streams to raw video/frames so that they can be used for analysis/processing, and later compress those frames and store them on disk as an MP4 file (using MPEG-2/H.264 encoding).
P.S. We understand that FFmpeg is also one of the components of VLC, since VLC uses the libavcodec library. Is the same true for GStreamer as well? Does it have any FFmpeg dependency?
Awaiting your responses.
Regards,
Saurabh Gandhi
I suggest you use GStreamer.
GStreamer is a multimedia framework, and it has many plugins for various tasks (a plugin is a type of library). For capturing RTSP, converting to raw video, and muxing into MP4, I think you will easily find suitable plugins in GStreamer; you just need to write one application that ties them together (see the sketch at the end of this answer).
1. Licensing policy to allow use within a commercial product without
the need to open source any of the components of the product that is
using the library
I don't know much about this.
2. Ability to be used effectively in a multi-threaded environment
(library should be inherently thread-safe)
Yes, GStreamer internally takes care of all threading.
3. Easy to use and maintain
Yes, GStreamer is easy to use and maintain.
4. Documentation: API should be well documented...this is relative...:)
Yes, GStreamer has a very well-documented API.
No, the GStreamer framework itself has no dependency on FFmpeg, but GStreamer does have some plugins that are based on FFmpeg: the gst-ffmpeg plugin set.
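For the analysis part specifically, here is a rough sketch (the element names assume an H.264 RTSP stream, and the URL is a placeholder) that decodes to raw BGR frames and hands each one to application code through an appsink; encoding back to MP4 would be a second branch or pipeline. Link against gstreamer-1.0 and gstreamer-app-1.0.

    #include <gst/gst.h>
    #include <gst/app/gstappsink.h>

    static GstFlowReturn on_new_sample(GstAppSink *sink, gpointer user_data) {
        GstSample *sample = gst_app_sink_pull_sample(sink);
        GstBuffer *buffer = gst_sample_get_buffer(sample);
        GstMapInfo map;
        if (gst_buffer_map(buffer, &map, GST_MAP_READ)) {
            /* map.data/map.size is one raw BGR frame: analyze it here. */
            gst_buffer_unmap(buffer, &map);
        }
        gst_sample_unref(sample);
        return GST_FLOW_OK;
    }

    int main(int argc, char *argv[]) {
        gst_init(&argc, &argv);

        GstElement *pipeline = gst_parse_launch(
            "rtspsrc location=rtsp://example.com/stream ! rtph264depay "
            "! h264parse ! avdec_h264 ! videoconvert "
            "! video/x-raw,format=BGR ! appsink name=sink", NULL);
        GstElement *sink = gst_bin_get_by_name(GST_BIN(pipeline), "sink");

        /* Deliver every decoded frame to on_new_sample(). */
        GstAppSinkCallbacks callbacks = { NULL, NULL, on_new_sample };
        gst_app_sink_set_callbacks(GST_APP_SINK(sink), &callbacks, NULL, NULL);

        gst_element_set_state(pipeline, GST_STATE_PLAYING);
        g_main_loop_run(g_main_loop_new(NULL, FALSE)); /* run until killed */
        return 0;
    }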
I'd like to pull a stream of PCM samples from a Mac's line-in or built-in mic and do a little live analysis (the exact nature doesn't pertain to this question, but it could be an FFT every so often, or some basic statistics on the sample levels, or what have you).
What's a good fit for this? Writing an AudioUnit that just passes the sound through and incidentally hands it off somewhere for analysis? Writing a JACK-aware app and figuring out how to get it to play with the JACK server? Ecasound?
This is a cheesy proof-of-concept hobby project, so simplicity of API is the driving factor (followed by reasonable choice of programming language).
The principal framework for audio development in Mac OS X is Core Audio; it's the basis for all audio I/O. There are layers on top of it like Audio Toolbox, Audio Queue Services, QuickTime, and QTKit that you can use if you want a simplified API for common tasks.
To just pull a stream of samples, you'd probably want to use Audio Queue Services; the AudioQueueNewInput function will set up recording of PCM data and pass it to a callback you supply.
On your Mac there's a set of Core Audio examples in /Developer/Examples/CoreAudio/SimpleSDK that includes an example (AQRecord, in AudioQueueTools) of the Audio Queue Services recording APIs.
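To give a flavor of the API, here is a minimal sketch of pulling PCM from the default input with AudioQueueNewInput; error handling is omitted, and the format and buffer sizes are illustrative. Link against the AudioToolbox framework.

    #include <AudioToolbox/AudioToolbox.h>

    static void input_callback(void *user_data, AudioQueueRef queue,
                               AudioQueueBufferRef buffer,
                               const AudioTimeStamp *start_time,
                               UInt32 num_packets,
                               const AudioStreamPacketDescription *descs) {
        /* buffer->mAudioData holds buffer->mAudioDataByteSize bytes of
         * 16-bit mono PCM; run an FFT or compute statistics here. */
        AudioQueueEnqueueBuffer(queue, buffer, 0, NULL); /* reuse it */
    }

    int main(void) {
        /* 44.1 kHz, mono, 16-bit signed linear PCM. */
        AudioStreamBasicDescription fmt = {0};
        fmt.mSampleRate       = 44100.0;
        fmt.mFormatID         = kAudioFormatLinearPCM;
        fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger
                              | kLinearPCMFormatFlagIsPacked;
        fmt.mChannelsPerFrame = 1;
        fmt.mBitsPerChannel   = 16;
        fmt.mBytesPerFrame    = 2;
        fmt.mFramesPerPacket  = 1;
        fmt.mBytesPerPacket   = 2;

        AudioQueueRef queue;
        AudioQueueNewInput(&fmt, input_callback, NULL,
                           CFRunLoopGetCurrent(), kCFRunLoopCommonModes,
                           0, &queue);

        /* Hand the queue a few buffers to fill, then start recording. */
        for (int i = 0; i < 3; i++) {
            AudioQueueBufferRef buf;
            AudioQueueAllocateBuffer(queue, 8192, &buf);
            AudioQueueEnqueueBuffer(queue, buf, 0, NULL);
        }
        AudioQueueStart(queue, NULL);

        CFRunLoopRun(); /* the input callback fires on this run loop */
        return 0;
    }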
I think portaudio is what you need.
Reading from the mic in a console app takes about ten lines of C (see the patests directory in the PortAudio distribution).
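Roughly in the spirit of those patests examples, a sketch of the input side might look like this (the sample rate and buffer size are arbitrary choices):

    #include <portaudio.h>

    static int on_audio(const void *input, void *output,
                        unsigned long frames,
                        const PaStreamCallbackTimeInfo *time_info,
                        PaStreamCallbackFlags status, void *user_data) {
        const float *samples = (const float *)input;
        /* `frames` mono float samples arrive here; feed them to an FFT
         * or update running statistics. */
        (void)samples; (void)output; (void)time_info;
        (void)status; (void)user_data;
        return paContinue;
    }

    int main(void) {
        Pa_Initialize();

        /* One input channel, no output, 32-bit float samples. */
        PaStream *stream;
        Pa_OpenDefaultStream(&stream, 1, 0, paFloat32, 44100, 256,
                             on_audio, NULL);
        Pa_StartStream(stream);
        Pa_Sleep(5000);        /* capture for five seconds */
        Pa_StopStream(stream);
        Pa_CloseStream(stream);
        Pa_Terminate();
        return 0;
    }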
Apple provides sample code for reading and writing audio data. Additionally there is a lot of good information in the Audio section of the Apple Developer site.