I am very new to ffmpeg and just read some examples on how to open a video file and decode its stream.
But is it possible to open a webcam's stream, something like:
http://192.168.1.173:80/live/0/mjpeg.jpg?x.mjpeg
Are there any examples or tutorials on this?
I need to use ffmpeg as the decoder for the stream in my own Qt-based program.
Nyaruko,
First, check whether your webcam is supported. Do:
ffmpeg -y -f vfwcap -i list
Next, to encode:
ffmpeg -y -f vfwcap -r 25 -i 0 out.mp4
This site has helpful info:
http://www.area536.com/projects/streaming-video/
Best of Luck.
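Since the question is about a network MJPEG stream rather than a local capture device, it may also help to first verify that the URL is readable at all, before wiring it into the Qt program. A minimal sketch, assuming the camera really serves MJPEG at the address from the question:
ffplay -f mjpeg -i "http://192.168.1.173:80/live/0/mjpeg.jpg?x.mjpeg"
If ffplay can display the stream, the same URL can be handed to avformat_open_input (or whatever decoding path the program uses).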
This works for live video streaming:
ffplay -f dshow -video_size 1280x720 -i video0
The other option using ffmpeg is:
ffmpeg -f dshow -video_size 1280x720 -i video0 -f sdl2 -
Both of the above solutions are provided by FFmpeg.
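If the device alias is not known in advance, the available DirectShow device names can be listed first; this is a standard FFmpeg invocation:
ffmpeg -list_devices true -f dshow -i dummy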
Related
I am using the following command to create an MP4 container from a raw input file, and the problem I have is that FFmpeg apparently tries to encode it in H.264. Is there a way to tell FFmpeg not to use any codec? That is, how do I use FFmpeg without compressing anything? Thanks!
The command I'm using:
ffmpeg -f rawvideo -pixel_format rgb24 -video_size 160x120 -framerate 24 -i file out.mp4
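One possible way to avoid re-encoding, offered here as a hedged sketch rather than a verified answer: copy the raw stream instead of letting FFmpeg pick a default encoder. Since MP4 is a poor fit for uncompressed video, an AVI output is assumed below:
ffmpeg -f rawvideo -pixel_format rgb24 -video_size 160x120 -framerate 24 -i file -c:v copy out.avi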
I am using ffmpeg to record a video using a Raspberry Pi with its camera module.
I would like to run an image classifier at a regular interval, for which I need to extract a frame from the stream.
This is the command I currently use for recording:
$ ffmpeg -f video4linux2 -input_format h264 -video_size 1280x720 -framerate 30 -i /dev/video0 -vcodec copy -an test.h264
In other threads this command is recommended:
ffmpeg -i file.mpg -r 1/1 $filename%03d.bmp
I don't think this is intended to be used with files that are still being appended to, and I get the error "Cannot use -sseof, duration of test.h264 not known".
Is there any way that ffmpeg allows this?
I don't have a Raspberry Pi set up with a camera at the moment to test with, but you should be able to simply append a second output stream to your original command, as follows to get, say, 1 frame/second of BMP images:
ffmpeg -f video4linux2 -input_format h264 -video_size 1280x720 -framerate 30 -i /dev/video0 -vcodec copy -an test.h264 -r 1 frame-%03d.bmp
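As a variation, if the classifier only ever needs the most recent frame, the image2 muxer's -update option can overwrite a single file instead of numbering them (untested on a Pi, but a standard FFmpeg option):
ffmpeg -f video4linux2 -input_format h264 -video_size 1280x720 -framerate 30 -i /dev/video0 -vcodec copy -an test.h264 -r 1 -update 1 latest.bmp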
I want to capture video and audio from a DirectShow device like a webcam and stream it to an RTMP server. That part is no problem. But I also want to be able to see a preview of it. After a lot of searching, someone suggested piping the output to ffplay using the tee muxer, but I couldn't make it work. Here is my command for streaming to the RTMP server. How should I change it?
ffmpeg -rtbufsize 8196k -framerate 25 -f dshow -i video="Microsoft® LifeCam Studio(TM)":audio="Desktop Microphone (Microsoft® LifeCam Studio(TM))" -vcodec libx264 -acodec aac -strict -2 -b:v 1024k -b:a 128k -ar 48000 -s 720x576 -f flv "rtmp://ip-address-of-my-server/live/out"
Here is the final command I used, and it works:
ffmpeg -rtbufsize 8196k -framerate 25 -f dshow -i video="Microsoft® LifeCam Studio(TM)":audio="Desktop Microphone (Microsoft® LifeCam Studio(TM))" -vcodec libx264 -acodec aac -strict -2 -f tee -map 0:v -map 0:a "[f=flv]rtmp://ip-address-and-path|[f=nut]pipe:" | ffplay pipe:
The core command for those running ffmpeg on a Unix-compatible system (e.g. macOS, BSD, and GNU/Linux) is really quite simple: redirect, or "pipe", one of the outputs of ffmpeg to ffplay. The main problem here is that ffmpeg cannot autodetect the media format (or container) if the output doesn't have a recognizable file extension such as .avi or .mkv.
Therefore you should specify the format with the option -f. You can list the available choices for option -f with the ffmpeg -formats command.
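For example, on a Unix-like shell (grep assumed available), you can check whether the matroska muxer used below is present; in the listing, D marks demuxing and E marks muxing support:
ffmpeg -formats | grep -i matroska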
In the following GNU/Linux command example, we record from an input source named /dev/video0 (possibly a webcam). The input source can also be a regular file.
ffmpeg -i /dev/video0 -f matroska - filename.mkv | ffplay -i -
A less ambiguous way of writing this, especially for non-Unix users, is to use the special pipe: output specifier.
ffmpeg -i /dev/video0 -f matroska pipe:1 filename.mkv | ffplay -i pipe:0
The above commands should be enough to produce a preview. But to make sure that you get the video and audio quality you want, you also need to specify, among other things, the audio and video codecs.
ffmpeg -i /dev/video0 -c:v copy -c:a copy -f matroska - filename.mkv | ffplay -i -
If you choose a slow encoder, such as AV1, you'd still get a preview, but one that stutters.
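One way to keep the preview smooth, sketched under the assumption that libx264 is available, is to choose a fast encoder preset for the piped output:
ffmpeg -i /dev/video0 -c:v libx264 -preset ultrafast -f matroska - | ffplay -i -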
We were recording a video by specifying a named pipe as input for video frames, like this:
ffmpeg -r 30 -f rawvideo -pix_fmt bgra -s 640x480 -i namedPipe [... output options] out.mp4
It works well, and FFmpeg stops once the named pipe is closed, as is desired.
However, we then also want to record live audio from DirectShow, like this:
ffmpeg -r 30 -f rawvideo -pix_fmt bgra -s 640x480 -i namedPipe -f dshow -i audio=virtual-audio-capturer [... output options] out.mp4
This also works, but the problem is that the process no longer stops once we close the named pipe for the video frames.
My guess is that ffmpeg still gets audio input and thus just keeps running.
How can I change the FFmpeg command so that it stops once the video frames stop coming?
Add -shortest, i.e. [... output options] -shortest out.mp4
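Applied to the command from the question (output options left elided as in the original), that would look like:
ffmpeg -r 30 -f rawvideo -pix_fmt bgra -s 640x480 -i namedPipe -f dshow -i audio=virtual-audio-capturer [... output options] -shortest out.mp4
-shortest tells FFmpeg to finish the output when the shortest input stream ends, which here is the video stream once the named pipe closes.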
I am using FFmpeg to convert uploaded videos to .flv; after conversion the FLV video doesn't have information about its duration, so the user cannot rewind/forward, replay, or seek to a specific part of it. The code is as follows:
"ffmpeg -i $srcfile_path -s 320x240 -ar 44100 -b 2048k -r 12 $desfilepath";
Please help. Thanks in advance.
I ran the following command and it worked.
"ffmpeg -i $srcfile_path -f flv - | flvtool2 -U stdin $desfilepath"
This requires flvtool2 to be installed on your system. I am using a server with both FFmpeg and flvtool2 enabled, so it worked.
That's very strange; I have been using ffmpeg to convert videos from one format to another without any issues. See the example below:
ffmpeg -i input.avi -b:a 192K -b:v 2400k -s hd720 -c:v mpeg2video output.mpg
I am sure you know the syntax.