ffmpeg screencasting with webcam

I want to use my computer's screen in real time as a fake webcam. I performed the steps shown below, but the frames pile up on top of each other, like in the video (at 2:34). How can I solve this problem?
https://www.youtube.com/watch?v=6AZRiW3hHrw
Load the module
sudo modprobe v4l2loopback exclusive_caps=1
Find the dummy device
v4l2-ctl --list-devices
Start the virtual webcam (change "/dev/video1" to reflect your system)
ffmpeg -f x11grab -r 15 -s 1920x1080 -i :0.0+0,0 -vcodec rawvideo -pix_fmt yuv420p -threads 0 -f v4l2 /dev/video1
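To check that the loopback device is actually receiving frames, you can play it back while the capture runs (ffplay ships with ffmpeg; adjust the device node to match your system):
ffplay -f v4l2 /dev/video1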

Is there a way to add/modify/delete one of the outputs of a running ffmpeg core which has multiple ones?

The task is to stream a video feed from a webcam, watch it live on a display, and start/stop recording it to a file, all while maintaining the live video feed.
This will record a 5-s clip (with no audio), while simultaneously watching it on the screen:
ffmpeg -v error -f v4l2 -framerate 30 -video_size 640x480 -t 5 -i /dev/video0 -an \
clip.mp4 -y -map 0:v -pix_fmt yuv420p -f xv "Capturing a 5-s clip"
However, I need to surround it before and after with a live display:
ffmpeg -v error -f v4l2 -framerate 30 -video_size 640x480 -i /dev/video0 -an \
-map 0:v -pix_fmt yuv420p -f xv "Live display"
which I then kill in order to launch the recording version, re-launching it when the recording terminates. This creates a noticeable break as one ffmpeg dies and another launches.
I have found an explanation of how to pause/resume the ffmpeg process itself using
kill -s SIGSTOP <PID> and kill -s SIGCONT <PID> as needed, but what I want is to pause/resume (or start/stop) just the recording output, while keeping the display output continuous.
Another way to think about it: is it possible to add an extra output stream to a running ffmpeg core, and either pass a time restriction to it, so it disconnects itself when done, or be able to delete an output from a running ffmpeg core without killing the other outputs?
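As far as I know, ffmpeg cannot add or remove outputs at runtime, but one workaround sketch is to decouple capture from consumption with a v4l2loopback device (as in the first section above): one ffmpeg feeds the loopback continuously, the display reads from it, and recorders attach and detach freely. Here /dev/video9 stands in for your dummy device, and multiple simultaneous readers are assumed to be supported by your v4l2loopback build:
# Feed the webcam into the loopback, continuously:
ffmpeg -f v4l2 -framerate 30 -video_size 640x480 -i /dev/video0 \
  -vcodec rawvideo -pix_fmt yuv420p -f v4l2 /dev/video9 &
# Display from the loopback, uninterrupted:
ffplay -f v4l2 /dev/video9 &
# Attach and detach recorders at will; killing this one leaves the display running:
ffmpeg -f v4l2 -t 5 -i /dev/video9 -pix_fmt yuv420p -y clip.mp4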

ffmpeg v4l2 taking 100% cpu usage

I am on Fedora 37; the exact version seems irrelevant, as I had the same issue on Fedora 36.
I am using the following command to send video from my USB video card to v4l2loopback; the CPU usage goes to 100% and I can't do much else on my computer (i5-8600K, NVIDIA RTX 2060, 16 GB RAM).
ffmpeg -nostdin -threads 1 -f video4linux2 -video_size 1920x1080 -input_format mjpeg -i /dev/video1 -f v4l2 -preset fast -c:v copy /dev/video0
Previously the command looked like the following; the rest of the arguments were added to see if they would make any difference, but they don't seem to change much:
ffmpeg -f video4linux2 -video_size 1920x1080 -input_format mjpeg -i /dev/video1 -f v4l2 -c copy /dev/video0
To be honest, I don't know much about ffmpeg or video in general. I was using the same command a few months ago on the same hardware with almost zero impact on performance: I could play games and watch videos at the same time, whereas now I get 15 FPS in my games while ffmpeg is running. Did anything change in ffmpeg? Are there any improvements to my ffmpeg command? Shouldn't ffmpeg avoid encoding anything if I am using a pixel format and frame rate supported by my USB video card, and if so, shouldn't it be super light?
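No answer is recorded here, but one assumption worth checking is whether the card is still offering MJPEG at that size (and at which rates), since -c:v copy is only a true passthrough when the requested format is actually delivered; pinning the frame rate explicitly also rules out one variable:
# Confirm the card really offers MJPEG at 1920x1080, and at which frame rates:
v4l2-ctl -d /dev/video1 --list-formats-ext
# Then pin format and rate so that -c:v copy is a straight passthrough:
ffmpeg -nostdin -f v4l2 -input_format mjpeg -video_size 1920x1080 -framerate 30 \
  -i /dev/video1 -c:v copy -f v4l2 /dev/video0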

Extracting frames from video while recording using ffmpeg

I am using ffmpeg to record a video using a Raspberry Pi with its camera module.
I would like to run an image classifier at a regular interval, for which I need to extract a frame from the stream.
This is the command I currently use for recording:
$ ffmpeg -f video4linux2 -input_format h264 -video_size 1280x720 -framerate 30 -i /dev/video0 -vcodec copy -an test.h264
In other threads this command is recommended:
ffmpeg -i file.mpg -r 1/1 $filename%03d.bmp
I don't think this is intended to be used with files that are still being appended to, and I get the error "Cannot use -sseof, duration of test.h264 not known".
Is there any way to do this with ffmpeg?
I don't have a Raspberry Pi with a camera set up to test with at the moment, but you should be able to simply append a second output to your original command, as follows, to get, say, one frame per second of BMP images:
ffmpeg -f video4linux2 -input_format h264 -video_size 1280x720 -framerate 30 -i /dev/video0 -vcodec copy -an test.h264 -r 1 frame-%03d.bmp
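If the classifier only ever needs the most recent frame, a variant of the same idea (a sketch, not from the original answer) is to let the second output keep overwriting a single file via the image2 muxer's -update option:
ffmpeg -f video4linux2 -input_format h264 -video_size 1280x720 -framerate 30 -i /dev/video0 \
  -vcodec copy -an test.h264 \
  -r 1 -update 1 latest.bmp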

Overlay streaming video on another video with ffmpeg

I am running a robot that uses ffmpeg to send streaming video to letsrobot.tv. You can see my bot, called Patton II, on the website. I want to overlay a video HUD on the stream.
I have found a link explaining how to do this; however, I do not know how to do it with a streaming video as input instead of a single image file.
This is the command that is currently being used to stream the video:
overlayCommand = '-vf dynoverlay=overlayfile=/home/pi/runmyrobot/images/hud.png:check_interval=500'
videoCommandLine = '/usr/local/bin/ffmpeg -f v4l2 -framerate 25 -video_size 640x480 -i /dev/video%s %s -f mpegts -codec:v mpeg1video -s 640x480 -b:v %dk -bf 0 -muxdelay 0.001 %s http://%s:%s/hello/640/480/' % (deviceAnswer, rotationOption, args.kbps, overlayCommand, server, videoPort)
audioCommandLine = '/usr/local/bin/ffmpeg -f alsa -ar 44100 -i hw:1 -ac 2 -f mpegts -codec:a mp2 -b:a 32k -muxdelay 0.001 http://%s:%s/hello/640/480/' % (server, audioPort)
You already have one input, which is the webcam video:
-f v4l2 -framerate 25 -video_size 640x480 -i /dev/video%s
You want to overlay another video, so you have to add a second input, which is your HUD stream. I'm assuming that you already have a stream that's being generated on the fly:
-i /path/to/hud/stream
Then, add a complex filter that overlays one over the other:
-filter_complex "[0:v][1:v]overlay[out]"
After the filter, add a -map "[out]" option to tell ffmpeg to use the generated video as output, and add your remaining options as usual. So, in sum:
/usr/local/bin/ffmpeg -f v4l2 -framerate 25 -video_size 640x480 -i /dev/video%s \
-i /path/to/hud/stream \
-filter_complex "[0:v][1:v]overlay[out]" -map "[out]" \
-f mpegts -codec:v mpeg1video -s 640x480 -b:v %dk -bf 0 \
-muxdelay 0.001 %s http://%s:%s/hello/640/480/
Obviously, without knowing more, this is the most generic advice I can give you.
Some general tips:
Make sure that the HUD stream has the same resolution as the webcam video, with the elements placed where you want them (see the sketch after these tips for resizing it on the fly), or use the overlay filter's x and y options to move the HUD.
Your HUD stream should have a transparency layer. Not all codecs and container formats support that.
You're using -codec:v mpeg1video, which is MPEG-1 video. It's quite resource-efficient but otherwise low in quality. You may want to choose a better codec, but it depends on your device's capabilities (e.g., at least MPEG-2 with mpeg2video, MPEG-4 Part 2 with mpeg4, or H.264 with libx264).
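On the resolution tip: if the HUD is not already 640x480, one way to resize it on the fly (a sketch using the scale2ref filter, which scales its first input to match its second) is to replace the filter option above with:
-filter_complex "[1:v][0:v]scale2ref[hud][cam];[cam][hud]overlay[out]" -map "[out]"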

ffmpeg: Low framerate when capturing with -vcodec mjpeg but not with -vcodec copy

I'm trying to capture video from a webcam, and I find that when I use the -vcodec copy option, it works really well (far better than any other software I've tried). However, I'd like my files to be a bit smaller, and it seems that every attempt I make to compress the video leads to extremely jumpy video. If, for example, I switch the output vcodec to mjpeg, it changes from reporting 15 fps to reporting between 3 and 4 fps. Am I doing something wrong? Here is the call with -vcodec copy:
ffmpeg -y -f dshow -vcodec mjpeg -s 1184x656 -framerate 25 -i video="HD 720P Webcam" -vcodec copy test.avi
-- which gets me 15 fps. But if I change to mjpeg, I get only 3-4 fps:
ffmpeg -y -f dshow -vcodec mjpeg -s 1184x656 -framerate 25 -i video="HD 720P Webcam" -vcodec mjpeg test.avi
Experimental attempts to put -framerate 25 or -r 25 before test.avi also do nothing to help the situation. I'm not getting any smoother video when experimenting with mpeg4 or libx264 either. Only the copy option gives me smooth video. (I'm filming my hands playing a piano, so there is a lot of fast motion in the videos.)
Help!!!! And thank you...
I don't understand why the framerate drops so much, but you could try a two-step approach where you first record using -vcodec copy (as you pasted in the question):
ffmpeg -y -f dshow -vcodec mjpeg -s 1184x656 -framerate 25 -i video="HD 720P Webcam" -vcodec copy test.avi
Then transcode it into mjpeg once it's done (something like this):
ffmpeg -i test.avi -vcodec mjpeg test.mjpeg
note: I haven't actually tested any of the above command lines.
Sounds like your webcam is outputting a variable frame rate stream. Try the following on one of your copy-captured files:
ffmpeg -i test.avi -vcodec libx264 -r 30 test.mp4
(You should avoid capturing to AVI; use MKV instead.)
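Following that advice, the copy capture from the question, written to an MKV container instead, would simply be:
ffmpeg -y -f dshow -vcodec mjpeg -s 1184x656 -framerate 25 -i video="HD 720P Webcam" -vcodec copy test.mkv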
