Distorted vlc playback with x264 encoded file - ffmpeg

I have captured raw video in rgb format from my webcam using ffmpeg:
ffmpeg -f video4linux2 -s 320x240 -r 10 -i /dev/video0 -f rawvideo \
-pix_fmt rgb24 -r 10 webcam.rgb24
This raw video file plays ok in mplayer.
I encode this file using x264:
x264 --input-res 320x240 --demuxer raw --input-fmt rgb24 --fps 10 \
-o webcam.mkv webcam.rgb24
However, when I try to play webcam.mkv with VLC, the image is distorted and looks interlaced.
I don't know what I am doing wrong.

After some further research I was able to encode the raw video stream successfully. The problem (I think) was that x264 expects yuv420p-formatted data. When I changed the capture format, I could play the mkv file without any distortion.
Capture command:
ffmpeg -t 10 -f video4linux2 -s 320x240 -r 10 -i /dev/video0 -f rawvideo \
-pix_fmt yuv420p -r 10 webcam.yuv420p
(capture from input device /dev/video0 for 10 seconds at 10 fps and write to the file webcam.yuv420p in yuv420p pixel format)
Encode command:
x264 --input-res 320x240 --demuxer raw --input-fmt yuv420p --fps 10 \
-o webcam.mkv webcam.yuv420p
Play command:
mplayer -vo gl:nomanyfmts webcam.mkv
(Or open with vlc)

Your problem was that you used the --input-fmt option (which exists specifically for the lavf demuxer) together with --demuxer raw. With the raw demuxer you should use the --input-csp option instead (probably with the value bgr for ffmpeg's -pix_fmt rgb24).
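For example, the corrected encode command might look like this (a sketch only: it assumes your x264 build can convert packed RGB input, which needs libswscale support, and if the colors come out swapped, try rgb instead of bgr):
x264 --input-res 320x240 --demuxer raw --input-csp bgr --fps 10 -o webcam.mkv webcam.rgb24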

Related

Extracting frames from video while recording using ffmpeg

I am using ffmpeg to record a video using a Raspberry Pi with its camera module.
I would like to run an image classifier at a regular interval, for which I need to extract a frame from the stream.
This is the command I currently use for recording:
$ ffmpeg -f video4linux2 -input_format h264 -video_size 1280x720 -framerate 30 -i /dev/video0 -vcodec copy -an test.h264
In other threads this command is recommended:
ffmpeg -i file.mpg -r 1/1 $filename%03d.bmp
I don't think this is intended to be used with files that are still being appended to, and I get the error "Cannot use -sseof, duration of test.h264 not known".
Is there any way that ffmpeg allows this?
I don't have a Raspberry Pi with a camera set up to test with at the moment, but you should be able to simply append a second output to your original command, as follows, to get, say, 1 frame per second of BMP images:
ffmpeg -f video4linux2 -input_format h264 -video_size 1280x720 -framerate 30 -i /dev/video0 -vcodec copy -an test.h264 -r 1 frame-%03d.bmp
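If the classifier only ever needs the most recent frame, a variant worth trying (untested here) is to let the image2 muxer keep overwriting a single file via its -update option instead of writing numbered frames:
ffmpeg -f video4linux2 -input_format h264 -video_size 1280x720 -framerate 30 -i /dev/video0 -vcodec copy -an test.h264 -r 1 -update 1 latest.bmp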

ffmpeg colorspace conversion speed

I am running 2 ffmpeg commands on a fairly fast, GPU-enabled machine (AWS g2.2xlarge instance):
ffmpeg -i ./in.mp4 -s 1280x720 -r 30 -an -f rawvideo -pix_fmt yuv420p - | cat - >/dev/null
gives 524 fps, while
ffmpeg -i ./in.mp4 -s 1280x720 -r 30 -an -f rawvideo -pix_fmt argb - | cat - >/dev/null
gives just 101. It just shouldn't, couldn't take as much as 8 ms per frame on a modern CPU, let alone a GPU!
What am I doing wrong, and how can I improve the speed of this?
PS: Now this is truly ridiculous!
ffmpeg -i ./in.mp4 -s 1280x720 -r 30 -an -f rawvideo -pix_fmt yuv420p - | ffmpeg -s 1280x720 -r 30 -an -f rawvideo -pix_fmt yuv420p -i - -s 1280x720 -r 30 -an -f rawvideo -pix_fmt argb - | cat - >/dev/null
manages 275 fps! That is far from perfect, but something I can live with.
Why?
Thanks!
"It is easy to see that the GPU is used for output encoding - no CPU could encode MP4 at 1280x720, 30 fps at 10x the playback speed."
Are you sure? On a mid-range Haswell i5, my CPU encodes reach around 4-5x real time for that resolution. Since you haven't specified a codec, ffmpeg will default to libx264 for MP4 output, which does NOT encode on a GPU.
Check the output of your ARGB pipeline. To actually store RGB, libx264 has to be invoked explicitly as -c:v libx264rgb - and even then, H.264 does not store alpha. So with the MP4 format you'll probably have to encode as VP9, using a very recent build of ffmpeg; the output will be a YUV pixel format with an alpha plane. If MOV works for you, PNG and QTRLE are your other options.
I'm not aware of a hardware-accelerated encoder for VP9/PNG/QTRLE usable with ffmpeg.
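To illustrate the explicit RGB route mentioned above, a sketch of an RGB-preserving H.264 encode would be (libx264rgb keeps the RGB planes but, as noted, still cannot carry an alpha channel):
ffmpeg -i ./in.mp4 -c:v libx264rgb -preset ultrafast out_rgb.mp4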

avconv / ffmpeg webcam capture while using minimum CPU processing

I have a question about avconv (or ffmpeg) usage.
My goal is to capture video from a webcam and save it to a file.
Also, I don't want to use too much CPU processing (I don't want avconv to scale or re-encode the stream).
So I was thinking of using the compressed MJPEG video stream from the webcam and saving it directly to a file.
My webcam is a Microsoft LifeCam HD 3000 and its capabilities are:
ffmpeg -f v4l2 -list_formats all -i /dev/video0
Raw: yuyv422 : YUV 4:2:2 (YUYV) : 640x480 1280x720 960x544 800x448 640x360 424x240 352x288 320x240 800x600 176x144 160x120 1280x800
Compressed: mjpeg : MJPEG : 640x480 1280x720 960x544 800x448 640x360 800x600 416x240 352x288 176x144 320x240 160x120
What would be the avconv command to save the compressed stream directly, without avconv doing any scaling or re-encoding?
For now, I am using this command:
avconv -f video4linux2 -r 30 -s 320x240 -i /dev/video0 test.avi
I'm not sure this command is CPU-efficient, since I don't tell it anywhere to use the webcam's compressed MJPEG capability.
Does avconv take care of configuring the webcam before it starts recording to the file? Does it always work on the raw stream, scaling and encoding it?
Thanks for your answer
Reading the actual documentation™ is the closest thing to magic you'll get in real life:
video4linux2, v4l2
input_format
Set the preferred pixel format (for raw video) or a codec name. This option allows one to select the input format, when several are available.
video_size
Set the video frame size. The argument must be a string in the form WIDTHxHEIGHT or a valid size abbreviation.
The command below uses -c:v copy to copy the received stream without touching it, thereby achieving the lowest possible resource use:
ffmpeg -f video4linux2 -input_format mjpeg -video_size 640x480 -i /dev/video0 -c:v copy <output>
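For instance, to record one minute of the MJPEG stream into a Matroska file (any container that can hold MJPEG, such as AVI or MKV, should do):
ffmpeg -f video4linux2 -input_format mjpeg -video_size 640x480 -i /dev/video0 -c:v copy -t 60 capture.mkv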

How to stream webcam video using ffmpeg?

I am very new to ffmpeg and just read some examples on how to open a video file and decode its stream.
But is it possible to open a webcam's stream, something like:
http://192.168.1.173:80/live/0/mjpeg.jpg?x.mjpeg
Is there any examples/tutorials on this?
I need to use ffmpeg as decoder to decode the stream in my own Qt based program.
Nyaruko,
First check whether your webcam is supported:
ffmpeg -y -f vfwcap -i list
Next, to capture and encode:
ffmpeg -y -f vfwcap -r 25 -i 0 out.mp4
This site has helpful info:
http://www.area536.com/projects/streaming-video/
Best of Luck.
This works for live video streaming:
ffplay -f dshow -video_size 1280x720 -i video0
The other option using ffmpeg is:
ffmpeg -f dshow -video_size 1280x720 -i video0 -f sdl2 -
Both of the above solutions are provided by FFmpeg, using its DirectShow (dshow) input on Windows.
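As for the network stream in the original question: ffplay (and the libav* decoding APIs you would use from Qt) can open an HTTP MJPEG URL directly. A quick, untested check would be:
ffplay "http://192.168.1.173:80/live/0/mjpeg.jpg?x.mjpeg"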

Using FFmpeg to losslessly convert YUV to another format for editing in Adobe Premiere

I have a raw YUV video file that I want to do some basic editing on in Adobe CS6 Premiere, but Premiere won't recognize the file. I thought I'd use ffmpeg to convert it to something Premiere can import, but the conversion needs to be lossless because afterwards I will need the video in YUV format again. I thought of AVI, MOV, and ProRes, but I can't figure out the proper ffmpeg command line or how to ensure it is lossless.
Thanks for your help.
Yes, this is possible. It is normal that Premiere can't open the raw video file: it is just raw data in one giant file, without any headers, so Adobe Premiere doesn't know the frame size, the frame rate, etc.
First make sure you have downloaded the FFmpeg command-line tool. After installing it, you can start converting by running a command with parameters. There are some parameters you have to fill in yourself before you start converting:
Which YUV pixel format are you using? The most common one is YUV 4:2:0 planar 8-bit (yuv420p). You can run ffmpeg -pix_fmts to get a list of all available formats.
What is the frame rate? In my example I will use -r 25 (25 fps).
What encoder do you want to use? The libx264 (H.264) encoder is a great one for lossless compression.
What is your frame size? In my example I will use -s 1920x1080.
Then we get this command to do your compression.
ffmpeg -f rawvideo -vcodec rawvideo -s 1920x1080 -r 25 -pix_fmt yuv420p -i inputfile.yuv -c:v libx264 -preset ultrafast -qp 0 output.mp4
A little explanation of the other parameters:
With -f rawvideo you set the input format to raw video (there is no container)
With -vcodec rawvideo you declare that the input data is not compressed
With -i inputfile.yuv you set your input file
With -c:v libx264 you select libx264 as the video encoder.
The -preset ultrafast setting only speeds up the compression, so your file size will be bigger than with veryslow.
With -qp 0 you request lossless encoding: 0 is the best quality, 51 the worst.
Then output.mp4 is your new container to store your data in.
After you are done in Adobe Premiere, you can convert the video back to a YUV file by inverting almost all of the parameters. FFmpeg recognizes what's inside the mp4 container, so you don't need to provide format parameters for the input.
ffmpeg -i input.mp4 -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 1920x1080 -r 25 rawvideo.yuv
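One way to sanity-check that the round trip really was lossless is to compare per-frame checksums of the original and the recovered file with ffmpeg's framemd5 muxer; if the two outputs match, no information was lost:
ffmpeg -f rawvideo -pix_fmt yuv420p -s 1920x1080 -r 25 -i inputfile.yuv -f framemd5 original.md5
ffmpeg -f rawvideo -pix_fmt yuv420p -s 1920x1080 -r 25 -i rawvideo.yuv -f framemd5 roundtrip.md5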
