ffmpeg colorspace conversion speed

I am running 2 ffmpeg commands on a fairly fast, GPU-enabled machine (AWS g2.2xlarge instance):
ffmpeg -i ./in.mp4 -s 1280x720 -r 30 -an -f rawvideo -pix_fmt yuv420p - | cat - >/dev/null
gives 524fps while
ffmpeg -i ./in.mp4 -s 1280x720 -r 30 -an -f rawvideo -pix_fmt argb - | cat - >/dev/null
gives just 101. Surely a colorspace conversion shouldn't take as much as 8 ms per frame on a modern CPU, let alone a GPU!
What am I doing wrong, and how can I improve the speed here?
PS: Now this is truly ridiculous!
ffmpeg -i ./in.mp4 -s 1280x720 -r 30 -an -f rawvideo -pix_fmt yuv420p - | ffmpeg -s 1280x720 -r 30 -an -f rawvideo -pix_fmt yuv420p -i - -s 1280x720 -r 30 -an -f rawvideo -pix_fmt argb - | cat - >/dev/null
makes 275 fps! That's far from perfect, but something I can live with.
why?
Thanks!
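Part of the gap is sheer data volume: yuv420p carries 12 bits per pixel, while argb carries 32, so the conversion also nearly triples the bytes ffmpeg must push down the pipe. A quick sanity check of the per-frame sizes (pure shell arithmetic, no ffmpeg needed):

```shell
# Bytes per 1280x720 frame for the two raw pixel formats:
W=1280; H=720
yuv420p=$(( W * H * 3 / 2 ))  # 12 bits/pixel: full-res luma + two quarter-res chroma planes
argb=$(( W * H * 4 ))         # 32 bits/pixel: 8 bits each for A, R, G, B
echo "yuv420p: $yuv420p bytes/frame, argb: $argb bytes/frame"
```

On top of the extra I/O, swscale's YUV-to-RGB conversion runs on the CPU (ffmpeg does not offload it to the GPU), which is likely why piping through a second ffmpeg process as in the PS helps: decode and conversion end up on different cores.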

It is easy to see that the GPU is used for output encoding: no CPU could encode MP4 at 1280x720x30fps at 10x the playback speed.
Are you sure? On a mid-range Haswell i5, my CPU encodes run at around 4-5x real time at that resolution. Since you haven't specified a codec, ffmpeg will default to libx264 for MP4 output, which does NOT encode on a GPU.
Check the output of your ARGB pipeline. In order to save as RGB, libx264 has to be called explicitly as -c:v libx264rgb. Except H.264 does not store alpha. So for MP4 format, you'll probably have to encode as VP9, using a very recent build of ffmpeg. The output will be a YUV pixel format with an alpha plane. If MOV works, PNG and QTRLE are your other options.
I'm not aware of a hardware-accelerated encoder for VP9/PNG/QTRLE usable with ffmpeg.

Related

Colour space conversion from yuv444p16le to yuv420p10le using ffmpeg command line tool

Is there a way to convert from yuv444 planar 16 bit little-endian format to yuv420 planar 10 bit little-endian format using ffmpeg?
I have tried the following command but failed:
ffmpeg -y -pixel_format yuv444p16le -s 4096x4096 -r 30 -i input.yuv -pixel_format yuv420p10le -s 4096x4096 -r 30 output.yuv
Use
ffmpeg -y -pixel_format yuv444p16le -s 4096x4096 -framerate 30 -i input.yuv -pix_fmt yuv420p10le output.yuv
-pixel_format is an input option for raw demuxers.
-pix_fmt is an output option for the target format.
Note that 10-bit samples are still stored padded to 16 bits, so the per-sample storage size remains the same as the 16-bit input.
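A sketch of the resulting frame sizes (shell arithmetic only): since each 10-bit sample still occupies 2 bytes, any size reduction comes entirely from the 4:4:4 to 4:2:0 chroma subsampling, not from the bit-depth change.

```shell
W=4096; H=4096
yuv444p16le=$(( W * H * 3 * 2 ))      # three full-res planes, 2 bytes per sample
yuv420p10le=$(( W * H * 3 / 2 * 2 ))  # luma + quarter-res chroma, still 2 bytes per sample
echo "in: $yuv444p16le bytes/frame, out: $yuv420p10le bytes/frame"
```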

FFMPEG - Convert UInt16 Data to .264

At the moment, I'm trying to use FFmpeg to convert raw uint16 data from an infrared camera to MP4 format, or at least to raw .h264.
My current command for ffmpeg is here:
ffmpeg -f rawvideo -pix_fmt gray16be -s:v 140x110 -r 30 -i binaryMarianData.bin -c:v libx264 -f rawvideo -pix_fmt yuv420p output.264
But my output file doesn't really look good :(
Frame of my input (it's a nose)
Frame of My Output
Here is my Input File: http://fileshare.link/91a43a238e0de75b/binaryMarianData.bin
Update 1: Little Endian
Hey guys, would be great if it's possible to get the video output in the little endian byte order.
This is a frame shown with ImageJ with the following settings
Settings of the shown frame above in ImageJ
Unfortunately my output doesn't look like this.
Output Frame Little Endian
This is my command used to convert the RAW File:
ffmpeg -f rawvideo -pixel_format gray16le -video_size 110x140 -framerate 30 -i binaryMarianData.bin -vf transpose=clock -c:v libx264 -pix_fmt yuv420p output.264
It is sideways, so the stride has to be corrected and the image rotated.
ffmpeg -f rawvideo -pixel_format gray16be -video_size 110x140 -framerate 30 -i binaryMarianData.bin -vf transpose=clock -c:v libx264 -pix_fmt yuv420p output.264
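Whichever way around the frame is, gray16 means 2 bytes per pixel, so a quick way to sanity-check the 110x140 vs 140x110 geometry is to confirm the file size is a whole multiple of the per-frame byte count (shell arithmetic only):

```shell
W=110; H=140
bytes_per_frame=$(( W * H * 2 ))  # gray16: 2 bytes per pixel
echo "$bytes_per_frame bytes per frame"
# frame count for a given capture would then be: filesize / bytes_per_frame
```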

ffmpeg: Low framerate when capturing with -vcodec mjpeg but not with -vcodec copy

I'm trying to capture video from a webcam, and I find that when I use the -vcodec copy option, it works really well (far better than any other software I've tried). However, I'd like my files to be a bit smaller, and it seems that every attempt I make to compress the video leads to extremely jumpy video. If, for example, I switch the output vcodec to mjpeg, it changes from reporting 15 fps to reporting between 3 and 4 fps. Am I doing something wrong?? Here is the call with -vcodec copy:
ffmpeg -y -f dshow -vcodec mjpeg -s 1184x656 -framerate 25 -i video="HD 720P Webcam" -vcodec copy test.avi
-- which gets me 15 fps. But if I change to mjpeg, I get only 3-4 fps:
ffmpeg -y -f dshow -vcodec mjpeg -s 1184x656 -framerate 25 -i video="HD 720P Webcam" -vcodec mjpeg test.avi
Experimental attempts to put -framerate 25 or -r 25 before test.avi also do nothing to help the situation. I'm not getting any smoother video when experimenting with mpeg4 or libx264 either. Only the copy option gives me smooth video (btw I'm filming my hands playing a piano, so there is a lot of fast motion in the videos).
Help!!!! And thank you...
I don't understand why the framerate drops so much, but you could try a 2 pass approach where you first record it using -vcodec copy (as you pasted in the question)
ffmpeg -y -f dshow -vcodec mjpeg -s 1184x656 -framerate 25 -i video="HD 720P Webcam" -vcodec copy test.avi
Then transcode it into mjpeg once it's done (something like this):
ffmpeg -i test.avi -vcodec mjpeg test.mjpeg
note: I haven't actually tested any of the above command lines.
Sounds like your webcam is outputting a variable frame rate stream. Try the below on one of your copy-captured files.
ffmpeg -i test.avi -vcodec libx264 -r 30 test.mp4
(You should avoid capturing to AVI, use MKV instead)

BMDCapture (bmdtools) capture with Ultrastudio Mini Recorder results in color problems

I'm trying to get a Blackmagic Ultrastudio Mini Recorder to stream via avconv to HLS. To test, it's hooked up to an AppleTV and this is the command I'm using:
./bmdcapture -m 14 -C 0 -F nut -f pipe:1 | avconv -vsync passthrough -y -i - -vcodec copy -pix_fmt yuyv422 -strict experimental -f hls -hls_list_size 999 +live -strict experimental out.m3u8
However, the colors are all messed up, suggesting the color format is set incorrectly. The input format is 1280x720 @ 59.94 fps (which is correct) and I've set the format to yuyv422 (though nothing else I've set this to has fixed the error).
Got it!
The Mini Recorder captures at 10 bits rather than 8 (I had assumed 8, since Adobe's live encoder said it would be).
Here is the fixed code:
./bmdcapture -m 14 -p yuv10 -C 0 -F nut -f pipe:1 | avconv -vsync passthrough -y -i - -vcodec copy -pix_fmt uyvy422 -strict experimental -f hls -hls_list_size 999 +live -strict experimental out.m3u8
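When a raw pipe shows scrambled colors, one useful sanity check is the bytes-per-frame each side expects: 8-bit uyvy422 packs 2 bytes per pixel, while 10-bit 4:2:2 data is larger, so if the capture side emits 10-bit samples and the reader parses them with the 8-bit layout, every sample boundary is off. A rough sketch of the 8-bit figure (shell arithmetic only):

```shell
W=1280; H=720
uyvy422=$(( W * H * 2 ))  # 8-bit 4:2:2: 16 bits per pixel
echo "uyvy422: $uyvy422 bytes/frame"
```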

Distorted vlc playback with x264 encoded file

I have captured raw video in rgb format from my webcam using ffmpeg:
ffmpeg -f video4linux2 -s 320x240 -r 10 -i /dev/video0 -f rawvideo \
-pix_fmt rgb24 -r 10 webcam.rgb24
This raw video file plays ok in mplayer.
I encode this file using x264:
x264 --input-res 320x240 --demuxer raw --input-fmt rgb24 --fps 10 \
-o webcam.mkv webcam.rgb24
However when I try to play webcam.mkv with vlc it is an interlaced, distorted image.
I don't know what I am doing wrong.
After some further research I was able to successfully encode the raw video stream. The problem (I think) was that x264 expects yuv420p formatted data. When I changed the capture format I could play the mkv file without any distortion.
Capture command:
ffmpeg -t 10 -f video4linux2 -s 320x240 -r 10 -i /dev/video0 -f rawvideo \
-pix_fmt yuv420p -r 10 webcam.yuv420p
(capture from input device /dev/video0 for 10 secs at a frame rate of 10 and output to file webcam.yuv420p in yuv420p pixel format)
Encode command:
x264 --input-res 320x240 --demuxer raw --input-fmt yuv420p --fps 10 \
-o webcam.mkv webcam.yuv420p
Play command:
mplayer -vo gl:nomanyfmts webcam.mkv
(Or open with vlc)
Your problem was that you used the --input-fmt option (which exists specifically for the lavf demuxer) together with --demuxer raw. With the raw demuxer you should use the --input-csp option (probably with the bgr value to match ffmpeg's -pix_fmt rgb24).
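The raw demuxer cannot infer the colorspace from the file, and the number of bytes it reads per frame depends on it, so a wrong colorspace setting misframes every frame. For the 320x240 clips above (shell arithmetic only, no x264 needed):

```shell
W=320; H=240
rgb24=$(( W * H * 3 ))        # 24 bits per pixel
yuv420p=$(( W * H * 3 / 2 ))  # 12 bits per pixel
echo "rgb24: $rgb24 bytes/frame, yuv420p: $yuv420p bytes/frame"
```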
