ffmpeg: convert mp4 video to rgba stream

I'm looking to process the frames of an mp4 video as an rgba matrix, much like what can be done with HTML5 canvas.
This SU question/answer seemed promising: https://superuser.com/questions/1230385/convert-video-into-low-resolution-rgb32-format
But the output is not as promised: an 800 KB mp4 file produced a 56 MB out.bin that appears to be gibberish, not an rgba matrix.
If anyone can clarify or provide alternate suggestions, that'd be great.
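For what it's worth, a large out.bin is expected: raw RGBA output has no container, header, or compression, so the decoded file is simply width × height × 4 bytes per frame, concatenated back to back. A sketch of the size arithmetic (the 640×360 resolution and the rawvideo command shown in the comment are assumptions; substitute your video's actual values):

```shell
# Hypothetical command to dump decoded frames as raw RGBA (no header):
#   ffmpeg -i input.mp4 -f rawvideo -pix_fmt rgba out.bin

# Each frame occupies exactly width * height * 4 bytes.
width=640
height=360
frame_bytes=$((width * height * 4))
echo "bytes per frame: $frame_bytes"

# So the frame count of a dump is its size divided by frame_bytes:
file_size=$((frame_bytes * 240))   # e.g. a 240-frame (8 s @ 30 fps) dump
echo "frames in dump: $((file_size / frame_bytes))"
```

At even modest resolutions this multiplies out to tens of megabytes per second, which is why the 56 MB output is plausible rather than a sign of corruption; to interpret it you must know the exact width, height, and pixel format used.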

Related

ffmpeg need perfect pixels for LED processing (h264, mpeg 1, 2)

We have a .mov with the Animation codec and the pixels look great. But the LED media players expect H.264, MPEG-1, or MPEG-2. Is it even possible to get high pixel accuracy with those? I read a lot of the comments and tried lossless H.264, to no avail. Thanks for your help!

Tools for high quality GIF files

I want to add some GIF files to my README. Please help me create them:
merge PNG images into a GIF.
convert a video to GIF.
I have seen some 60fps 4K "GIF" files, but I know they are fake; 9gag, for example, uses a <video> tag with an mp4 source. I am not able to embed video directly in my README, and I only need short (2-5 second) clips.
What is the best way to add animation to a GitHub README file?
If you want a tool that records your desktop screen straight to GIF, I strongly recommend ScreenToGif, which is easy to use and very helpful.
You can use FFmpeg to convert videos to reasonable-quality GIFs with this method: http://blog.pkh.me/p/21-high-quality-gif-with-ffmpeg.html
Quoting that post:
#!/bin/sh
palette="/tmp/palette.png"
filters="fps=15,scale=320:-1:flags=lanczos"
ffmpeg -v warning -i $1 -vf "$filters,palettegen" -y $palette
ffmpeg -v warning -i $1 -i $palette -lavfi "$filters [x]; [x][1:v] paletteuse" -y $2
...which can be used like this:
% ./gifenc.sh video.mkv anim.gif
PNG files can be converted and merged in a GIF with imagemagick:
convert -loop 0 -delay 100 in1.png in2.png out.gif
Or with various online tools.
But please keep in mind that GIF is not really intended for, or suited to, large high-quality animations. With some trickery you can get it to display more than 256 colors, but that dramatically increases file size. 4K 60fps GIFs will be very large to download and will most likely cause performance problems. If you plan to add several such GIFs to a single page, they will probably slow the browser down significantly, or even crash it, for some visitors. That's why some sites are now using videos for what they call "GIFs".
For maximum visual quality there's gifski encoder.
GIF theoretically supports a maximum of 100 fps, but for backwards-compatibility reasons browsers won't play it at that rate; some cap at 33 fps, others at 25 fps.
GIF compression is awfully bad: even the ideal case of a solid color compresses poorly. GIF can encode just the small rectangle that differs between frames, so a screencast where only the mouse or a text cursor moves may have a tolerable file size (which you can optimize further with gifsicle/giflossy), but otherwise avoid high resolutions.
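The frame-rate ceiling comes from GIF's timing field: each frame's delay is stored in hundredths of a second, so the smallest nonzero delay (1 centisecond) gives 100 fps, and the browser caps mentioned above correspond to effective minimum delays of 3 and 4 centiseconds. A quick sanity check of that arithmetic:

```shell
# GIF stores per-frame delay in centiseconds (1/100 s),
# so fps = 100 / delay.
delay_cs=1
echo "theoretical max: $((100 / delay_cs)) fps"   # 100 fps

# Browsers that refuse tiny delays effectively enforce 3 or 4 cs:
echo "$((100 / 3)) fps"   # 33 fps (integer division)
echo "$((100 / 4)) fps"   # 25 fps
```

This also explains why a "60 fps GIF" is not representable at all: 60 fps would need a delay of 100/60 ≈ 1.67 centisseconds, which the integer delay field cannot express.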

jpeg colors worse than png when extracting frames with ffmpeg?

When extracting still frames from a video at a specific time mark, like this:
ffmpeg -i foobar.mp4 -vframes 1 -ss 4:20 -q:v 1 example.png
I noticed that using PNG or JPG results in different colors. (Note that -q:v 1 requests maximum image quality.)
Here are some examples:
JPG vs PNG
JPG vs PNG
JPG vs PNG
In general, the JPG shots seem to be slightly darker and less saturated than the PNGs.
When checking with exiftool or imagemagick's identify, both images use sRGB color space and no ICC profile.
Any idea what's causing this? Or which of these two would be 'correct'?
I also tried saving screenshots with my video player (MPlayerX), in both JPG and PNG. In that case, the frame dumps in either format look exactly the same, and they look mostly like ffmpeg's JPG stills.
This is related to video range, or levels. Video stores color as luma and chroma, i.e. brightness and color difference, and for legacy reasons dating back to analogue signals, black and white are not represented as 0 and 255 in an 8-bit encoding but as 16 and 235 respectively. The video stream should normally be flagged to indicate this, since one can also store video where 0 and 255 are black and white respectively. If the file is unflagged or flagged wrongly, some rendering or conversion functions can produce wrong results. But we can force FFmpeg to interpret the input one way or the other.
Use
ffmpeg -i foobar.mp4 -vframes 1 -ss 4:20 -q:v 1 -src_range 0 -dst_range 1 example.png/jpg
This tells FFmpeg to assume studio (limited) range and to output full range. The colors still won't be identical, due to the color-encoding conversion, but the major difference should disappear.
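For intuition, the limited-to-full expansion that -dst_range 1 triggers maps luma 16 to 0 and 235 to 255, i.e. full = (limited − 16) × 255 / 219 for 8-bit luma (a simplification: chroma uses the 16-240 range and is scaled separately). A quick check of the endpoints:

```shell
# Limited (studio) range puts black at 16 and white at 235 in 8 bits.
# Expanding to full range: full = (limited - 16) * 255 / 219
limited_black=16
limited_white=235
echo "black -> $(( (limited_black - 16) * 255 / 219 ))"   # 0
echo "white -> $(( (limited_white - 16) * 255 / 219 ))"   # 255
```

If the expansion is skipped (the file is treated as full range when it is actually limited), blacks sit at 16/255 and whites at 235/255, which is exactly the "slightly darker, less saturated" look described in the question.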
I don't know about ffmpeg specifically, but in general JPEG images use lossy compression that lowers quality slightly in exchange for a large reduction in file size. Most programs that write JPEG files have an option that sets the "quality" or "compression" level. I don't have ffmpeg on any of the half dozen machines I have open, or I'd tell you which option I thought was the right one.

FFMPEG not extracting yuv colorspace images

I am using FFmpeg to extract images from MXF videos. I am interested in extracting TIFF images in the YUV (preferably 4:2:2) color space. The MXF videos are YUV, hence I want to continue working in that color space. I have tried:
ffmpeg -i video.mxf -pix_fmt yuv422p f%10d.tiff
However, the output images appear to be in the RGB color space. I checked with ImageMagick's command line:
identify -verbose output.tiff
which reports that the image files are in the RGB color space. I have googled and tried variations of my FFmpeg command, but to no avail. What am I doing wrong?
ffmpeg console output as requested:
First part of output
Second part of output
ImageMagick identify (partial) result:
(I'm not allowed to post more than two links*)
Check ffprobe output.tiff; it should report a YUV pixel format.
Don't confuse pixel format with absolute color space: if sRGB is the source color space, it will remain the destination color space as well. The YUV pixel formats are a way of decoupling luminance/intensity from chromaticity/hue, which helps compress a video signal efficiently.
From Y'CbCr:
Y′CbCr is not an absolute color space; rather, it is a way of encoding
RGB information.
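To make the "encoding, not a color space" point concrete: the BT.601 luma component is just a weighted sum of the same R, G, B values, Y′ = 0.299R + 0.587G + 0.114B, with Cb and Cr carrying color differences; no information leaves the RGB gamut. A sketch of the luma sum using integer arithmetic (weights scaled by 1000 to stay within shell integer math):

```shell
# BT.601 luma as a weighted sum of R, G, B.
# Weights 0.299/0.587/0.114 are scaled by 1000 for integer arithmetic.
r=255; g=255; b=255
y=$(( (299 * r + 587 * g + 114 * b) / 1000 ))
echo "Y for white: $y"   # 255 -- the weights sum to 1

# A pure green pixel keeps most of its intensity in luma:
r=0; g=255; b=0
echo "Y for green: $(( (299 * r + 587 * g + 114 * b) / 1000 ))"   # 149
```

Because the transform is invertible, converting the TIFF back through the same matrix recovers the original RGB values (up to rounding), which is why identify still reports RGB: the absolute color space never changed.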

How to transform .jpg to .yuv

Any help on transforming a .jpg to .yuv?
I have a file a.jpg, and I want to read it and transform it into a.yuv.
How can I do that with ffmpeg?
From man ffmpeg (modified slightly):
You can output to a raw YUV420P file:
ffmpeg -i mydivx.avi hugefile.yuv
hugefile.yuv is a file containing raw YUV planar data. Each frame is
composed of the Y plane followed by the U and V planes at half
vertical and horizontal resolution.
Remember that ffmpeg usage questions are better suited for superuser.com (and therefore you'll probably get better and more detailed answers there too).
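Since YUV420P halves chroma in both dimensions, the size of the raw output is easy to predict: a full-resolution Y plane followed by quarter-size U and V planes, i.e. width × height × 3/2 bytes per frame. A sketch of that layout arithmetic (the 640×480 resolution is an assumption; a single JPEG input yields one such frame):

```shell
# YUV420P frame layout: Y plane (W*H bytes), then U and V planes,
# each at half width and half height (W/2 * H/2 bytes apiece).
width=640
height=480
y_plane=$((width * height))
u_plane=$(( (width / 2) * (height / 2) ))
v_plane=$u_plane
echo "Y: $y_plane  U: $u_plane  V: $v_plane"
echo "bytes per frame: $((y_plane + u_plane + v_plane))"   # 460800
```

Checking that the .yuv file size matches W × H × 3/2 (times the frame count) is also a quick way to confirm the dimensions you'll need when feeding the raw file to another tool, since the format itself records none of this.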
