FFMPEG not extracting yuv colorspace images - ffmpeg

I am using FFmpeg to extract images from MXF videos. I am interested in extracting TIFF images in the YUV (preferably 4:2:2) color space. The MXF videos are in the YUV color space, which is why I want to continue working in it. I have tried:
ffmpeg -i video.mxf -pix_fmt yuv422p f%10d.tiff
However, the output images appear to be in the RGB color space. I use ImageMagick and the command line:
identify -verbose output.tiff
which informs me that the image files are in the RGB color space. I have googled and tried variations of my FFmpeg command line, but to no avail. What am I doing wrong?

Check ffprobe output.tiff. It should report a YUV pixel format.
Don't confuse pixel format with absolute color space: if sRGB is the source color space, it will remain the destination color space as well. The YUV pixel formats are a way of decoupling luminance/intensity from chromaticity/hue, which aids in efficiently compressing a video signal.
From Y'CbCr:
Y′CbCr is not an absolute color space; rather, it is a way of encoding RGB information.
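For a concrete check, ffprobe can print just the pixel format of the extracted image (a sketch, using the output.tiff name from the question):
ffprobe -v error -select_streams v:0 -show_entries stream=pix_fmt -of default=noprint_wrappers=1 output.tiff
If the TIFF was written with YUV data, this prints the pix_fmt value accordingly (e.g. pix_fmt=yuv422p); an rgb24 result means the encoder converted to RGB.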

Related

Are there NV12 files? Or can PNG files be NV12?

I am confused about image formats. I have read what YUV and RGB mean, but I understand them only theoretically.
I have a PNG file. Are PNG files RGB by default? Or can they be NV12 (YUV) too?
If not, what format do NV12 images come in? How do I recognize that an image is NV12? Do I have to convert a PNG file to read it as NV12?
Part of my confusion comes from a program I read that converted RGB to YUV but then extracted only the Y part and saved it to a file, so the file contained only the Y plane. Are there files with the whole YUV information?
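For what it's worth, PNG never stores NV12; its color types are RGB(A), grayscale, or palette. NV12 exists only as raw, headerless bytes, so a conversion step is needed. As an illustration, one way to dump the NV12 bytes of a PNG with ffmpeg (a sketch; input.png and frame.nv12 are placeholder names):
ffmpeg -i input.png -f rawvideo -pix_fmt nv12 frame.nv12
Because the raw file has no header, you must remember the width and height yourself to read it back.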

How to get pixel-by-pixel values from a YUY2 (yuyv422) raw video?

I have a video (extension .ravi) that is the output of IR camera software.
I need to compute the temperature from each pixel for each frame within that video.
The IR camera+software company already told me they would not help since such information would reveal much of their know-how.
When I play the video, e.g. in mplayer, the temperature fields seem to display correctly. So the information that translates into a pixel color contains the information on its temperature.
My idea is to isolate each frame into a file and find the pixel information for each pixel in each file. I am able to retrieve the temperature pixel by pixel for single frames with the original IR camera software (though this is done by point and click, which is unfeasible for every frame). Then, with the pixel information on one side and the temperature information on the other (both for the first frame), I hope to relate them by a function, and hopefully that function applies to all the frames of the video.
For that video, I get the following metadata (from FFmpeg):
Metadata:
META : (640,480,16,312500),(0,0,0,0),0,1,80
Duration: 00:13:53.06, start: 0.000000, bitrate: 158338 kb/s
Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 640x481, 157619 kb/s, 32 fps, 32 tbr, 32 tbn, 32 tbc
I guess the color I see when playing the video, which is somehow related to the pixels temperature, comes from the YUV values for each pixel.
How can I access that information?
I have tried to convert each frame to an image (say PNG) with FFMPEG, and then get the value for each pixel (e.g. using ImageMagick), but I get RGB values.
Following @VC.One's advice, I can convert every frame to its own file:
ffmpeg -i original.ravi -c copy -pix_fmt yuyv422 frame%05d.bmp
(If I use .yuv as the extension instead of .bmp, I get a single file rather than one file per frame.)
Now, with a file for each frame, I can look at it with ImageMagick:
convert -size 640x481 -depth 8 -sampling-factor 4:2:2 YUV:frame00001.bmp -colorspace YUV frame00001.txt
and this is what the output looks like:
# ImageMagick pixel enumeration: 640,481,65535,yuv
0,0: (12436,26650,58361) #3068E3 yuv(48,104,227)
1,0: (6314,29662,45762) #1973B2 yuv(25,115,178)
2,0: (15547,25111,19123) #3C624A yuv(60,98,74)
From the documentation I understand this information as follows:
the first line, the header, contains the image dimensions (640x481) and the maximum value for each YUV field, 65535 (this I guess comes from the -depth 8 option);
then, for each pixel, a line with its position and the YUV values in brackets,
followed by a comment with what I take to be the RGB value,
and then something else I do not understand.
Are the single frames really saved as yuv files, even though they have the .bmp extension?
Am I reading the ImageMagick output properly?
"I would like to, for each frame of the video, have the luminance and chrominance values for each pixel."
First... Get the video into single frames via FFmpeg:
ffmpeg -f rawvideo -framerate 32 -s 640x481 -pix_fmt yuyv422 -i input.yuv -c copy frame%d.yuv
This will output each frame as an individual YUV file (e.g. frame1.yuv, frame2.yuv). Note: there is no header recording picture width or height, so you must specify the image size whenever you later use a YUV file.
Second... Getting pixel values:
I don't use ImageMagick, but from a quick look at the manual:
convert -size 1024x768 -colorspace YUV frame01.yuv output01.txt
The above is untested, and it's possible you also need -sampling-factor 4:2:2 or even -depth 8 as input options. You have more experience than me with this program. Maybe even try:
convert -size 1024x768 -depth 8 YUV:frame01.yuv -colorspace YUV output01.txt
"...How can I be sure the yuyv422 I obtain... is the same as the one in the original video?"
You could open each YUV file in a hex editor to see the YUV data as byte values (or find a command-line tool that prints binary data).
For example a yellow pixel looks like:
RGB (24 bits/pixel): [FF] [F2] [00]... where red=FF (255), green=F2 (242), blue=00 (0).
YUYV422 (16 bits/pixel): [CB] [13] [CB] [96]... where Y0=CB (203), U0=13 (19), Y1=CB (203), V0=96 (150).
So starting at index 0 for the first pixel, the luma (Y) is 203 and the blue chroma (U) is 19; skip one byte (i.e. the third byte) and get the red chroma (V) = 150. So the first pixel's YUV is [CB][13][96] within the bytes of the frame's YUV data.
The second pixel starts with that third byte [CB] that was previously skipped. See image below for YUV byte structure...
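To automate that byte-level reading, here is a minimal sketch in Python (assuming a single 640x481 yuyv422 frame saved as frame1.yuv, as produced above; the file name and dimensions are placeholders):

WIDTH, HEIGHT = 640, 481  # raw YUV has no header, so the size must be known

with open("frame1.yuv", "rb") as f:
    data = f.read()

# YUYV422 packs two pixels into four bytes: Y0 U Y1 V.
# Both pixels of the pair share the U and V samples.
def pixel(x, y):
    base = (y * WIDTH + (x & ~1)) * 2  # start of the 4-byte pair
    y0, u, y1, v = data[base:base + 4]
    return (y1 if x & 1 else y0, u, v)

print(pixel(0, 0))  # (Y, U, V) of the top-left pixel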

ffmpeg - Is it possible to create a video from image tiles?

With ffmpeg, you can:
create a video from a list of images
create an image with tiles representing frames of a video
But how is it possible to create a video from the tiles of a picture representing frames of a video?
If I have this command line:
ffmpeg -i test.mp4 -vf "scale=320:240,tile=12x25" out.png
I will get an image (out.png) made of 12x25 tiles of 320x240 pixels each.
I am trying to reverse the process and, from that image, generate a video.
Is it possible?
Edit with more details:
What I am really trying to achieve is converting a video into a GIF preview. In order to make an acceptable GIF, I need to build a common palette. So either I scan the movie twice, which would take very long since I have to do it for a large batch, or I make a tiled image with all the frames in a single image, then make a GIF with a palette computed from all the frames, which would be significantly faster... if possible.
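Two hedged pointers, neither tested against this exact workflow. Recent FFmpeg builds ship an untile filter that reverses tile; a sketch, assuming the 12x25 out.png from above and a 25 fps target (the input rate 1/12 is chosen so the 300 tiles spread over 12 seconds, i.e. 25 fps):
ffmpeg -r 1/12 -i out.png -vf "untile=12x25" restored.mp4
And for the underlying goal, a shared-palette GIF, the usual approach skips tiling entirely: palettegen and paletteuse can run in a single pass with split (a sketch, assuming test.mp4; the fps and scale values are arbitrary):
ffmpeg -i test.mp4 -vf "fps=10,scale=320:-1:flags=lanczos,split[a][b];[a]palettegen[p];[b][p]paletteuse" out.gif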

JPEG colors worse than PNG when extracting frames with ffmpeg?

When extracting still frames from a video at a specific time mark, like this:
ffmpeg -i foobar.mp4 -vframes 1 -ss 4:20 -q:v 1 example.png
I noticed that using PNG or JPG results in different colors. (Note that -q:v 1 requests maximum image quality.)
Here are some examples:
(three JPG vs PNG comparison screenshots)
In general, the JPG shots seem to be slightly darker and less saturated than the PNGs.
When checking with exiftool or ImageMagick's identify, both images use the sRGB color space and have no ICC profile.
Any idea what's causing this? Or which of these two would be 'correct'?
I also tried saving screenshots with my video player (MPlayerX), in both JPG and PNG. In that case, the frame dumps in either format look exactly the same, and they look mostly like ffmpeg's JPG stills.
This is related to the video range, or levels. Video stores color as luma and chroma, i.e. brightness and color difference, and for legacy reasons from the days of analogue signals, black and white are not represented as 0 and 255 in an 8-bit encoding but as 16 and 235 respectively. The video stream should normally be flagged when this is the case, since one can also store video where 0 and 255 are black and white respectively. If the file isn't flagged, or is flagged wrongly, then some rendering or conversion functions can produce the wrong results. But we can force FFmpeg to interpret the input one way or the other.
Use
ffmpeg -i foobar.mp4 -vframes 1 -ss 4:20 -q:v 1 -src_range 0 -dst_range 1 example.png/jpg
This tells FFmpeg to assume studio (limited) range input and to output full range. The colours still won't be identical due to the color encoding conversion, but the major difference should disappear.
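If your build doesn't accept -src_range/-dst_range, the same range expansion can be expressed with the scale filter's range options (a sketch of an equivalent form, untested here):
ffmpeg -i foobar.mp4 -vframes 1 -ss 4:20 -vf "scale=in_range=tv:out_range=pc" example.png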
I don't know about ffmpeg specifically. But in general, JPEG images use lossy compression that lowers the quality slightly in exchange for a large reduction in file size. Most programs that can write JPEG files have a switch (or however they take options) that sets the "quality" or "compression" or something like that. I don't have ffmpeg on any of the half dozen machines I have open, or I'd tell you what I thought the right one was.

How to transform .jpg to .yuv

Any help on transforming a .jpg to .yuv?
I have a file a.jpg, and I want to read it and transform it to a.yuv.
How can I do it using ffmpeg?
From man ffmpeg (modified slightly):
You can output to a raw YUV420P file:
ffmpeg -i mydivx.avi hugefile.yuv
hugefile.yuv is a file containing raw YUV planar data. Each frame is
composed of the Y plane followed by the U and V planes at half
vertical and horizontal resolution.
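Applied to the question's file names, that becomes (a sketch; yuv420p is just one choice of pixel format):
ffmpeg -i a.jpg -pix_fmt yuv420p a.yuv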
Remember that ffmpeg usage questions are better suited for superuser.com (and therefore you'll probably get better and more detailed answers there too).
