How to transform .jpg to .yuv - ffmpeg

Any help on transforming a .jpg to .yuv?
I have an a.jpg file, and I want to read the file and transform it to a.yuv.
How can I do this using ffmpeg?

From man ffmpeg (modified slightly):
You can output to a raw YUV420P file:
ffmpeg -i mydivx.avi hugefile.yuv
hugefile.yuv is a file containing raw YUV planar data. Each frame is
composed of the Y plane followed by the U and V planes at half
vertical and horizontal resolution.
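Applied to the question's files, a minimal sketch (forcing yuv420p makes the plane layout explicit; a .yuv target selects the rawvideo muxer, so the file has no header):
ffmpeg -i a.jpg -pix_fmt yuv420p a.yuv
Because the file is bare planes, a consumer has to be told the geometry, e.g. ffplay -f rawvideo -pixel_format yuv420p -video_size WxH a.yuv, substituting the image's actual dimensions for WxH.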
Remember that ffmpeg usage questions are better suited for superuser.com (and therefore you'll probably get better and more detailed answers there too).

Related

ffmpeg convert mp4 video to rgba stream

I'm looking to process the frames of an mp4 video as an rgba matrix, much like what can be done with HTML5 canvas.
This SU question/answer seemed promising: https://superuser.com/questions/1230385/convert-video-into-low-resolution-rgb32-format
But the output is not as promised. An 800KB mp4 file produced a 56MB out.bin file that seems to be gibberish, not an rgba matrix.
If anyone can clarify or provide alternate suggestions, that'd be great.
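For what it's worth, the command shape from that kind of answer is typically (assuming input.mp4 as the source):
ffmpeg -i input.mp4 -f rawvideo -pix_fmt rgba out.bin
The output is headerless raw video: each frame is simply width x height x 4 bytes of RGBA, one frame after another, so a large file that looks like gibberish in an ordinary viewer is the expected result.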

perspective correction example

I have some videos taken of a display, with the camera not perfectly oriented, so that the result shows a strong trapezoidal effect.
I know that there is a perspective filter in ffmpeg https://ffmpeg.org/ffmpeg-filters.html#perspective, but I'm too dumb to understand how it works from the docs, and I cannot find a single example.
Can somebody show me how it works?
The following example extracts a trapezoidal perspective section from an input Matroska video to an output video.
An estimated coordinate had to be inserted to complete the trapezoidal pattern (out-of-frame coordinate x2=-60,y2=469).
The input video frame was 1280x720. Pixel interpolation was specified as linear, though that is the default when not specified at all; cubic interpolation bloats the output with no apparent improvement in video quality. The output video frame size will be the same as the input's.
The video output was viewable but of rough quality due to sampling error.
ffmpeg -hide_banner -i input.mkv -lavfi "perspective=x0=225:y0=0:x1=715:y1=385:x2=-60:y2=469:x3=615:y3=634:interpolation=linear" output.mkv
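For orientation, the four coordinate pairs in that command are source-frame corners mapped to the output corners, in the order top-left (x0,y0), top-right (x1,y1), bottom-left (x2,y2) and bottom-right (x3,y3); that is the filter's default sense=source behaviour. A minimal sketch with made-up coordinates for a 1280x720 input:
ffmpeg -i input.mkv -vf "perspective=x0=100:y0=40:x1=1180:y1=20:x2=90:y2=700:x3=1190:y3=680:sense=source" flattened.mkv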
You can also make use of ffplay (or any player which lets you access ffmpeg filters, like mpv) to preview the effect, or if you want to keystone-correct a display surface.
For example, if you have your TV above your fireplace mantle and you're sitting on the floor looking up at it, this will un-distort the image to a large extent:
ffplay video.mkv -vf 'perspective=W*.1:0:W*.9:0:-W*.1:H:W*1.1:H'
The above expands the top by 20% and compresses the bottom by 20%, cropping the top and infilling the bottom with the edge pixels.
Also handy for playing back video of a building you're standing in front of with the camera pointed up around 30 degrees.
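To bake the same correction into a file instead of previewing it, the identical filter expression works with ffmpeg (the output name here is illustrative):
ffmpeg -i video.mkv -vf 'perspective=W*.1:0:W*.9:0:-W*.1:H:W*1.1:H' corrected.mkv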

jpeg colors worse than png when extracting frames with ffmpeg?

When extracting still frames from a video at a specific time mark, like this:
ffmpeg -i foobar.mp4 -vframes 1 -ss 4:20 -q:v 1 example.png
I noticed that using PNG or JPG results in different colors. (Note that -q:v 1 indicates maximum image quality.)
Here are some examples:
[Three JPG vs. PNG comparison screenshots.]
In general, the JPG shots seem to be slightly darker and less saturated than the PNGs.
When checking with exiftool or ImageMagick's identify, both images use the sRGB color space and have no ICC profile.
Any idea what's causing this? Or which of these two would be 'correct'?
I also tried saving screenshots with my video player (MPlayerX), in both JPG and PNG. In that case, the frame dumps in either format look exactly the same, and they look mostly like ffmpeg's JPG stills.
This is related to video range or levels. Video stores color as luma and chroma, i.e. brightness and color difference, and for legacy reasons dating back to analogue signals, black and white are not represented as 0 and 255 in an 8-bit encoding but as 16 and 235 respectively. The video stream should normally be flagged to indicate this, since one can also store video where 0 and 255 are black and white respectively. If the file isn't flagged, or is flagged wrongly, some rendering or conversion functions can produce wrong results. But we can force FFmpeg to interpret the input one way or the other.
Use
ffmpeg -i foobar.mp4 -vframes 1 -ss 4:20 -q:v 1 -src_range 0 -dst_range 1 example.png/jpg
This tells FFmpeg to assume studio (limited) range input and to output full range. The colours still won't be identical due to the color encoding conversion, but the major difference should disappear.
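An equivalent sketch using the scale filter's range options (in_range/out_range are standard swscale parameters; tv is limited range, pc is full range):
ffmpeg -i foobar.mp4 -vframes 1 -ss 4:20 -vf scale=in_range=tv:out_range=pc example.png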
I don't know about ffmpeg specifically. But, in general, JPEG images can have compression that lowers the quality slightly in exchange for a large reduction in file size. Most programs that can write JPEG files will have a switch (or however they take options) which sets the "quality" or "compression" or something like that. I don't seem to have an ffmpeg on any of the half dozen machines I have open, or I'd tell you what I thought the right one was.
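(For ffmpeg, the switch being described is -q:v, which the question already uses; for the JPEG encoder, lower values mean higher quality, typically in the 2-31 range. A sketch matching the question's command:)
ffmpeg -i foobar.mp4 -vframes 1 -ss 4:20 -q:v 2 example.jpg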

FFMPEG not extracting yuv colorspace images

I am using FFmpeg to extract images from MXF videos. I am interested in extracting TIFF-format images in the YUV (preferably 4:2:2) color space. The MXF videos are in the YUV color space, hence I want to continue working in that color space. I have tried:
ffmpeg -i video.mxf -pix_fmt yuv422p f%10d.tiff
However, the output images appear to be in the RGB color space. I use ImageMagick and the command line:
identify -verbose output.tiff
This informs me that the image files are in the RGB color space. I have googled and tried variations of my FFmpeg command line, but to no avail. What am I doing wrong?
ffmpeg console output as requested: [screenshots, first and second part of output]
ImageMagick identify (partial) result: [screenshot] (I'm not allowed to post more than two links.)
Check ffprobe output.tiff. It should report a YUV pixel format.
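For example, a sketch using standard ffprobe flags to print just the pixel format:
ffprobe -v error -select_streams v:0 -show_entries stream=pix_fmt -of default=noprint_wrappers=1:nokey=1 output.tiff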
Don't confuse pixel format with absolute color space: sRGB, if it's the source colorspace, will remain the destination colorspace as well. The YUV pixel formats are a way of decoupling luminance/intensity from chromaticity/hue, which aids in efficiently compressing a video signal.
From Y'CbCr:
Y′CbCr is not an absolute color space; rather, it is a way of encoding RGB information.

What is the variable "a" in ffmpeg?

In using the scale filter with ffmpeg, I see many examples similar to this:
ffmpeg -i input.mov -vf scale="'if(gt(a,4/3),320,-2)':'if(gt(a,4/3),-2,240)'" output.mov
What does the variable a signify?
From the ffmpeg scale filter docs:
a: the same as iw / ih
where
iw: input width
ih: input height
My guess after reading https://trac.ffmpeg.org/wiki/Scaling%20(resizing)%20with%20ffmpeg is that a is the aspect ratio of the input file.
The example given on that page gives you an idea of how to use it:
Sometimes there is a need to scale the input image in such way it fits
into a specified rectangle, i.e. if you have a placeholder (empty
rectangle) in which you want to scale any given image. This is a
little bit tricky, since you need to check the original aspect ratio,
in order to decide which component to specify and to set the other
component to -1 (to keep the aspect ratio). For example, if we would
like to scale our input image into a rectangle with dimensions of
320x240, we could use something like this:
ffmpeg -i input.jpg -vf scale="'if(gt(a,4/3),320,-1)':'if(gt(a,4/3),-1,240)'" output_320x240_boxed.png
In the ffmpeg wiki "Scaling (resizing) with ffmpeg", they use this example:
ffmpeg -i input.jpg -vf scale="'if(gt(a,4/3),320,-1)':'if(gt(a,4/3),-1,240)'" output.png
The purpose of the gt(a,4/3) is, as far as I can tell, to determine the orientation (portrait or landscape) of the video (or image, in this case).
This wouldn't work for some unusual aspect ratios (7:6, for example, where gt(a,4/3) would incorrectly evaluate to false).
It seems to me better to use the height and width of the video, so the above line would instead be:
ffmpeg -i input.jpg -vf scale="'if(gt(iw,ih),320,-1)':'if(gt(iw,ih),-1,240)'" output.png
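As an alternative sketch, assuming a reasonably recent ffmpeg: the scale filter's force_original_aspect_ratio option expresses the same fit-into-a-box intent without hand-written expressions (decrease scales down to fit within 320x240 while keeping the aspect ratio):
ffmpeg -i input.jpg -vf "scale=320:240:force_original_aspect_ratio=decrease" output.png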
