I am confused about image formats. I have read what YUV and RGB mean in theory. The problem is that I understand them only in theory.
I have a PNG file. Are PNG files RGB by default? Or can they be NV12 (YUV) too?
If not, in what format do NV12 images come? How do I recognize that they are NV12 images? Do I have to convert the PNG file to read it as NV12?
Part of my confusion is because I read a program that converted RGB to YUV but then extracted only the Y part and saved it to a file, so the file only had the Y part. Are there files with the whole YUV information?
I am using FFmpeg to extract images from MXF videos. I am interested in extracting TIFF-format images in the YUV (preferably 4:2:2) color space. The MXF videos are in the YUV color space, hence I want to continue working in that color space. I have tried:
ffmpeg -i video.mxf -pix_fmt yuv422p f%10d.tiff
However, the output images appear to be in the RGB color space. I use ImageMagick and the command line:
identify -verbose output.tiff
which informs me that the image files are in the RGB color space. I have googled and tried variations of my FFmpeg command line, but to no avail. What am I doing wrong?
ffmpeg console output as requested:
First part of output
Second part of output
ImageMagick identify (partial) result: [output posted as an external link]
Check ffprobe output.tiff. It should report a YUV pixel format.
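For example (the exact stream details will vary with your file, but the pixel format appears on the video stream line):
ffprobe output.tiff
should print something like:
Stream #0:0: Video: tiff, yuv422p, 1920x1080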
Don't confuse pixel format with absolute color space: if sRGB is the source color space, it will remain the destination color space as well. The YUV pixel formats are a way of decoupling luminance/intensity from chromaticity/hue, which aids in efficiently compressing a video signal.
From Y'CbCr:
Y′CbCr is not an absolute color space; rather, it is a way of encoding RGB information.
I was given two inputs: one is an image (taken from a .mp4 video file) and the other is a video (mostly in .ts format). The video mostly uses lossy encoding. I need to find the image in the video. I can't compare the raw frames of the video and the image directly, as they differ in encoding. To my knowledge, I need to find the frame in the video that is most similar to the image. Are there any tools/APIs to find the image in the video?
Detect features and try to establish a homography.
Then pick the frame with the most homography inliers (the cv::findHomography function has an output parameter named mask).
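A rough sketch of that approach with OpenCV (ORB features, brute-force matching, and the RANSAC threshold here are just reasonable defaults, not the only choice; the function name is illustrative). Run it on each decoded frame and keep the frame with the highest count:
#include <opencv2/opencv.hpp>
#include <vector>

// Counts RANSAC inliers between a reference image and one video frame.
// Inputs are assumed to be 8-bit grayscale (e.g. loaded via cv::IMREAD_GRAYSCALE).
int countHomographyInliers(const cv::Mat& image, const cv::Mat& frame) {
    cv::Ptr<cv::ORB> orb = cv::ORB::create();
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    orb->detectAndCompute(image, cv::noArray(), kp1, desc1);
    orb->detectAndCompute(frame, cv::noArray(), kp2, desc2);
    if (desc1.empty() || desc2.empty()) return 0;

    cv::BFMatcher matcher(cv::NORM_HAMMING, true);  // cross-check filters weak matches
    std::vector<cv::DMatch> matches;
    matcher.match(desc1, desc2, matches);
    if (matches.size() < 4) return 0;  // a homography needs at least 4 point pairs

    std::vector<cv::Point2f> pts1, pts2;
    for (const cv::DMatch& m : matches) {
        pts1.push_back(kp1[m.queryIdx].pt);
        pts2.push_back(kp2[m.trainIdx].pt);
    }
    std::vector<uchar> mask;  // the "mask" output parameter: 1 marks an inlier
    cv::findHomography(pts1, pts2, cv::RANSAC, 3.0, mask);
    if (mask.empty()) return 0;
    return cv::countNonZero(mask);
}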
Any help on transforming a .jpg to a .yuv?
I have an a.jpg file, and I want to read the file and transform it to a.yuv.
How can I do it using ffmpeg?
From man ffmpeg (modified slightly):
You can output to a raw YUV420P file:
ffmpeg -i mydivx.avi hugefile.yuv
hugefile.yuv is a file containing raw YUV planar data. Each frame is
composed of the Y plane followed by the U and V planes at half
vertical and horizontal resolution.
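For your case the same pattern should work; adding -pix_fmt yuv420p makes the output pixel format explicit rather than relying on the default:
ffmpeg -i a.jpg -pix_fmt yuv420p a.yuv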
Remember that ffmpeg usage questions are better suited for superuser.com (and therefore you'll probably get better and more detailed answers there too).
I'm building one part of an H264 encoder. To test the system, I need to create input images for encoding. We have a program that reads the image file into RAM for use.
My question is how to create a raw file: bitmap or TIFF (I don't want to use a compressed format like JPEG)? I googled and found a lot of raw file types. So what type should I use, and how do I create it? I think I will use C/C++ or Matlab to create the raw file.
P.S.: the format I need is YUV (or Y'CbCr) 4:2:0 with 8-bit colour depth.
The easiest raw format is just a stream of numbers representing the pixels. Each raw format can be associated with metadata such as:
width, height
stride / bytes per image row (e.g. GStreamer and X Window align each row to dword boundaries)
bits per pixel
byte format / endianness (if 16 bits per pixel or more)
number of image channels
color system: HSV, RGB, Bayer, YUV
order of channels, e.g. RGBA, ABGR, GBR
planar vs. packed (or FOURCC code)
or this metadata can be just an internal specification...
I believe one of the easiest approaches (after, of course, a steep learning curve :) is to use e.g. GStreamer, where you can use existing file/stream sources that read data from a camera, a file, a pre-existing JPEG, etc., and pass those raw streams through a defined pipeline. One useful element is filesink, which simply writes a single raw data frame, or a few successive ones, to your filesystem. The GStreamer infrastructure has possibly hundreds of converters and filters, btw. including an h264 encoder...
I would bet that if you just dump your memory, the output will already conform to some FOURCC format (also recognized by GStreamer).
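As a concrete illustration of "just a stream of numbers", here is a minimal C++ sketch that writes one planar YUV 4:2:0 frame (8-bit, Y plane then U then V, i.e. I420 order) to a raw file. The resolution, fill values, and file name are placeholders; real pixel data would come from your test image:
#include <fstream>
#include <vector>

int main() {
    const int width = 176, height = 144;  // QCIF, as an example
    // 4:2:0 subsampling: U and V each have half the width and half the height.
    std::vector<unsigned char> y(width * height, 128);              // Y plane
    std::vector<unsigned char> u((width / 2) * (height / 2), 128);  // U plane
    std::vector<unsigned char> v((width / 2) * (height / 2), 128);  // V plane

    std::ofstream out("frame.yuv", std::ios::binary);
    out.write(reinterpret_cast<const char*>(y.data()), y.size());
    out.write(reinterpret_cast<const char*>(u.data()), u.size());
    out.write(reinterpret_cast<const char*>(v.data()), v.size());
    return 0;
}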
I got a raw YUV file; all I know at this point is that the clip has a resolution of 176x144.
The Y plane is 176x144 = 25344 bytes, and the UV plane is half of that. Now, I did some reading about YUV, and there are different formats corresponding to the different ways the Y & UV planes are stored.
Now, how can I perform some sort of check in Cocoa to find the raw YUV file format? Is there a file header in the YUV frame from which I can extract some information?
Thanks in advance to everyone
Unfortunately, if it's just a raw YUV stream, it will just be the data for the frames written to disk, one after another. There probably won't be a header that indicates what specific format is being used.
It sounds like you have determined that it's a YUV 4:2:2 stream, so you just need to determine the interleaving order (the most common possibilities are listed here). In response to your previous question, I posted a function which converts a frame from the UYVY (Y422) YUV format to the 2VUY format used by Apple's YUV OpenGL extension. Your best bet may be to try that out and see how the images look, then adjust the interleaving format until the colors and the image clear up.
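Since there is no header, one crude sanity check is whether the file size divides evenly by the frame size each candidate layout implies. A minimal C++ sketch (the file name and the 176x144 resolution are taken from the question; everything else is illustrative):
#include <cstdio>
#include <sys/stat.h>

int main() {
    struct stat st;
    if (stat("clip.yuv", &st) != 0) return 1;  // hypothetical file name
    const long w = 176, h = 144;
    const long frame420 = w * h * 3 / 2;  // planar 4:2:0: Y + U/4 + V/4
    const long frame422 = w * h * 2;      // 4:2:2: Y + U/2 + V/2
    if (st.st_size % frame420 == 0)
        std::printf("consistent with 4:2:0 (%ld frames)\n", (long)(st.st_size / frame420));
    if (st.st_size % frame422 == 0)
        std::printf("consistent with 4:2:2 (%ld frames)\n", (long)(st.st_size / frame422));
    return 0;
}
Both tests can of course pass at once, so this only narrows the candidates; trying each interpretation and eyeballing the result, as suggested above, is still the decisive check.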