FFmpeg: need pixel-perfect output for LED processing (H.264, MPEG-1, MPEG-2)

We have a .mov with the Animation codec and the pixels look great, but the LED media players only accept H.264, MPEG-1, or MPEG-2. Is it even possible to keep high pixel accuracy? I read a lot of the comments and tried lossless H.264 to no avail. Thanks for your help!
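For reference, a minimal sketch of the kind of lossless H.264 command usually meant here (filenames are placeholders): note that libx264's default yuv420p pixel format subsamples chroma, which on its own can break per-pixel color accuracy, so -pix_fmt yuv444p (or the libx264rgb encoder) matters as much as -qp 0 — assuming the LED players can decode those profiles at all, which is worth checking first.

ffmpeg -i input.mov -c:v libx264 -qp 0 -pix_fmt yuv444p output.mp4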

Related

What is the fastest ffmpeg video codec for decoding?

I am using ffmpeg on Linux to transcode video files. The files are video from a race-car camera, downloaded from YouTube in "webm" format. I want to compare two of the videos side by side using GridPlayer, which uses VLC as its underlying video processor. GridPlayer has very nice frame-by-frame controls, but they are very slow. What video codec should I use to impose the least decoding overhead on vlc/GridPlayer for smoother playback?
I have tried re-encoding as h264, 1920x1080, 30 fps, in an mp4 container. I have since discovered the "-tune fastdecode" option, which seems to help, along with resizing to 854x480. Any other suggestions?
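A sketch of the re-encode described above (filenames are placeholders, and the preset is illustrative): forcing every frame to be a keyframe via keyint=1 is an extra assumption worth testing, since it inflates the file size but makes frame-by-frame stepping much cheaper for the decoder.

ffmpeg -i race.webm -vf scale=854:480 -c:v libx264 -preset fast -tune fastdecode -x264-params keyint=1 out.mp4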

ffmpeg convert mp4 video to rgba stream

I'm looking to process the frames of an mp4 video as an rgba matrix, much like what can be done with HTML5 canvas.
This SU question/answer seemed promising: https://superuser.com/questions/1230385/convert-video-into-low-resolution-rgb32-format
But the output is not as promised. An 800 KB mp4 file produced a 56 MB out.bin file that seems to be gibberish, not an RGBA matrix.
If anyone can clarify or provide alternate suggestions, that'd be great.
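For context, that answer's approach boils down to something like the line below (names are placeholders). The output is headerless raw video: each frame is exactly width × height × 4 bytes of RGBA, one frame after the next, so it will look like gibberish in anything but a raw-pixel reader — and the roughly 70× size inflation over the 800 KB mp4 is expected, since the data is uncompressed.

ffmpeg -i input.mp4 -f rawvideo -pix_fmt rgba out.bin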

Implementing custom h264 quantization for FFmpeg?

I have a Raspberry Pi, and I'm livestreaming with FFmpeg. Unfortunately my Wi-Fi signal varies over the course of a stream. I'm currently using raspivid to send h264-encoded video to the stream. I have set a constant resolution and FPS, but have not set a bitrate or quantization, so those are variable.
However, the issue is that the quantization doesn't vary enough for my needs. If my Wi-Fi signal drops, ffmpeg's streaming speed will dip below 1.0x to around 0.95x for minutes, but the bitrate drops so slowly that ffmpeg can never make it back to 1.0x. As a result the stream runs into problems and starts buffering.
I would like the following to happen:
If FFmpeg (my stream command)'s reported speed goes below 1.0x (slower than realtime streaming), then increase quantization (lowering the bitrate) exponentially until the speed stabilizes at 1.0x. Prioritize stabilizing at 1.0x as quickly as possible.
My understanding is that the quantization logic FFmpeg uses should live in the h264 encoder, but I can't find any mention of quantization at all in this GitHub repo: https://github.com/cisco/openh264
My knowledge of h264 is almost zilch, so I'm trying to figure out:
A) How does h264 currently vary the quantization during my stream, if at all?
B) Where is that code?
C) How hard is it for me to implement what I'm describing?
Thanks in advance!!
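Not an answer to A/B/C, but as a rough illustration of the supervisory loop described above (every flag would need checking on a real setup, and $STREAM_URL is a placeholder): ffmpeg's -progress output emits speed= lines that a wrapper script can watch, restarting raspivid at a halved bitrate whenever speed drops below realtime.

#!/bin/sh
# Very rough sketch, not a drop-in script: restart the raspivid|ffmpeg
# pipeline at half the bitrate whenever ffmpeg reports sub-realtime speed.
# Loops until interrupted; $STREAM_URL is a placeholder.
BITRATE=4000000                      # starting bitrate, bit/s
while :; do
  raspivid -t 0 -b "$BITRATE" -o - \
    | ffmpeg -progress pipe:1 -i - -c:v copy -f flv "$STREAM_URL" 2>/dev/null \
    | grep -q '^speed=0\.' \
    && BITRATE=$((BITRATE / 2))      # saw e.g. speed=0.95x: halve and retry
done
# grep's early exit kills the pipeline (SIGPIPE), which is crude; a real
# version would adjust the encoder without restarting the stream.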

How to compress output file using FFmpeg - Apple ProRes 422

I am new to video encoding and am trying to encode a music video for the Apple iTunes video store.
I am currently using FFmpeg for the encoding.
My source file is an mp4, 650 MB in size.
I encode the file with the Apple ProRes 422 (HQ) codec and output a .mov file.
ffmpeg -y -i busy1.mp4 -vcodec prores -profile:v 3 -r "29.97" -c:a mp2 busy2.mov
I am trying to encode the video according to the following specs:
● Apple ProRes 422 (HQ)
● VBR expected at ~220 Mbps
Encoded       PASP   Converted to ProRes from
1920 x 1080   1:1    HDCAM SR, D5, ATSC
1280 x 720    1:1    ATSC progressive
● 29.97 interlaced frames per second for video sourced
Music Video Audio Source Profile
● MPEG-2 layer II stereo
● 384 kbps
● 48 kHz
The file encodes perfectly fine; however, the output is 6 GB in size.
Why would the file be so large after encoding?
Am I doing something wrong here?
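As an aside before the answer (and not the cause of the size): the command in the question leaves the audio bitrate and sample rate at their defaults. A variant that also pins the audio to the listed spec might look like this, untested:

ffmpeg -y -i busy1.mp4 -c:v prores -profile:v 3 -r 29.97 -c:a mp2 -b:a 384k -ar 48000 busy2.mov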
Apple ProRes is not intended for high compression. It is an intermediate codec used in post-production: it saves storage compared to keeping video uncompressed, while retaining high image quality.
You are supposed to use your uncompressed source file as input to retain maximum quality, not an already lossy-compressed video.
You only mentioned the container format of your input file (MP4) but not the codecs, which is the actually important information.
Since the HQ flavor of ProRes runs at about 220 Mbit/s, the file size can actually increase, but you don't gain anything in quality if the source is lossy.
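As a sanity check on the numbers: ~220 Mbit/s is roughly 27.5 MB per second, so a 6 GB file works out to about 6000 / 27.5 ≈ 218 seconds, i.e. ~3.5 minutes of footage, which is about the length of a music video. The size is exactly what the codec promises.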
See more here: Apple ProRes
Though you don't gain much by decompressing a source clip that's lossy, you do gain in some ways. Compressed video carries reduced color information (e.g. subsampled chroma), which can be detrimental when making color corrections or corrections to detail level, especially when you're given interlaced footage to clean up. If you put in the time on detail, microcontrast, and color, you know the benefit of expanded color detail when compressing back down. It also encodes much faster on the back end of your edits: simply compressing the data down is faster than expanding and then compressing.
However, if you recompress all your video down to the same size and codec as what went in, most encoders and editor apps now test the data rate of each GOP, reworking only those GOPs that need to be redone to fit the new settings.

FFmpeg reports a different (wrong) video resolution compared to how it actually plays

Quick question: I have a movie that was cut and rendered with Sony Vegas from its original format to a .wmv file. Here comes the tricky part: when played with either VLC or WMP, the movie has a resolution of 656x480... but when I run ffmpeg -i on it, it says the resolution is 600x480.
I took the time to actually capture a frame and crop it in Photoshop, and it's 656, not 600 like ffmpeg reports. Why could this be happening? How could I fix the header resolution? Would that have any impact on video re-rendering? As I said, VLC and WMP don't seem to care about the incorrect headers and play it right, but JW Player seems to use the header information, which I don't blame it for; that's the correct thing to do. But why would the video headers be wrong?
ffmpeg -i trailer.wmv
Input #0, asf, from 'trailer.wmv':
  Duration: 00:01:04.93, start: 3.000000, bitrate: 2144 kb/s
    Stream #0.0: Audio: wmav2, 44100 Hz, mono, 32 kb/s
    Stream #0.1: Video: wmv3, yuv420p, 600x480 [PAR 59:54 DAR 295:216], 2065 kb/s, 25.00 tbr
And yeah, the PAR/DAR parameters also look wrong, but honestly I don't understand that technical shit; I usually just watch the video and make sure it looks good. Any feedback would be appreciated :P
Is there a way to change the container information with ffmpeg so applications that actually do use the container information don't render video incorrectly?
FFmpeg is 100% correct; that technical stuff is important :D
Your PAR (pixel aspect ratio) and DAR (display aspect ratio) are actually correct, and you proved it yourself by capturing a screenshot and measuring.
What threw you off was the PAR: not all pixels are square (i.e. 1:1), although most downloaded videos' pixels are, so you probably never noticed. Some players, such as VLC, recognize the PAR value and stretch the video accordingly to match the DAR. DVD video is a great example of this.
See also: http://en.wikipedia.org/wiki/Pixel_aspect_ratio
So ffmpeg says your video width is 600. Multiply that by the PAR and you get the "real" width, i.e. as if the pixels were square instead of horizontally rectangular:
600 × (59/54) ≈ 656 (rounded). Number look familiar?
Now take the "real" size, 656 / 480 ≈ 1.366, and look at your DAR: 295 / 216 ≈ 1.366.
Magic!
As you found out, not all video players are smart enough to recognize the PAR and perform the appropriate stretching. You can easily change it to 1:1 with ffmpeg using the scale and setsar video filters.
ffmpeg ...stuff... -vf "scale=656:480,setsar=1:1" ...more stuff...
For the curious, it's called setsar because it's also referred to as the Sample (aka Pixel) Aspect Ratio: http://ffmpeg.org/ffmpeg.html#setsar-1
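A complete invocation might look like the line below (the output name is a placeholder, and note this re-encodes the video rather than just rewriting the header; setsar=1 is shorthand for 1:1):

ffmpeg -i trailer.wmv -vf "scale=656:480,setsar=1" fixed.mp4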
Hope this helps. I'm sure it confuses many people (including myself) at first.
