why is a sony .mts file so large? - ffmpeg

I don't know much about multimedia. I know that a Sony .mts file is a container for H.264 video. I used ffmpeg to convert my .mts file into a .mpeg file. Apart from the .mpeg file being around 5 times smaller than the .mts, the ffmpeg dump information for both files is identical. I am confused about why .mts files are so large. What important features are lost in my conversion?
Thanks!
Kejia
Thanks for all the answers.
I checked the output for both files again and found one difference: the bitrate. So I definitely lost quality. Now I adjust the bitrate according to what the display equipment expects---yes, taking the display equipment into account is necessary (an expert's advice): $ ffmpeg -i my.mts -b:v 9498k my.mpg (the bitrate option must come after -i so that it applies to the output, not the input). Another interesting option is -b:a (formerly -ab), the audio bitrate.
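The size difference follows directly from the bitrate difference: roughly, stream size is bitrate times duration. Here is a sketch of that arithmetic; the 10-minute duration and the 24 Mb/s AVCHD figure are illustrative assumptions, not values read from the actual files.

```python
# Rough file-size arithmetic: container overhead aside, size ≈ bitrate × duration.
# The duration and the AVCHD bitrate below are assumptions for illustration.

def size_mb(bitrate_kbps: float, duration_s: float) -> float:
    """Approximate stream size in megabytes (1 MB = 10**6 bytes)."""
    return bitrate_kbps * 1000 * duration_s / 8 / 1e6

duration = 10 * 60                   # assume a 10-minute clip
mts = size_mb(24_000, duration)      # AVCHD camcorders often record near 24 Mb/s
mpg = size_mb(9_498, duration)       # the -b:v 9498k target mentioned above
print(round(mts), round(mpg))        # size scales linearly with bitrate
```

Whatever the exact numbers, halving the bitrate roughly halves the file, which is why matching the bitrate to the display equipment matters.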

MTS files typically come from high-definition camcorders. They use the AVCHD format, which uses MPEG-4 AVC/H.264 video encoding and Dolby AC-3 (Dolby Digital) or uncompressed linear PCM audio coding. Are you sure that you are not decreasing the quality or resolution?

Your file has H.264/MPEG-4 AVC video compression and Dolby Digital (AC-3) audio compression or uncompressed LPCM audio, which results in a fairly large source file.
When you export (convert) to MPG, you most likely perform a lossy compression. Please double check, especially the audio track.
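The audio track alone can account for a surprising share of the size if it is uncompressed LPCM. A quick sketch of the bitrates involved; the 256 kb/s AC-3 figure is a typical AVCHD value assumed for illustration, not read from any particular file.

```python
# LPCM bitrate = sample_rate × bit_depth × channels.
# The AC-3 rate below is a typical AVCHD value, assumed for comparison.

def lpcm_kbps(sample_rate: int, bit_depth: int, channels: int) -> float:
    """Bitrate of uncompressed linear PCM audio, in kb/s."""
    return sample_rate * bit_depth * channels / 1000

pcm = lpcm_kbps(48_000, 16, 2)   # 16-bit stereo at 48 kHz -> 1536.0 kb/s
ac3 = 256                        # assumed typical AC-3 rate in AVCHD
print(pcm, round(pcm / ac3, 1))  # LPCM is about 6x the AC-3 bitrate here
```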

Related

FFmpeg H264 Lossless Video Sharing

A video file needs to be transferred for further video processing. Sharing raw video (y4m) seems impossible. I have two options:
1. Encode the video to H.264 with CRF 0 (lossless); the file size is huge.
2. Encode the video to H.264 with CRF 17/18 (virtually lossless); the file size is manageable.
After the video is shared, it will be re-encoded only once, with CRF 22/23 and client info added.
Option 2 seems okay, but the quality should not be degraded by the re-encoding.
Is going with Option 1 and managing the huge file a better option than Option 2?

How to compress output file using FFmpeg - Apple ProRes 422

I am new to video encoding and am trying to encode a music video for the Apple iTunes video store.
I am currently using FFmpeg for encoding.
My source file is an MP4 with a file size of 650 MB.
I encode the file using the Apple ProRes 422 (HQ) codec and output a mov file.
ffmpeg -y -i busy1.mp4 -vcodec prores -profile:v 3 -r "29.97" -c:a mp2 busy2.mov
I am trying to encode the video according to the following specs:
● Apple ProRes 422 (HQ)
● VBR expected at ~220 Mbps
Encoded      | PASP | Converted to ProRes from
1920 x 1080  | 1:1  | HDCAM SR, D5, ATSC
1280 x 720   | 1:1  | ATSC progressive
29.97 interlaced frames per second for video sourced
Music Video Audio Source Profile
● MPEG-2 layer II stereo
● 384 kbps
● 48 kHz
The file is encoded perfectly fine; however, the output is 6 GB in size.
Why would the file be so large after encoding?
Am I doing something wrong here?
Apple ProRes is not intended for high compression. It is an intermediate codec used in post-production that reduces storage compared to keeping the video uncompressed, while retaining high image quality.
You are supposed to use your uncompressed source file as input to retain the maximum quality and not an already lossy-compressed video.
You only mentioned the container format of your input file (MP4), but not the codecs, which are the actually important information.
Since the HQ flavor of ProRes uses 220 Mbps, the file size can actually increase, but you don't gain anything in quality if the source is lossy.
See more here: Apple ProRes
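The 6 GB output is what the bitrate arithmetic predicts: at ProRes 422 (HQ)'s ~220 Mb/s target, size is dominated by bitrate times duration. The 4-minute duration below is an assumption for a typical music video, not taken from the question.

```python
# ProRes 422 (HQ) at 1080p targets roughly 220 Mb/s, so output size is
# essentially bitrate × duration regardless of how small the input was.
# The 4-minute duration is an assumed typical music-video length.

def size_gb(bitrate_mbps: float, duration_s: float) -> float:
    """Approximate stream size in gigabytes (1 GB = 10**9 bytes)."""
    return bitrate_mbps * 1e6 * duration_s / 8 / 1e9

print(round(size_gb(220, 4 * 60), 1))   # ~6.6 GB for a 4-minute video
```

So a ~6 GB file from a few minutes of footage is expected behavior for this codec, not a mistake in the command.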
Though you don't gain much by decompressing a source clip that's lossy, you do gain in some ways. Compressed video uses a reduced color palette, which can be detrimental when making color corrections or adjustments to detail level, especially when you're given interlaced footage to clean up. If you put in the time on detail, microcontrast, and color, you know the benefit of expanded color detail when compressing back down. It also encodes much faster at the back end of your edits: simply compressing the data down is faster than expanding and then compressing.
However, if you recompress all your video down to the same size and codec as what went in, most encoders and editor apps now test the data rate of each GOP, re-encoding only those GOPs that need to be redone to fit the new settings.

Decode/decompress H.264 back into raw/original file format, then encode into H.265

I have some files encoded using the H.264 codec.
There is a loss of quality when I convert them from H.264 to H.265.
I imagine I should convert them back to raw/original file format, then encode them into H.265.
Is it possible to decompress/decode H.264 into the original format (perhaps using FFMpeg)?
Is it the best way to convert from H.264 to H.265 without quality loss?
Thank you again for your help,
H.264 is lossy; the quality is lost at encoding time. There is no way to reconstruct the original from the encoded form. In contrast, decoding is lossless: it produces exactly the information present in the H.264 file, no more, no less. If your video editing software is not horrible, your H.264-to-H.265 conversion is the highest quality you can theoretically achieve given the compression settings you provide (short of finding your original uncompressed file); there is no benefit in a separate decoding step, as that is what your software needs to do anyway.
Imagine a bad photocopy: there is no unphotocopier that can give you the original. That's what is happening with lossy compression.

Could a compression algorithm be lossless and lossy at the same time?

I have seen that ffmpeg has some codecs (e.g. H.264) which are defined as lossless and lossy at the same time, and to my understanding, lossless and lossy are mutually exclusive: a compression algorithm either loses information or it doesn't.
How is it possible to be lossless and lossy at the same time?
Running ffmpeg -codecs 2>/dev/null | grep h264, I get:
DEV.LS h264 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 [...]
DEV.LS stands for Decoder, Encoder, Video, Not only intraframe compression, Lossy compression, Lossless compression.
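The six capability columns can be read mechanically. Here is a small sketch of such a parser; the column meanings follow the legend ffmpeg prints at the top of `ffmpeg -codecs` output, but the parsing code itself is my own illustration, not an ffmpeg API.

```python
# Sketch: interpret the six-character capability field from `ffmpeg -codecs`.
# Column meanings follow ffmpeg's printed legend; the code is illustrative.

FLAG_NAMES = [
    ("D", "decoding supported"),
    ("E", "encoding supported"),
    (None, "codec type"),            # V(ideo), A(udio), or S(ubtitle)
    ("I", "intra-frame-only codec"),
    ("L", "lossy compression"),
    ("S", "lossless compression"),
]

def parse_flags(flags: str) -> dict:
    """Map a field like 'DEV.LS' to named capabilities ('.' means absent)."""
    caps = {}
    for (letter, name), ch in zip(FLAG_NAMES, flags):
        if letter is None:
            caps[name] = ch          # keep the codec-type letter as-is
        else:
            caps[name] = (ch == letter)
    return caps

caps = parse_flags("DEV.LS")
print(caps["lossy compression"], caps["lossless compression"])  # True True
```

For h264, both the L and S flags are set, which is exactly the apparent contradiction the question is about.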
Checking in Wikipedia for H.264 it says:
H.264 is typically used for lossy compression in the strict mathematical sense, although
the amount of loss may sometimes be imperceptible. It is also possible to create truly
lossless encodings using it — e.g., to have localized lossless-coded regions within
lossy-coded pictures or to support rare use cases for which the entire encoding is lossless.
Yes, it can be lossy as well as lossless at the same time. In H.264's usual lossy configurations, colors and frame detail are noticeably affected, which creates visible artifacts when zooming into the video.
I have also posted a research study on it --- Check it out
The answer was mentioned by @MoDJ in the comments.
The h.264 codec, like many others, has encoding options. Chief among them is the Constant Rate Factor, aka CRF. The documentation from FFmpeg (an encoder which uses libx264 for encoding h.264/AVC) is a good reference in this case. It says:
The range of the CRF scale is 0–51, where 0 is lossless, 23 is the
default, and 51 is worst quality possible. A lower value generally
leads to higher quality, and a subjectively sane range is 17–28.
Consider 17 or 18 to be visually lossless or nearly so; it should look
the same or nearly the same as the input but it isn't technically
lossless.
(...)
You can use -crf 0 to create a lossless video. Two useful presets for
this are ultrafast or veryslow since either a fast encoding speed or
best compression are usually the most important factors. (...)
Note that lossless output files will likely be huge, and most non-FFmpeg
based players will not be able to decode lossless. Therefore, if
compatibility or file size are an issue, you should not use lossless. (...)
To sum up: One particular stream cannot be both lossy and lossless at the same time, but whether a stream is lossy or lossless can be adjusted by a codec setting.

What movie formats and resolutions should be generated to ensure cross-browser/platform compatibility?

I'm looking to generate web videos from movies taken with my digital camera. What formats should I generate, and at what resolution and bitrate to ensure playback on mobile and desktop devices?
Here's what I was thinking:
Input format: AVI, MOV
Output format: webm, ogv, mp4
Output resolutions: 1080p, 720p, 320p
Not really a programming question, but I will answer it anyway:
WebM can be ditched completely; very few devices support it. MP4 is the most common format, which all devices support. Low-end phones support the 3GPP format instead [a cousin of MP4]. If you have that, you should be fine for 90% of devices.
MP4 with H.264/AAC is the most common, and for devices that don't support those, MPEG-4 with MP3 will suffice.
How many of your target devices have a 1080p display? Better to ditch 1080p and get one SD resolution, 480p, in there.
Bitrates depend on the encoding profile and content. Just be sure to do two-pass encoding using ffmpeg and libx264 to get the best quality.
Most mobile devices can display "HD" content fairly well these days. However, if you're looking to save on bandwidth on people's data plans, a good resolution would probably be 852x480.
Now, whether you need near-lossless quality or can accept minor artifacts in your video will determine your bitrate. For 1080p and x264, you can get near lossless with about 15 Mbps, but you could have watchable video with 10-11 Mbps. I'm not sure how well the other codecs compare, so you may have to try a couple of test runs with a short video.
If you do 720p, you can most certainly get away with 4-6 Mbps.
With 852x480, you may be successful with as low as 1.5-2 Mbps.
480x320, or maybe even 320x240, may be a good option if you suspect people will be watching this on lower-end devices, on a really slow connection, or with very limited bandwidth. You could probably get away with 500 kbps for 320x240 and 1 Mbps for 480x320.
These are all starting points, as each codec and the selected encoding options will increase or decrease the quality, but I believe them to be good starting points.
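To compare the rungs of a ladder like the one above, it helps to convert bitrates into megabytes per minute of video. The rates below are picked from the suggestions above as representative mid-range values; audio and container overhead are ignored.

```python
# Convert suggested video bitrates into rough megabytes per minute,
# ignoring audio and container overhead. Rates are representative picks
# from the suggestions above, not exact recommendations.

def mb_per_min(mbps: float) -> float:
    """Megabytes of data per minute of video at a given bitrate in Mb/s."""
    return mbps * 1e6 * 60 / 8 / 1e6

ladder = {"1080p": 15, "720p": 5, "852x480": 2, "480x320": 1}   # Mbps
for name, rate in ladder.items():
    print(name, round(mb_per_min(rate), 1), "MB/min")
```

Seen this way, a 1080p near-lossless stream costs over 100 MB per minute of playback, which makes the case for serving the smaller rungs to mobile viewers.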
