I am trying to write a decoder using ffmpeg and I want to display some information about the video stream. I can detect whether a frame is progressive or interlaced (TFF, BFF) only after decoding the frame, i.e.:
avcodec_decode_video2(pCodecCtx, pFrame, &gotFrame, &packet);
.....(assume we have a frame)
.....
// print information
printf("prog=%d inter=%d", !pFrame->interlaced_frame, pFrame->interlaced_frame);
This works well.
But I want to know if there is a way of detecting this from the AVFormatContext, AVCodecContext or AVCodec structs, or from some other function. This would be very useful if, for example, I want to abort decoding when the file is interlaced. I don't want to have to decode a frame just to get this piece of information.
I am trying to support MPEG2, H.264/AVC and HEVC codecs (either elementary streams or in MP4 container).
Sorry if this is a trivial question! Thank you very much!
ffmpeg can run in "idet" (interlace detection) mode and gives a summary of frame types it finds in the file. I use:
$ ffmpeg -i HitchhikersGuide.mp4 -filter:v idet -frames:v 360 -an -f rawvideo -y /dev/null
(ffmpeg with input file HitchhikersGuide.mp4, the idet filter, sampling 360 frames, discarding audio, using the rawvideo format, sending output to /dev/null)
which produces a report that contains (in part):
[Parsed_idet_0 @ 0x7fd5f3c121c0] Repeated Fields: Neither: 360 Top: 0 Bottom: 0
[Parsed_idet_0 @ 0x7fd5f3c121c0] Single frame detection: TFF: 30 BFF: 0 Progressive: 330 Undetermined: 0
[Parsed_idet_0 @ 0x7fd5f3c121c0] Multi frame detection: TFF: 22 BFF: 0 Progressive: 338 Undetermined: 0
In this case, 92% of the sampled frames are progressive, and 8% are interlaced, so most services would call this an interlaced video. TFF and BFF are Top-Field-First and Bottom-Field-First, respectively, and both indicate an interlaced frame.
Note that it is possible to encode interlaced video as progressive and progressive video as interlaced; this method reports on the encoding only. If you want to know whether the video was originally interlaced or progressive, you will need to inspect it visually and look for a "combing" effect, where alternate lines don't quite line up with each other (especially when the camera moves fast). If you see combing, the original video is interlaced.
You could use ffprobe, which comes with ffmpeg. I don't know how you would use that from a library, but the command-line version can show the field_order.
Example command, with a few additional fields:
ffprobe -v quiet -select_streams v -show_entries stream=codec_name,width,height,pix_fmt,field_order -of csv=p=0 "$Your_File"
Example output with different files:
prores,1920,1080,yuva444p12le,progressive
h264,1920,1080,yuv420p,unknown # some progressive files report unknown
prores,720,576,yuv422p10le,tb # tb = interlaced, top coded first, bottom displayed first
mpeg2video,1920,1080,yuv422p,tt # tt = interlaced TFF (top coded and displayed first)
dvvideo,720,576,yuv420p,bt # bt = interlaced, bottom coded first, top displayed first
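Since ffprobe reads this from the stream's codec parameters, you can get the same value from the C API without decoding: after avformat_find_stream_info(), check AVStream->codecpar->field_order. A minimal sketch (error handling trimmed; note that, as the h264 line above shows, many progressive files report AV_FIELD_UNKNOWN, so treat the value as a hint, not a guarantee):

#include <stdio.h>
#include <libavformat/avformat.h>

int probe_field_order(const char *path)
{
    AVFormatContext *fmt = NULL;
    if (avformat_open_input(&fmt, path, NULL, NULL) < 0)
        return -1;
    if (avformat_find_stream_info(fmt, NULL) < 0) {
        avformat_close_input(&fmt);
        return -1;
    }
    for (unsigned i = 0; i < fmt->nb_streams; i++) {
        AVCodecParameters *par = fmt->streams[i]->codecpar;
        if (par->codec_type == AVMEDIA_TYPE_VIDEO)
            /* AV_FIELD_PROGRESSIVE, AV_FIELD_TT/BB/TB/BT, or AV_FIELD_UNKNOWN */
            printf("stream %u: field_order=%d\n", i, par->field_order);
    }
    avformat_close_input(&fmt);
    return 0;
}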
An alternative would be MediaInfo:
mediainfo --Inform='Video;%ScanType%,%ScanOrder%,%ScanType_StoreMethod%' "$Your_File"
Example output with different files:
Progressive,,
Interlaced,TFF,
Interlaced,TFF,InterleavedFields
Interlaced,BFF,InterleavedFields
MediaInfo's source is available on GitHub.
I was examining an audio file and noticed that the numbers of channels returned by mediainfo and ffprobe were different.
The mediainfo command:
mediainfo audio.mp4
The ffprobe command (see the channels value):
ffprobe -i audio.mp4 -show_streams
Does anyone know what's happening?
Here is the audio file for your own test.
AAC content, announced as mono. But AAC may have a hidden Parametric Stereo feature, which makes this announcement not match reality 99.99% of the time (HE-AAC is rarely used for mono content).
FFmpeg is not able to switch from mono to stereo if stereo is detected after decoder init, so it forces its output to stereo in anticipation of getting Parametric Stereo at some point.
MediaInfo does not have this limitation, so it shows stereo only if Parametric Stereo is actually detected. Parametric Stereo is not detected in this file.
In this case MediaInfo shows the correct value (mono) and FFmpeg shows an incorrect value (stereo). I am not blaming FFmpeg here; their developers decided to do that for good reasons (lots of tools cannot handle a channel-count change in the middle of a file), it is just not what you are looking for.
If you are not convinced, decode each channel into a different file and compare the files. There is only 1 byte that differs between the 2 files: the byte saying which is the left channel and which is the right channel. The audio data is the same: your file is really mono, and even FFmpeg agrees when it decodes (in practice it duplicates the mono channel).
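One way to run that check yourself (a sketch; the channelsplit invocation follows the FFmpeg filter documentation, and the file names are illustrative):

ffmpeg -i audio.mp4 -filter_complex "channelsplit=channel_layout=stereo[left][right]" \
    -map "[left]" left.wav -map "[right]" right.wav
cmp -l left.wav right.wav

If the file is really mono, the decoded sample data in the two WAVs should be identical.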
Jérôme, developer of MediaInfo.
My task is to create HTML5-compatible video from input video (.avi, .mov, .mp4, etc.). My understanding is that my output should be .webm or .mp4 (H.264 video, AAC audio).
I use ffmpeg for conversion and it takes a lot of time. I wonder if I could use ffprobe to test if input video is "H264" and "aac" and if so then maybe I could just copy video/audio into output without modifications.
I.e., I have the following idea:
Get input video info using ffprobe:
ffprobe {input} -v quiet -show_entries stream=codec_name,codec_type -print_format json
The result would be JSON like this:
{
    "streams": [
        { "codec_name": "mjpeg", "codec_type": "video" },
        { "codec_name": "aac", "codec_type": "audio" }
    ]
}
If the JSON says the video codec is h264, I think I could just copy the video stream; if it says the audio codec is aac, I could just copy the audio stream.
The JSON above says my audio is aac, so I could copy the audio stream into the output video but would still need to convert the video stream. For the above example my ffmpeg command would be:
ffmpeg -i {input} -c:v libx264 -c:a copy output.mp4
The question is if I could always use this idea to produce html5 compatible video and if this method will actually speed up video conversion.
The question is if I could always use this idea to produce html5 compatible video
Probably, but with some caveats:
Your output may use H.264 High profile, but your target device may not support that (but that is not too likely now).
Ensure that the pixel format is yuv420p. If it is not, the file may not play and you will have to re-encode with -vf format=yuv420p. You can check by adding pix_fmt to your -show_entries stream list.
If the file is directly from a video camera, or some other device with inefficient encoding, then the file size may be excessively large for your viewer.
Add -movflags +faststart to your command so the video can begin playback before the file is completely downloaded.
and if this method will actually speed up video conversion.
Yes, because you're only stream copying (re-muxing), which is fast, instead of re-encoding some or all streams, which is slow.
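Putting the whole idea together, a minimal shell sketch (the variable and file names are illustrative, and the pix_fmt caveat above is omitted for brevity):

in="input.mov"
vcodec=$(ffprobe -v quiet -select_streams v:0 -show_entries stream=codec_name -of csv=p=0 "$in")
acodec=$(ffprobe -v quiet -select_streams a:0 -show_entries stream=codec_name -of csv=p=0 "$in")
# copy streams that are already in the target codecs, re-encode the rest
[ "$vcodec" = "h264" ] && v=copy || v=libx264
[ "$acodec" = "aac" ] && a=copy || a=aac
ffmpeg -i "$in" -c:v "$v" -c:a "$a" -movflags +faststart output.mp4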
I can download http://www.w6rz.net/adv8dvbt23.ts, and there are many DVB-T sample TS files there.
But I want to convert my own video file to a TS file for DVB-T.
I checked on Google first, but I could not find any answer. Maybe the question does not make sense, or my way of thinking is wrong.
Can FFmpeg be used for this? But there is no parameter for transmit mode, QAM/64-QAM constellation, or guard interval.
Can FFmpeg be used for this? But there is no parameter for transmit mode, QAM/64-QAM constellation, or guard interval.
As I explained already:
ffmpeg doesn't know anything about RF things like Constellation type; it is just a tool to transcode between different video formats. .ts is for "transport stream", and it's the video container format that DVB uses. The GNU Radio transmit flowgraphs on the other hand know nothing about video things – all they do is take the bits from a file. So that file needs to be in a format that the receiver would understand, and that's why I instructed you to use FFmpeg with the parameters you need. Since I don't know which bitrate you're planning on transmitting, I can't help you with how to use ffmpeg.
So you need to generate video data that your DVB-T receiver understands, but even more importantly, you need to put it in a container that ensures a constant bitrate.
As pointed out in a different comment to your ham.stackexchange.com question about the topic, your prime source of examples would be GNU Radio's own gr-dtv module; when you look into gnuradio/gr-dtv/examples/README.dvbt, you'll find a link to https://github.com/drmpeg/dtv-utils, W6RZ's own tooling :)
There you'll find the tools necessary to calculate the exact stream bitrate your MPEG transport stream needs to have. Remember, a DVB-T transmitter has to transmit at a constant bitrate, so your video container must be constant-bitrate too. That's why a transport stream pads the video data to achieve a constant rate.
Then you'll use ffmpeg to transcode your video and put it into the transport stream container:
# -s 720x576: resolution; a good choice, since most TVs will deal with it
# -r 25: frames per second
# -flags cgop+ilme -sc_threshold 1000000000: MPEG codec options (closed GOPs, interlaced motion estimation, no scene-cut keyframes)
# -b:v 2M with -minrate/-maxrate: constant video bit rate (defines video quality); must be lower than the stream bit rate, so < muxrate - audio bitrate
# -acodec mp2 -ac 2 -b:a 192k: audio codec, channels and bitrate
# -muxrate: the constant stream bitrate calculated with the tool above
# -f mpegts: MPEG Transport Stream container as output
ffmpeg -re -i inputvideo.mpeg \
    -vcodec mpeg2video \
    -s 720x576 \
    -r 25 \
    -flags cgop+ilme -sc_threshold 1000000000 \
    -b:v 2M -minrate:v 2M -maxrate:v 2M \
    -acodec mp2 -ac 2 -b:a 192k \
    -muxrate "${RATE_FROM_TOOL}" \
    -f mpegts \
    outputfile.ts
I am starting with a high-res video file and I would like to create 3 variants (low, mid, and high quality) for mobile streaming. I want these variants segmented into .ts pieces that the m3u8 file points to. Is there a way to do this in one ffmpeg command?
I have successfully generated an m3u8 file and ts segments with ffmpeg; do I need to do this 3x and set specs for low/mid/high? If so, how do I get a single m3u8 file that points to all variants as opposed to one for each variant?
This is the command I used to generate the m3u8 file along with the ts segments.
ffmpeg -i C:\Users\george\Desktop\video\hos.mp4 -strict -2 -acodec aac -vcodec libx264 -crf 25 C:\Users\user\Desktop\video\hos_Phone.m3u8
Yes, you need to encode all variants and generate the media playlists first (the playlists containing the segments).
If you want you can do it in one command, since ffmpeg supports multiple inputs/outputs. E.g.:
ffmpeg -i input \
... [encoding parameters 1] ... output1 \
... [encoding parameters 2] ... output2 \
....[encoding parameters 3] ... output3
You must provide the variants in multiple qualities/bitrates but the aspect ratio should remain the same. Keeping the aspect ratio was initially mandatory but in the latest HLS authoring guide it's downgraded to a recommendation.
All variant streams must be keyframe-aligned, so set a GOP size using the -g option, disable scene-cut detection, and use a segment duration hls_time that is a multiple of your keyframe interval, as in the sketch below.
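For example, a single command producing three aligned variants (the bitrates and sizes are illustrative, and it assumes a ~24 fps source so that -g 48 gives a 2-second GOP and -hls_time 4 is a multiple of it):

ffmpeg -i input.mp4 \
    -map 0:v -map 0:a -s 640x360 -c:v libx264 -b:v 800k -g 48 -sc_threshold 0 \
        -c:a aac -b:a 96k -hls_time 4 low.m3u8 \
    -map 0:v -map 0:a -s 960x540 -c:v libx264 -b:v 2500k -g 48 -sc_threshold 0 \
        -c:a aac -b:a 128k -hls_time 4 medium.m3u8 \
    -map 0:v -map 0:a -s 1280x720 -c:v libx264 -b:v 5000k -g 48 -sc_threshold 0 \
        -c:a aac -b:a 128k -hls_time 4 high.m3u8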
Once you have all three m3u8 media playlists you can manually create the master playlist which points to each media playlist.
Example from the Apple HLS documentation, you must change the bandwidth, codecs, resolution and playlist filenames according to your own encoding options:
#EXTM3U
#EXT-X-VERSION:6
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=2855600,CODECS="avc1.4d001f,mp4a.40.2",RESOLUTION=960x540
medium.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=5605600,CODECS="avc1.640028,mp4a.40.2",RESOLUTION=1280x720
high.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1755600,CODECS="avc1.42001f,mp4a.40.2",RESOLUTION=640x360
low.m3u8
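Note that recent ffmpeg releases can also generate the master playlist for you via the hls muxer's var_stream_map and master_pl_name options. A sketch (option availability depends on your ffmpeg version; values are illustrative):

ffmpeg -i input.mp4 \
    -map 0:v -map 0:a -map 0:v -map 0:a \
    -c:v libx264 -c:a aac -g 48 -sc_threshold 0 \
    -b:v:0 800k -s:v:0 640x360 -b:v:1 5000k -s:v:1 1280x720 \
    -var_stream_map "v:0,a:0 v:1,a:1" \
    -master_pl_name master.m3u8 \
    -f hls -hls_time 4 stream_%v.m3u8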
The aspect ratio does not have to be the same; that makes no sense. How could you know what the client can play? Aspect ratios are 4:3 for non-HD and 16:9 for HD variants.
You don't want to do all your variants in one ffmpeg command if you need segment times to be consistent.
Also watch out when transcoding downward: if you go from 1080 to 360, there might be issues. One that I often get is that the audio degrades and sounds weird. I try to go down by no more than half if I want high quality.
@DavidC That hex identifies the codec profile and level (e.g. avc1.4d001f is H.264 Main profile, level 3.1).
I need to convert all videos for my video player (on a website) when the file type is other than flv/mp4/webm.
When I use ffmpeg -i filename.mkv -sameq -ar 22050 filename.mp4 I get:
[h264 @ 0x645ee0] error while decoding MB 22 1, bytestream (8786)
My point is: what should I do when I need to convert files (.mkv and others not supported by jwplayer) to flv/mp4 without quality loss?
Instead of -sameq (removed from FFmpeg), use -qscale 0: the file size will increase but it will preserve the quality.
Do not use -sameq; it does not mean "same quality".
This option has been removed from FFmpeg a while ago. This means you are using an outdated build.
Use the -crf option instead when encoding with libx264. This is the H.264 video encoder used by ffmpeg and, if available, is the default encoder for MP4 output. See the FFmpeg H.264 Video Encoding Guide for more info on that.
Get a recent ffmpeg
Go to the FFmpeg Download page and get a build there. There are options for Linux, OS X, and Windows. Or you can follow one of the FFmpeg Compile Guides. Because FFmpeg development is so active it is always recommended that you use the newest version that is practical for you to use.
You're going to have to accept some quality loss
You can produce a lossless output with libx264, but that will likely create absolutely huge files and may not be decodable by the browser and/or supported by JW Player (I've never tried).
The good news is that you can create a video that is roughly visually lossless. Again, the files may be somewhat large, but you need to make a choice between quality and file size.
With -crf, choose a value between 18 and around 29 (higher values mean lower quality and smaller files). Choose the highest number that still gives an acceptable quality, and use that value for your videos.
Other things
Add -movflags +faststart. This will relocate the moov atom from the end of the file to the beginning. This will allow the video to begin playback while it is still being downloaded. Otherwise the whole video must be completely downloaded before it can begin playing.
Add -pix_fmt yuv420p. This will ensure a chroma subsampling that is compatible for all players. Otherwise, ffmpeg, by default and depending on several factors, will attempt to minimize or avoid chroma subsampling and the result is often not playable by non-FFmpeg based players.
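Combining those suggestions into one command (the CRF value is illustrative; see above for how to pick it):

ffmpeg -i input.mkv -c:v libx264 -crf 23 -pix_fmt yuv420p -movflags +faststart -c:a aac output.mp4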
Convert all mkv to mp4 without quality loss (actually it is only re-packaging):
for %a in ("*.mkv") do ffmpeg.exe -i "%a" -vcodec copy -acodec copy -scodec mov_text "%~na.mp4"
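A bash equivalent of that Windows one-liner (assuming the same subtitle-to-mov_text conversion is wanted):

for f in *.mkv; do ffmpeg -i "$f" -vcodec copy -acodec copy -scodec mov_text "${f%.mkv}.mp4"; done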
For me that was the best way to convert it.
ffmpeg -i {input} -vcodec copy {output}
I am writing a script in python that appends multiple .webm files to one .mp4. It was taking me 10 to 20 seconds to convert one chunk of 5 seconds using:
ffmpeg -i {input} -qscale 0 {output}
There are some folders with more than 500 chunks.
Now it takes less than a second per chunk. It took me 5 minutes to convert a 1:20:00 long video.
For MP3, the best is to use -q:a 0 (same as -qscale 0), but MP3 is always lossy.
For less quality loss, use FLAC (which is lossless).
See this documentation link.