avformat_write_header produces invalid header (resulting MPG broken) - ffmpeg

I am rendering a video file from input pictures that come from a 3D engine at runtime (I don't pass an actual picture file, just RGB memory).
This works perfectly when outputting MP4 using CODEC_ID_H264 as video codec.
But when I want to create an MPG file using CODEC_ID_MPEG2VIDEO, the resulting file is simply broken. No player can play the video correctly, and when I then concatenate that MPG with another MPG file and convert the result to MP4 in a second step, the resulting .mp4 file contains both videos, but many frames from the original MPG video (and only the video! The sound works fine) are simply skipped.
At first I thought the MPG -> MP4 conversion was the problem, but then I noticed that the initial MPG coming out of the video render engine is already broken, which points to broken headers. I am not sure whether it is the system headers or the sequence headers that are broken, though.
Or whether it is something else entirely.
If you want to have a look, here is the file:
http://www.file-upload.net/download-7093306/broken.mpg.html
Again, the exact same muxing code works perfectly fine when directly creating an MP4 from the video render engine, so I'm pretty sure the input data, swscale(), etc. are correct. The only differences are that CODEC_ID_H264 is used and that some additional variables (qmin, qmax, etc.) are set, all of which are specific to H.264 and so should not have an impact.
Also, neither avformat_write_header nor av_write_trailer report an error.
As additional information: when viewing the codec data of the MPG in the VLC player, it cannot show the FPS, resolution, or format (it should show 640x360, 30 fps, and 4:2:0 YUV).
I am using a rather new (2-3 months old, maybe) FFmpeg version, which I compiled from sources with MinGW.
Any ideas on how to resolve this would be welcome. Currently, I am out of those :)

Alright, the problem was not avformat_write_header, but that I did not set the PTS value of each written video packet to AV_NOPTS_VALUE.
Once I set it on each video packet, everything works fine.
I had assumed that AV_NOPTS_VALUE was the default, as I never needed to set any special PTS value before.
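For illustration, here is a minimal sketch of that fix against the old FFmpeg API implied by the CODEC_ID_* names; the context names (oc, st, c, buf, size) are placeholder assumptions, not the poster's actual code:

    #include <string.h>
    #include <libavformat/avformat.h>   /* old API, matching the CODEC_ID_* names above */

    /* Assumed context: 'oc' = AVFormatContext, 'st' = video AVStream,
       'c' = AVCodecContext, 'buf'/'size' = output of avcodec_encode_video(). */
    AVPacket pkt;
    memset(&pkt, 0, sizeof(pkt));       /* a zeroed packet has pts == 0, not AV_NOPTS_VALUE */
    pkt.data         = buf;
    pkt.size         = size;
    pkt.stream_index = st->index;
    pkt.pts          = AV_NOPTS_VALUE;  /* the missing assignment: let the MPEG-PS muxer derive timestamps */
    if (c->coded_frame && c->coded_frame->key_frame)
        pkt.flags |= AV_PKT_FLAG_KEY;
    av_interleaved_write_frame(oc, &pkt);

Note that av_init_packet() would also have initialized pts to AV_NOPTS_VALUE; the surprise usually comes from zero-initializing the struct by hand, which leaves pts at 0 on every packet.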

Related

Set specific frame as thumbnail for video?

I just want some confirmation, because I have the sneaking suspicion that I won't be able to do what I want to do, given that I already ran into errors about ffmpeg not being able to overwrite the input file. I still have some hope that what I want to do is some kind of exception, but I doubt it.
I have already used ffmpeg to extract a specific frame into its own image file, and I have set the thumbnail of a video with an existing image file, but I can't figure out how to set a specific frame from the video as the thumbnail. I want to do this without extracting the frame into a separate file, and I don't want to create an output file; I want to edit the video directly and change the thumbnail using a frame from the video itself. Is that possible?
You're probably better off asking on IRC: freenode #ffmpeg-devel.
I'd look at "-ss 33.5", or the more precise filter "-vf 'select=gte(n,1000)'"; both will give the same or a very similar result on a 30 fps video.
You can of course pipe the image out to your own process without saving it, e.g. "ffmpeg ... -f mjpeg - | ..." (the muxer name for a JPEG on stdout is mjpeg, not jpeg).
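Putting those two suggestions together, a hedged end-to-end example (the input name and downstream process are placeholders):

    ffmpeg -ss 33.5 -i input.mp4 -frames:v 1 -f mjpeg - | my_process

-frames:v 1 stops after a single decoded frame, and -f mjpeg writes that frame to stdout as a raw JPEG, so nothing is saved to disk.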

How to add a Poster Frame to an MP4 video by timecode?

The mvhd atom (or box) of the original QuickTime MOV format supports a poster time variable: a timecode to use as a poster frame in preview scenarios, e.g. as a thumbnail image or cover picture. As far as I can tell, the ISOBMFF-based MP4 format (.m4v) has inherited this feature, but I cannot find a way to set it using FFmpeg, MP4Box, or similar cross-platform CLI software. Edit: Actually, neither ISOBMFF nor MP4 inherits this feature from MOV. Is there any other way to achieve this, e.g. using something like HEIF's derived images with a thmb role (see Amendment 2)?
The original Apple QuickTime (Pro) editor did have a menu option for doing just that. (Apple Compressor and Photos could do it, too.)
To be clear, I do not want to attach a separate image file (which could be a screenshot grabbed from a movie still) as a separate track to the multimedia container. I know how to do that (see the links and the example command below):
Stack Overflow #54717175
Super User #597945
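For reference, the attached-picture approach from those links looks roughly like this (file names are placeholders); it is exactly the separate-image step the question wants to avoid:

    ffmpeg -i input.mp4 -i poster.png -map 0 -map 1 -c copy -c:v:1 png -disposition:v:1 attached_pic output.mp4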
I also know that some people used to copy the designated poster frame from its original position to the very first frame, but many automatically generated previews use a later time index, e.g. 10 seconds, 30 seconds, or 10% or 50% into the video stream.

Animated GIF to video with ffmpeg - wrong timing

I'm trying to convert an animated GIF to video with ffmpeg, but there's a strange problem: the time delays of each frame seem to be off by one frame.
For example, if frame #1 is supposed to be shown for 2000 ms and frames #2 to #10 for 100 ms each, the resulting video immediately skips to frame #2, which is then shown for 2000 ms instead :P
Is this some kind of a bug? Or am I doing something wrong?
Here's my command line:
ffmpeg -i Mnozenie_anim_deop.gif Mnozenie_anim.mp4
so nothing extraordinary, just the defaults. (Unless this is the root of the problem? Maybe my defaults are bad, and I need to specify some magic options?)
This problem seems to appear with every video format except MKV: when I play these files in mplayer, they all behave that way except the MKV.
But when I open them in kdenlive (a non-linear video editing program), the problem appears in all of them, including the MKV (which is strange, because it plays back just fine in mplayer :q ).
I tried converting the same exact file with this online converter here:
https://ezgif.com/gif-to-mp4
and there is no problem with its output: it plays back fine both in mplayer and when imported into kdenlive, so I guess they must be using some magic command-line options that I'm missing.
Any ideas what can be wrong and how to track down the culprit?
Edit: Here's a sample animated GIF file I'm trying to convert:
http://nauka.mistu.info/Matematyka/Algebra/Szeregi/Mnozenie_anim.gif
and the MP4 file that I generated from it which demonstrates this problem:
http://sasq.comyr.com/Stuff/Mnozenie_anim.mp4
As you can see, the fade-in starts prematurely and then freezes for a couple of seconds, instead of waiting for a couple of seconds BEFORE the fade-in begins.
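A workaround worth trying (not verified against this exact file): force a constant output frame rate so that every source frame is duplicated for its full display time, instead of relying on the per-frame timestamps that appear to get shifted. At 30 fps the 100 ms delays map to exactly three output frames, and yuv420p is the usual compatibility pixel format that online converters reportedly apply as well:

    ffmpeg -i Mnozenie_anim_deop.gif -vf "fps=30" -pix_fmt yuv420p Mnozenie_anim.mp4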

DirectShow WAV file source does not produce any sound when graph runs

We have a DirectShow application where we capture video input from USB, multiplex with audio from a WAV file (backing music), overlay audio and video effects, compress and write to an MP4 file.
Originally we were using an audio input source (microphone) and mixing our backing music and sound effects over the top, but the decision was made not to capture live audio, so I thought it would make more sense to use the backing music WAV file itself as the audio source.
Here is the filter graph we have: [filter graph image]
backing.wav is a simple WAV file (stored locally), and was added to the graph using IFilterGraph::AddSourceFilter.
The problem is that when the graph is run, no audio samples are delivered from the WAV file. The video part of the graph runs as normal, but it's as if the audio part of the graph simply isn't running.
If I stop the graph in GraphEdit, add the Default DirectSound Device audio renderer and hook that up in place of the AAC Encoder filter and then run the graph again, the audio plays as you would expect.
Additionally, if backing.wav is replaced with an audio capture source like a microphone, audio data flows through as normal.
Does anyone have any ideas why the above graph, using a WAV file as the audio source, would fail to produce any audio samples?
I suppose the title is incorrectly identifying/summarizing the problem. Nothing here actually shows that audio is not being produced. It is likely that it is produced equally well with the DirectSound Renderer and with the AAC Encoder; specifically, the data is probably reaching the output pin of the Mixing Transform Filter (is this your filter? If so, you should be able to trace its flow and see media samples passing through).
With the information given, I would say it is likely that the custom AAC encoder does not like the feed and either drops data or switches to an erroneous state. You should be able to debug this further by inserting a Sample Grabber (or similar) filter¹ before the AAC encoder, tracing the media samples, and comparing them to data from another source. The encoder might be sensitive to small details like media sample duration or the discontinuity flag on the first sample streamed.
¹ With GraphStudioNext (which has made GraphEdit largely obsolete) you can use the internal Analyzer Filter and review media sample flow interactively via its filter property page.

Are the values in the avcC box in .mp4 video files affected only by the FFmpeg version?

I am studying source identification of video files, especially those that come from smartphones.
I have learned that the avcC box in .mp4 video files contains the H.264 encoding parameters that a decoder must know when processing the encoded stream.
I assume most smartphones use a customized FFmpeg to encode the raw stream. I want to know whether the values in the avcC box are affected only by the version of FFmpeg (assuming a non-customized version is used).
I have not delved into this, but I think it is libavcodec.so in FFmpeg that fills in the values of the avcC box during encoding (is this right?).
So what I want to ask is: if two different smartphones use the same libavcodec.so (even if other .so files, the .apk used for recording, etc. are different), and a video of the same resolution is filmed on each smartphone, will the values in the avcC box be the same?
I think this question may be equivalent to: "are the values in the avcC box affected by other FFmpeg libraries or by other layers of the overall Android framework?"
One more question: is there any case in which two videos of the same resolution from the same smartphone have different values in the avcC box? (I am thinking of encoding options differing due to low-battery mode, the execution conditions of other apps, etc., or of a vendor customizing FFmpeg for such cases.)
It would be a great help if anyone could let me know the answer!
The avcC box contains the out-of-band extradata for the AVC stream. It stores much more than just the resolution: profile, level, entropy coding mode, color space information, etc. This is a standard; ffmpeg just implements that standard. iPhones, for example, produce perfectly valid MP4 files and do not use libav*/ffmpeg at all. See exactly what is in the avcC box here: Possible Locations for Sequence/Picture Parameter Set(s) for H.264 Stream
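To inspect some of the avcC-derived parameters yourself, ffprobe can print the fields in question (the file name is a placeholder):

    ffprobe -v error -select_streams v:0 -show_entries stream=codec_name,profile,level,pix_fmt input.mp4

If two files of the same resolution differ in profile, level, or chroma format, their avcC boxes necessarily differ too, since those values are stored in (or derived from) the SPS that the box carries.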
