How to fix/prevent overlapping timecodes - SBV to SRT conversion (exporting from YouTube to upload to LinkedIn)

I am uploading a video to LinkedIn and want to add subtitles.
To achieve this I exported an SBV file from YouTube and converted it to SRT; however, the SRT file doesn't load correctly back into LinkedIn, which gives me an error about the referenced timecode being in the past.
Looking at both the original SBV file and the converted SRT file I can see what's happening, and I suspect it's because YouTube produces multi-line, overlapping subtitles, so the captions end up overlapping in time (at least, that's what it looks like from the timecodes; snippets below).
// Sample from the SBV file generated by YouTube:
0:00:14.070,0:00:20.670
theatre workshop and two weeks ago I
0:00:18.029,0:00:22.680
found out that two were vegetarian and
0:00:20.670,0:00:24.359
one was gluten-free but that's fine
0:00:22.680,0:00:27.240
that's not a challenge I can do that a
// Sample from converted SRT file:
5
00:00:14,070 --> 00:00:20,670
theatre workshop and two weeks ago I
6
00:00:18,029 --> 00:00:22,680
found out that two were vegetarian and
7
00:00:20,670 --> 00:00:24,359
one was gluten-free but that's fine
8
00:00:22,680 --> 00:00:27,240
that's not a challenge I can do that a
I was able to resolve this by manually editing the timecodes in the SRT so that each caption's timecode comes sequentially after the previous one's. This fixed the issue and I was able to add the SRT file successfully, but the process was laborious.
Can anyone suggest a way to generate the SRT file correctly, so it doesn't need to be manually edited?

I also had the overlapping-subtitles timing issue.
This helped me: https://gist.github.com/nimatrueway/4589700f49c691e5413c5b2df4d02f4f
Kudos to Nima Taheri https://github.com/nimatrueway
I uploaded my subtitles to YouTube, downloaded them as SRT, ran the Go fixer, and uploaded the result again as the subtitle file; that fixed it.
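For anyone who would rather script the fix than use the Gist, the repair is mechanical: parse the cues, sort them by start time, and clamp each cue's end time so it never runs past the start of the next cue. A minimal TypeScript sketch of that idea (not the linked Go fixer; input.srt, fixed.srt and the helper names are placeholders, and it assumes a well-formed SRT file):

```typescript
// fix-srt-overlaps.ts - clamp each cue's end time to the start of the next cue
// so no two cues overlap. File names and helpers are placeholders.
import { readFileSync, writeFileSync } from "fs";

// "HH:MM:SS,mmm" -> milliseconds
function toMs(t: string): number {
  const [hms, ms] = t.split(",");
  const [h, m, s] = hms.split(":").map(Number);
  return ((h * 60 + m) * 60 + s) * 1000 + Number(ms);
}

// milliseconds -> "HH:MM:SS,mmm"
function toTimestamp(ms: number): string {
  const pad = (n: number, width = 2) => String(n).padStart(width, "0");
  const h = Math.floor(ms / 3_600_000);
  const m = Math.floor((ms % 3_600_000) / 60_000);
  const s = Math.floor((ms % 60_000) / 1000);
  return `${pad(h)}:${pad(m)}:${pad(s)},${pad(ms % 1000, 3)}`;
}

// Each SRT block: index line, "start --> end" line, then one or more text lines.
const blocks = readFileSync("input.srt", "utf8").trim().split(/\r?\n\r?\n/);
const cues = blocks.map((block) => {
  const lines = block.split(/\r?\n/);
  const [start, end] = lines[1].split(" --> ");
  return { start: toMs(start), end: toMs(end), text: lines.slice(2) };
});

// Make sure cues are in chronological order, then clamp overlapping end times.
cues.sort((a, b) => a.start - b.start);
for (let i = 0; i < cues.length - 1; i++) {
  if (cues[i].end > cues[i + 1].start) {
    cues[i].end = cues[i + 1].start; // or subtract 1 ms to leave a small gap
  }
}

const output = cues
  .map((cue, i) => `${i + 1}\n${toTimestamp(cue.start)} --> ${toTimestamp(cue.end)}\n${cue.text.join("\n")}`)
  .join("\n\n") + "\n";
writeFileSync("fixed.srt", output);
```

Running it over the converted SRT (for example with ts-node) should give a file where every cue ends at or before the start of the next one, which is what LinkedIn appears to require.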

Related

How to merge segmented m4s files or find the init file with ffmpeg

I have a locked video that I can watch, but looking at the network responses I see that the video is split into 3 m4s segments.
The video seems to be hosted on Vimeo, but I can't find the init.mp4, nor do I know how to merge the segments and turn them into an MP4 with ffmpeg.
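One common approach for fragmented MP4 (DASH) content like this: the init segment is typically the first, very small request in the same network trace (often named init.mp4, or listed as the initialization URL in the manifest); concatenate it with the m4s segments in order and then remux with ffmpeg. A rough sketch with placeholder file names:

```sh
# Concatenate the init segment followed by the media segments, in order
# (placeholder file names; use the ones from your network trace).
cat init.mp4 segment1.m4s segment2.m4s segment3.m4s > combined.mp4

# Remux into a clean, seekable MP4 without re-encoding.
ffmpeg -i combined.mp4 -c copy output.mp4
```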

Streaming video playlist from collection of identical mp4 files

I am looking for a way to play/stream a list of mp4 files (same size, bitrate, etc.) to a browser video tag without hiccups in between the files. I am hoping the following approach would work:
* convert mp4 files to m4s/m4v files
* generate MPEG-Dash MPD file (xml)
* stream MPD to dash player in browser
Is this in any way possible? I am aware the m4s/m4v files need special headers and that an entry file must be made somehow, and there you have my roadblock.
The bottom line is that I want to avoid concatenating the separate videos into one big video file, and avoid the hiccups you see when sequencing via a straightforward 'ended' event approach in JS.
Any suggestion much appreciated!
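For the first two steps in that list, ffmpeg's dash muxer can segment an MP4 into an init segment plus .m4s media segments and write the MPD in one go. A rough per-file sketch with placeholder names (this covers a single input, not the stitching of several files into one seamless playlist):

```sh
# Segment one mp4 into fMP4 pieces (init + .m4s) and write a DASH manifest.
# The input name and the 4-second segment duration are placeholders.
ffmpeg -i clip.mp4 -c copy -f dash -seg_duration 4 clip.mpd
```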
If you want a basic client side solution you can use two separate players or video tags in your web page, showing one and hiding the other.
The one that is visible plays the current video.
The other player loads the next video, starts it, and immediately pauses it.
When the first video ends, you hide that player and make the other one visible, un-pausing the video at the same time.
You then preload the next video into the original player and continue.
This technique is used successfully in some sites where ad breaks are mixed with the main video, as an example.
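A rough TypeScript sketch of that two-player swap, assuming a page with two <video> elements and a hard-coded playlist (the element ids, file names, and the play-then-pause preload are illustrative; autoplay policies may require the videos to be muted or playback to be user-initiated):

```typescript
// Two <video> elements: the visible one plays the current clip, the hidden one
// holds the next clip, already loaded and paused. Ids and URLs are placeholders.
const playlist = ["clip1.mp4", "clip2.mp4", "clip3.mp4"];

const players = [
  document.getElementById("playerA") as HTMLVideoElement,
  document.getElementById("playerB") as HTMLVideoElement,
];

let visible = 0;   // index into `players` of the player currently shown
let upcoming = 1;  // index into `playlist` of the clip preloaded in the hidden player

// Load a clip into a player, start it so the browser buffers it, then pause at once.
function preload(player: HTMLVideoElement, src: string): void {
  player.src = src;
  player.load();
  player.play().then(() => player.pause()).catch(() => {
    /* autoplay may be blocked; the clip still buffers */
  });
}

function show(index: number): void {
  players[index].style.display = "";
  players[1 - index].style.display = "none";
}

// Kick off: play the first clip in player A, preload the second into player B.
players[0].src = playlist[0];
show(0);
players[0].play().catch(() => { /* may need a user gesture to start */ });
if (playlist.length > 1) preload(players[1], playlist[1]);

players.forEach((player, i) => {
  player.addEventListener("ended", () => {
    // Only react when the visible player finishes and another clip is queued.
    if (i !== visible || upcoming >= playlist.length) return;
    visible = 1 - visible;
    show(visible);
    players[visible].play();            // un-pause the already-buffered clip
    upcoming += 1;
    if (upcoming < playlist.length) {
      preload(players[1 - visible], playlist[upcoming]); // reuse the hidden player
    }
  });
});
```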

Animated GIF to video with ffmpeg - wrong timing

I'm trying to convert an animated GIF to video with ffmpeg, but there's a strange problem: the time delays of each frame seem to be off by one frame.
For example, if frame #1 is supposed to be shown for 2000 ms and frames #2 through #10 are supposed to be shown for 100 ms each, the resulting video immediately skips to frame #2, which is then shown for 2000 ms instead :P
Is this some kind of a bug? Or am I doing something wrong?
Here's my command line:
ffmpeg -i Mnozenie_anim_deop.gif Mnozenie_anim.mp4
so nothing extraordinary, just the defaults. (Unless this is the root of the problem? Maybe my defaults are bad, and I need to specify some magic options?)
This problem seems to appear with every video format except MKV, and when I play these files in mplayer, they all behave that way except MKV.
But when I open them in kdenlive (a non-linear video editing program), the problem appears in all of them, including MKV (which is strange, because it plays back just fine in mplayer :q ).
I tried converting the same exact file with this online converter here:
https://ezgif.com/gif-to-mp4
and there is no problem with its output – it plays back fine both in mplayer and when imported to kdenlive, so I guess they must have been using some magic command line options that I'm missing.
Any ideas what can be wrong and how to track down the culprit?
Edit: Here's a sample animated GIF file I'm trying to convert:
http://nauka.mistu.info/Matematyka/Algebra/Szeregi/Mnozenie_anim.gif
and the MP4 file that I generated from it which demonstrates this problem:
http://sasq.comyr.com/Stuff/Mnozenie_anim.mp4
As you can see, the fade-in starts prematurely and then pauses for a couple of seconds, instead of waiting for a couple of seconds BEFORE the fade-in begins.
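For comparison, online GIF-to-MP4 converters are commonly reported to use a recipe along these lines; treat it purely as something to try, not as a confirmed fix for the off-by-one frame timing (the scale filter only forces even dimensions, which yuv420p/H.264 output requires):

```sh
# A commonly quoted GIF -> MP4 recipe (an assumption about what converters such
# as ezgif run, not a verified fix for the timing issue described above).
ffmpeg -i Mnozenie_anim_deop.gif \
       -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" \
       -pix_fmt yuv420p -movflags +faststart \
       Mnozenie_anim.mp4
```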

FFmpeg image sequence to video with variable image durations

I have been looking for a way to convert a sequence of PNGs to a video. There are ways to do that using the concat demuxer within FFmpeg and a script file.
The problem is that I want to show certain images longer than others. And I need it to be accurate. I can set a duration (in seconds) in the script file. But I need it to be frame-accurate. So far I have not been successful.
This is what I want to make:
QuickTime video with transparency (ProRes 4444 or another codec that supports transparency + alpha channel)
25fps
This is what I have: [ TimecodeIn - TimecodeOut in destination video ]
img001.png [0:00:05:10 - 0:00:07:24]
img002.png [0:00:09:02 - 0:00:12:11]
img003.png [0:00:15:00 - 0:00:17:20]
...
img120.png [0:17:03:11 - 0:17:07:01]
Of course this is not the format of the script file, just an idea of the kind of data I am dealing with. The PNG image files are subtitles I generate elsewhere in my application. I would like to be able to export the subtitles as a transparent movie that I can easily import into my video editing software.
I also have been thinking of using blank transparent images I will use as spacers, between the actual subtitle images.
After looking around I think this might help:
On the FFmpeg site they explain how to make a timed slideshow.
In the Concat demuxer section they talk about making a slideshow, based on a text file, with references to the image files and the duration of each image.
So, I create all the PNG images I need. These images have the subtitle text. Each image holds one subtitle page.
For the moments I want to hide the subtitle, I use a blank PNG.
I generate a text file as explained on the FFMPEG website.
This text file will reference all the PNGs. For the duration I just calculate out-cue minus in-cue. Easy... I think...
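A sketch of what that could look like at 25 fps, using the numbers from the example above (file names are placeholders; durations are in seconds and kept as multiples of 0.04 s, i.e. one frame at 25 fps, to stay frame-accurate):

```sh
# Write the concat demuxer script: blank spacers between the subtitle images.
cat > subtitles.txt <<'EOF'
ffconcat version 1.0
file 'blank.png'
duration 5.40
file 'img001.png'
duration 2.56
file 'blank.png'
duration 1.12
file 'img002.png'
duration 3.36
file 'img002.png'
EOF
# The last file is listed twice because the concat demuxer otherwise ignores
# the final duration (a documented quirk of the slideshow approach).

# Encode to ProRes 4444 with an alpha channel at a fixed 25 fps:
ffmpeg -f concat -safe 0 -i subtitles.txt \
       -vf "fps=25,format=yuva444p10le" \
       -c:v prores_ks -profile:v 4444 \
       subtitles.mov
```

The durations here come straight from the in/out timecodes in the list above; for example, img001.png runs from 0:00:05:10 to 0:00:07:24, i.e. 64 frames = 2.56 s.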

avformat_write_header produces invalid header (resulting MPG broken)

I am rendering a video file from input pictures that come from a 3D engine at runtime (I don't pass an actual picture file, just RGB memory).
This works perfectly when outputting MP4 using CODEC_ID_H264 as video codec.
But when I want to create an MPG file using CODEC_ID_MPEG2VIDEO, the resulting file is simply broken. No player can play the video correctly, and when I then concatenate that MPG with another MPG file and transform the result to MP4 in another step, the resulting .mp4 file has both videos, but many frames from the original MPG video (and only the video; the sound works fine) are simply skipped.
At first I thought the MPG -> MP4 conversion was the problem, but then I noticed that the initial MPG, which comes from the video render engine, is already broken, which points to broken headers. I'm not sure whether it is the system or the sequence headers that are broken, though.
Or if it could be something totally different.
If you want to have a look, here is the file:
http://www.file-upload.net/download-7093306/broken.mpg.html
Again, the exact same muxing code works perfectly fine when directly creating an MP4 from the video render engine, so I'm pretty sure the input data, swscale(), etc. is correct. The only difference is that CODEC_ID_H264 is used and some additional variables (like qmin, qmax, etc.) are set, which are all specific to H264 so should not have an impact.
Also, neither avformat_write_header nor av_write_trailer report an error.
As an additional info, when viewing the codec data of the MPG in VLC player, it is not able to show the FPS, resolution and format (should show 640x360, 30 fps and 4:2:0 YUV).
I am using a rather new (2-3 months old, maybe) FFmpeg version, which I compiled from sources with MinGW.
Any ideas on how to resolve this would be welcome. Currently, I am out of those :)
Alright, the problem was not avformat_write_header, but the fact that I did not set the PTS value of each written video packet to AV_NOPTS_VALUE.
Once I do set it for each video packet, everything works fine.
I assumed that AV_NOPTS_VALUE was the default, as I never needed to set any special PTS value.
