Converting subtitles for different framerates - algorithm

I'm trying to make a simple CLI program that parses an SRT subtitle file and creates a new one, editing the timestamps to fit the desired framerate.
E.g. I have a one-hour video track that runs at 25.0 fps, with proper subtitles.
When encoding the same video at 23.976 fps, the output video is a few seconds shorter (approximately 3 seconds).
I've tried applying the following cross-multiplication to each time value in my SRT file:
timestamp = timestamp * outputfps / inputfps
This produces captions that are approximately 3 minutes early relative to the input SRT by the last captions (for the first ones the offset is obviously smaller), whereas the maximum offset should be about 3 seconds, judging by the new video file's length.
This is all new to me, and it seems obvious that something is wrong with the way I convert these timestamps. Could you please point out my mistake?
Edit: According to j_random_hacker's clever answer, the video should have the same duration at 25 fps as at 12 fps, which is easily verified. It seems the 3-second offset I see is there no matter what the output framerate is - I guess some sort of trimming is happening behind the scenes.
The main question remains: how does one convert a subtitle track so it doesn't go out of sync as the video file plays? (See my own comment below if this is unclear.)
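For what it's worth, the drift reported above is consistent with the ratio simply being inverted: with outputfps / inputfps, a caption at t seconds lands at t * 23.976/25, which over an hour means being 3600 * (1 - 23.976/25) ≈ 147 seconds early - on the order of the ~3 minutes observed. Below is a minimal sketch, assuming the same frames are retimed to play at the new rate; if the encoder instead preserves the duration (as the edit above suggests), no scaling is needed at all, only a constant offset for whatever is trimmed.

import re

def scale_srt(text, input_fps, output_fps):
    # Scale every "HH:MM:SS,mmm" timestamp by input_fps / output_fps.
    # If the same frames play back more slowly (25 -> 23.976), every
    # event happens later, so the factor must be > 1 - the inverse of
    # the formula in the question.
    factor = input_fps / output_fps
    pattern = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")

    def rescale(m):
        h, mnt, s, ms = (int(g) for g in m.groups())
        total_ms = round((((h * 60 + mnt) * 60 + s) * 1000 + ms) * factor)
        s, ms = divmod(total_ms, 1000)
        mnt, s = divmod(s, 60)
        h, mnt = divmod(mnt, 60)
        return "%02d:%02d:%02d,%03d" % (h, mnt, s, ms)

    return pattern.sub(rescale, text)

For example, scale_srt(srt_text, 25.0, 23.976) pushes a caption at 01:00:00,000 out to roughly 01:02:34.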

Related

When creating a Xing or Info tag in an MP3, may I use any MP3 header or does it have to match other frames?

I have a set of bare MP3 files. Bare as in I removed all tags (no ID3, no Xing, no Info) from those files.
Just before sending one of these files to the client, I want to add an Info tag. All of my files are CBR so we will use an Info tag (no Xing).
Right now I read the first 4 bytes of the existing MP3 to get the version (MPEG-1, Layer III), bitrate, frequency, stereo mode, etc., and from those determine the size of one frame. I then create the tag that way, reusing those 4 bytes for the Info tag's header and computing the frame size from them.
For those wondering, these 4 bytes may look like this:
FF FB 78 04
To me it felt like you are expected to use exactly the same first 4 bytes in the Info tag as are found in the other audio frames of the MP3, but ffmpeg sticks in an Info tag with a hard-coded header (wrong bitrate, wrong frequency, etc.).
My question is: is ffmpeg really doing it right? (LAME doesn't do that.) Could I do the same, skipping the load of those first 4 bytes, and still have the great majority of players out there play my files as expected?
Note: since I read these 4 bytes over the network, it would definitely save time and some bandwidth not to have to fetch them with a HEAD request - resources I could use for the GET requests instead.
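For reference, here is a sketch of how those 4 bytes decode and how the frame size falls out of them; it assumes MPEG-1 Layer III and covers only that case:

# Sketch: decode a 4-byte MPEG audio frame header (MPEG-1 Layer III only).
BITRATES_KBPS = [0, 32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320]
SAMPLE_RATES = [44100, 48000, 32000]

def frame_size(header):
    b1, b2 = header[1], header[2]
    assert header[0] == 0xFF and (b1 & 0xE0) == 0xE0, "no frame sync"
    assert (b1 >> 3) & 0x3 == 0x3, "not MPEG-1"
    assert (b1 >> 1) & 0x3 == 0x1, "not Layer III"
    bitrate = BITRATES_KBPS[b2 >> 4] * 1000       # bits per second
    sample_rate = SAMPLE_RATES[(b2 >> 2) & 0x3]   # Hz
    padding = (b2 >> 1) & 0x1
    # MPEG-1 Layer III carries 1152 samples per frame -> 144 * bitrate / rate
    return 144 * bitrate // sample_rate + padding

print(frame_size(bytes.fromhex("FFFB7804")))      # 432 bytes: 96 kbps at 32000 Hz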
The reason for the difference is that with certain configurations, the size of a frame is less than 192 bytes. In that case, the full Info/Xing tag will not fit (and from what I can see, the four optional fields are always included, so an Info/Xing tag is always full-sized even when it doesn't need to be).
So, for example, if you have single-channel 44.1 kHz data at 32 kbps, the MP3 frame is 117 or 118 bytes - less than what is needed to hold the Info/Xing tag.
What LAME does in that situation is simply omit the Info/Xing tag; it won't appear anywhere in the file.
FFmpeg, on the other hand, creates that frame with a higher bitrate. Instead of 32 kbps, it tries 48 kbps, then 64 kbps, and stops once it finds a configuration whose frame is large enough to hold the Info/Xing tag. (I have not looked at the code, so I don't know exactly how FFmpeg searches for a large enough frame; on my end I just incremented the bitrate index field by one until the frame size was >= 192 bytes, and that works.)
You can replicate the feat by first creating (or converting to) a 44.1 kHz WAVE file, then converting it to MP3 at 32 kbps with ffmpeg and observing that the Info/Xing tag has a different bitrate.
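And a sketch of the bump-the-bitrate-index approach described above, reusing the frame_size helper from the earlier sketch (hypothetical, untested against real players):

def bump_bitrate(header, min_size=192):
    # Increment the bitrate index (high nibble of byte 2) until the
    # frame is large enough to hold a full Info/Xing tag.
    hdr = bytearray(header)
    while frame_size(bytes(hdr)) < min_size:
        index = hdr[2] >> 4
        if index >= 14:            # 15 is the invalid "bad" index
            raise ValueError("no valid bitrate gives a frame >= %d" % min_size)
        hdr[2] = ((index + 1) << 4) | (hdr[2] & 0x0F)
    return bytes(hdr)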

How to convert images to a video by the time they were taken/saved and not by name?

I have a livestream that takes a picture every one or two seconds, and a folder in which the pictures are saved.
Usually I would just run ffmpeg -start_number 1 -i IMG_%d.JPG video.webm (I need the output in .webm). I want to use the saved pictures to build a time-lapse to put up next to the stream, so people can see the last 24h / 7d / 90d. So I wondered if I could select exactly the pictures from such a time period, and only them.
I later want this to happen automatically, so I don't have to rewrite the code every day.
Thanks for your help.
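One way to approach this, sketched under the assumption that the file modification time is a good proxy for when each picture was taken (all paths and names below are made up):

# Select frames by mtime, link them under sequential names, then run
# the usual ffmpeg pattern command.
import os, subprocess, time
from pathlib import Path

def build_timelapse(src_dir, out_file, window_seconds):
    cutoff = time.time() - window_seconds
    frames = sorted(
        (p for p in Path(src_dir).glob("*.JPG") if p.stat().st_mtime >= cutoff),
        key=lambda p: p.stat().st_mtime,       # order by save time, not by name
    )
    work = Path(src_dir) / "timelapse_frames"
    work.mkdir(exist_ok=True)
    for i, p in enumerate(frames):
        link = work / ("IMG_%06d.JPG" % i)
        link.unlink(missing_ok=True)
        os.symlink(p.resolve(), link)   # use shutil.copy2 where symlinks are unavailable
    subprocess.run(["ffmpeg", "-y", "-framerate", "30",
                    "-i", str(work / "IMG_%06d.JPG"), out_file], check=True)

build_timelapse("/data/stream_pics", "last24h.webm", 24 * 3600)

Run it from cron (or a scheduled task) once per window - last24h.webm daily, and likewise for 7 d and 90 d - and nothing needs rewriting by hand.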

Capture Video from Public Web Video Feed

I've unsuccessfully mucked around with this on my own and need help.
Given the public web camera feed at https://itsvideo.arlingtonva.us:8011/live/cam58.stream/playlist.m3u8, I'd like to be able to capture the video feed into an MP4 or MPG file with a reasonably accurate timestamp, using the Windows command line (so I can put it into a batch script, etc.).
This is probably easy for someone who is already a wiz with VLC or FFmpeg or some such tool.
Additional wish-list items would be to call up a higher-resolution stream for a shorter duration (so as to balance the I/O impact) and/or to grab still images instead of the video offered.
For instance, the m3u8 playlist has the following parameters:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-STREAM-INF:BANDWIDTH=214105,CODECS="avc1.100.40",RESOLUTION=352x288
chunklist_w977413411.m3u8
Would there be a way to substitute any of these parameters to increase the resolution and reduce the duration correspondingly, so that the net I/O stays the same? Or even to grab just a still image, whether higher-res or not?
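As a starting point, a sketch that shells out to ffmpeg for a fixed-length clip or a single still, stamping the filename with the capture time (the same two invocations work directly in a Windows batch file):

import subprocess
from datetime import datetime

URL = "https://itsvideo.arlingtonva.us:8011/live/cam58.stream/playlist.m3u8"
stamp = datetime.now().strftime("%Y%m%d_%H%M%S")

# 60-second clip, stream copied without re-encoding
subprocess.run(["ffmpeg", "-y", "-i", URL, "-t", "60", "-c", "copy",
                "cam58_%s.mp4" % stamp], check=True)

# or one still image instead of video
subprocess.run(["ffmpeg", "-y", "-i", URL, "-frames:v", "1",
                "cam58_%s.jpg" % stamp], check=True)

As for resolution: the playlist above advertises a single 352x288 chunklist, so there is nothing higher to request unless the operator publishes more variants; ffmpeg can only pick among the streams the server actually offers.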

How to scale and mux audio?

The first problem is with audio resampling. I'm trying to rework doc/examples/transcode_aac.c so that it also resamples from 41100 Hz to 48000 Hz; as shipped, the example warns that it can't change the sample rate.
Using doc/examples/resampling_audio.c as a reference, I saw that before calling swr_convert I need to compute the number of output audio samples with code like this:
int dst_nb_samples = av_rescale_rnd(
    input_frame->nb_samples + swr_get_delay(resampler_context, 41100),
    48000, 41100, AV_ROUND_UP);
The problem is, when I just set int dst_nb_samples = input_frame->nb_samples (which is 1024), it encodes and plays normally, but when I use the av_rescale_rnd result (1196), the audio is slowed down and distorted, as if there were skips in it.
The second problem is with muxing WebM with Opus audio.
When I set AVStream->time_base to 1/48000 and increase AVFrame->pts by 960 per frame, the resulting file is reported as much longer than it is: 17 seconds of audio shows as 16m11s, although it plays normally.
When I increase pts by 20 instead, the duration displays correctly, but I get a lot of [libopus @ 00ffa660] Queue input is backward in time messages during encoding. The same happens with pts += 30.
Should I try a time_base of 1/1000? WebM always stores timecodes in milliseconds, and Opus has a packet size of 20 ms (960 samples at 48000 Hz).
Here is the whole file; all modifications I made are marked with //MINE (search for pts += 20;): http://www.mediafire.com/file/jlgo7x4hiz7bw64/transcode_aac.c
Here is the file I tested it on: http://www.mediafire.com/file/zdy0zarlqw3qn6s/480P_600K_71149981_soundonly.mkv
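For checking the arithmetic: av_rescale_rnd(a, b, c, AV_ROUND_UP) is just ceil(a * b / c), so the 1196 in the question can be reproduced like this (resampler delay assumed to be 0; the helper name is made up):

import math

def dst_nb_samples(in_samples, delay, out_rate, in_rate):
    # av_rescale_rnd(a, b, c, AV_ROUND_UP) == ceil(a * b / c)
    return math.ceil((in_samples + delay) * out_rate / in_rate)

print(dst_nb_samples(1024, 0, 48000, 41100))   # -> 1196, as in the question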
The easiest way to achieve that is to use swr_convert_frame, which takes a frame and resamples it into a completely different one.
You can read more about it here: https://ffmpeg.org/doxygen/3.2/swresample_8h_source.html
dst_nb_samples can be calculated like this:
dst_nb_samples = 48000.0 / audio_stream->codec->sample_rate * inputAudioFrame->nb_samples;
Yours is probably correct too - I didn't check, but this is the one I've used before, and the number you gave checks out. So the real problem is probably somewhere else. Try supplying 960 samples in sync with the video frames; to do this you'll need to store the audio in an additional linear buffer. See if that fixes the problem.
And/or:
Secondly, in my experience the audio pts increases by the number of samples per frame (e.g. 960 samples at 48000 Hz for 50 fps video, i.e. 48000/50), not by milliseconds. If you supply 1196 samples, use pts += 1196 (if you haven't used the additional buffer mentioned above). This is different from video frame pts. Hope that helps.
You are definitely on the right path. I'll examine the source code if I have time. Anyway, hope that helps.
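A sketch of the pts bookkeeping this answer describes, with time_base = 1/48000 so pts is counted in samples (encode_frame stands in for the real libav calls):

SAMPLE_RATE = 48000              # time_base = 1/48000: pts counts samples

def mux_audio(frames, encode_frame):
    pts = 0
    for samples in frames:           # e.g. chunks of 960 samples (20 ms Opus)
        encode_frame(samples, pts)   # frame->pts = pts
        pts += len(samples)          # advance by samples sent, NOT by 20 ms

Three 960-sample frames get pts 0, 960, 1920; 17 seconds of audio then spans 17 * 48000 ticks, and the container reports the right duration.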

Length of an audio file (e.g .wav) using RubyAudio

How can I determine the length (in ms) of an audio file (e.g. .wav) using RubyAudio?
s = RubyAudio::Sound.open("1.wav")
You can get the sound info with:
songInfo = s.info
The info contains the sample rate and the number of frames, which you can use to calculate the duration of the sound file:
duration = songInfo.frames.to_f / songInfo.samplerate   # seconds (to_f avoids integer division)
duration_ms = (duration * 1000).round                   # milliseconds, as asked
From a cursory look at the docs, it looks like you can't do that with RubyAudio.
Have you tried looking at ruby-mp3info? I don't know if it's still actively developed, nor if it works for multiple audio formats, but it claims to be able to give you the duration of an mp3.
An alternate way would be to do an estimate based on the bitrate and the file length.
RubyAudio doesn't appear to have been updated in six years and its documentation is sparse. If you're able to, I'd recommend using rtaglib instead.
However, if you're married to RubyAudio, it looks like you can get both a frame count (Audio::Soundfile#frames) and a sample (frame) rate (Audio::Soundfile#samplerate). Knowing this, you should be able to divide the number of frames by the sample rate to get the length of the file in seconds.
