I need to change the ID of a video track in an MP4 container, of course without re-encoding. How can I do that with ffmpeg or MP4Box? Is that even possible?
With MP4Box you can fine-tune these parameters (more in MP4Box -h general):
-set-track-id id1:id2 changes the id of a track from id1 to id2
-swap-track-id id1:id2 swaps the IDs of the identified tracks
Example:
In place: MP4Box -set-track-id 100:101 file.mp4
New file: MP4Box -set-track-id 100:101 file.mp4 -out new.mp4
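Swapping works the same way; for example, with hypothetical track IDs 1 and 2:
MP4Box -swap-track-id 1:2 file.mp4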
I asked a question on Slack about why the quality and bitrate of my DASH video weren't changing, and I got this response: "You only have one quality in your manifest. There is no way for the player to choose a different one."
So how can I create different "qualities"?
I have an mp4box command like:
MP4Box -dash 2000 -profile dashavc264:live -bs-switching multi -url-template whatever.mp4#trackID=1:id=vid0:role=vid0 whatever.mp4#trackID=2:id=aud0:role=aud0 -out whatever.mpd
Would it be possible to create different "qualities" with only mp4box, or would I have to create the same video at different resolutions with something like ffmpeg and then feed them as inputs to the command above?
GPAC contributor here. Since v0.9, GPAC has introduced a new architecture that allows transcoding by leveraging FFmpeg.
Example (forced intra period of 2 seconds):
MP4Box -dash 2000 -profile dashavc264:live -out session.mpd source.mp4:##enc:c=avc:fintra=2
Edit: since 2020/09/29, multi-encoding is possible:
MP4Box -fgraph -dash 2000 -profile dashavc264:live source.mp4:#ffsws:osize=160x120#enc:c=avc:fintra=2:b=100k:#Representation=1##ffsws:osize=320x240#enc:c=avc:fintra=2:b=200k:#Representation=2
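To answer the last part of the question: pre-encoding the renditions with ffmpeg and passing them all to MP4Box also works. A rough sketch with made-up bitrates and resolutions; the fixed GOP (-g 48 at an assumed ~24 fps) is there so keyframes land on the 2-second segment boundaries:
ffmpeg -i whatever.mp4 -c:v libx264 -vf scale=-2:360 -b:v 800k -g 48 -keyint_min 48 -sc_threshold 0 -an 360p.mp4
ffmpeg -i whatever.mp4 -c:v libx264 -vf scale=-2:720 -b:v 2400k -g 48 -keyint_min 48 -sc_threshold 0 -an 720p.mp4
MP4Box -dash 2000 -profile dashavc264:live -bs-switching multi -out whatever.mpd 360p.mp4#video 720p.mp4#video whatever.mp4#audio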
Please let us know if you have any questions!
So...probably a very basic question for those of you familiar with FFMPEG (I'm really not). I know that you can combine multiple videos into one using FFMPEG, but what about if each video has its own srt file, saved separately in a 'subs' folder and NOT included in the video itself?
Is it possible for FFMPEG to also combine the srt files into a single one (and recalculate the timestamps), and then merge this into the final, combined video? If so, what would the command be?
For example, I have video1.mp4 and video2.mp4, with corresponding sub1.srt and sub2.srt. When video1.mp4 and video2.mp4 are merged, the timestamps for sub2.srt will, of course, be out of sync and need to be corrected by adding the duration of video1.mp4 to the individual timestamps (i.e., if video1 is 30 seconds long, and the first subtitle in sub2.srt appears at the 2-second mark, then after the combination it should appear at the (30+2)=32-second mark, and so on).
If it helps, all the files are mp4, and have the same dimensions (720p).
While there might be a (complicated) way to concatenate the srt files beforehand, the easiest way is to combine each video with its subtitles first and then concatenate the resulting container files.
1. Copy everything from video1.mp4 and add subtitles from sub1.srt
# Assuming English for subtitle language
ffmpeg -i video1.mp4 -i sub1.srt -c copy -c:s mov_text -metadata:s:s:0 language=en -metadata:s:s:0 title=English 1.mp4
-c copy will copy everything that might be in video1.mp4, and -c:s mov_text will format the text stream from sub1.srt into subtitles for mp4 (mov_text). The result will be written to 1.mp4.
2. Repeat the same command for all the other video-subtitle pairs.
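For example, the second pair would be:
ffmpeg -i video2.mp4 -i sub2.srt -c copy -c:s mov_text -metadata:s:s:0 language=en -metadata:s:s:0 title=English 2.mp4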
3. Create a text file (e.g. chapters.txt) with the resulting file names
file 1.mp4
file 2.mp4
file 3.mp4
…
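If there are many pairs, the list can also be generated in a POSIX shell; the glob below assumes the muxed outputs are the only single-digit .mp4 files in the folder:
for f in [0-9].mp4; do echo "file '$f'"; done > chapters.txt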
4. Concatenate the resulting container files listed in the text file
ffmpeg -f concat -safe 0 -i chapters.txt -c copy everything.mp4
See ffmpeg's concat demuxer documentation.
There are other ffmpeg commands that can also deal with different dimensions, mentioned in the docs.
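One such approach is the concat filter, which re-encodes and therefore can handle inputs of different sizes once they are scaled to a common resolution. A rough sketch for two inputs, assuming a 720p target and leaving subtitle streams out for brevity:
ffmpeg -i 1.mp4 -i 2.mp4 -filter_complex "[0:v]scale=1280:720,setsar=1[v0];[1:v]scale=1280:720,setsar=1[v1];[v0][0:a][v1][1:a]concat=n=2:v=1:a=1[v][a]" -map "[v]" -map "[a]" everything.mp4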
For whatever reason I had to explicitly copy the video, audio, and subtitle streams individually in step 4; otherwise I ended up with silent videos. So my step 4 looked like this:
ffmpeg -f concat -safe 0 -i chapters.txt -c:v copy -c:a copy -c:s copy everything.mp4
I am using Terminal (macOS) to run ffmpeg commands in order to record radio shows streamed online. The stream is in m3u8; I want to output it as mp3. So far so good, I am able to achieve that. However, I'd like the output file to read YYYYMMDD-fm93-segal.mp3, where YYYYMMDD is the date the recording was made.
I am not able to achieve this using -strftime 1 for some reason. When using my command, the output file is named %Y%m%d-fm93-segal.mp3 instead of the placeholders being replaced by the real date.
Here is the line I'm using:
ffmpeg -i "https://cogecomedia.leanstream.co/cogecomedia/CJMFFM.stream/playlist.m3u8" -acodec mp3 -strftime 1 "%Y%m%d-fm93-segal.mp3"
Does anyone know why, and could you help me with that?
-strftime is not a generic option; it is only supported by some muxers: hls, image2, and segment.
One method is to use the segment muxer and give it a big -segment_time value if desired:
ffmpeg -i "https://cogecomedia.leanstream.co/cogecomedia/CJMFFM.stream/playlist.m3u8" -f segment -segment_time 24:00:00 -acodec mp3 -strftime 1 "%Y%m%d-fm93-segal.mp3"
I am trying to create a video out of a sequence of images and various audio files using FFmpeg. While it is no problem to create a video containing the sequence of images with the following command:
ffmpeg -f image2 -i image%d.jpg video.mpg
I haven't found a way yet to add audio files at specific points to the generated video.
Is it possible to do something like:
ffmpeg -f image2 -i image%d.jpg -i audio1.mp3 AT 10s -i audio2.mp3 AT 15s video.mpg
Any help is much appreciated!
EDIT:
The solution in my case was to use sox, as suggested by blahdiblah in the answer below. You first have to create an empty audio file as a starting point, like this:
sox -n -r 44100 -c 2 silence.wav trim 0.0 20.0
This generates a 20-second silent WAV file. After that you can mix the silent file with other audio files.
sox -m silence.wav "|sox sound1.mp3 -p pad 0" "|sox sound2.mp3 -p pad 2" out.wav
The final audio file has a duration of 20 seconds and plays sound1.mp3 right at the beginning and sound2.mp3 after 2 seconds.
To combine the sequence of images with the audio file we can use FFmpeg.
ffmpeg -i video_%05d.png -i out.wav -r 25 out.mp4
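If the image sequence and the audio differ in length, adding -shortest makes the output stop with the shorter input:
ffmpeg -i video_%05d.png -i out.wav -r 25 -shortest out.mp4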
See this question on adding a single audio input with some offset. The -itsoffset bug mentioned there is still open, but see users' comments for some cases in which it does work.
If it works in your case, that would be ideal:
ffmpeg -i in%d.jpg -itsoffset 10 -i audio1.mp3 -itsoffset 15 -i audio2.mp3 out.mpg
If not, you should be able to combine all the audio files with sox, overlaying or inserting silence to produce the correct offsets and then use that as input to FFmpeg. Not as convenient, but guaranteed to work.
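In newer ffmpeg versions the sox step can also be replaced with ffmpeg's own adelay and amix filters. A sketch using the file names from the question (delays are in milliseconds; stereo inputs assumed, hence the two values per adelay):
ffmpeg -f image2 -i image%d.jpg -i audio1.mp3 -i audio2.mp3 -filter_complex "[1:a]adelay=10000|10000[a1];[2:a]adelay=15000|15000[a2];[a1][a2]amix=inputs=2[a]" -map 0:v -map "[a]" video.mpg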
One approach I can think of is to create the audio file for the whole duration of the video first and then mux that audio with the video file.
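A minimal sketch of that muxing step, assuming the full-length track has been prepared as full_audio.wav (hypothetical name):
ffmpeg -i video.mpg -i full_audio.wav -c:v copy -shortest out.mpg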
Is it possible to use ffmpeg to convert a movie with variable frame rate to a sequence of still pics without duplicates?
I tried something like:
ffmpeg -i vid.avi pic%d.png
but that generates thousands of pictures, with each frame duplicated many times. I also tried:
ffmpeg -i vid.avi -r 10 pic%d.png
but I still have lots of duplicates AND some frames are missing
is it possible to specify something like "-r natural"???
TIA
A bit late, but...
ffmpeg -vsync 2 -i <input movie> <output with %d>
will do the trick. For example:
ffmpeg -vsync 2 -i "C:\Test.mp4" "C:\Thumbnails\Test%d.jpg"
will create exactly one jpeg per frame in the C:\Thumbnails folder, and each jpeg will be sequentially numbered with no gaps or duplicates in numbering.
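Note that recent ffmpeg releases deprecate -vsync in favour of -fps_mode; the equivalent invocation would be:
ffmpeg -i "C:\Test.mp4" -fps_mode vfr "C:\Thumbnails\Test%d.jpg"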