concat multiple mp4 files with an effect - ffmpeg

I have a folder that contains n video files of mp4 format: 00001.mp4, 00002.mp4, etc...
They all have the same resolution, frame rate, and dimensions.
What I need is a way to concatenate them into one large mp4 file, but with a specific transition effect between every two consecutive videos.
For example, a flash fade.
Here are some examples: https://biteable.com/blog/video-transitions-effects-examples/
I have access to ffmpeg and am looking for some sample commands.
Thanks,

This tutorial will provide you with a starting point: link
What you will need to do is use:
ffmpeg -i slide.mp4 -y -vf fade=in:0:30 slide_fade_in.mp4
and:
ffmpeg -i slide_fade_in.mp4 -y -vf fade=out:120:30 slide_fade_in_out.mp4
Then combine the files with the effects applied. In the tutorial, they have set up a script for the combinations.
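Newer ffmpeg releases (4.3 and later) also provide the xfade filter, which applies a transition between two inputs directly, without pre-rendering fades. A minimal sketch for the first two clips, assuming 00001.mp4 is 5 seconds long (the offset should be the first clip's duration minus the transition duration):
ffmpeg -i 00001.mp4 -i 00002.mp4 -filter_complex \
  "[0:v][1:v]xfade=transition=fade:duration=1:offset=4[v]; \
   [0:a][1:a]acrossfade=d=1[a]" \
  -map "[v]" -map "[a]" joined.mp4
Other transition names (fadeblack, wipeleft, circleopen, ...) are listed in the xfade documentation; chaining more than two clips means repeating the filter with cumulative offsets.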

Related

Different brightness when merging multiple BMP files into MP4 using ffmpeg

I want to edit a small video frame by frame without losing quality (or without losing much of it). I used ffmpeg to split it into images with the following line:
ffmpeg -i test.mp4 $filename%%03d.bmp
This worked fine. I tried merging the images back using several commands, including:
ffmpeg -re -f image2 -framerate 30 -i $filename%%03d.bmp -c:v prores_aw -pix_fmt yuv422p10le test.mkv
However, this results in a difference in brightness/contrast between the original and merged videos. The merged file is a bit darker (you have to look closely) than the original file. What can I do to fix this?
Thanks for your time.
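A common cause of this kind of brightness shift is the YUV-to-RGB conversion (and back) being done with a mismatched color matrix or range when writing the BMPs. A hedged sketch, not a confirmed fix, that pins the matrix explicitly on both conversions via the scale filter (assuming a BT.709 source; bt601 would be the guess for SD material):
ffmpeg -i test.mp4 -vf scale=in_color_matrix=bt709 $filename%%03d.bmp
ffmpeg -framerate 30 -i $filename%%03d.bmp -vf scale=out_color_matrix=bt709 -c:v prores_aw -pix_fmt yuv422p10le test.mkv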

ffmpeg: crop video into two grayscale sub-videos; guarantee monotonic frames; and get timestamps

The need
Hello, I need to extract two regions of a .h264 video file via the crop filter into two files. The output videos need to be monochrome, with extension .mp4. The encoding (or format?) should guarantee that video frames are organized monotonically. Finally, I need to get the timestamps for both files (which I'd bet are the same timestamps I would get from the input file; see below).
In the end I will be happy to do everything in one command via an elegant one-liner (via a complex filter, I guess), but I am starting with multiple steps to break it down into simpler problems.
Along the way I have run into many difficulties, and despite having searched in many places I can't seem to find solutions that work. Unfortunately I'm no expert in ffmpeg or video conversion, so the more I search, the more details I discover, and the fewer problems I actually solve.
Below you find some of my attempts to work with the following options:
-filter:v "crop=400:ih:260:0,format=gray" to do the crop and the monochrome conversion
-vf showinfo possibly combined with -vsync 0 or -copyts to get the timestamps via stderr redirection &> filename
-c:v mjpeg to force monotonic frames (are there other ways?)
1. cropping each region and obtaining monochrome videos
$ ffmpeg -y -hide_banner -i inVideo.h264 -filter:v "crop=400:ih:260:0,format=gray" outL.mp4
$ ffmpeg -y -hide_banner -i inVideo.h264 -filter:v "crop=400:ih:1280:0,format=gray" outR.mp4
The issue here is that in the output files the frames are not organized monotonically (I don't understand why; how would that make sense in any video format? I can't tell whether it comes from the input file).
EDIT. Maybe it is not the frames but the packets, as returned by av's .demux() method, that are not monotonic (see below, "instructions on how to reproduce this").
I was advised to run ffmpeg -i outL.mp4 outL.mjpeg afterwards, but this produces two videos that look very pixelated (at least when playing them with ffplay) despite being, surprisingly, 4x bigger than the input. Needless to say, I need both monotonic frames and lossless conversion.
EDIT. I acknowledge the advice to specify -q:v 1; this fixes the pixelation but produces an even bigger file, ~12x the size. Is that necessary? (see below, "instructions on how to reproduce this")
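For reference, both crops of step 1 could be produced in a single pass via split and a complex filter; a sketch of such a one-liner, untested on my side, using the same filenames as above:
ffmpeg -y -hide_banner -i inVideo.h264 -filter_complex \
  "[0:v]split=2[l][r];[l]crop=400:ih:260:0,format=gray[lv];[r]crop=400:ih:1280:0,format=gray[rv]" \
  -map "[lv]" outL.mp4 -map "[rv]" outR.mp4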
2. getting the timestamps
I found this piece of advice, but I don't want to generate hundreds of image files, so I tried the following:
$ ffmpeg -y -hide_banner -i outL.mp4 -vf showinfo -vsync 0 &>tsL.txt
$ ffmpeg -y -hide_banner -i outR.mp4 -vf showinfo -vsync 0 &>tsR.txt
The issue here is that I don't get any output because ffmpeg claims it needs an output file.
The need to produce an output file, plus the worry that the timestamps could be lost in the previous conversions, leads me back to a first attempt at a one-liner, where I am also testing the -copyts option and forcing the encoding with -c:v mjpeg as per the advice mentioned above (though I don't know whether it is in the right position):
ffmpeg -y -hide_banner -i testTex2.h264 -copyts -filter:v "crop=400:ih:1280:0,format=gray" -vf showinfo -c:v mjpeg eyeL.mp4 &>tsL.txt
This does not work: surprisingly, the output .mp4 I get is the same as the input (presumably because the second -vf/-filter:v overrides the first). If instead I put the -vf showinfo option just before the stderr redirection, I get no redirected output:
ffmpeg -y -hide_banner -i testTex2.h264 -copyts -filter:v "crop=400:ih:260:0,format=gray" -c:v mjpeg outR.mp4 -vf showinfo dummy.mp4 &>tsR.txt
In this case I get the desired timestamp output (too much of it: I will need some way to grab only the pts and pts_time data out of it), but I have to produce a big dummy file. The worst thing, anyway, is that the mjpeg encoding again produces a very pixelated, low-quality video.
I admit that the logic of how to place the options and output files on the command line is obscure to me. There are many possible combinations, and the more options I try, the more complicated it gets, and I am not getting much closer to a solution.
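One lead I still have to verify: the output-file requirement can apparently be sidestepped with the null muxer, and ffprobe can print just the pts and pts_time fields; a sketch, assuming a reasonably recent ffmpeg/ffprobe:
ffmpeg -hide_banner -i outL.mp4 -vf showinfo -f null - 2> tsL.txt
ffprobe -hide_banner -select_streams v:0 -show_entries packet=pts,pts_time -of csv outL.mp4 > tsL.csv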
3. [EDIT] instructions on how to reproduce this
get a .h264 video
turn it into .mp4 with the ffmpeg command $ ffmpeg -i inVideo.h264 out.mp4
run the following python cell in a jupyter-notebook
observe that the packet timestamps have diffs both greater than and less than zero
%matplotlib inline
import av
import numpy as np
import matplotlib.pyplot as mpl

fname, ext = "outL.direct", "mp4"

# packet timestamps straight from the demuxer (file/decode order, no decoding)
cont = av.open(f"{fname}.{ext}")
pk_pts = np.array([p.pts for p in cont.demux(video=0) if p.pts is not None])

# frame timestamps after decoding (presentation order); reopen the container,
# since the first pass consumed it
cont = av.open(f"{fname}.{ext}")
fm_pts = np.array([f.pts for f in cont.decode(video=0) if f.pts is not None])

print(pk_pts.shape, fm_pts.shape)

# successive differences: any negative value means non-monotonic timestamps
mpl.subplot(211)
mpl.plot(np.diff(pk_pts))
mpl.subplot(212)
mpl.plot(np.diff(fm_pts))
finally, also create the mjpeg-encoded files in various ways, and check packet monotonicity with the same script (note the file sizes too)
$ ffmpeg -i inVideo.h264 out.mjpeg
$ ffmpeg -i inVideo.h264 -c:v mjpeg out.c_mjpeg.mp4
$ ffmpeg -i inVideo.h264 -c:v mjpeg -q:v 1 out.c_mjpeg_q1.mp4
Finally, the question
What is a working way / the right way to do it?
Any hints, even about single steps and how to combine them properly, will be appreciated. Also, I am not limited to the command line: I would be able to try a more programmatic solution in Python (jupyter notebook) if someone points me in that direction.

Use FFMPEG to combine different MP4s with srt into one file

So...probably a very basic question for those of you familiar with FFMPEG (I'm really not). I know that you can combine multiple videos into one using FFMPEG, but what about if each video has its own srt file, saved separately in a 'subs' folder and NOT included in the video itself?
Is it possible for FFMPEG to also combine the srt files into a single one (and recalculate the timestamps), and then merge this into the final, combined video? If so, what would the command be?
For example, I have video1.mp4 and video2.mp4, with corresponding sub1.srt and sub2.srt. When video1.mp4 and video2.mp4 are merged, the timestamps for sub2.srt will, of course, be out of sync and need to be corrected by adding the duration of video1.mp4 to the individual timestamps (i.e., if video1 is 30 seconds long, and the first subtitle in sub2.srt appears at the 2-second mark, then after the combination it should appear at the (30+2)=32-second mark, and so on).
If it helps, all the files are mp4, and have the same dimensions (720p).
While there might be a (complicated) way to concatenate the srt files first, the easiest way is to combine pairs of video and text first, and then concatenate the resulting container files.
1. Copy everything from video1.mp4 and add subtitles from sub1.srt
# Assuming English for subtitle language
ffmpeg -i video1.mp4 -i sub1.srt -c copy -c:s mov_text -metadata:s:s:0 language=en -metadata:s:s:0 title=English 1.mp4
-c copy will copy everything that might be in video1.mp4, and -c:s mov_text will format the text stream from sub1.srt into subtitles for mp4 (mov_text). The result will be written to 1.mp4.
2. Repeat the same command for all the other video-subtitle pairs.
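For example, a small shell loop over the pairs (a sketch; the loop bound and the naming scheme video$i.mp4 / sub$i.srt are assumptions based on the question):
# hypothetical loop; adjust the range and names to your files
for i in 1 2 3; do
  ffmpeg -i "video$i.mp4" -i "sub$i.srt" -c copy -c:s mov_text \
    -metadata:s:s:0 language=en -metadata:s:s:0 title=English "$i.mp4"
done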
3. Create a text file (e.g. chapters.txt) with the resulting file names
file 1.mp4
file 2.mp4
file 3.mp4
…
4. Concatenate the resulting container files listed in the text file
ffmpeg -f concat -safe 0 -i chapters.txt -c copy everything.mp4
See ffmpeg's concat demuxer documentation.
Other ffmpeg approaches that can also deal with differing dimensions are mentioned in the docs.
For whatever reason I had to explicitly copy the video, audio, and subtitle streams individually in step 4, otherwise I ended up with silent videos. So my step 4 looked like this:
ffmpeg -f concat -safe 0 -i chapters.txt -c:v copy -c:a copy -c:s copy everything.mp4

Join multiple flv with ffmpeg

I am trying to join two flv files using the concat option in ffmpeg-1.1. I have created a list named mylist.txt and placed two flv files into it, but the problem I am facing is that the first file in mylist.txt streams perfectly, while the video breaks into pieces when it comes to the second file. It looks like I am using the wrong options with concat; please guide me towards suitable commands. The following are the commands and configuration I am using for transcoding the .flv files:
mylist.txt
file '/root/1.flv'
file '/root/2.flv'
ffmpeg command:
ffmpeg -re -f concat -i /root/mylist.txt -acodec copy -vcodec copy output.flv
The following link shows the output of the ffmpeg command:
http://pastebin.com/P3uaUDEd
Unless the two files were encoded identically (and even if they were, it could still be a problem), you would need to transcode the audio and video so that things like timestamps, bitrates, resolutions and other codec internals are consistent in both streams. Change your -acodec copy and -vcodec copy to the codecs of your choice (libx264 and mp3/aac are good choices).
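For example, a sketch with a modern ffmpeg, re-encoding both streams (encoder availability depends on your build; -safe 0 is needed for the absolute paths in mylist.txt):
ffmpeg -f concat -safe 0 -i /root/mylist.txt -c:v libx264 -c:a aac output.flv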

ffmpeg -r option

I am trying to use ffmpeg (under linux) to add a small title to a video. So, I use:
ffmpeg -i hk.avi -r 30000/1001 -metadata title="SOF" hk_titled.avi
Adding the title seems to work, but the problem is that the output file is about a third of the size of the input file, and I was wondering why this is. Is it at the expense of video quality? I am unsure. How do I preserve the same quality/size as the input file?
The main thing I am unable to figure out is the use of the -r option. Going through the ffmpeg docs, it seems that -r is frames per second (the input video is 23.9 fps). At the moment, 30000/1001 works out to about 29.97 fps, but I am unsure whether I should be using this value.
Thanks for your time.
The default settings for ffmpeg do not always provide good quality output when you encode, but this depends on your output format and the available encoders. With your output, ffmpeg will use the default of -b 200k (or -b:v 200k in newer syntax).
However, you can tell ffmpeg to simply copy the input streams without re-encoding, and this is recommended if you just want to add or edit metadata. These examples do the same thing but use different syntax depending on your ffmpeg version:
ffmpeg -i hk.avi -vcodec copy -acodec copy -metadata title="SOF" hk_titled.avi
ffmpeg -i hk.avi -c copy -metadata title="SOF" hk_titled.avi
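If you do want to re-encode (for example to actually change the frame rate with -r), set the quality explicitly rather than relying on the default bitrate; a hedged example with libx264, not part of the original answer:
ffmpeg -i hk.avi -r 30000/1001 -c:v libx264 -crf 18 -c:a copy -metadata title="SOF" hk_titled.avi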
