Splitting a video into short clips results in some being empty? - bash

I'm trying to split a video (50-100 MB) into several small clips of a few seconds each. I don't need re-encoding, hence my use of codec copy.
However, some of the resulting clips don't have any video.
Fast but no video in some files
ffmpeg \
-y \
-i ./data/partie-1:-Apprendre-300-mots-du-quotidien-en-LSF.jauvert-laura.hd.mkv \
-ss 0:00:07.00 \
-codec copy \
-loglevel error \
-to 0:00:10.36 \
'raw/0:00:07.00.au revoir.mkv'
I also tried -map 0 -c copy, -acodec copy -map 0:a -vcodec copy -map 0:v, and leaving out any codec-related option.
Slow but complete
With no arguments related to audio/video encoding, it works, but it's pretty slow.
ffmpeg -y \
-i "$SOURCE_VIDEO_FILE" \
-ss 0:05:37.69 \
-to 0:05:40.64 \
-loglevel error \
'raw/0:05:37.69.pas la peine.mkv'
Question
How do I split a video into small chunks (~2-4 s) when I have no need for re-encoding?
related: https://video.stackexchange.com/q/25365/23799

Your constraint can't always be satisfied. Most video codecs group frames: each group starts with a complete frame (a keyframe) and the following frames store only "diffs" against it, so with -vcodec copy ffmpeg has to honour those keyframe boundaries. A 2-4 second clip that contains no keyframe ends up with no decodable video.
Don't use -vcodec copy if you encounter this problem.
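If accurate short cuts are needed, one common workaround (sketched here with the question's file names; the 1 s keyframe interval is an arbitrary choice) is to re-encode the source once with frequent forced keyframes, after which every stream-copy split can land on a keyframe:
# One-time re-encode: force a keyframe every second.
ffmpeg -i "$SOURCE_VIDEO_FILE" \
-c:v libx264 -crf 18 \
-force_key_frames 'expr:gte(t,n_forced*1)' \
-c:a copy \
keyframed.mkv
# Each split is now fast and starts on a keyframe.
ffmpeg -ss 0:00:07.00 -i keyframed.mkv -t 3.36 -c copy 'raw/0:00:07.00.au revoir.mkv'
The re-encoding cost is paid once; every later split runs at stream-copy speed.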

Related

FFMPEG zoom-pan multiple images

I am trying to stitch multiple images together, with some zoom-pan applied to each image, to create a video.
Command:
ffmpeg -f lavfi -r 30 -t 10 -i \
color=#000000:1920x1080 \
-f lavfi \
-r 30 -t 10 \
-i aevalsrc=0 \
-i "image-1.png" \
-i "image-2.png" \
-y -filter_complex \
"[0:v]fifo[bg];\
[2:v]setpts=PTS-STARTPTS+0/TB,scale=4455:2506:force_original_aspect_ratio=decrease,zoompan=z='min(zoom+0.0015,2.5)':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':d=150:fps='30':s='1920x1080'[v2];\
[bg][v2]overlay=0:0:enable='between(t,0, 5)'[bg];\
[3:v]setpts=PTS-STARTPTS+5.07/TB,scale=3840:2160:force_original_aspect_ratio=decrease,zoompan=z='min(zoom+0.0015,2.5)':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':d=150:fps='30':s='1920x1080'[v3];\
[bg][v3]overlay=0:0:enable='between(t,5, 10)'[bg];\
[1:a]amix=inputs=1:duration=first:dropout_transition=0" \
-map "[bg]" -vcodec "libx264" -preset "veryfast" -crf "15" "output.mp4"
The output is not as expected: it only zooms on the first image; the second image is just static.
FFMPEG version - 4.1
Use
ffmpeg -f lavfi -i color=#000000:1920x1080:r=30:d=10 \
-f lavfi -t 10 -i anullsrc \
-i "image-1.png" \
-i "image-2.png" \
-filter_complex \
"[2:v]scale=4455:2506:force_original_aspect_ratio=decrease,zoompan=z='min(zoom+0.0015,2.5)':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':d=150:fps=30:s='1920x1080'[v2];\
[0:v][v2]overlay=0:0:enable='between(t,0,5)'[bg];\
[3:v]scale=3840:2160:force_original_aspect_ratio=decrease,zoompan=z='min(zoom+0.0015,2.5)':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':d=150:fps=30:s='1920x1080',setpts=PTS+5/TB[v3];\
[bg][v3]overlay=0:0:enable='between(t,5,10)'[bg]" \
-map "[bg]" -map 1:a -vcodec libx264 -preset veryfast -crf 15 -y "output.mp4"
For lavfi sources, it's best to set frame rate and duration where applicable within the filter.
Since you're not looping the images, -t won't have any effect. Since zoompan sets the fps of its output, you can skip the input rate setting. And since each input is a single image, setpts before zoompan has no relevance; it should be applied only to the zoompan output whose timestamps need to be shifted.
Since you have only one audio stream, there's no point sending it to amix - there's nothing to mix with! Just map it directly.
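If a third image were added (a hypothetical image-3.png as input 4, with the color and anullsrc durations raised to 15), the pattern above extends with one more chain whose setpts is shifted by a further 5 seconds:
[4:v]scale=3840:2160:force_original_aspect_ratio=decrease,zoompan=z='min(zoom+0.0015,2.5)':x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':d=150:fps=30:s='1920x1080',setpts=PTS+10/TB[v4];\
[bg][v4]overlay=0:0:enable='between(t,10,15)'[bg]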

Combining multiple image files into a video while using filter_complex to apply a watermark

I'm trying to combine two ffmpeg operations into a single one.
Currently I have two ffmpeg commands: the first generates a video from existing images, and the second runs that video through ffmpeg again to apply a watermark.
I'd like to see if it's possible to combine these into a single operation.
# Create the source video
ffmpeg -y \
-framerate 1/1 \
-i layer-%d.png \
-r 30 -vcodec libx264 -preset ultrafast -crf 23 -pix_fmt yuv420p \
output.mp4
# Apply the watermark and render the final output
ffmpeg -y \
-i output.mp4 \
-i logo.png \
-filter_complex "[1:v][0:v]scale2ref=40:40[a][b];[b][a]overlay=(80):(main_h-200-80)" \
final.mp4
Use
ffmpeg -y \
-framerate 1/1 -i layer-%d.png \
-i logo.png \
-filter_complex "[0:v]fps=30[img];
[1:v][img]scale2ref=40:40[a][b];[b][a]overlay=(80):(main_h-200-80)" \
final.mp4
(The use of scale2ref doesn't make sense since you're scaling to a fixed size).
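Since the logo is being scaled to a fixed 40x40 anyway, plain scale is a drop-in replacement - a sketch that should behave the same as the command above:
ffmpeg -y \
-framerate 1/1 -i layer-%d.png \
-i logo.png \
-filter_complex "[0:v]fps=30[img];
[1:v]scale=40:40[wm];[img][wm]overlay=(80):(main_h-200-80)" \
final.mp4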

How to assemble three videos with cross fade using FFMPEG

I am trying to assemble 3 videos: a static title, the main feature, and a static trailer. The title and trailer are rendered from text, while the main feature is H.264-encoded (at 6 Mb/s). The title and trailer have null audio encoded. The specific goal is a crossfade between the three segments. I have concat working fine, but adding crossfade is causing me issues.
How does setpts=PTS-STARTPTS+(4/TB)[v2]; work?
This command puts it together, but the bit rate is wrong and it produces errors.
ffmpeg -y -i title.mp4 -i vid.mp4 -i trailer.mp4 -f lavfi -i color=black:s=1920x1080 -filter_complex \
"[0:v]format=pix_fmts=yuva420p,fade=t=out:st=04:d=2:alpha=1,setpts=PTS-STARTPTS[v0]; \
[1:v]format=pix_fmts=yuva420p,fade=t=in:st=0:d=2:alpha=1,fade=t=out:st=6:d=1:alpha=1,setpts=PTS-STARTPTS+10/TB[v1]; \
[2:v]format=pix_fmts=yuva420p,fade=t=in:st=0:d=2:alpha=1,fade=t=out:st=2:d=1:alpha=1,setpts=PTS-STARTPTS+20/TB[v2]; \
[3:v]trim=duration=30[over]; \
[over][v0]overlay[over1]; \
[over1][v1]overlay[over2]; \
[over2][v2]overlay=format=yuv420[outv]" \
-vcodec h264_videotoolbox -b:v 6000k -maxrate 6000k -bufsize 6000000 -map [outv] merge.mp4
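As for the setpts question: PTS-STARTPTS zeroes the stream's timestamps, and +4/TB adds 4 seconds converted into timebase units (TB is the timebase), so the stream begins at t=4 when overlaid on the background. A minimal sketch of the same shift, with hypothetical file names:
# Delay clip.mp4 by 4 s, then overlay it on bg.mp4.
ffmpeg -i bg.mp4 -i clip.mp4 -filter_complex \
"[1:v]setpts=PTS-STARTPTS+4/TB[delayed];[0:v][delayed]overlay" out.mp4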

Using hex colors with ffmpeg's showwaves

I've been trying to create a video with ffmpeg's showwaves filter and have cobbled together the command below, which I sort of understand. I'm wondering if it is possible to set the color of the waveform using hex colors (e.g. #F3ECDA instead of "blue").
Also, feel free to tell me if there's any unneeded garbage in the command as is. Thanks.
ffmpeg -i audio.mp3 -loop 1 -i picture.jpg -filter_complex \
"[0:a]showwaves=s=960x202:mode=cline:colors=blue[fg]; \
[1:v]scale=960:-1,crop=iw:540[bg]; \
[bg][fg]overlay=shortest=1:main_h-overlay_h-30,format=yuv420p[out]" \
-map "[out]" -map 0:a -c:v libx264 -preset fast -crf 18 -c:a libopus output.col.mkv
See https://ffmpeg.org/ffmpeg-utils.html#Color for the syntax. In short, it is colors=0xRRGGBB or colors=#RRGGBB. The rest of the command looks fine.
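Applied to the command above, only the colors value changes, e.g. using the 0x form of the asker's color (which sidesteps any shell-quoting concerns around #):
[0:a]showwaves=s=960x202:mode=cline:colors=0xF3ECDA[fg]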

Run FFMPEG multiple overlay commands in one command

I'm using ffmpeg to perform several operations on one video.
The operations I want to do are adding multiple texts at different times, plus an audio track and an image overlay.
I can do all of them, but only in separate commands, not in one.
Any suggestions for doing multiple texts, an image overlay, and audio in one command?
Thanks
To achieve the commands provided in the comments in one execution, use
ffmpeg -i input.mp4 -i img.png -i audio.mp4 -filter_complex \
"[0:v][1:v]overlay=15:15:enable='between(t,10,20)',\
drawtext=enable='between(t,12,3*60)':\
fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf:text='Test Text'[v]" \
-map "[v]" -map 2:a -acodec copy -qscale 4 -vcodec mpeg4 outvideo.mp4
To add more drawtext filters, insert them after the first drawtext filter e.g.
ffmpeg -i input.mp4 -i img.png -i audio.mp4 -filter_complex \
"[0:v][1:v]overlay=15:15:enable='between(t,10,20)',\
drawtext=enable='between(t,12,3*60)':\
fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf:text='Test Text',\
drawtext=enable='between(t,12,3*60)':\
fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf:text='Text2'[v]" \
-map "[v]" -map 2:a -acodec copy -qscale 4 -vcodec mpeg4 outvideo.mp4
