FFmpeg - Accurately cutting video/audio and merging multiple videos together

This post has been updated since the original version.
Thanks in advance for any help with this.
I am working with 4 video assets, all with stereo audio and the same A/V spec:
intro -
mainvideo -
midroll_video (advert) -
endboard
I need to add an audio-only crossfade between the 2 video elements part2.mp4 and pip.mp4. These 2 videos are produced by the code (rather than being 2 of the 4 assets listed above). I have added the code as kindly instructed by #Баяр Гончикжапов, but unfortunately it is still not working. See the conversation below for more info and the part I need help with.
Thanks in advance!
This is the code I am using:
# Input parameters
mainvideo=mezzfile.mp4
endboard=endboard.mp4
intro=sting.mp4
midroll_video=midroll.mp4
tailtime=20
fadelength=0.2
midroll_edit_value=00:11:31.600
# Time calculations to define cut points
duration=$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 "$mainvideo")
endboard_cut_point=$(echo "scale=2;$duration-$tailtime" | bc)
# bc cannot subtract an HH:MM:SS.mmm string, so convert it to seconds first
midroll_edit_seconds=$(echo "$midroll_edit_value" | awk -F: '{ print 3600*$1 + 60*$2 + $3 }')
midroll_cut_point=$(echo "scale=2;$duration-$midroll_edit_seconds" | bc)
# Safety check
tbn_main=$(ffprobe -v error -select_streams v -show_entries stream=time_base -of default=noprint_wrappers=1:nokey=1 $mainvideo)
tbn_main=${tbn_main#*/}
tbn_intro=$(ffprobe -v error -select_streams v -show_entries stream=time_base -of default=noprint_wrappers=1:nokey=1 $intro)
tbn_intro=${tbn_intro#*/}
tbn_end=$(ffprobe -v error -select_streams v -show_entries stream=time_base -of default=noprint_wrappers=1:nokey=1 $endboard)
tbn_end=${tbn_end#*/}
# Compare each timebase against the main video directly; an average can
# mask a mismatch (e.g. 5000 and 15000 average to 10000)
if [[ "$tbn_intro" -ne "$tbn_main" || "$tbn_end" -ne "$tbn_main" ]]; then
echo "WARNING: source video files have different timebases."
echo "The concat demuxer will produce incorrect output."
echo "Re-encoding is highly recommended."
# 'read -k' is zsh syntax; the bash equivalent is -n 1
read -rs -n 1 -p $'Press any key to exit.\n'
exit 1
fi
# Trim the main part of mainvideo
ffmpeg -hide_banner -y -i $mainvideo -to $midroll_edit_value -c copy part1.mp4
ffmpeg -hide_banner -y -i $mainvideo -ss $midroll_edit_value -to $endboard_cut_point -c copy part2.mp4
ffmpeg -hide_banner -y -i $mainvideo -ss $midroll_edit_value -to $duration -c copy part2av.mp4
# Trim the tail of mainvideo and overlay it onto endboard
ffmpeg -hide_banner -y \
-i $mainvideo \
-i $endboard \
-filter_complex \
"[0:v]select='gt(t,$duration-$tailtime)',scale=w=iw/2.03:h=ih/2.03,setpts=PTS-STARTPTS[v_tail]; \
[0:a]aselect='gt(t,$duration-$tailtime)',asetpts=PTS-STARTPTS[a_out]; \
[1:v][v_tail]overlay=format=auto[v_out]" \
-map "[v_out]" \
-map "[a_out]" \
-video_track_timescale $tbn_main \
pip.mp4
# The original three-input attempt mixed -filter_complex with -map labels
# and -c copy, which cannot work together. One possible fix (a sketch):
# hard-cut the video streams together and crossfade only the audio over
# $fadelength seconds.
ffmpeg -hide_banner -y -i "part2.mp4" -i "pip.mp4" \
-filter_complex \
"[0:v][1:v]concat=n=2:v=1:a=0[v_out]; \
[0:a][1:a]acrossfade=d=$fadelength[a_out]" \
-map "[v_out]" -map "[a_out]" \
-video_track_timescale $tbn_main \
"part2pip.mp4"
# Pass all parts through the concat demuxer
[ -f filelist.txt ] && rm filelist.txt
for f in $intro part1.mp4 $midroll_video part2pip.mp4; do echo "file '$PWD/$f'" >> filelist.txt; done
ffmpeg -hide_banner -y -f concat -safe 0 -i filelist.txt -c copy TEST_FILE.mp4
# Sweep the table
rm pip.mp4 part1.mp4 part2.mp4 filelist.txt
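A side note on the parameters above: `midroll_edit_value` is an `HH:MM:SS.mmm` string, which `bc` cannot subtract from a plain seconds value, so it needs converting first. A minimal helper sketch (the name `ts_to_seconds` is mine, not from the original script):

```shell
#!/bin/bash
# Convert an HH:MM:SS(.mmm) timestamp into seconds so bc can use it.
ts_to_seconds() {
  echo "$1" | awk -F: '{ printf "%.3f", 3600*$1 + 60*$2 + $3 }'
}

ts_to_seconds "00:11:31.600"   # -> 691.600
```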

Insert video in the middle of main video and use -f concat:
# Input parameters.
# All video has same properties: codec_name, width, height, sample_aspect_ratio, r_frame_rate, time_base, pix_fmt
# if one of the video has difference then this video needs to be transcoded
# All audio has same properties: codec_name, channels, sample_rate
# if audio has diff: aresample=async=1,apad,atrim=0:$duration -ac 2
# get durations of audio and video and container: ffprobe -v quiet -show_entries format=duration:stream=duration -of default=nw=1:nk=1 "$f"
intro=VID_20230206_113828.mp4
mainvideo=VID_20230206_113949.mp4
endboard=VID_20230206_114456.mp4
midroll_video=VID_20230206_114248.mp4
#tailtime=20
tailtime=$(ffprobe -v 0 -select_streams v:0 -show_entries format=duration -of default=nw=1:nk=1 "$endboard")
midroll_edit_value=25
#WID=$(ffprobe -v 0 -select_streams v:0 -show_entries stream=width -of default=nw=1:nk=1 "$mainvideo")
#HEI=$(ffprobe -v 0 -select_streams v:0 -show_entries stream=height -of default=nw=1:nk=1 "$mainvideo")
#SAR=$(ffprobe -v 0 -select_streams v:0 -show_entries stream=sample_aspect_ratio -of default=nw=1:nk=1 "$mainvideo")
#if [ "$SAR" = "N/A" ]; then SAR=1; fi
FPS=$(ffprobe -v 0 -select_streams v:0 -show_entries stream=r_frame_rate -of default=nw=1:nk=1 "$mainvideo")
TBN=$(ffprobe -v 0 -select_streams v:0 -show_entries stream=time_base -of default=nw=1:nk=1 "$mainvideo")
TBN=${TBN#*/}
#FMT=$(ffprobe -v 0 -select_streams v:0 -show_entries stream=pix_fmt -of default=nw=1:nk=1 "$mainvideo")
echo $WID $HEI $SAR $FPS $TBN $FMT
#CHL=$(ffprobe -v 0 -select_streams a:0 -show_entries stream=channel_layout -of default=nw=1:nk=1 "$main")
#SRA=$(ffprobe -v 0 -select_streams a:0 -show_entries stream=sample_rate -of default=nw=1:nk=1 "$main")
#echo $CHL $SRA
# Trim the main part of mainvideo
ffmpeg -hide_banner -y -i "$mainvideo" -c copy \
-f segment -segment_times $(($midroll_edit_value-5)),$(($midroll_edit_value+5)) \
-reset_timestamps 1 %d.mp4
# Insert mid
dura_0mp4=$(ffprobe -v error -show_entries format=duration -of default=nw=1:nk=1 0.mp4)
m=$( echo "$midroll_edit_value - $dura_0mp4" | bc -l )
ffmpeg -hide_banner -y -i 1.mp4 -i $midroll_video -filter_complex "
[0:v]trim=0:${m}[v0];
[0:v]trim=${m},setpts=PTS-STARTPTS[v2];
[0:a]atrim=0:${m}[a0];
[0:a]atrim=${m},asetpts=PTS-STARTPTS[a2];
[v0][a0][1:v][1:a][v2][a2]concat=n=3:v=1:a=1" \
-r $FPS \
-video_track_timescale $TBN \
1_mid.mp4
# Time calculations to define cut point
dura_2mp4=$(ffprobe -v error -show_entries format=duration -of default=nw=1:nk=1 2.mp4)
endboard_cut_point=$(echo "$dura_2mp4-$tailtime" | bc -l)
ffmpeg -hide_banner -y -i 2.mp4 -t $endboard_cut_point -c copy part2.mp4
# Trim the tail of mainvideo and overlay it onto endboard
echo "" > fcs.txt
PAD="[1:v]"
NUM=25
SCA=$((3*$NUM))
for (( i=0; i<$NUM; i++ )); do
echo "[0:v]trim=start_frame=$i:end_frame=$(($i+1)),setpts=PTS-STARTPTS+$i*N,scale=iw-iw*$i/$SCA:ih-ih*$i/$SCA[f$i];" >> fcs.txt
echo "$PAD[f$i]overlay=enable='eq(n,$i)'[o$i];" >> fcs.txt
PAD="[o$i]"
done
echo "[0:v]trim=1,setpts=PTS-STARTPTS+1/TB,scale=iw-iw*$i/$SCA:ih-ih*$i/$SCA[f$i];" >> fcs.txt
echo "$PAD[f$i]overlay=enable='gte(t,1)'" >> fcs.txt
ffmpeg -hide_banner -y \
-ss $endboard_cut_point -i 2.mp4 \
-i $endboard \
-filter_complex_script fcs.txt \
-r $FPS \
-video_track_timescale $TBN \
pip.mp4
# Pass all parts through the concat demuxer
[ -f filelist.txt ] && rm filelist.txt
for f in $intro 0.mp4 1_mid.mp4 part2.mp4 pip.mp4; do echo "file '$PWD/$f'" >> filelist.txt; done
ffmpeg -hide_banner -y -f concat -safe 0 -i filelist.txt -c copy TEST_FILE.mp4
mpv TEST_FILE.mp4
If you cut video with this command and then check the keyframes:
ffmpeg -i $mainvideo -ss 30 -c copy part2.mp4
ffprobe -select_streams v:0 -show_entries packet=pts_time,flags \
-of csv=print_section=0 "part2.mp4" | awk -F',' '/K/ {print $1}'
you will see something like start: 0.016000, and the keyframes do not start from 0.000000 but from, for example, 2.266016, because the first keyframe lies before the new beginning. If you concat that file without re-encoding, you can get desync.
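Snapping the cut to a keyframe avoids that. Given ffprobe packet output in the `pts_time,flags` CSV form shown above, the last keyframe at or before the requested cut point can be picked out with awk. A sketch; the function name and the sample data are illustrative, not from the thread:

```shell
#!/bin/bash
# Find the last keyframe at or before a cut point, so -ss can be snapped
# to it and a stream copy starts on a keyframe.
# stdin: pts_time,flags lines as produced by the ffprobe command above.
last_keyframe_before() {
  awk -F',' -v cut="$1" '/K/ && $1 <= cut { kf = $1 } END { print kf }'
}

# Stand-in for real ffprobe output:
sample='0.000000,K__
1.001000,___
2.266016,K__
3.300000,___'

printf '%s\n' "$sample" | last_keyframe_before 3.0   # -> 2.266016
```

In the real pipeline this would be fed from `ffprobe -select_streams v:0 -show_entries packet=pts_time,flags -of csv=print_section=0 file.mp4` instead of the sample variable.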

Related

How can I sync the frames of multiple videos from a multi-camera capture system using FFMPEG

I have a multi-camera capture setup with 2 Canon cameras. Each of these cameras has a Tentacle Sync E timecode generator connected to it.
After a capture with these 2 cameras, the generated timecode (SMPTE format) is stored in the video files' metadata.
It looks like this: 00:00:53;30
Is there a bash script that uses FFmpeg to trim the start time of the video that started earlier (based on timecode) to match the other and then trim the end time of the video that ended last to match the one that ended first?
The two trimmed output videos should be synced based on the timecode and have the same duration.
So far, my bash script looks like this:
file1="A001C002_220101EB_CANON.MXF"
file2="A001C002_220101US_CANON.MXF"
# Get the SMPTE timecodes of the two files
timecode1=$(ffmpeg -i "$file1" 2>&1 | sed -n 's/timecode.*: \(.*\)/\1/p')
timecode2=$(ffmpeg -i "$file2" 2>&1 | sed -n 's/timecode.*: \(.*\)/\1/p')
# Convert the SMPTE timecode to start time in seconds
start_time_1=$(echo "$timecode1" | awk -F ':' '{print 3600*$1 + 60*$2 + $3}')
start_time_2=$(echo "$timecode2" | awk -F ':' '{print 3600*$1 + 60*$2 + $3}')
# Trim the start of the video with the earlier start timecode so that both videos have the same start time
if [ "$start_time_1" -lt "$start_time_2" ]; then
ffmpeg -i "$file1" -ss "$start_time_2" -c:v libx264 -crf 18 -preset veryfast trimmed_file1.mp4
ffmpeg -i "$file2" -c:v libx264 -crf 18 -preset veryfast trimmed_file2.mp4
else
ffmpeg -i "$file2" -ss "$start_time_1" -c:v libx264 -crf 18 -preset veryfast trimmed_file2.mp4
ffmpeg -i "$file1" -c:v libx264 -crf 18 -preset veryfast trimmed_file1.mp4
fi
# Get the duration of both files
duration_1=$(ffmpeg -i trimmed_file1.mp4 2>&1 | grep "Duration" | cut -d ' ' -f 4 | sed s/,//)
duration_2=$(ffmpeg -i trimmed_file2.mp4 2>&1 | grep "Duration" | cut -d ' ' -f 4 | sed s/,//)
# Convert the duration to seconds
duration_1_secs=$(echo $duration_1 | awk -F: '{ print ($1 * 3600) + ($2 * 60) + $3 }')
duration_2_secs=$(echo $duration_2 | awk -F: '{ print ($1 * 3600) + ($2 * 60) + $3 }')
# Trim the end time of the video that ended last to match the one that ended first
# Durations are decimal, so compare with bc; also write to a new name,
# since ffmpeg cannot read from and write to the same file
if (( $(echo "$duration_1_secs > $duration_2_secs" | bc -l) )); then
echo "Trimming end time of file1 to match file2"
ffmpeg -i trimmed_file1.mp4 -t "$duration_2" -c:v libx264 -c:a aac trimmed_file1_cut.mp4
else
echo "Trimming end time of file2 to match file1"
ffmpeg -i trimmed_file2.mp4 -t "$duration_1" -c:v libx264 -c:a aac trimmed_file2_cut.mp4
fi
But this does not make the videos have matching frames.
Thanks!
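One likely reason the frames don't line up: the script passes the absolute timecode of the later camera to `-ss` instead of the difference between the two start timecodes (and, like the awk above, it ignores the frame field after the `;`). A sketch of the offset computation under those assumptions (the function name is mine):

```shell
#!/bin/bash
# SMPTE HH:MM:SS;FF to whole seconds (frames dropped, as in the
# question's awk). The -ss value for the earlier-starting file is the
# difference between the two start times, not an absolute timecode.
tc_to_seconds() {
  echo "$1" | awk -F '[:;]' '{ print 3600*$1 + 60*$2 + $3 }'
}

t1=$(tc_to_seconds "00:00:53;30")   # earlier camera
t2=$(tc_to_seconds "00:01:10;02")   # later camera
offset=$(( t2 - t1 ))               # seconds to trim off the earlier file
echo "$offset"   # -> 17
```

For true frame accuracy the `;FF` field would also need converting at the footage's frame rate.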

How to automatically restart the stream after 10 seconds if the stream cuts out

I restart using this script, but sometimes, for some reason, the stream cuts out.
How can the stream be restarted automatically after 10 seconds if it cuts out?
#!/bin/bash
while true;do
grep -c "Non-monotonous DTS in output stream" file.txt >nonmonotonus.txt
grep -c "Timestamps are unset in a packet for stream" file.txt >timestamp.txt
grep -c "PES packet size mismatch" file.txt >pespacket.txt
grep -c "Error while decoding stream" file.txt >errordecoding.txt
grep -c "Circular buffer overrun" file.txt >circularbuffer.txt
grep -c "Header missing" file.txt >header.txt
grep -c "Conversion failed" file.txt >conversion.txt
file=nonmonotonus.txt
file1=timestamp.txt
file2=pespacket.txt
file3=errordecoding.txt
file4=circularbuffer.txt
file5=header.txt
file6=conversion.txt
if (($(<"$file")>=3000)) || (($(<"$file1")>=500)) || (($(<"$file2")>=100)) || (($(<"$file3")>=1000)) || (($(<"$file4")>=500)) || (($(<"$file5")>=6)) || (($(<"$file6")>=1)); then
stream1 restart > restart.txt
sleep 1
fi
done
__________________________________________________________________________
ffmpeg -re -threads 3 -c:s webvtt -i "$INPUT_URL?source=null&overrun_nonfatal=1&fifo_size=1000000" \
-c:v copy \
-map 0:0 -map 0:1 \
-c:a aac -b:a 128k -ar 48000 \
-threads 4 -f hls -hls_time 2 -hls_wrap 15 \
"manifest.m3u8" \
</dev/null >/dev/null 2>file.txt & echo $! > "$STREAM_PID_PATH"
How do I automatically restart the stream after the .ts output cuts?
Thank you...
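A simpler complement to the log-scraping loop above is a watchdog that just reruns the command whenever it exits. A sketch (the `run_with_restart` name and the `max_tries` cap are mine; a real stream watchdog would loop forever):

```shell
#!/bin/bash
# Rerun a command whenever it exits non-zero, sleeping between attempts.
# max_tries exists only so this sketch terminates.
run_with_restart() {
  local delay=$1 max_tries=$2
  shift 2
  local tries=0
  while [ "$tries" -lt "$max_tries" ]; do
    "$@" && return 0        # clean exit: stop restarting
    tries=$(( tries + 1 ))
    sleep "$delay"          # stream cut: wait, then restart
  done
  return 1
}

# Real use would wrap the HLS command from above, e.g. (illustrative):
# run_with_restart 10 1000000 ffmpeg -re -i "$INPUT_URL" -c:v copy \
#   -c:a aac -b:a 128k -f hls -hls_time 2 manifest.m3u8
```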

Bash: trying to write the names of converted files to a text file

I have this script, which converts my files to 16 bit. I would like to know which files have been converted, so I try to write them to a list. However, the created file remains empty even when files have been converted. What is wrong?
#!/bin/bash
files=()
find . -name "*.wav" -o -name "*.WAV"|while read i; do
if [ "$(ffprobe -v error -show_entries stream=sample_fmt -of csv=p=0 "$i")" != "s16" ] || [ "$(ffprobe -v error -show_entries stream=sample_rate -of csv=p=0 "$i")" != "44100" ]
then
ffmpeg -y -i "$i" -c:a pcm_s16le -ar 44100 "${i%.*}_16.wav";
files+=("$i");
fi
done
printf "%s\n" "${files[*]}" > converted_files.txt;
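The array stays empty because the `find … | while read` pipeline runs the loop in a subshell, so `files+=(…)` updates a copy that vanishes when the pipe ends. One way around that is process substitution, which keeps the loop in the current shell; a sketch with the find/ffmpeg work stubbed out by a `printf`:

```shell
#!/bin/bash
# Feeding the loop through process substitution instead of a pipe keeps
# it in the current shell, so the files array survives the loop.
files=()
while IFS= read -r i; do
  # the ffmpeg conversion would happen here
  files+=("$i")
done < <(printf '%s\n' first.wav second.wav)   # stand-in for find

# "${files[@]}" (not [*]) gives printf one name per line
printf '%s\n' "${files[@]}" > converted_files.txt
echo "${#files[@]}"   # -> 2
```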

Bash script to recursive find and convert movies

In my large movie collection I would like to find movies whose primary (first) audio track uses DTS coding and convert it to Dolby.
My problem is identifying the first track. My current bash script lists any movie containing a DTS track, but does not specify which track it is.
#!/bin/bash
# My message to create DTS list
find /home/Movies -name '*.mkv' | while read f
do
if mediainfo "$f" | grep A_DTS; then
echo $f
fi
done
After that I would like to run this command
ffmpeg -i $f -map 0:v -map 0:a:0 -map 0:a -map 0:s -c:v copy -c:a copy -c:s copy -c:a:0 ac3 -b:a:0 640k $f
or is there a way to move all the audio tracks down and adding the new AAC track?
### Progress
Thanks to #llogan I have finetuned the bash to find the required files.
#!/bin/bash
# My DTS conversion script
# credits to llogan
find /Mymovies -name '*.mkv' | while read f
do
if ffprobe -v error -select_streams a:0 -show_entries stream=codec_name -of csv=p=0 "$f" | grep dts; then
echo "$f"
fi
done
Now digging into the command I think I may have a working command. Anybody spot a problem?
ffmpeg -i "$f" \
-map 0:v -c:v copy \
-map 0:a:0? -c:a:0 ac3 \
-map 0:a:0? -c:a:1 copy \
-map 0:a:1? -c:a:2 copy \
-map 0:a:2? -c:a:3 copy \
-map 0:a:3? -c:a:4 copy \
-map 0:a:4? -c:a:5 copy \
-map 0:a:5? -c:a:6 copy \
-map 0:a:6? -c:a:7 copy \
-map 0:a:7? -c:a:8 copy \
-map 0:a:8? -c:a:9 copy \
-map 0:s? -c:s copy \
-b:a:0 640k \
/tmp/output.mkv
mv "$f" /home/DTS_BACKUP/
mv /tmp/output.mkv "$f"
So the end result would look like:
#!/bin/bash
# My DTS conversion script
# credits to llogan
find /Mymovies -name '*.mkv' | while read f
do
if ffprobe -v error -select_streams a:0 -show_entries stream=codec_name -of csv=p=0 "$f" | grep dts; then
ffmpeg -i "$f" \
-map 0:v -c:v copy \
-map 0:a:0? -c:a:0 ac3 \
-map 0:a:0? -c:a:1 copy \
-map 0:a:1? -c:a:2 copy \
-map 0:a:2? -c:a:3 copy \
-map 0:a:3? -c:a:4 copy \
-map 0:a:4? -c:a:5 copy \
-map 0:a:5? -c:a:6 copy \
-map 0:a:6? -c:a:7 copy \
-map 0:a:7? -c:a:8 copy \
-map 0:a:8? -c:a:9 copy \
-map 0:s? -c:s copy \
-b:a:0 640k \
/tmp/output.mkv
mv "$f" /home/DTS_BACKUP/
mv /tmp/output.mkv "$f"
fi
done
OK, so I fine-tuned the script to separate DTS and DTS-HD. I came to the conclusion this was not needed, because I can't convert DTS-HD to E-AC-3 and may as well encode it to AC-3 too. But I had fun in bash.
Current bash:
#!/bin/bash
# My DTS conversion script
# credits to llogan
find /MyMovies -name '*.mkv' | while read f
do
function codec_profile {
ffprobe -v error -select_streams a:0 -show_entries stream=$1 -of csv=p=0 "$f"
}
#first check for audio format
if [ "$(ffprobe -v error -select_streams a:0 -show_entries stream=codec_name -of csv=p=0 "$f")" = "dts" ]; then
if [ "$(codec_profile "profile")" = "DTS" ]; then
echo "$f" >> dts.txt
codec_profile "profile" >> dts.txt
codec_profile "channels" >> dts.txt
if [ "$(codec_profile "channels")" -gt 5 ]; then
echo "check" >> dts.txt; else
echo "stereo" >> dts.txt;
fi
elif [ "$(codec_profile "profile")" = "DTS-HD MA" ]; then
echo "$f" >> dts-hd.txt
codec_profile "profile" >> dts-hd.txt
codec_profile "channels" >> dts-hd.txt
fi
fi
done
I checked the created txt files and the result is spot on. I also tested the command that #llogan gave me, and it works perfectly.
ffmpeg -i "$f" -map 0 -map -0:a -map 0:a:0 -map 0:a -c copy -c:a:0 ac3 -b:a:0 640k /tmp/output.mkv
The last thing to figure out is how to check the exit code of that command and replace the text-file creation with it.
The idea:
ffmpeg -i "$f" -map 0 -map -0:a -map 0:a:0 -map 0:a -c copy -c:a:0 ac3 -b:a:0 640k /tmp/output.mkv
RC=$?
if [ "${RC}" -ne "0" ]; then
# list error in txt file and move on to next
else
# mv output file to overwrite original file
fi
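The idea above can be filled in as follows; a sketch, with the `convert_one` and `errors.txt` names being mine, and paths as in the script above:

```shell
#!/bin/bash
# Re-encode, and only replace the original when ffmpeg succeeded;
# on failure, log the file name and move on to the next file.
convert_one() {
  local f=$1
  ffmpeg -i "$f" -map 0 -map -0:a -map 0:a:0 -map 0:a \
         -c copy -c:a:0 ac3 -b:a:0 640k /tmp/output.mkv
  if [ $? -ne 0 ]; then
    echo "$f" >> errors.txt      # record the failure for later review
    rm -f /tmp/output.mkv
    return 1
  fi
  mv "$f" /home/DTS_BACKUP/      # keep the original as a backup
  mv /tmp/output.mkv "$f"        # drop the converted file in its place
}
```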

TimeStamp variable stopped working in bash script with avconv

The following script was working fine until I upgraded all my Raspberry Pis to Release 9:
#!/bin/bash
cd /home/pi/Videos/SecurityCam/
DToday=`date '+%Y%m%d-%H%M%S'`
fn="VID $DToday"
SubT="PP $PB $DToday"
avconv -f video4linux2 -i /dev/video0 -t 3600 -r 4 -vf "drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf: \text=\'$SubT \%T \' : fontcolor=white#0.8: x=7: y=460" -vcodec libx264 -vb 2000k -y "${fn}.avi"
It is now choking on the %T. Why would that be and what is the right way to get a rolling timestamp in the video?
try with this:
#!/bin/bash
cd /home/pi/Videos/SecurityCam/ || exit
DToday=$(date '+%Y%m%d-%H%M%S')
fn="VID $DToday"
SubT="PP $PB $DToday"
avconv -f video4linux2 -i /dev/video0 -t 3600 -r 4 -vf "drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf: \text=\'$SubT \%T \' : fontcolor=white#0.8: x=7: y=460" \
-vcodec libx264 -vb 2000k -y "${fn}.avi"
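The quoting cleanups alone may not cure the `%T` failure: newer ffmpeg/libav builds removed drawtext's bare strftime expansion, and the supported way to render a rolling clock is the `%{localtime}` text-expansion function, with the colon escaped inside the filter. A sketch that builds the filter string (also note the alpha syntax is `white@0.8`, not `white#0.8`):

```shell
#!/bin/bash
# Build a drawtext filter whose %{localtime\:%T} is expanded per frame
# by ffmpeg/avconv at render time.
SubT="PP camera1"
vf="drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf:text='$SubT %{localtime\:%T}':fontcolor=white@0.8:x=7:y=460"

# Illustrative invocation (device and codec flags as in the question):
# avconv -f video4linux2 -i /dev/video0 -t 3600 -r 4 -vf "$vf" \
#        -vcodec libx264 -vb 2000k -y "${fn}.avi"
echo "$vf"
```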
