FFmpeg cut video end with no sound

Is it possible to remove the end of a video that has no sound?
: = Has Sound
. = No Sound
00:00 [::::::::::::::::::::::::::::........] 01:24
____________________^^^^^ Cut this
Thanks

Note time when sound ends, such as 00:01:23, then use:
ffmpeg -i input -c copy -t 00:01:23 output
You can use seconds if you prefer, such as -t 83.
Exact time is not guaranteed with stream copy mode (-c copy). If exactness is a requirement then remove -c copy.
If you want to automate it (you didn't specify) then integrate the silencedetect filter. It will output timestamps of detected silence, then use the supplied timestamps to create your ffmpeg command. See ffprobe/ffmpeg silence detection command for an example showing how to get the timestamps.
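As a rough sketch of that automation (filenames and threshold values here are placeholders): silencedetect writes its results to stderr, and the last silence_start it reports is where the trailing silence begins, which becomes the -t value. The parsing step could look like this, shown running on sample log lines rather than real ffmpeg output:

```shell
# Hypothetical sketch: parse silencedetect log output to find where the
# trailing silence begins, then cut there. The lines below mimic what a
# command like
#   ffmpeg -i input.mp4 -af silencedetect=n=-50dB:d=1 -f null - 2>log.txt
# writes to stderr; the values are made up for illustration.
log="[silencedetect @ 0x7f] silence_start: 83.125
[silencedetect @ 0x7f] silence_end: 84.0 | silence_duration: 0.875"

# Take the last reported silence_start as the cut point.
cut_point=$(printf '%s\n' "$log" | awk '/silence_start/ {t=$NF} END {print t}')
echo "$cut_point"   # → 83.125
# Then: ffmpeg -i input.mp4 -c copy -t "$cut_point" output.mp4
```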

Related

FFmpeg extract every frame with timestamp

How can I extract every frame of a video with its timestamp?
ffmpeg -r 1 -i in.mp4 images/frame-%d.jpg
This command extracts frames, but without timestamps. Since Windows does not allow ":" in filenames, I'll replace it with ".". I would like something like this:
frame-h.m.s.ms.random_value.jpg
frame-00.10.15.17.1.jpg
frame-00.11.16.04.2.jpg
frame-00.11.17.11.3.jpg
frame-00.11.17.22.4.jpg
frame-00.12.04.01.5.jpg
...
The reason for the random value is to allow repeated frames without overwriting each other.
I have tried:
ffmpeg -r 1 -i rabbit.mp4 images/frame-%{pts\:hms}.jpg
But this doesn't work! It fails with: Could not open file : images/
I have no clue how to do this! Can you help me? Thank you.
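One workaround (a sketch, not a verified one-liner): extract the frames with plain sequential numbers, dump each frame's timestamp separately with ffprobe (e.g. `-show_entries frame=pts_time`), and then rename the files in a shell loop. The timestamp-to-filename conversion, which is the fiddly part, might look like this, with a running counter standing in for the "random value":

```shell
# Sketch: convert a pts_time in seconds (as reported by ffprobe) into the
# requested "HH.MM.SS.cs" filename form. $1 = seconds, $2 = counter that
# keeps frames with duplicate timestamps from colliding.
to_name() {
  awk -v s="$1" -v n="$2" 'BEGIN {
    h = int(s / 3600); m = int((s % 3600) / 60)
    sec = int(s % 60); cs = int((s - int(s)) * 100 + 0.5)
    printf "frame-%02d.%02d.%02d.%02d.%d.jpg\n", h, m, sec, cs, n
  }'
}

to_name 615.17 1   # → frame-00.10.15.17.1.jpg
```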

Remove a section from the middle of a video without concat

How do I cut a section out of a video with ffmpeg?
Imagine I have a 60 second mp4 A.
I want to remove all the stuff from 0:15 to 0:45.
The result should be a 30-second mp4, which is composed of the first 15 seconds of A directly followed by the last 15 seconds of A.
How can I do this without using concat?
I know how I could do it by creating two intermediary files and then using ffmpeg to concat them. I don't want to have to perform so much manual work for this (simple?) operation.
I have also seen the trim filter used for removing multiple parts from a video. All the usages I've found seem very verbose, and I haven't found an example for a case as simple as mine (just a single section removed).
Do I have to use trim for this operation? Or are there other less verbose solutions?
The ideal would of course be something at least as simple as -ss 0:15 -to 0:45, which removes the ends of a video (a hypothetical -cut 0:15-0:45, for example).
I started from
https://stackoverflow.com/a/54192662/3499840 (currently the only answer to "FFmpeg remove 2 sec from middle of video and concat the parts. Single line solution").
Working from that example, the following works for me:
# In order to keep <start-15s> and <45s-end>, you need to
# keep all the frames which are "not between 15s and 45s":
ffmpeg -i input.mp4 \
-vf "select='not(between(t,15,45))', setpts=N/FRAME_RATE/TB" \
-af "aselect='not(between(t,15,45))', asetpts=N/SR/TB" \
output.mp4
This is a one-line Linux command, but I've used the bash line-continuation character ('\') so that I can vertically align the equals signs, as this helps me understand what is going on.
I had never seen ffmpeg's not and between operators before, but I found their documentation here.
Regarding the usual ffmpeg "copy vs re-encode" dichotomy, I was hoping to be able to use ffmpeg's "copy" "codec" (yeah, I know that it's not really a codec) so that ffmpeg would not re-encode my video, but if I specify "copy", then ffmpeg starts and stops at the nearest keyframes which are not sufficiently close to my desired start and stop points. (I want to remove a piece of video that is approximately 20 seconds long, but my source video only has one keyframe every 45 seconds!). Hence I am obliged to re-encode. See https://trac.ffmpeg.org/wiki/Seeking#Seekingwhiledoingacodeccopy for more info.
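To check how sparse the keyframes in a source actually are, ffprobe can list per-frame timestamps along with the keyframe flag, and a small filter keeps only the keyframes. The awk step is shown here running on sample CSV lines (made-up values mimicking a 45-second keyframe interval) so the parsing is clear without a real file:

```shell
# Sketch: list keyframe timestamps. With a real file you would pipe
#   ffprobe -v error -select_streams v:0 -show_frames \
#           -show_entries frame=pts_time,key_frame -of csv=p=0 input.mp4
# into the awk filter below (columns: key_frame,pts_time).
sample="1,0.000000
0,0.041667
0,0.083333
1,45.000000"

keyframes=$(printf '%s\n' "$sample" | awk -F, '$1 == 1 {print $2}')
echo "$keyframes"
```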
The setpts/asetpts filters set the timestamps on each frame to the correct values so that your media player will play each frame at the correct time.
HTH.
If you want to use the copy "codec", consider the following approach:
ffmpeg -i input.mp4 -t "$start_cut_section" -c copy part1.mp4&
ffmpeg -i input.mp4 -ss "$end_cut_section" -c copy part2.mp4&
echo "file 'part1.mp4'" > filelist;
echo "file 'part2.mp4'" >> filelist;
wait;
ffmpeg -f concat -i filelist -c copy output.mp4;
rm filelist;
This creates two files, one from before and one from after the cut, then combines them into a new trimmed final video. Obviously, this can be extended to as many cuts as you like. It may seem like a longer approach than the accepted answer, but it will likely execute much faster because it uses stream copy instead of re-encoding.
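Generalizing to several cuts, the part files and the concat list can be generated in one loop. The following is an untested sketch: the keep ranges are hypothetical values, and the ffmpeg invocations are only echoed (a dry run) rather than executed:

```shell
# Sketch: each "start end" pair (hypothetical values) becomes one
# stream-copied part plus one line in the concat list.
: > filelist
i=0
while read -r start end; do
  i=$((i + 1))
  # A real run would execute this command instead of echoing it:
  echo "ffmpeg -i input.mp4 -ss $start -to $end -c copy part$i.mp4"
  echo "file 'part$i.mp4'" >> filelist
done <<EOF
0 15
45 60
EOF
# Then: ffmpeg -f concat -i filelist -c copy output.mp4
cat filelist
```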

Does ffmpeg offer an option to prevent the creation of the Info tag in an MP3 file?

I have various MP3 files which I need to process.
The processing includes quite a few steps, but one important one is to remove the Info/Xing tag from the file.
I successfully do so by running lame -t .... However, there are times when I want to run ffmpeg to do a conversion after the lame -t ... step, and I see that ffmpeg re-inserts an Info/Xing tag into the MP3 file.
Is there a command line option I can use so ffmpeg does not re-insert the Info/Xing tag?
-write_xing 0
See ffmpeg -h muxer=mp3

Subtract two timecodes in bash, for use in ffmpeg

I am running ffmpeg in a terminal on a Mac to trim a movie file losslessly, using the following in bash:
startPosition=00:00:14.9
endPosition=00:00:52.1
ffmpeg -i mymovie.mov -ss $startPosition -to $endPosition -c copy mymovie_trimmed.mov
But that doesn't seek the nearest keyframe and causes sync issues. See here: https://github.com/mifi/lossless-cut/pull/13
So I need to rearrange my code like this:
ffmpeg -ss $startPosition -i mymovie.mov -t $endPosition -c copy mymovie_trimmed.mov
(the -to option seems to get ignored in this position, so I am using -t (duration) instead). My question is: how can I reliably subtract the $startPosition variable from $endPosition to get the duration?
EDIT: I used oguz-ismail's suggestion of using gdate instead of date (after brew install coreutils):
startPosition=00:00:10.1
endPosition=00:00:50.1
x=$(gdate -d"$endPosition" +'%s%N')
y=$(gdate -d"$startPosition" +'%s%N')
duration=$(bc -lq <<<"scale=1; ($x - $y) / 1000000000")
This gives me an output of 40.1; how would I format it as 00:00:40.0?
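Formatting a seconds value back into HH:MM:SS.t can be done in the same awk/bc spirit as the subtraction above; here is a sketch:

```shell
# Sketch: format a duration in seconds (one decimal place) as HH:MM:SS.t.
format_timecode() {
  awk -v s="$1" 'BEGIN {
    h = int(s / 3600); m = int((s % 3600) / 60); sec = s % 60
    printf "%02d:%02d:%04.1f\n", h, m, sec
  }'
}

format_timecode 40.1    # → 00:00:40.1
format_timecode 3723.5  # → 01:02:03.5
```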

How to accurately match an image to a video frame and exit with ffmpeg

In Bash,
I am trying to match an image to a frame in ffmpeg. I also want to exit the ffmpeg process when the match is found. Here is a (simplified version) of the code currently:
ffmpeg -hide_banner -ss 0 -to 60 \
-i "video.mp4" -i "image.jpg" -filter_complex \
"blend=difference, blackframe" -f null - </dev/null 2>log.txt &
pid=$!
trap "kill $pid 2>/dev/null" EXIT
while kill -0 $pid 2>/dev/null; do
# (grep command to monitor log file)
# if grep finds blackframe match, return blackframe time
done
To my understanding, if the video actually contains a black frame I will get a false positive. How can I effectively mitigate this?
While it is not necessary in order to answer the question, I would also like to exit the ffmpeg process without having to use grep to constantly monitor the log file, using pure ffmpeg instead.
Edit: I say this because, while I understand that the blend filter computes the difference, I am getting a false positive on a black frame in my video and I don't know why.
Edit: A possible solution is to not use blackframe at all but psnr (peak signal-to-noise ratio); however, its normal usage compares two videos frame by frame, and I don't know how to use it effectively with an image as input.
Use
ffmpeg -ss 0 -t 60 -copyts -i video.mp4 -i image.jpg -filter_complex "[0]extractplanes=y[v];[1]extractplanes=y[i];[v][i]blend=difference,blackframe=0,metadata=select:key=lavfi.blackframe.pblack:value=100:function=equal,trim=duration=0.0001,metadata=print:file=-" -an -v 0 -vsync 0 -f null -
If a match is found, it will print to stdout a line of the form,
frame:179 pts:2316800 pts_time:6.03333
lavfi.blackframe.pblack=100
Otherwise, no lines will be printed. It will exit after the first match, if one is found, or after the whole input has been processed.
Since blackframe only looks at luma, I use extractplanes both to speed up blend and also avoid any unexpected format conversions blend may request.
The blackframe threshold is set to 0, so every frame gets the blackframe metadata tagged. False positives are not possible, since blend computes the difference: the difference between a black input frame and the reference frame equals the reference frame itself, and if the reference is itself a black frame, then a match is not a false positive.
The first metadata filter only passes through frames with blackframe value of 100. The trim filter stops a 2nd frame from passing through (except if your video's fps is greater than 10000). The 2nd metadata filter prints the selected frame's metadata.
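To feed the match back into a surrounding script without tailing a log file, the command's stdout can be captured and the pts_time pulled out. The parsing step, shown here on the sample match output quoted above rather than a live ffmpeg run, might look like:

```shell
# Sketch: extract pts_time from the metadata:print output. The sample
# mimics the lines printed on a match; in a real script you would
# substitute the ffmpeg command's captured stdout.
out="frame:179 pts:2316800 pts_time:6.03333
lavfi.blackframe.pblack=100"

match_time=$(printf '%s\n' "$out" | awk -F'pts_time:' '/pts_time/ {print $2}')
echo "$match_time"   # → 6.03333
```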
