I am trying to split several video files (.mov) into 30 second blocks.
I do not need to specify where the 30 seconds start or finish.
EXAMPLE: A single 45-second video (VID1.mov) would be split into VID1_part1.mov (30 seconds) and VID1_part2.mov (15 seconds). Ideally, I'd like to remove the audio too.
I made an attempt using bash (on OS X), but was unsuccessful: it did not split the video into multiple parts; instead it seemed to modify the original file in place (truncating it to a length of 1-2 seconds):
find . -name '*.mov' -exec ffmpeg -t 30 -i \{\} -c copy \{\} \;
You can use FFmpeg's segment muxer for this.
ffmpeg -i input -c copy -segment_time 30 -f segment input%d.mov
Because stream copy can only split at keyframes, each segment won't start at an exact multiple of 30 seconds. You'll have to omit -c copy (i.e. re-encode) if you need exact splits.
Also, FFmpeg does not edit files in place. Your find command supplies the input name as the output name as well; that won't work.
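Since you also wanted to drop audio, here is a hypothetical batch version combining find with the segment muxer; -an removes the audio stream, and the output naming (VID1_part0.mov, VID1_part1.mov, ..., numbering starts at 0) is derived from each input name. A sketch, untested against your files:

```shell
# Split every .mov under the current directory into ~30 s parts, no audio.
find . -name '*.mov' -exec sh -c '
  f=$1
  ffmpeg -i "$f" -an -c copy -f segment -segment_time 30 \
         -reset_timestamps 1 "${f%.mov}_part%d.mov"
' _ {} \;
```

The inner sh -c wrapper is needed so that the `${f%.mov}` name manipulation runs per file rather than once at invocation.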
Imagine I have a 60 second mp4 A.
I want to remove all the stuff from 0:15 to 0:45.
The result should be a 30-second mp4, which is composed of the first 15 seconds of A directly followed by the last 15 seconds of A.
How can I do this without using concat?
I know how I could do it by creating two intermediary files and then using ffmpeg to concat them. I don't want to have to perform so much manual work for this (simple?) operation.
I have also seen the trim filter used for removing multiple parts from a video. All the usages I've found are very verbose, and I haven't seen an example for a case as simple as mine (just a single section removed).
Do I have to use trim for this operation? Or are there other less verbose solutions?
The ideal would of course be something at least as simple as -ss 0:15 -to 0:45, which removes the ends of a video; a -cut 0:15-0:45, for example.
I started from
https://stackoverflow.com/a/54192662/3499840 (currently the only answer to "FFmpeg remove 2 sec from middle of video and concat the parts. Single line solution").
Working from that example, the following works for me:
# In order to keep <start-15s> and <45s-end>, you need to
# keep all the frames which are "not between 15s and 45s":
ffmpeg -i input.mp4 \
-vf "select='not(between(t,15,45))', setpts=N/FRAME_RATE/TB" \
-af "aselect='not(between(t,15,45))', asetpts=N/SR/TB" \
output.mp4
This is a one-line linux command, but I've used the bash line-continuation character ('\') so that I can vertically align the equals-signs as this helps me to understand what is going on.
I had never seen ffmpeg's not and between operators before, but I found their documentation here.
Regarding the usual ffmpeg "copy vs re-encode" dichotomy, I was hoping to be able to use ffmpeg's "copy" "codec" (yeah, I know that it's not really a codec) so that ffmpeg would not re-encode my video. But if I specify "copy", ffmpeg starts and stops at the nearest keyframes, which are not sufficiently close to my desired start and stop points (I want to remove a piece of video that is approximately 20 seconds long, but my source video only has one keyframe every 45 seconds!). Hence I am obliged to re-encode. See https://trac.ffmpeg.org/wiki/Seeking#Seekingwhiledoingacodeccopy for more info.
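To see for yourself how sparse the keyframes are (and hence why -c copy can't honor arbitrary cut points), ffprobe can list the keyframe timestamps. A sketch, with input.mp4 standing in for your file:

```shell
# Decode only keyframes and print their timestamps, one per line:
ffprobe -v error -select_streams v:0 -skip_frame nokey \
        -show_entries frame=pts_time -of csv=p=0 input.mp4
```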
The setpts/asetpts filters set the timestamps on each frame to the correct values so that your media player will play each frame at the correct time.
HTH.
If you want to use the copy "codec", consider the following approach:
start_cut_section=15   # end of the part kept from the beginning
end_cut_section=45     # start of the part kept through to the end
ffmpeg -i input.mp4 -t "$start_cut_section" -c copy part1.mp4 &
ffmpeg -i input.mp4 -ss "$end_cut_section" -c copy part2.mp4 &
echo "file 'part1.mp4'" > filelist
echo "file 'part2.mp4'" >> filelist
wait
ffmpeg -f concat -i filelist -c copy output.mp4
rm filelist
This creates two files, one from before and one from after the cut, then combines them into a new trimmed final video. Obviously, this can be used to create as many cuts as you like. It may seem like a longer approach than the accepted answer, but it will likely execute much faster because it uses the copy codec.
I currently have a working way to get a live stream and start downloading it locally while it is still live.
ffmpeg -i source_hls.m3u8 -c copy output.mkv -y
The problem is I do not actually want to save the entire thing; I just periodically run another command on the output.mkv file to create a clip of part of the live stream.
I was wondering if it was possible to limit the output.mkv file to be only 60s long so once the stream goes over 1 minute it will just cut off the old video and be replaced by the new rolling video.
Is this possible or no?
You can come close, using the segment muxer.
ffmpeg -i source_hls.m3u8 -c copy -f segment -segment_time 60 -segment_wrap 2 -reset_timestamps 1 out%02d.mkv -y
This will write to out00.mkv, then out01.mkv, then overwrite out00.mkv, next overwrite out01.mkv and so on.
The segment time is set at 60 seconds, so each segment will be around 60 seconds. The targets for splitting are 60, 120, 180, 240... seconds of the input. However, video streams will only be split at keyframes at or after the split target. So, if the first keyframe after t=59 is at 66, then the first segment will be 66 s long. The next target is 120 s; say there's a keyframe at 121 s, so the 2nd segment will span 66 to 121 s, i.e. be 55 s long. Something to keep in mind when checking the segments.
Check the file modification times to see which segment contains the earlier data.
If you want to reduce the surplus duration, decrease segment_time and increase segment_wrap correspondingly. segment_time x segment_wrap should be at least the target saved duration plus one segment_time.
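For example, an untested sketch with the same stream name as above: 15 s segments wrapped over five files keep roughly the last 60 s with at most one segment_time of surplus, since 15 x 5 = 75 >= 60 + 15:

```shell
# ~60 s rolling window with finer granularity: 5 files x 15 s each.
ffmpeg -i source_hls.m3u8 -c copy -f segment -segment_time 15 \
       -segment_wrap 5 -reset_timestamps 1 out%02d.mkv -y
```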
Late answer, but you can use -t duration, i.e.:
ffmpeg -y -t 60 -i source_hls.m3u8 -c copy output.mkv
From ffmpeg docs:
-t duration (input/output)
When used as an input option (before -i), limit the duration of data
read from the input file.
When used as an output option (before an output url), stop writing the
output after its duration reaches duration.
duration must be a time duration specification, see (ffmpeg-utils)the
Time duration section in the ffmpeg-utils(1)
manual.
-to and -t are mutually exclusive and -t has priority.
-t argument examples:
11 - 11 seconds
11.111 - 11.111 seconds
1:11:11 - 11 hours, 11 minutes and 11 seconds
I'm trying to create a video quiz, that will contain small parts of other videos, concatenated together (with the purpose, that people will identify from where these short snips are taken from).
For this purpose I created a file that contains the URL of the video, the starting time of the "snip", and its length. For example:
https://www.youtube.com/watch?v=5-j6LLkpQYY 00:00 01:00
https://www.youtube.com/watch?v=b-DqO_D1g1g 14:44 01:20
https://www.youtube.com/watch?v=DPAgWKseVhg 12:53 01:00
Meaning that the first part should take the video from the first URL from its beginning and last for a minute, the second part should be taken from the second URL starting at 14:44 (minutes:seconds) and last one minute and 20 seconds, and so forth.
Then all these parts should be concatenated to a single video.
I'm trying to write a script (I use ubuntu and fluent in several scripting languages) that does that, and I tried to use youtube-dl command line package and ffmpeg, but I couldn't find the right options to achieve what I need.
Any suggestions will be appreciated.
Considering the list of videos is in foo.txt, and the output video to be foo.mp4, this bash script should do the job:
eval $(cat foo.txt | while read u s d; do echo "cat <(youtube-dl -q -o - $u | ffmpeg -v error -hide_banner -i - -ss 00:$s -t 00:$d -c copy -f mpegts -);"; done | tee /dev/tty) | ffmpeg -i - -c copy foo.mp4
This is using a little trick with process substitution and eval to avoid intermediate files, container mpegts to enable simple concat protocol, and tee /dev/tty just for debugging.
I have tested with youtube-dl 2018.09.26-1 and ffmpeg 1:4.0.2-3.
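If the eval one-liner is hard to follow or debug, an equivalent (untested) multi-step variant writes each snip to a numbered intermediate .ts file first; the snipNN.ts and snips.txt names are illustrative:

```shell
# Read "URL START LENGTH" per line from foo.txt, cut each snip to MPEG-TS,
# then concatenate all snips into foo.mp4 with the concat demuxer.
i=0
while read -r url start len; do
  i=$((i + 1))
  ts=$(printf 'snip%02d.ts' "$i")   # zero-padded so the glob below sorts correctly
  youtube-dl -q -o - "$url" \
    | ffmpeg -v error -i - -ss "00:$start" -t "00:$len" -c copy -f mpegts "$ts"
done < foo.txt
printf "file '%s'\n" snip*.ts > snips.txt
ffmpeg -f concat -i snips.txt -c copy foo.mp4
```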
I have a camera taking time-lapse shots every 2–3 seconds, and I keep a rolling record of a few days' worth. Because that's a lot of files, I keep them in subdirectories by day and hour:
images/
2015-05-02/
00/
2015-05-02-0000-02
2015-05-02-0000-05
2015-05-02-0000-07
01/
(etc.)
2015-05-03/
I'm writing a script to automatically upload a timelapse of the sunrise to YouTube each day. I can get the sunrise time from the web in advance, then go back after the sunrise and get a list of the files that were taken in that period using find:
touch -d "$SUNRISE_START" sunrise-start.txt
touch -d "$SUNRISE_END" sunrise-end.txt
find images/"$TODAY" -type f -anewer sunrise-start.txt ! -anewer sunrise-end.txt
Now I want to convert those files to a video with ffmpeg. Ideally I'd like to do this without making a copy of all the files (because we're talking ~3.5 GB per hour of images), and I'd prefer not to rename them to something like image000n.jpg because other users may want to access the images. Copying the images is my fallback.
But I'm getting stuck sending the results of find to ffmpeg. I understand that ffmpeg can expand wildcards internally, but I'm not sure that this is going to work where the files aren't all in one directory. I also see a few people using find's -exec option with ffmpeg to do batch conversions, but I'm not sure if this is going to work with image-sequence input (as opposed to, say, converting 1000 images into 1000 single-frame videos).
Any ideas on how I can connect the two—or, failing that, a better way to get files in a date range across several subdirectories into ffmpeg as an image sequence?
Use the concat demuxer with a list of files. The list format is:
file '/path/to/file1'
file '/path/to/file2'
file '/path/to/file3'
Basic ffmpeg usage:
ffmpeg -f concat -i mylist.txt ... <output>
Concatenate [FFmpeg wiki]
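To bridge back to the question, the find output can be turned into that list directly. An untested sketch: it assumes the timestamp-based file names sort chronologically and contain no single quotes, and mylist.txt/sunrise.mp4 are illustrative names. Each image gets a fixed display duration via the concat demuxer's duration directive:

```shell
# Build the concat list from the find results (sorted by name = by time),
# showing each image for 1/25 s, then encode the timelapse.
find images/"$TODAY" -type f -anewer sunrise-start.txt ! -anewer sunrise-end.txt \
  | sort | while read -r f; do
      printf "file '%s'\nduration 0.04\n" "$f"
    done > mylist.txt
ffmpeg -f concat -safe 0 -i mylist.txt -c:v libx264 sunrise.mp4
```

-safe 0 is needed because the list contains relative paths with subdirectories.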
Use -pattern_type glob for this:
ffmpeg -f image2 -r 25 -pattern_type glob -i '*.jpg' -an -c:v libx264 -r 25 timelapse.mp4
ffmpeg probably uses the same file name globbing facility as the shell, so all valid file name globbing patterns should work. Specifically in your case, a pattern of images/201?-??-??/??/201?-??-??-????-?? will expand to all files in question e.g.
ls -l images/201?-??-??/??/201?-??-??-????-??
ffmpeg ... 'images/201?-??-??/??/201?-??-??-????-??' ...
Note the quotes around the pattern in the ffmpeg invocation: you want to pass the pattern verbatim to ffmpeg to expand the pattern into file names, not have the shell do the expansion.
What would be the simplest ffmpeg command to truncate a video to the first two minutes (or do nothing if the video is less than two minutes)? It can do a pass-through on any of the initial video settings.
This is an adapted version of David542's post since my edit was rejected as "completely superfluous or actively harm[ing] readability".
For extractions, it is essential to add the -c copy flag,
ffmpeg -i in.mov -ss 0 -t 120 -c copy out.mov
which is shorthand for -vcodec copy -acodec copy.
Otherwise ffmpeg re-encodes the selection, which is about 1000 times slower on my machine and can also change the output quality because of the default encoder settings.
Edit
As ack-inc indicated, it makes a difference whether -ss is placed before the -i (as an input option) or after it (as an output option).
In conjunction with stream copying (-c copy), ffmpeg will seek to the closest seek point before the position (e.g. the closest keyframe), if used as an input option. This is very fast but may be inaccurate depending on the codec and codec settings used.
If used as an output option, ffmpeg first decodes the complete stream up to the position. This uses more CPU resources and therefore takes longer, but the cut is now precise.
The latter takes about 16 times longer on my computer, but in the end it only takes about 1.5 seconds to seek through 1.5 hours of FHD stream (in conjunction with -c copy). In other words, computers are now sufficiently fast that this is the method of choice for the vast majority of end users.
Without stream copying -- when transcoding -- the difference was a factor of about 500 (or 6 min at 100% CPU).
$ ffmpeg -i in.mov -ss 0 -t 120 out.mov
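The two placements side by side, as untested sketches (15 s in, 120 s kept; file names illustrative):

```shell
# -ss as input option: fast, but with -c copy the cut lands on the nearest keyframe.
ffmpeg -ss 0:15 -i in.mov -t 120 -c copy out_fast.mov
# -ss as output option: the stream is decoded up to 0:15, so the cut is
# frame-accurate (re-encoding, hence slower).
ffmpeg -i in.mov -ss 0:15 -t 120 out_accurate.mov
```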