I'm trying to create a video quiz that will contain small parts of other videos concatenated together (the idea being that people will identify which videos these short snips are taken from).
For this purpose I created a file that contains the URL of each video, the starting time of the "snip", and its length. For example:
https://www.youtube.com/watch?v=5-j6LLkpQYY 00:00 01:00
https://www.youtube.com/watch?v=b-DqO_D1g1g 14:44 01:20
https://www.youtube.com/watch?v=DPAgWKseVhg 12:53 01:00
This means that the first part should be taken from the first URL, starting at the beginning and lasting one minute; the second part should be taken from the second URL, starting at 14:44 (minutes:seconds) and lasting one minute and 20 seconds; and so forth.
Then all these parts should be concatenated into a single video.
I'm trying to write a script that does this (I use Ubuntu and am fluent in several scripting languages). I tried the youtube-dl command-line package and ffmpeg, but I couldn't find the right options to achieve what I need.
Any suggestions will be appreciated.
Assuming the list of videos is in foo.txt and the output video is foo.mp4, this bash script should do the job:
eval $(cat foo.txt | while read u s d; do echo "cat <(youtube-dl -q -o - $u | ffmpeg -v error -hide_banner -i - -ss 00:$s -t 00:$d -c copy -f mpegts -);"; done | tee /dev/tty) | ffmpeg -i - -c copy foo.mp4
This uses a little trick with process substitution and eval to avoid intermediate files, the mpegts container to enable the simple concat protocol, and tee /dev/tty just for debugging.
I have tested with youtube-dl 2018.09.26-1 and ffmpeg 1:4.0.2-3.
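If the eval trick is hard to follow, here is a rougher but more readable sketch of the same idea (untested, with snipNNN.ts as placeholder names of my choosing): each snip is cut into an intermediate mpegts file, and the pieces are then joined with the same concat protocol.
#!/bin/bash
# Sketch: cut each snip into an intermediate mpegts file (snip000.ts, snip001.ts, ...)
i=0
while read -r url start dur; do
  out=$(printf 'snip%03d.ts' "$i")
  youtube-dl -q -o - "$url" </dev/null \
    | ffmpeg -v error -hide_banner -i - -ss "00:$start" -t "00:$dur" -c copy -f mpegts "$out"
  i=$((i+1))
done < foo.txt
# Join the intermediate files with the concat protocol and remux into the final mp4
list=$(printf '%s|' snip*.ts); list=${list%|}
ffmpeg -i "concat:$list" -c copy foo.mp4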
I am running ffmpeg in a terminal on a Mac to trim a movie file losslessly, using the following in bash:
startPosition=00:00:14.9
endPosition=00:00:52.1
ffmpeg -i mymovie.mov -ss $startPosition -to $endPosition -c copy mymovie_trimmed.mov
But that doesn't seek to the nearest keyframe and causes sync issues. See here: https://github.com/mifi/lossless-cut/pull/13
So I need to rearrange my code like this:
ffmpeg -ss $startPosition -i mymovie.mov -t $endPosition -c copy mymovie_trimmed.mov
(The -to option seems to get ignored, so I am using -t (duration) instead.) My question is: how can I reliably subtract the $startPosition variable from $endPosition to get the duration?
EDIT: I used oguz-ismail's suggestion of using gdate instead of date (after brew install coreutils):
startPosition=00:00:10.1
endPosition=00:00:50.1
x=$(gdate -d"$endPosition" +'%s%N')
y=$(gdate -d"$startPosition" +'%s%N')
duration=$(bc -lq <<<"scale=1; ($x - $y) / 1000000000")
This gives me an output of 40.1; how would I output it as 00:00:40.0?
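One possible way to format the result (just a sketch, assuming the bc output always contains a decimal point as above):
# Split the duration into whole seconds and the fractional part, then format as HH:MM:SS.s
secs=${duration%.*}
frac=${duration#*.}
printf '%02d:%02d:%02d.%s\n' $((secs/3600)) $((secs%3600/60)) $((secs%60)) "$frac"
# e.g. 40.1 -> 00:00:40.1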
I am trying to perform a batch operation to extract specific frames from multiple video files and save them as PNGs, using a bash script. I hope to do this by using ffmpeg within a bash script, supplemented by a csv file that contains the name of the input video, the specific frame number to be extracted from the input video, and the output name of the PNG file.
The video files and the csv file would all be placed within the same folder. The script can also be placed in there if necessary.
My csv - called "select.csv" - currently takes the following format (input,output,frame):
mad0.m4v,mad0_out1,9950
mad0.m4v,mad0_out2,4500
mad1.m4v,mad1_out1,3200
My current script - called "frame.sh" - takes the following form:
#!/bin/bash
OLDIFS=$IFS
IFS=“,”
SDIR=/Users/myuser/Desktop/f-input/
cd $SDIR;
while read input output frame
do
echo "$input"
echo "$output"
echo "$frame"
input1=$input
output1=$output
frame1=$frame
ffmpeg -i "$input1" -vf select='eq(n\,'"$frame1"')' -vsync 0
"$output1".png
done < $1
IFS=$OLDIFS
This should allow me to run ./frame.sh select.csv to then process all relevant files in the "f-input" folder on my desktop and extract the specified frames.
I ended up echoing the variables read from the csv so that I could confirm they were being read correctly, because running the ffmpeg command with $input, $frame and $output directly after the read operation only ever completed the process for the first line of the csv, without progressing any further.
Essentially I would like the following to actually loop through each csv entry, instead of only the first line:
#!/bin/bash
OLDIFS=$IFS
IFS=“,”
SDIR=/Users/myuser/Desktop/f-input/
cd $SDIR;
while read input output frame
do
ffmpeg -i "$input" -vf select='eq(n\,'"$frame"')' -vsync 0 "$output".png
done < $1
IFS=$OLDIFS
Any and all advice appreciated!
Many thanks
Replace IFS=“,” with IFS=",".
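Note also that ffmpeg reads from standard input by default, so inside a while read loop it can swallow the remaining csv lines; that would explain only the first line being processed. A sketch of the loop with both fixes applied (straight quotes, plus ffmpeg's -nostdin flag so it leaves stdin alone):
#!/bin/bash
OLDIFS=$IFS
IFS=","
SDIR=/Users/myuser/Desktop/f-input/
cd "$SDIR" || exit 1
while read -r input output frame
do
    # -nostdin stops ffmpeg from consuming the rest of the csv on stdin
    ffmpeg -nostdin -i "$input" -vf select="eq(n\,$frame)" -vsync 0 "$output".png
done < "$1"
IFS=$OLDIFS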
I wrote a similar script that reads a csv and processes movies with ffmpeg.
It worked well on the first line of the csv but failed from the second line on.
I found that running ffmpeg inside the loop seems to affect the read command and trim the first character of each subsequent line.
So I ended up adding an extra "garbage" column on the left-most side of the csv and letting ffmpeg trim it.
My csv looks like:
101,movie1.mp4
102,movie2.mp4
103,movie3.mp4
...
and the (simplified) script looks like:
while IFS="," read id movie; do
ffmpeg -v quiet -s 1280x720 -i "$movie" "$id-$movie" </dev/null
done
it generates "101-movie1.mp4" for the first line of the csv just like I expect
but after the second line it generates "02-movie1.mp4" "03-movie3.mp4" and so force because ffmpeg (seems to have) trimmed the first character of the lines.
I added a garbage column as the first column, like this:
x,101,movie1.mp4
x,102,movie2.mp4
x,103,movie3.mp4
and fixed the script:
while IFS="," read garbage id movie; do
ffmpeg -v quiet -s 1280x720 -i "$movie" "$id-$movie" </dev/null
done
This worked for me.
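An alternative that avoids both the garbage column and the stdin clash is to feed the csv to read on a separate file descriptor (a sketch; movies.csv is just a placeholder name):
# read takes its input from fd 3, so ffmpeg cannot touch it via stdin
while IFS="," read -r -u 3 id movie; do
    ffmpeg -v quiet -s 1280x720 -i "$movie" "$id-$movie"
done 3< movies.csv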
I am trying to make a script to turn a bunch of timelapse images into a movie, using ffmpeg.
The latest problem is how to loop through the images in, say, batches of 500.
There could be 100 images from the day, or there could be 5000 images.
The reason for breaking this up is that I run out of memory otherwise.
Afterwards I would need to cat the resulting videos together using MP4Box to join them all...
I am entirely new to bash, but not entirely new to programming.
What I think needs to happen is this:
1) read in the folder's contents, as the images may not be consecutively named
2) send ffmpeg a list of 500 at a time to process (https://trac.ffmpeg.org/wiki/Concatenate)
2b) while you're looping through this, keep a counter of how many loops you've done
3) use the number of loops to create the MP4Box cat command line that joins them all at the end.
The basic script that works if there are only, say, 500 images is:
#!/bin/bash
dy=$(date '+%Y-%m-%d')
ffmpeg -framerate 24 -s hd1080 -pattern_type glob -i "/mnt/cams/Camera1/$dy/*.jpg" -vcodec libx264 -pix_fmt yuv420p Cam1-"$dy".mp4
MP4Box's cat command looks like:
MP4Box -cat Cam1-$dy7.mp4 -cat Cam1-$dy6.mp4 -cat Cam1-$dy5.mp4 -cat Cam1-$dy4.mp4 -cat Cam1-$dy3.mp4 -cat Cam1-$dy2.mp4 -cat Cam1-$dy1.mp4 "Cam1 - $dy1 to $dy7.mp4"
Needless to say, any help with my project is immensely appreciated.
Here is something to get you started. It sorts the individual frames into time order, and then chunks them up into chunks of 500 and loops through all the chunks:
#!/bin/bash
# User-changeable number of frames per chunk
chunksize=500
# Rename files by date/time so they collate in order
jhead -n%Y-%m-%d_%H-%M-%S *.jpg
# Remove any remnants from previous runs (which may have been longer)
rm -f chunk* sub-*mp4
# Split filename list into chunks - chunkaa, chunkab, chunkac ...
ls *jpg | split -l $chunksize - chunk
# Put 'file' keyword before each filename
sed -i.bak 's/^/file /' chunk*
n=0
for c in chunk??; do   # chunk?? (not chunk*) so the .bak backups created by sed -i.bak are skipped
# Generate zero-padded output filename so that collates for final assembly too
out=$(printf "sub-%03d.mp4" $n)
echo Processing chunk $c into sequence $out
ffmpeg -f concat -i "$c" ... "$out"
((n+=1))
done
# Final assembly of "sub-*.mp4"
ffmpeg ... sub-*mp4 ...
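The final assembly step is left as ... above; one way to fill it in (a sketch using ffmpeg's concat demuxer so the sub-videos are joined without re-encoding; sublist.txt and the output name are placeholders of my choosing) could be:
# Build a concat list from the zero-padded chunk outputs and join them losslessly
for f in sub-*.mp4; do echo "file '$f'"; done > sublist.txt
ffmpeg -f concat -i sublist.txt -c copy full-day.mp4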
I have 3 webcams set up in a building, uploading still images to a webserver. I'm using ffmpeg to encode the jpgs to mp4 video.
The directories are set up like this:
Cam1/201504
Cam1/201505
Cam2/201504
Cam2/201505
Cam3/201504
Cam3/201505
I'm using the following bash loop/ffmpeg parameters to make one video per camera, per year. This works well so far (well... except that my SSD is rapidly degrading in performance - too many simultaneous read/write operations?):
find Cam2/2013* -name "*.jpg" -print0 | xargs -0 cat | ffmpeg -f image2pipe -framerate 30 -vcodec mjpeg -i - -vcodec libx264 -profile:v baseline -level 3.0 -movflags +faststart -crf 19 -pix_fmt yuv420p -r 30 "Cam2-2013-30fps-19crf.mp4"
The individual files are named like this (confusing ffmpeg's built-in file sequencer):
Cam1_2015052413543201.jpg
Cam1_2015052413544601.jpg
Cam2_2015052413032601.jpg
Cam2_2015052413544901.jpg
I now need to create one video for an entire year across all 3 cameras, ordered by timestamp. To accomplish this, I need to sort the find results by the segment of the filename after the underscore.
What do I pipe the find output through to accomplish this? For example, the files above would be ordered like this:
Cam2_2015052413032601.jpg
Cam1_2015052413543201.jpg
Cam1_2015052413544601.jpg
Cam2_2015052413544901.jpg
Any help is very much appreciated!
sort
sort -t '_' -nk2
-t '_' # specifies that the field separator should be an underscore
-nk2   # sort numerically (n) starting from the second field (after the underscore), i.e. by the timestamp
output
Cam2_2015052413032601.jpg
Cam1_2015052413543201.jpg
Cam1_2015052413544601.jpg
Cam2_2015052413544901.jpg
To use this with the output of find -print0, keep the NUL separators and add sort's -z option:
find ... -print0 | sort -z -t '_' -nk2 | xargs -0 cat | ffmpeg ...
Use sort with the --key option; see the sort man page for details of the key format. Generally (for both coreutils and BSD sort) it is F[.C][OPTS][,F[.C][OPTS]], where F is the field and C is the character position within it. Here you want to sort from the 5th character of the first field, so --key=1.5 will do:
> echo -e 'Cam1_2015052413543201.jpg\nCam1_2015052413544601.jpg\nCam2_2015052413032601.jpg\nCam2_2015052413544901.jpg' | sort --key=1.5
Cam2_2015052413032601.jpg
Cam1_2015052413543201.jpg
Cam1_2015052413544601.jpg
Cam2_2015052413544901.jpg
Here the output of find contains not just basenames but relative paths with segments like Cam1/201505/ prepended. You can still count the number of characters and write the appropriate keydef. For instance, say the paths for the images in the example above are
Cam1/201505/Cam1_2015052413543201.jpg
Cam1/201505/Cam1_2015052413544601.jpg
Cam2/201505/Cam2_2015052413032601.jpg
Cam2/201505/Cam2_2015052413544901.jpg
Then
sort --key=1.17
will give you the correct order
Cam2/201505/Cam2_2015052413032601.jpg
Cam1/201505/Cam1_2015052413543201.jpg
Cam1/201505/Cam1_2015052413544601.jpg
Cam2/201505/Cam2_2015052413544901.jpg
I have a camera taking time-lapse shots every 2–3 seconds, and I keep a rolling record of a few days' worth. Because that's a lot of files, I keep them in subdirectories by day and hour:
images/
2015-05-02/
00/
2015-05-02-0000-02
2015-05-02-0000-05
2015-05-02-0000-07
01/
(etc.)
2015-05-03/
I'm writing a script to automatically upload a timelapse of the sunrise to YouTube each day. I can get the sunrise time from the web in advance, then go back after the sunrise and get a list of the files that were taken in that period using find:
touch -d "$SUNRISE_START" sunrise-start.txt
touch -d "$SUNRISE_END" sunrise-end.txt
find images/"$TODAY" -type f -anewer sunrise-start.txt ! -anewer sunrise-end.txt
Now I want to convert those files to a video with ffmpeg. Ideally I'd like to do this without making a copy of all the files (because we're talking ~3.5 GB per hour of images), and I'd prefer not to rename them to something like image000n.jpg because other users may want to access the images. Copying the images is my fallback.
But I'm getting stuck sending the results of find to ffmpeg. I understand that ffmpeg can expand wildcards internally, but I'm not sure that this is going to work where the files aren't all in one directory. I also see a few people using find's -exec option with ffmpeg to do batch conversions, but I'm not sure if this is going to work with image sequence input (as opposed to, say, converting 1000 images into 1000 single-frame videos).
Any ideas on how I can connect the two—or, failing that, a better way to get files in a date range across several subdirectories into ffmpeg as an image sequence?
Use the concat demuxer with a list of files. The list format is:
file '/path/to/file1'
file '/path/to/file2'
file '/path/to/file3'
Basic ffmpeg usage:
ffmpeg -f concat -i mylist.txt ... <output>
Concatenate [FFmpeg wiki]
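A sketch of how the find output from the question could feed this (the sort step, the list name mylist.txt, the output name and the encoding options are assumptions on my part; -safe 0 relaxes the demuxer's path checks in case it objects to the paths):
# Turn the sunrise file list into a concat list, one "file '...'" line per image
find images/"$TODAY" -type f -anewer sunrise-start.txt ! -anewer sunrise-end.txt | sort \
    | while IFS= read -r f; do printf "file '%s'\n" "$f"; done > mylist.txt
# Encode the image sequence without copying or renaming the originals
ffmpeg -f concat -safe 0 -i mylist.txt -c:v libx264 -pix_fmt yuv420p sunrise.mp4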
Use pattern_type glob for this:
ffmpeg -f image2 -r 25 -pattern_type glob -i '*.jpg' -an -c:v libx264 -r 25 timelapse.mp4
ffmpeg probably uses the same file name globbing facility as the shell, so all valid file name globbing patterns should work. Specifically in your case, a pattern of images/201?-??-??/??/201?-??-??-????-?? will expand to all the files in question, e.g.
ls -l images/201?-??-??/??/201?-??-??-????-??
ffmpeg ... 'images/201?-??-??/??/201?-??-??-????-??' ...
Note the quotes around the pattern in the ffmpeg invocation: you want to pass the pattern verbatim to ffmpeg to expand the pattern into file names, not have the shell do the expansion.