Use FFmpeg to split a video into equal-length segments - bash

I need to split many videos of unknown length into 2-minute chunks. I can use the accepted answer from https://unix.stackexchange.com/questions/1670/how-can-i-use-ffmpeg-to-split-mpeg-video-into-10-minute-chunks to do this, but this would require copying, pasting, and modifying many lines and manually calculating how many 2-minute parts are in each video (not a big deal for a few videos, but after a while it gets really tedious).
I have used code in the past in which all I had to do was specify the input video, the start time, the segment length, and the output name; running it as a .sh file in the same folder as the input video generated all the necessary separate videos, labeled "output_video01," "output_video02," etc. However, it somehow doesn't work on my new computer. Other answers that claim to be able to do this don't work for me either (when run as either a .bat or a .sh file). I believe the code I previously used was:
ffmpeg -i "input_video.MTS" -ss 164 -f segment -segment_time 120 -vcodec copy -reset_timestamps 1 -map 0:0 -an output_video%d.MTS
Another suggestion that doesn't work for me but apparently works for others (see https://superuser.com/a/820773/313829):
ffmpeg -i input_video.MTS -c copy -map 0 -segment_time 8 -f segment output_video%03d.MTS
I currently have Windows 10 with the Anniversary Update build, in which I have enabled Bash, but for some reason it doesn't want to work. I can't even see the error it gives me because the window closes so abruptly.

The original code (that didn't work for me) was:
ffmpeg -i input_video.MTS -c copy -map 0 -segment_time 8 -f segment output_video%03d.MTS
The code that did end up working for me:
ffmpeg -i input_video.MTS -c copy -map 0 -segment_time 8 -f segment output_video%%03d.MTS
The "%" needed to be "%%": in a .bat file, % is special (it introduces variable expansion), so a literal percent sign must be doubled for it to reach ffmpeg. Also, it worked in a .bat file (not .sh).
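Since the original goal was to split many videos without hand-editing commands, here is a minimal Python sketch that builds the per-file command, doubling the % only when targeting a .bat file (the function name and output pattern are my own, not from the thread):

```python
from pathlib import Path

def segment_command(video: Path, segment_seconds: int = 120, for_batch: bool = False) -> str:
    """Build an ffmpeg segment command for one input video."""
    # In .bat files '%' must be doubled; in .sh it is used literally.
    pattern = f"{video.stem}_part%{'%' if for_batch else ''}03d{video.suffix}"
    return (f'ffmpeg -i "{video.name}" -c copy -map 0 '
            f'-f segment -segment_time {segment_seconds} '
            f'-reset_timestamps 1 "{pattern}"')
```

Looping it over `Path('.').glob('*.MTS')` and writing the lines into a .bat or .sh file avoids the manual copy-paste.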

ffmpeg: crop video into two grayscale sub-videos; guarantee monotonical frames; and get timestamps

The need
Hello, I need to extract two regions of a .h264 video file via the crop filter into two files. The output videos need to be monochrome, with extension .mp4. The encoding (or format?) should guarantee that the video frames are organized monotonically. Finally, I need to get the timestamps for both files (which, I'd bet, are the same timestamps I would get from the input file; see below).
In the end I will be happy to do everything in one command via an elegant one liner (via a complex filter I guess), but I start doing it in multiple steps to break it down in simpler problems.
Along this path I run into many difficulties, and despite having searched in many places I don't seem to find solutions that work. Unfortunately I'm no expert in ffmpeg or video conversion, so the more I search and the more details I discover, the less I solve.
Below you find some of my attempts to work with the following options:
-filter:v "crop=400:ih:260:0,format=gray" to do the crop and the monochrome conversion
-vf showinfo possibly combined with -vsync 0 or -copyts to get the timestamps via stderr redirection &> filename
-c:v mjpeg to force monotony of frames (are there other ways?)
1. cropping each region and obtaining monochrome videos
$ ffmpeg -y -hide_banner -i inVideo.h264 -filter:v "crop=400:ih:260:0,format=gray" outL.mp4
$ ffmpeg -y -hide_banner -i inVideo.h264 -filter:v "crop=400:ih:1280:0,format=gray" outR.mp4
The issue here is that in the output files the frames are not organized monotonically (I don't understand why; how would that make sense in any video format? I can't say whether it comes from the input file).
EDIT. Maybe it is not frames but packets, as returned by av's .demux() method, that are not monotonic (see "instructions to reproduce..." below).
I was advised to run ffmpeg -i outL.mp4 outL.mjpeg afterwards, but this produces two videos that look very pixelated (at least when played with ffplay) despite being, surprisingly, 4x bigger than the input. Needless to say, I need both monotonic frames and lossless conversion.
EDIT. I acknowledge the advice to specify -q:v 1; this fixes the pixelation but produces an even bigger file, ~12x the size. Is that necessary? (see "instructions to reproduce..." below)
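For what it's worth, "not monotonic" at the packet level is expected when the stream has B-frames: packets come out of the demuxer in decode order, so their pts can legitimately jump backwards, while decoded frames should come out in presentation order. A tiny helper (my own, not from the thread) makes the check explicit:

```python
def first_nonmonotonic(pts):
    """Return the index of the first timestamp that decreases, or None."""
    for i in range(1, len(pts)):
        if pts[i] < pts[i - 1]:
            return i
    return None
```

Applied to the pk_pts and fm_pts arrays from the reproduction script below, it tells you which of the two actually violates monotonicity.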
2. getting the timestamps
I found this piece of advice, but I don't want to generate hundreds of image files, so I tried the following:
$ ffmpeg -y -hide_banner -i outL.mp4 -vf showinfo -vsync 0 &>tsL.txt
$ ffmpeg -y -hide_banner -i outR.mp4 -vf showinfo -vsync 0 &>tsR.txt
The issue here is that I don't get any output because ffmpeg claims it needs an output file.
The need to produce an output file, and the doubt that the timestamps could be lost in the previous conversions, lead me back to a first attempt at a one-liner. Here I am also testing the -copyts option, and forcing the encoding with -c:v mjpeg as per the advice mentioned above (I don't know whether it's in the right position, though):
ffmpeg -y -hide_banner -i testTex2.h264 -copyts -filter:v "crop=400:ih:1280:0,format=gray" -vf showinfo -c:v mjpeg eyeL.mp4 &>tsL.txt
This does not work: surprisingly, the output .mp4 I get is the same as the input (presumably because -vf and -filter:v are the same option, so the later -vf showinfo replaces the crop filter). If instead I put the -vf showinfo option just before the stderr redirection, I get no redirected output:
ffmpeg -y -hide_banner -i testTex2.h264 -copyts -filter:v "crop=400:ih:260:0,format=gray" -c:v mjpeg outR.mp4 -vf showinfo dummy.mp4 &>tsR.txt
In this case I get the desired timestamp output (too much of it: I will need some way to grab only the pts and pts_time data) but I have to produce a big dummy file. Worst of all, the mjpeg encoding again produces a low-resolution, very pixelated video.
I admit that the logic of how to place the options and the output files on the command line is obscure to me. The possible combinations are many; the more options I try, the more complicated it gets, and I am not getting much closer to the solution.
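On grabbing only the pts and pts_time data: showinfo writes one log line per frame containing n:, pts: and pts_time: fields, so a small parser is enough. A sketch (the regex assumes the usual showinfo line layout, which can vary slightly between ffmpeg versions):

```python
import re

# Matches e.g. "[Parsed_showinfo_0 @ 0x55] n:   3 pts:   1536 pts_time:0.064 ..."
SHOWINFO_RE = re.compile(r"n:\s*(\d+)\s+pts:\s*(-?\d+)\s+pts_time:\s*(-?[\d.]+)")

def parse_showinfo(log_text):
    """Extract (frame_number, pts, pts_time) triples from a showinfo log."""
    return [(int(n), int(pts), float(t))
            for n, pts, t in SHOWINFO_RE.findall(log_text)]
```

For example, parse_showinfo(open("tsL.txt").read()) would reduce the redirected log to just those triples.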
3. [EDIT] instructions how to reproduce this
get a .h264 video
turn it into .mp4 with the ffmpeg command $ ffmpeg -i inVideo.h264 out.mp4
run the following python cell in a jupyter-notebook
see that the packet timestamps have diffs both greater than and less than zero
%matplotlib inline
import av
import numpy as np
import matplotlib.pyplot as mpl

fname, ext = "outL.direct", "mp4"

# Packet timestamps, in demux order
cont = av.open(f"{fname}.{ext}")
pk_pts = np.array([p.pts for p in cont.demux(video=0) if p.pts is not None])

# Frame timestamps, in decode order (reopen: the container was consumed)
cont = av.open(f"{fname}.{ext}")
fm_pts = np.array([f.pts for f in cont.decode(video=0) if f.pts is not None])

print(pk_pts.shape, fm_pts.shape)

mpl.subplot(211)
mpl.plot(np.diff(pk_pts))   # packet pts diffs
mpl.subplot(212)
mpl.plot(np.diff(fm_pts))   # frame pts diffs
finally, also create the mjpeg-encoded files in various ways, and check packet monotony with the same script (also compare file sizes)
$ ffmpeg -i inVideo.h264 out.mjpeg
$ ffmpeg -i inVideo.h264 -c:v mjpeg out.c_mjpeg.mp4
$ ffmpeg -i inVideo.h264 -c:v mjpeg -q:v 1 out.c_mjpeg_q1.mp4
Finally, the question
What is a working way / the right way to do it?
Any hints, even about single steps and how to combine them correctly, will be appreciated. Also, I am not limited to the command line; I could try a more programmatic solution in Python (Jupyter notebook) instead, if someone points me in that direction.

FFmpeg extracts different number of frames when using -filter_complex together with the split filter

I am fiddling with ffmpeg, extracting JPG pictures from videos. I am splitting the input stream into two output streams with -filter_complex, because I process my videos from a direct HTTP link (free space on the VPS is scarce), and I don't want to read through the whole video twice (traffic quota is also scarce). Furthermore, I need two series of pictures: one for applying some filters (fps change, scale, unsharp, crop, scale) and then selecting from them by naked eye, and the other untouched (except for the fps change and cropping the black borders), used for further processing after selecting from the first series. I call my ffmpeg command from a Ruby script, so it contains some string interpolation / substitution in the form #{}. My working command line looked like:
ffmpeg -y -fflags +genpts -loglevel verbose -i #{url} -filter_complex "[0:v]fps=fps=#{new_fps.round(5).to_s},split=2[in1][in2];[in1]crop=iw-#{crop[0]+crop[2]}:ih-#{crop[1]+crop[3]}:#{crop[0]}:#{crop[1]},scale=#{thumb_width}:-1:flags=lanczos,unsharp,lutyuv=y=gammaval(#{gammaval})[out1];[in2]crop=iw-#{crop[0]+crop[2]}:ih-#{crop[1]+crop[3]}:#{crop[0]}:#{crop[1]}[out2]" -f #{format} -c copy #{options} -map_chapters -1 - -map '[out1]' -f image2 -q 1 %06d.jpg -map '[out2]' -f image2 -q 1 big_%06d.jpg
#{options} is set when output is MP4, then its value is "-movflags frag_keyframe+empty_moov" so I can send it to standard output without seeking capability and uploading the stream somewhere without making huge temporary video files.
So I get two series of pictures, one of them is filtered, sharpened, the other is in fact untouched. And I also get an output stream of the video on the standard output which is handled by Open3.popen3 library connecting the output stream of the input of two other commands.
The problem arises when I want to seek to a given point in the video and omit the streamed video output on STDOUT. I try to apply combined seeking: a fast seek to before the given time code, and a slow seek to the exact time code, given in floating-point seconds:
ffmpeg -report -y -fflags +genpts -loglevel verbose -ss #{(seek_to-seek_before).to_s} -i #{url} -ss #{seek_before.to_s} -t #{t_duration.to_s} -filter_complex "[0:v]fps=fps=#{pics_per_sec},split=2[in1][in2];[in1]crop=iw-#{crop[0]+crop[2]}:ih-#{crop[1]+crop[3]}:#{crop[0]}:#{crop[1]},scale=#{thumb_width}:-1:flags=lanczos,unsharp,lutyuv=y=gammaval(#{gammaval})[out1];[in2]crop=iw-#{crop[0]+crop[2]}:ih-#{crop[1]+crop[3]}:#{crop[0]}:#{crop[1]}[out2]" -map '[out1]' -f image2 -q 1 %06d.jpg -map '[out2]' -f image2 -q 1 big_%06d.jpg
Running this command I get the needed two series of pictures, but they contain different numbers of images: 233 vs. 484.
Actual values can be read from this interpolated / substituted command line:
ffmpeg -report -y -fflags +genpts -loglevel verbose -ss 1619.0443599999999 -i fabf.avi -ss 50.0 -t 46.505879999999934 -filter_complex "[0:v]fps=fps=5,split=2[in1][in2];[in1]crop=iw-0:ih-0:0:0,scale=280:-1:flags=lanczos,unsharp,lutyuv=y=gammaval(0.526316)[out1];[in2]crop=iw-0:ih-0:0:0[out2]" -map '[out1]' -f image2 -q 1 %06d.jpg -map '[out2]' -f image2 -q 1 big_%06d.jpg
Detailed log can be found here: http://www.filefactory.com/file/1yih17k2hrmp/ffmpeg-20160610-223820.txt
Before the last line it shows 188 duplicated frames.
I also tried passing the "-vsync 0" option, but it didn't help. When I generate the two series of images in two consecutive steps, with two different command lines, no problem arises; I get the same number of pictures in both series, of course. So my question is: how can I use the latter command line, generating the two series of images with only one read / parse of the remote video file?
You have to replicate the -ss -t options for the 2nd output as well i.e.
...-f image2 -q 1 %06d.jpg -map '[out2]' -ss 50 -t 46.5 -f image2 -q 1 big_%06d.jpg
Each output option (those not placed before -i) applies only to the output that immediately follows it.
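The frame counts in the question are consistent with this: at fps=5 over the trimmed 46.5 s window, roughly 233 frames are expected, which is what the output with -t applied produced; the other output simply kept going. A quick sanity check (my own arithmetic, not from the log):

```python
import math

def expected_frames(duration_s, fps):
    """Approximate frame count the fps filter yields over a trimmed duration."""
    return math.ceil(duration_s * fps)

# 46.50588 s at 5 fps -> 233 frames, matching the smaller image series.
assert expected_frames(46.505879999999934, 5) == 233
```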

Concatenate TS files with correct timestamps

I'm trying to merge multiple TS chunk files into one single file, without any loss of quality or re-encoding. The files are taken from a live stream; however, I'm trying to merge them in a different order, not the order in which they were streamed.
Example of files:
0000000033.ts
0000000034.ts
0000000039.ts
0000000044.ts
I tried:
cat 0000000033.ts 0000000034.ts 0000000039.ts 0000000044.ts >combined.ts
and
ffmpeg -i "concat:0000000033.ts|concat:0000000034.ts|concat:0000000039.ts|concat:0000000044.ts" -c copy -bsf:a aac_adtstoasc output.mp4
This kinda works; however, instead of being 4 seconds long, the result is around 15. It plays like this:
[first 2 clips]
[5 secs pause]
[39.ts]
[5 secs pause]
[44.ts]
[done]
This happens with both the cat and the ffmpeg versions. So it seems the TS chunks contain timestamps from the stream that are being used.
How can I fix that to make it one continuous clip?
The chunks here are more of an example, the chunks will be dynamically selected.
If you have a long list of TS files, you can create a playlist, a file containing a list of the TS files in this line format:
file 'seg-37-a.ts'
These commands produce such a file, with the TS files sorted numerically.
delimiterBeforeFileNumber="-"
ls |egrep '[.]ts$' \
|sort "-t$delimiterBeforeFileNumber" -k2,2n \
|sed -r "s/(.*)/file '\1'/" >ts.files.txt
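If the shell pipeline's sort fields feel fragile, the same playlist can be produced in Python by sorting on the first number found in each filename (my own sketch; adjust the regex if your names embed other digits first):

```python
import re

def concat_playlist(filenames):
    """Return ffmpeg concat-demuxer lines, sorted numerically by filename."""
    def seg_number(name):
        m = re.search(r"\d+", name)
        return int(m.group()) if m else -1
    return "\n".join(f"file '{n}'" for n in sorted(filenames, key=seg_number))
```

Writing its result for glob.glob("*.ts") into ts.files.txt gives the same playlist.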
Then the single file can be created by reading the playlist with the -f concat demuxer via ffmpeg's -i option.
ffmpeg -f concat -i ts.files.txt -c copy tsw.014.ts.mp4
Haven't checked whether this works with the concat protocol, but you need to generate a new set of timestamps.
ffmpeg -i "concat:0000000033.ts|0000000034.ts|0000000039.ts|0000000044.ts" \
-c copy -bsf:a aac_adtstoasc -fflags +genpts output.mp4
TS files can actually be merged with the Windows copy command. The following merges every TS file in the current folder; then, once you have a single .ts, transmux to MP4 without re-encoding. I can confirm the video duration comes out correct, unlike with ffmpeg's concat.
copy /b *.ts all.ts
ffmpeg -i all.ts -c copy all.mp4
In recent versions of ffmpeg, a bitstream filter called setts has appeared that lets you fix timestamps without transcoding. Try it; in my case it helped:
-bsf "setts=PTS-STARTPTS;DTS-STARTDTS"
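To see what those expressions do: STARTPTS/STARTDTS are the first timestamps the filter sees, so every packet's pts/dts is shifted to make the stream start at zero. A conceptual illustration (not the filter's actual code):

```python
def rebase(timestamps):
    """Shift timestamps so the stream starts at zero, like PTS-STARTPTS."""
    if not timestamps:
        return []
    start = timestamps[0]
    return [t - start for t in timestamps]
```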
I am combining a lot of MPEG-TS chunks using Unix cat, and then transmuxing to an MP4 container:
ffmpeg -abort_on empty_output_stream -hide_banner -loglevel repeat+level+error -progress pipe:1 -i pipe:0 -map 0 -c copy -bsf:a aac_adtstoasc -bsf "setts=PTS-STARTPTS;DTS-STARTDTS" -f mp4 -movflags faststart -y out.mp4
Are the examples accurate in file numbers? 0033.ts and 0034.ts play together, but it takes 5 secs to get to 0039.ts and then another 5 to 0044.ts; so 0034 + 5 secs = 0039, and + 5 secs = 0044. Are you joining them in their proper order?
Sorry, I misread the question. But regarding your problem: once you have the 15-sec clip, there is a program called FLV Editor Lite from Moyea which will take in a .mp4 file, convert it to .flv, allow you to cut the excess time out of the file, and export it as one file; but then you need to convert back to .mp4 again.

ffmpeg to split mp4 file into segments... after first segment, audio unsynced

I've used the ffmpeg command line shown in this question to split MKV files perfectly for a long time. Now I have some MP4 files that I'd like to split, and at first it seemed to work, but every subsequent segment after the first has the audio out of sync, by several seconds!
I've tried forcing keyframes (advice I found on some other sites) and that didn't help.
I tried a different program entirely (Avidemux) and it was able to split the file with proper output, but it was a LOT slower, taking upwards of 3 minutes vs. less than 2 seconds with ffmpeg. With Avidemux I was able to determine the exact position of the I-frame where I wanted to split, so, thinking that might be the syncing problem, I tried that exact position (i.e. 00:12:17.111 instead of 00:12:16 or whatever), but that didn't help either.
Is there an option I'm missing with ffmpeg to get it to properly sync audio to the video when splitting?
I'm not sure I understand WHY, but the issue was the order of parameters.
In the linked example, the command is as follows:
ffmpeg -i input.avi -vcodec copy -acodec copy -ss 00:30:00 -t 00:30:00 output2.avi
Of course, I'm using MP4 instead of AVI, but otherwise I was entering the command exactly as above and (with MP4) getting an out-of-sync audio result. I accidentally stumbled onto this "fix"... if I instead enter the command like this:
ffmpeg -ss 00:30:00 -i input.mp4 -vcodec copy -acodec copy -t 00:30:00 output2.mp4
I don't get the sync issues. Why? No idea. But it works. I've tried it a few times to confirm... making only that order-of-parameters change corrects the issue.

Can you splice a 1 min clip out of a larger file, without transcoding it?

I have a site that allows people to upload large video files in various formats (avi, mp4, mkv and flv). I need to generate a 1 minute "sample" from the larger file that has been uploaded, and the sample needs to be in the same format, have the same frame dimensions and bit-rate as the original file. Is there a way to simply cut out a section of the file into a new file? Preferably in ffmpeg (or any other tool if ffmpeg is impossible).
First you'll want to understand how video files actually work. Here's a set of tutorials explaining that: Overly Simplistic Guide to Internet Video.
Then, you can try a variety of tools that may help with slicing out a sample. One is flvtool (if your input is FLV); another is FFmpeg. With FFmpeg you can specify a start time and duration, and it will attempt to cut out just what you ask for (but it will have to find the nearest keyframe to begin slicing at).
Here's the FFmpeg command to read a file called input.flv, start 15 seconds into the video, cut out the next 60 seconds, but otherwise keep the same parameters for the audio codec and video codec, and write it to an output file:
ffmpeg -i input.flv -ss 15 -t 60 -acodec copy -vcodec copy output.flv
Finally, if you want, you can write code in C or C++ (using FFmpeg's libav libraries) or Java (using Xuggler) to do this programmatically, but that's pretty advanced for your use case.
If you are having problems keeping audio and video synced up, as I was, the following may help (found on another website):
ffmpeg -sameq -i file.flv -ss 00:01:00 -t 00:00:30 -ac 2 -r 25 -copyts output.flv
As Evan notes, the approach in the accepted answer can result in loss of A/V sync. However his solution is not correct because -sameq was removed.
As stated at https://trac.ffmpeg.org/wiki/Seeking the -ss option should come before -i not after. This fixed the issue for me.
Another option is to use the -fs switch. Example:
ffmpeg -i ip.mkv -fs 500M -c copy ~/Movies/reservoir/carbohydrates.mkv
This extracts 500 megabytes (500×1000×1000 bytes + 'muxing overhead') from the selected source, based on file size, as you can tell.
