ffmpeg: cut the video and additionally output the audio

I can cut and output a video with:
ffmpeg -y -ss 00:00:05 -t 240 -i input.mov -to 10 -qscale 0 output.mov
I can also output the audio of the input file by adding output_audio.wav at the end, like:
ffmpeg -y -ss 00:00:05 -t 240 -i input.mov -to 10 -qscale 0 output.mov output_audio.wav
BUT:
The video output is trimmed to the segment specified in the command, but the audio output contains the entire input.
Is it possible to additionally output the audio of JUST the segment defined within the command-line?

ffmpeg -y -ss 00:00:05 -to 10 -i input.mov output.mov output_audio.wav
You can use -to as an input option. This will limit the input duration and therefore the outputs as well.
-qscale 0 is ignored by libx264.
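Input-side -to is a relatively recent addition; on older builds the same trim can be expressed with an input -t instead. A sketch computing the duration as end minus start (the echoed command is illustrative, not executed here):

```shell
# Hypothetical helper: derive -t from start/end for ffmpeg builds
# that lack input-side -to. Assumes single-digit seconds for brevity.
start=5
end=10
dur=$((end - start))   # 10 - 5 = 5 seconds
echo ffmpeg -y -ss "00:00:0$start" -t "$dur" -i input.mov output.mov output_audio.wav
```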

Related

How to add a hard code of subs to this filter_complex

ffmpeg -ss 00:11:47.970 -t 3.090 -i "file.mkv" -ss 00:11:46.470 -t 1.500 -i "file" -ss 00:11:51.060 -t 0.960 -i "file.mkv" -an -c:v libvpx -crf 31 -b:v 10000k -y -filter_complex "[0:v:0][0:a:0][1:v:0][1:a:0][2:v:0][2:a:0]concat=n=3:v=1:a=1[outv][outa];[outv]scale='min(960,iw)':-1[outv];[outv]subtitles='file.srt'[outv]" -map [outv] file_out.webm -map [outa] file.mp3
I have a filter that takes three different points in a file, concats them together, and scales them down. This part works.
I'm looking to add a subtitle burn-in step to the filter_complex, rendering the subs at the exact timings from a file that I specify. When I use the above code it doesn't work.
The subtitles filter is receiving a concatenated stream. It does not contain the timestamps from the original segments. So the subtitles filter starts from the beginning. I'm assuming this is the problem when you said, "it doesn't work".
The simple method to solve this is to make temporary files then concatenate them.
Output segments
ffmpeg -ss 00:11:47.970 -t 3.090 -copyts -i "file.mkv" -filter_complex "scale='min(960,iw)':-1,subtitles='file.srt',setpts=PTS-STARTPTS;asetpts=PTS-STARTPTS" -crf 31 -b:v 10000k temp1.webm
ffmpeg -ss 00:11:46.470 -t 1.500 -copyts -i "file.mkv" -filter_complex "scale='min(960,iw)':-1,subtitles='file.srt',setpts=PTS-STARTPTS;asetpts=PTS-STARTPTS" -crf 31 -b:v 10000k temp2.webm
ffmpeg -ss 00:11:51.060 -t 0.960 -copyts -i "file.mkv" -filter_complex "scale='min(960,iw)':-1,subtitles='file.srt',setpts=PTS-STARTPTS;asetpts=PTS-STARTPTS" -crf 31 -b:v 10000k temp3.webm
The timestamps are reset when fast seek is used (-ss before -i). -copyts will preserve the timestamps so the subtitles filter knows where to start the subtitles.
Make input.txt:
file 'temp1.webm'
file 'temp2.webm'
file 'temp3.webm'
Concatenate with the concat demuxer:
ffmpeg -f concat -i input.txt -c copy output.webm
-c copy enables stream copy mode so it avoids re-encoding to concatenate.
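For more than a handful of segments, the concat list can be generated instead of written by hand. A sketch assuming the temp1..temp3.webm names from above:

```shell
# Generate the concat-demuxer list file for the temporary segments.
for i in 1 2 3; do
  printf "file 'temp%d.webm'\n" "$i"
done > input.txt
```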

FFMPEG: how to take 3000 snapshots faster?

I've got a video 200 minutes long, and the timestamp for each snapshot that will be taken.
I've tried to use ffmpeg to take snapshots with the following commands.
However, it is very slow: about 10 seconds per snapshot.
Is there any way to speed this up? Thanks.
OS: win10; PC: Intel NUC8i5
ffmpeg -i 1.mp4 -ss 00:00:04 -vframes 1 000004.jpg
ffmpeg -i 1.mp4 -ss 00:00:08 -vframes 1 000008.jpg
ffmpeg -i 1.mp4 -ss 00:00:12 -vframes 1 000012.jpg
ffmpeg -i 1.mp4 -ss 00:00:16 -vframes 1 000016.jpg
ffmpeg -i 1.mp4 -ss 00:00:17 -vframes 1 000017.jpg
ffmpeg -i 1.mp4 -ss 00:00:20 -vframes 1 000020.jpg
ffmpeg -i 1.mp4 -ss 00:00:24 -vframes 1 000024.jpg
ffmpeg -i 1.mp4 -ss 00:00:26 -vframes 1 000026.jpg
ffmpeg -i 1.mp4 -ss 00:00:28 -vframes 1 000028.jpg
ffmpeg -i 1.mp4 -ss 00:00:32 -vframes 1 000032.jpg
ffmpeg -i 1.mp4 -ss 00:00:36 -vframes 1 000036.jpg
ffmpeg -i 1.mp4 -ss 00:00:38 -vframes 1 000038.jpg
ffmpeg -i 1.mp4 -ss 00:00:43 -vframes 1 000043.jpg
If the timestamps are at irregular intervals (as appears to be the case from your example) you can use a select filter:
ffmpeg -i 1.mp4 -filter:v \
"select='lt(prev_pts*TB\,4)*gte(pts*TB\,4) \
+lt(prev_pts*TB\,12)*gte(pts*TB\,12) \
+lt(prev_pts*TB\,17)*gte(pts*TB\,17) \
+lt(prev_pts*TB\,28)*gte(pts*TB\,28) \
+lt(prev_pts*TB\,43)*gte(pts*TB\,43)'" \
-vsync drop out/%03d.jpg
This will grab the frame at the specified timestamp and if there is not a frame at that precise timestamp, it will grab the following frame.
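With thousands of timestamps, the expression above is tedious to write by hand; it can be generated. A sketch assuming the timestamps are whole seconds in a space-separated list (the variable names are illustrative):

```shell
# Build the select-filter expression from a list of timestamps.
# Each term passes the first frame at or after timestamp $t.
ts_list="4 12 17 28 43"
expr=""
for t in $ts_list; do
  # ${expr:+$expr+} prepends "+" only when expr is already non-empty
  expr="${expr:+$expr+}lt(prev_pts*TB\,$t)*gte(pts*TB\,$t)"
done
echo "select='$expr'"
```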
Try splitting the video (the 200-minute one) into 200 parts, then loading them one by one. Loading the whole video into RAM is expensive. Try with a small video first to see how much it improves.
I used VLC to solve this problem.
The basic idea is to use VLC's HTTP interface to make VLC jump to a specific timestamp and take a snapshot.
The RAM consumption is very low, and the speed is much faster: about 16 minutes for 3800 snapshots.
The code below is in AutoHotkey.
#Include, VLC_HTTP2.ahk
Loop, read, %A_ScriptDir%\TimeStamps.txt
{
    VLC_JumpTime := A_LoopReadLine
    VLCHTTP2_Jumpto(VLC_JumpTime)
    Sleep, 100
    VLCHTTP2_Snapshot()
}
return

VLCHTTP2_Jumpto(VLC_JumpToTime)
{
    VLC_CurrentTime := VLCHTTP2_TimeSeconds()
    SeekBackwardTime := VLC_CurrentTime - VLC_JumpToTime
    if (SeekBackwardTime > 0)
        VLCHTTP2_JumpBackward(SeekBackwardTime)
    else if (SeekBackwardTime < 0)
    {
        SeekForwardTime := -SeekBackwardTime
        VLCHTTP2_JumpForward(SeekForwardTime)
    }
}

FFMPEG cannot encode video with high speed change

Hi, I am trying to speed up and trim clips with FFmpeg version 4.2.2. Is there a limit to how much you can speed up a clip? If I speed up a clip beyond a certain point, the output file cannot be opened.
I have tried two methods without any luck: 1. using the setpts filter, and 2. inputting the file at a faster frame rate.
1.
ffmpeg -i GH012088.MP4 -y -ss 18 -t 0.48 -an -filter:v "setpts=0.096*PTS" -r 25 output.MP4
2.
ffmpeg -r 312.1875 -i GH012088.MP4 -y -ss 18 -t 0.48 -r 25 -an output.MP4
I am trying to create a clip that starts at 1 second into the original, plays at 10.4166x speed, and lasts for 0.48 seconds.
What am I doing wrong?
Thanks
Use
ffmpeg -ss 1 -i GH012088.MP4 -y -t 0.48 -an -filter:v "setpts=0.096*PTS" -r 25 output.MP4
The seek has to be on the input side, before frames are retimed. The -t has to be on output side, after frames are retimed.
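The setpts multiplier is simply the reciprocal of the desired speed; for the 10.4166x from the question, 1/10.4166 ≈ 0.096. A quick sketch of that arithmetic:

```shell
# The setpts factor for an N-times speed-up is 1/N.
speed=10.4166
factor=$(awk -v s="$speed" 'BEGIN { printf "%.3f", 1/s }')
echo "setpts=$factor*PTS"
```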
Does the movie have sound?
If yes, then we have to speed up audio and video in sync by combining filters:
ffmpeg -i video.avi -filter_complex "[0:v]setpts=0.5*PTS[v];[0:a]atempo=2.0[a]" -map "[v]" -map "[a]" -f avi video1.avi
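Note that on older ffmpeg builds atempo only accepts factors between 0.5 and 2.0 per instance, so larger speed-ups have to be chained (e.g. 4x = atempo=2.0,atempo=2.0). A sketch that builds such a chain for a given speed (variable names are illustrative):

```shell
# Build a chained atempo filter string for speeds above 2.0.
# Halve the remaining speed with atempo=2.0 until it fits in range.
speed=4
chain=""
while awk -v s="$speed" 'BEGIN { exit !(s > 2) }'; do
  chain="${chain:+$chain,}atempo=2.0"
  speed=$(awk -v s="$speed" 'BEGIN { print s / 2 }')
done
chain="${chain:+$chain,}atempo=$speed"
echo "$chain"
```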

How to convert video to gif and mp3 that sync using ffmpeg?

I want to convert video to gif with audio. The two should match when played at the same time.
The command I use somehow generates results that are a bit off.
To create gif:
ffmpeg -ss 00:00:10 -i input.mp4 -t 4 -pix_fmt rgb24 -r 10 -filter:v "scale=-1:300" out.gif
To create mp3:
ffmpeg -ss 00:00:10 -i input.mp4 -t 4 out.mp3
I'm guessing this has something to do with the slicing.
Untested: you could try one of the two options below. If it's still not working, please provide a short clip (MP4 link) that can be tested, to give you the required solution...
Option 1) Try using -itsoffset instead of -t...
ffmpeg -ss 00:00:10 -itsoffset 4 -i input.mp4 -pix_fmt rgb24 -r 10 -filter:v "scale=-1:300" out.gif
ffmpeg -ss 00:00:10 -itsoffset 4 -i input.mp4 out.mp3
Option 2) Avoid issue of non-matching times for keyframes (of video track vs audio track)...
First trim the video (this gets whatever audio is available at the video keyframe nearest to your time):
ffmpeg -ss 00:00:10 -i input.mp4 -t 4 trimmed.mp4
Then use trimmed MP4 (will have synced audio) as source for your output GIF and MP3.
ffmpeg -i trimmed.mp4 -pix_fmt rgb24 -r 10 -filter:v "scale=-1:300" out.gif
ffmpeg -i trimmed.mp4 out.mp3

ffmpeg concat video and image issue

I have a video that's 190 seconds long.
I want to show part of the video with audio and a watermark (from the 28th second to the 154th second),
then fade the video out and show an image for 5 seconds at the end.
Everything was working fine until I added concat and endpic.jpg.
Here is the script I wrote, but it's not working. It's really driving me crazy.
ffmpeg -y -ss 28 -i input.mp4 -loop 1 -i watermark.png -loop 1 -t 5 -i endpic.jpg -f lavfi -t 5 -i anullsrc -filter_complex "[1]fade=in:st=3:d=1:alpha=1,fade=out:st=20:d=1:alpha=1[w]; [0][w]overlay=main_w-overlay_w-10:main_h-overlay_h-10[sonh];[sonh]fade=out:st=154:d=1[sonhh];[sonhh:v][sonhh:a][2:v][3:a]concat=n=2:v=1:a=1[v][a]" -t 155 -map "[v]" -map "[a]" output.mp4
Use
ffmpeg -y -ss 28 -to 154 -i input.mp4 -loop 1 -t 22 -i watermark.png -loop 1 -t 5 -i endpic.jpg -f lavfi -t 5 -i anullsrc -filter_complex "[1]fade=in:st=3:d=1:alpha=1,fade=out:st=20:d=1:alpha=1[w]; [0][w]overlay=main_w-overlay_w-10:main_h-overlay_h-10,fade=out:st=154:d=1[sonhh];[sonhh][0:a][2:v][3:a]concat=n=2:v=1:a=1[v][a]" -t 155 -map "[v]" -map "[a]" output.mp4
If you don't limit the input duration, ffmpeg will feed all 190 s of the input, and due to -t 155 the output will never reach the end of the input and the start of endpic.
Link labels assigned within a filtergraph don't represent the original inputs, so [sonhh:v][sonhh:a] isn't valid. The input audio remains [0:a].
Input -to was added a few months ago, so ensure you're using a recent build of ffmpeg.
