Can ffmpeg seek when using an input stream (or url)? - ffmpeg

I've been using ffmpeg quite a lot in the past few weeks, and recently I've encountered a very annoying issue: when I use ffmpeg with an input stream (usually just a URL as the input) and try to set a start time (with the -ss option), I always get a warning that says "could not seek to position: XXX".
Then ffmpeg just starts to download the file, and it outputs nothing until it has downloaded enough data and reached my desired start time.
I'll give an example:
I use this command to execute ffmpeg:
ffmpeg -ss 50 -re -i https://ascent.usbank.com/acp/videos/041114ascent.flv -b:a 128k -ac 2 -acodec libvorbis -b:v 1024k -vcodec libtheora -strict 2 -preset ultrafast -tune zerolatency -pix_fmt yuv420p -f ogg pipe:1
and I get the warning message
https://ascent.usbank.com/acp/videos/041114ascent.flv: could not seek to position 50.000
Then, it takes about 30 seconds until ffmpeg starts to output data to stdout. And when I try this with longer videos (and longer seek times), it takes even longer.
My question is, what can I do? I guess it's impossible for ffmpeg to seek when it hasn't got the whole input stream... Am I wrong? Or is there any other solution?
Of course, I'm trying to avoid downloading the entire file from the web...
Thanks in advance!
Roee.

I guess you can't really do anything about it other than buffering the FLV locally and (eventually) seeking within that.
Whether or not an HTTP resource allows seeking largely depends on the capabilities of the server, unfortunately...
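If you want to check what the server is capable of, one rough way is to look at the response headers for the URL; this is just a sketch using curl with the URL from the question:
curl -sI https://ascent.usbank.com/acp/videos/041114ascent.flv | grep -iE 'accept-ranges|content-length'
If the server answers with Accept-Ranges: bytes, ffmpeg's HTTP input can in principle request the byte offset it needs for -ss; if not, it has no choice but to read from the start. Even with range support, seeking by time still depends on the container having an index ffmpeg can use, so this check alone doesn't guarantee a fast seek.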

Related

ffmpeg: How to keep audio synced when doing many (100) cuts with filter select='between(t,start,stop)+between...'

I am cutting out silent parts of a 45 minute video (a lecture).
To do this, I use a filter to select, say one hundred, non-silent parts (I already know their start and end times).
ffmpeg -i in.mp4
-vf "select='between(t,start_1,stop_1)+...+between(t,start_100,stop_100)', setpts=N/FRAME_RATE/TB"
-af "aselect='between(t,start_1,stop_1)+...+between(t,start_100,stop_100)', asetpts=N/SR/TB"
-c:a aac -c:v libx264 out.mp4
It works, but at the end of the video the images are delayed relative to the audio.
After reading this answer I also added
-shortest -avoid_negative_ts make_zero -fflags +genpts
at the end of the command. It didn't help.
As audio and video are concatenated independently, I'm not surprised that tiny timing errors due to the finite frame rate add up.
Is there a solution that doesn't involve saving every non-silent part as a file?
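For comparison, one approach that avoids writing each part to a file is to cut the segments with trim/atrim and rejoin them with the concat filter, so that audio and video are re-timed together per segment rather than independently. This is only a sketch with two segments, reusing the same start_n/stop_n placeholders as in the command above; with 100 segments the filtergraph would have to be generated by a script:
ffmpeg -i in.mp4 -filter_complex "[0:v]trim=start_1:stop_1,setpts=PTS-STARTPTS[v0];[0:a]atrim=start_1:stop_1,asetpts=PTS-STARTPTS[a0];[0:v]trim=start_2:stop_2,setpts=PTS-STARTPTS[v1];[0:a]atrim=start_2:stop_2,asetpts=PTS-STARTPTS[a1];[v0][a0][v1][a1]concat=n=2:v=1:a=1[v][a]" -map "[v]" -map "[a]" -c:v libx264 -c:a aac out.mp4
Because concat rebuilds the timestamps of each video/audio pair together, the per-cut rounding shouldn't accumulate the way it can with two independent select filters; this is an alternative sketch, though, not a guaranteed fix.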

Converting multiple ffmpeg commands into one line (burn subtitle & watermark)

I'm first burning the subtitles of an MKV and then adding a watermark, and converting one video this way takes very long. It takes about 2x the time, I guess. For example, on my current server it takes 30 minutes for each command. My server may not be good enough, but I was wondering if there is a way to do this in one command instead? Will it affect the speed? I really have almost zero knowledge of ffmpeg.
Here is the command for burning the subtitles. I'm using Python to run this:
ffmpeg -i /Users/Test/Desktop/test.mkv -vf subtitles=/Users/Test/Desktop/test.mkv -c:v libx264 -c:a aac -preset ultrafast -strict -2 /Users/Test/Desktop/test.mp4
And here is the command for adding the watermark:
ffmpeg -i /Users/Test/Desktop/test.mp4 -i /Users/Test/Desktop/watermark-logo.png -filter_complex "[1][0]scale2ref=w='iw*10/100':h='ow/mdar'[wm][vid]; [vid][wm]overlay=main_w-overlay_w-5:main_h-overlay_h-5" /Users/Test/Desktop/output.mp4
If there are more ways to speed this up then kindly let me know. All I want is to do this faster while still getting the best result.
Thank you.
First apply the subtitles to the video and then feed that to scale2ref inside the complex filtergraph.
Use
ffmpeg -i /Users/Test/Desktop/test.mkv -i /Users/Test/Desktop/watermark-logo.png -filter_complex "[0]subtitles=/Users/Test/Desktop/test.mkv[v];[1][v]scale2ref=w='iw*10/100':h='ow/mdar'[wm][vid]; [vid][wm]overlay=main_w-overlay_w-5:main_h-overlay_h-5" -preset fast /Users/Test/Desktop/output.mp4
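A note on speed: the combined command decodes and encodes the video only once, so it should take roughly the time of one of your current passes instead of both. The encoder preset and your hardware still dominate the absolute time, so if speed matters more than file size you could keep -preset ultrafast from your original command instead of -preset fast.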

FFMPEG screen capture outputting very poor and inconsistent framerate as webm with no audio

I've been testing different parameters to capture my desktop video and audio (desktop audio, not mic) and I find that no matter what settings I have, the resulting webm file's framerate is around 5fps and is horribly inconsistent. It starts at around 20fps and slowly drops over time until about 4-5fps. I'm not really sure what I'm doing wrong, but here is the basic command I'm using:
ffmpeg -y -video_size 1920x1080 -f gdigrab -framerate 60 -i desktop -c:v libvpx-vp9 -acodec libvorbis -c:a libopus -b:v 2M -threads 4 output.webm
I've tried anywhere between 30-60 fps and tested different bitrates but nothing seems to affect the output framerate.
Also, I know that acodec and c:a are for audio but I'm not sure how to specify the audio device to use.
So my issues are horrible framerate for webm and how to include desktop audio in the recording.
You can use arecord and pipe it through stdout and ffmpeg can read it from stdin.
aplay piping to arecord using a file instead of stdin and stdout
Replace the aplay command with your ffmpeg command. Don't forget to add '-i -' in ffmpeg.
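A rough sketch of that pattern, assuming an ALSA system where arecord is available (note that arecord is a Linux tool while gdigrab is the Windows screen grabber, so this only illustrates the piping idea; the format flags before -i - must match what arecord actually outputs, here 16-bit little-endian 44100 Hz stereo from -f cd):
arecord -f cd -t raw | ffmpeg -y -f gdigrab -framerate 30 -i desktop -f s16le -ar 44100 -ac 2 -i - -c:v libvpx-vp9 -b:v 2M -c:a libopus -ar 48000 output.webm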
One doubt: why are you defining the audio encoder twice?
It's impossible to say from the question why the video frame rate is low. It could be an issue with the encoder, or an issue reading the input. Remove the video encoding option and see if the issue persists. If it works fine, try some other encoders.
Use -c:v libx264 instead of -c:v libvpx-vp9. libvpx-vp9's realtime encoding quality is really bad; even regular libvpx (i.e. VP8) is much better. If you insist on using libvpx, use options like -deadline realtime and -cpu-used -4.
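A minimal sketch of the same capture using libx264 as suggested (the frame rate and CRF are just placeholder values; -pix_fmt yuv420p is added for broad player compatibility):
ffmpeg -y -f gdigrab -framerate 30 -i desktop -c:v libx264 -preset ultrafast -crf 23 -pix_fmt yuv420p output.mp4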

FFMPEG - Speed up video for time lapse - quicker/faster?

Okay, I know this question has been asked a bajillion times. However, I have one small addition to the question that I don't seem to have been able to find in my googling.
I'm certainly not a pro at FFMPEG...I've been using the standard speed up/slow down template for FFMPEG, the one I'm using is:
ffmpeg -i input.mp4 -filter:v "setpts=PTS/60" -an output.mp4
I'm currently working with an hour-long 4K/60FPS video... I want to shrink it down to about 30 seconds or so, so I'm using PTS/100, and I don't need audio... the problem is, this is taking FOREVER... which I completely expected.
But as I'm sitting here waiting for it to finish... I can't help but wonder: is there a faster/more efficient way to accomplish this? I know there are a lot of weird things about FFMPEG regarding the order of the options you use to speed up seek time, presets, etc.
You can use
ffmpeg -itsscale 0.016667 -i input.mp4 -c copy -an output.mp4
where 0.016667 is 1/60.
However, this will keep all frames, and if the input timebase doesn't have sufficient resolution, you'll have incorrect timestamps. You can work around that by creating a temp file first.
ffmpeg -i input.mp4 -c copy -video_track_timescale 90k -an temp.mp4
and then running the first command on this temp file.
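Spelled out, that second step would then look something like:
ffmpeg -itsscale 0.016667 -i temp.mp4 -c copy -an output.mp4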
This sequence of commands may be helpful to solve that issue:
ffmpeg -i source.avi -r 0.016667 image/image%05d.bmp
ffmpeg -i image/image%05d.bmp -vcodec libx264 -b:v 500k -f avi video.avi

FFmpeg: how to generate a sequence of videos with bash

I'm trying to write a .sh script that reads a folder, creates a playlist of the MP4 files, then generates one big video that is a sequence of all the videos found in the folder, and encodes it for DASH:
printf "file '%s'\n" ./*.mp4 > playlist.sh
ffmpeg -f concat -safe 0 -i playlist.sh -c copy concat.mp4
So far I have followed the official concat demuxer guide from the ffmpeg website.
Without result; the following also gives me "more than 1000 frames duplicated between videos of the sequence":
ffmpeg -f concat -i playlist.sh -c:a aac -b:a 384k -ar 48000 -ac 2 -c:v libx264 -x264opts 'keyint=50:min-keyint=50:no-scenecut' -r 25 -b:v 2400k -maxrate 2400k -bufsize 1200k -vf "scale=-1:432 " out.mp4
Thanks a lot
Sorry, I cannot comment (yet)...
Your commands are correct; I was able to concat some sample videos with them.
Do you always get the mentioned error, or also something else? And does the video work, or is no video created at all?
In most cases the input video is the problem: a wrong input format (not matching the file extension), or worse, e.g. a file ending at the wrong frame.
Perhaps you can make the video available?
PS: I needed to add -safe 0 to the second command to avoid the error [concat @ 0x7fbfd1000000] Unsafe file name './small.mp4'
Hint: Do not use file extension .sh for your list of video files. This extension is used for shell scripts, so it can be confusing. Just use .txt.
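Following that hint, the two commands from the question would become something like:
printf "file '%s'\n" ./*.mp4 > playlist.txt
ffmpeg -f concat -safe 0 -i playlist.txt -c copy concat.mp4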
UPDATE @Massimo Vantaggio
We shouldn't create new answers, but I cannot comment on yours and I also don't know how else to continue our discussion, so I'm editing my answer.
Your videos don't look very different. I can't see what's wrong with the first one.
Perhaps you could use ffprobe -report input.mp4 to get more information. Look for errors or warnings.
My assumption is still that the video was cut in a hard way (by conversion software), so the keyframes are messed up or something else is off.
You can also try to re-encode your video with ffmpeg first. After that, it should be completely compatible with ffmpeg ;)
Something like this:
ffmpeg -i small.mp4 -acodec aac -ab 192k -vcodec libx264 -vb 1024k -f mp4 output.mp4
Use the -ab and -vb values from your input video, or at least the bitrate of the input. Quality will decrease a little and the file size will increase, but it should be okay.
