I have a use case where I need to transcode an S3 file. I have two options.
Option A: Download the file locally, then run ffmpeg on it.
Option B: Provide a presigned URL as the ffmpeg input, e.g.:
"./ffmpeg -loglevel debug -y -i "https://mybucket/key?signedParams" -threads 0 -map_chapters -1 -f mp4 -movflags faststart -map 0:0 -acodec libfdk_aac -ac 2 -ar 44100 -b:a 48k -sn -vn /output.mp4"
I tried running both and compared the times, but I don't see much of a performance improvement in #B compared to #A.
I have two questions:
Is #B better in performance than #A, or are they the same?
In the case of #B, does ffmpeg wait for the complete download before it starts transcoding, or does it download and transcode simultaneously?
I tried running both and compared the times, but I don't see much of a performance improvement in #B compared to #A.
Your bottleneck is probably the transcoding, then.
Is #B better in performance than #A, or are they the same?
I imagine streaming the input (when compatible) would perform better; it's certainly less work overall. But if downloading the file isn't the bottleneck in the first place, it isn't going to matter much.
In the case of #B, does ffmpeg wait for the complete download before it starts transcoding, or does it download and transcode simultaneously?
It will transcode as it downloads/streams.
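If you want to see where the time actually goes, time each stage separately; if the download is small compared to the encode, the two options will finish in roughly the same time. A rough sketch (the bucket/key and the aws CLI invocation are placeholders, not from your setup):

# Option A: download first, then transcode the local copy
time aws s3 cp s3://mybucket/key /tmp/in.mp4
time ffmpeg -y -i /tmp/in.mp4 -map 0:0 -acodec libfdk_aac -ac 2 -ar 44100 -b:a 48k -sn -vn /tmp/a.mp4

# Option B: stream the presigned URL directly
time ffmpeg -y -i "https://mybucket/key?signedParams" -map 0:0 -acodec libfdk_aac -ac 2 -ar 44100 -b:a 48k -sn -vn /tmp/b.mp4

If the first `time` in Option A is a small fraction of the second, streaming the input can't buy you much.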
I am cutting out the silent parts of a 45-minute video (a lecture).
To do this, I use a filter to select, say, one hundred non-silent parts (I already know their start and end times):
ffmpeg -i in.mp4 \
-vf "select='between(t,start_1,stop_1)+...+between(t,start_100,stop_100)', setpts=N/FRAME_RATE/TB" \
-af "aselect='between(t,start_1,stop_1)+...+between(t,start_100,stop_100)', asetpts=N/SR/TB" \
-c:a aac -c:v libx264 out.mp4
It works, but by the end of the video the images are delayed relative to the audio.
After reading this answer I also added
-shortest -avoid_negative_ts make_zero -fflags +genpts
at the end of the command. It didn't help.
Since the audio and video are assembled independently, I'm not surprised that tiny timing errors due to the finite frame rate add up.
Is there a solution that doesn't involve saving every non-silent part as a separate file?
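One approach that keeps everything in a single command is to cut each part with the trim/atrim filters and rejoin the pieces with the concat filter, so every audio segment stays paired with its video segment instead of the two streams being assembled independently. A minimal sketch with two segments (start_N/stop_N are your known times; repeat the pattern for all one hundred parts):

ffmpeg -i in.mp4 -filter_complex \
"[0:v]trim=start=start_1:end=stop_1,setpts=PTS-STARTPTS[v1]; \
 [0:a]atrim=start=start_1:end=stop_1,asetpts=PTS-STARTPTS[a1]; \
 [0:v]trim=start=start_2:end=stop_2,setpts=PTS-STARTPTS[v2]; \
 [0:a]atrim=start=start_2:end=stop_2,asetpts=PTS-STARTPTS[a2]; \
 [v1][a1][v2][a2]concat=n=2:v=1:a=1[v][a]" \
-map "[v]" -map "[a]" -c:v libx264 -c:a aac out.mp4

Because concat re-stamps timestamps segment by segment, the per-cut rounding errors shouldn't accumulate across the whole file the way they do with select/aselect.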
I'd like to increase the playback speed so that I can catch up to whatever the newest available audio packet is. I am using PulseAudio on Arch Linux for the server; the client runs Windows, although that really shouldn't matter.
Server commands issued:
pactl load-module module-null-sink sink_name=remote
ffmpeg -f pulse -i "remote.monitor" -ac 2 -acodec pcm_s16le -ar 48000 -f s16le "udp://{LAN_IP_OF_CLIENT}:{PORT}"
Client command issued:
ffplay.exe -nodisp -ac 2 -acodec pcm_s16le -ar 48000 -analyzeduration 0 -probesize 32 -f s16le -i udp://0.0.0.0:{PORT}
The current setup uses pavucontrol to route Firefox's audio output to the pactl sink, with the CLI application kept running somewhere. The network is often slow, and the audio develops an increasingly noticeable lag behind whatever is on screen. When I re-execute the commands on both server and client, it catches up. If possible I'd like to keep up with whatever is being broadcast; I figure the simplest solution is to nudge the playback speed a little faster than the rate at which audio is being sent, so that over the medium to long term it corrects itself.
If there's a way to simply discard audio packets that aren't the newest and jump ahead when possible, I'd prefer that as a solution; I know too little about ffmpeg to tell whether that's easy to do.
Not an ideal solution, but it's possible to increase the speed with the -filter:a "atempo={speed_here}" flag.
https://trac.ffmpeg.org/wiki/How%20to%20speed%20up%20/%20slow%20down%20a%20video
I still don't know how to apply it from the CLI, though.
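For the catch-up use case above, ffplay can apply the audio filter directly on the receiving end, so the client plays slightly faster than real time until it reaches the newest packets. A sketch, assuming a 5% speed-up is enough (the 1.05 value is a guess to tune against your typical lag):

ffplay.exe -nodisp -ac 2 -ar 48000 -analyzeduration 0 -probesize 32 -f s16le -af "atempo=1.05" -i udp://0.0.0.0:{PORT}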
I'm trying to write a .sh script that reads a folder, creates a playlist of the MP4 files in it, and then generates a single big video from the sequence of all the videos found in the folder, encoded for DASH:
printf "file '%s'\n" ./*.mp4 > playlist.sh
ffmpeg -f concat -safe 0 -i playlist.sh -c copy concat.mp4
So far I have followed the official concat demuxer guide on the ffmpeg website.
Without result; the following also gives me "more than 1000 frames duplicated" between the videos of the sequence:
ffmpeg -f concat -i playlist.sh -c:a aac -b:a 384k -ar 48000 -ac 2 -c:v libx264 -x264opts 'keyint=50:min-keyint=50:no-scenecut' -r 25 -b:v 2400k -maxrate 2400k -bufsize 1200k -vf "scale=-1:432 " out.mp4
Thanks a lot.
Sorry, I cannot comment (yet)...
Your commands are correct; I was able to concat some sample videos with them.
Do you always get the mentioned error, or sometimes something else? And does the output video work, or is no video created at all?
In most cases the input video is the problem: a wrong input format (not matching the file extension), or worse, something like ending on the wrong frame.
Perhaps you can make the video available?
PS: I needed to add -safe 0 to the second command to avoid the error [concat @ 0x7fbfd1000000] Unsafe file name './small.mp4'.
Hint: Don't use the file extension .sh for your list of video files. That extension is used for shell scripts, so it can be confusing. Just use .txt.
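Put together, your two commands with a .txt list and -safe 0 on both calls would look like this:

printf "file '%s'\n" ./*.mp4 > playlist.txt
ffmpeg -f concat -safe 0 -i playlist.txt -c copy concat.mp4
ffmpeg -f concat -safe 0 -i playlist.txt -c:a aac -b:a 384k -ar 48000 -ac 2 -c:v libx264 -x264opts 'keyint=50:min-keyint=50:no-scenecut' -r 25 -b:v 2400k -maxrate 2400k -bufsize 1200k -vf "scale=-1:432" out.mp4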
UPDATE @Massimo Vantaggio
We shouldn't create new answers, but I cannot comment on yours and I don't know how else to continue our discussion, so I'm editing my answer.
Your videos don't look very different. I can't see what's wrong with the first one.
Perhaps you could use ffprobe -report input.mp4 to get more information. Look for errors or warnings.
My assumption is still that the video was cut in a hard way (by conversion software), so the keyframes are messed up, or something else is off.
You can also try re-encoding your video with ffmpeg first. After that, it should be completely compatible with ffmpeg ;)
Something like this:
ffmpeg -i small.mp4 -acodec aac -ab 192k -vcodec libx264 -vb 1024k -f mp4 output.mp4
Use the -ab and -vb values from your input video, or at least the input's bitrate. Quality will decrease a little and file size will increase, but it should be okay.
I've been using ffmpeg quite a lot over the past few weeks, and recently I've run into a very annoying issue: when I use ffmpeg with an input stream (usually just a URL as the input) and try to set a start time (with the -ss option), I always get a warning that says "could not seek to position: XXX".
Then ffmpeg just starts to download the file, and it outputs nothing until it has downloaded enough data to reach my desired start time.
I'll give an example.
I execute ffmpeg with this command:
ffmpeg -ss 50 -re -i https://ascent.usbank.com/acp/videos/041114ascent.flv -b:a 128k -ac 2 -acodec libvorbis -b:v 1024k -vcodec libtheora -strict 2 -preset ultrafast -tune zerolatency -pix_fmt yuv420p -f ogg pipe:1
and I get the warning:
https://ascent.usbank.com/acp/videos/041114ascent.flv: could not seek to position 50.000
Then it takes about 30 seconds until ffmpeg starts writing data to stdout. And when I try this with longer videos (and later seek positions), it takes even longer.
My question is: what can I do? I guess it's impossible for ffmpeg to seek when it doesn't have the whole input stream... Am I wrong? Or is there another solution?
Of course, I'm trying to avoid downloading the entire file from the web...
Thanks in advance!
Roee.
I guess you can't really do anything about it other than buffer the FLV locally and (eventually) seek within that.
Unfortunately, whether or not an HTTP resource allows seeking largely depends on the capabilities of the server...
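If the server doesn't support range requests, the only reliable workaround is the buffering described above: fetch the file once, then seek the local copy, where -ss is near-instant. A sketch, assuming curl is available and the URL stays reachable:

# buffer the FLV locally (one-time download cost)...
curl -sS -o /tmp/input.flv "https://ascent.usbank.com/acp/videos/041114ascent.flv"
# ...then input seeking on the local file is fast
ffmpeg -ss 50 -re -i /tmp/input.flv -b:a 128k -ac 2 -acodec libvorbis -b:v 1024k -vcodec libtheora -strict 2 -pix_fmt yuv420p -f ogg pipe:1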
I am trying to implement HLS using FFmpeg for transcoding + segmenting, but I have been facing a couple of issues that have been bugging me for the past week.
Issue
The web server currently receives live MP4 fragments as they are recorded and needs to take care of transcoding and segmentation.
As the MP4 fragments are received, they need to be encoded and then segmented. If I run a segmenter (be it ffmpeg or Apple's mediastreamsegmenter), every MP4 fragment is treated as a VOD in itself, and I am not able to integrate them as parts of a larger live-event implementation.
I thought of a solution where, every time I receive an MP4 fragment, I first use ffmpeg to concatenate it with the previous ones to form the larger MP4, which I then pass on to be segmented for HLS. That did not work either, because the entire stream has to be re-segmented every single time and the existing TS fragments replaced by new ones that are similar yet shifted in time.
Implementation 1
ffmpeg -re -i fragmentX.mp4 -b:v 118k -b:a 32k -vcodec copy -preset:v veryfast -acodec aac -strict -2 -ac 2 -f mpegts -y fragmentX.ts
I manage the m3u8 manifest on my own, deleting old fragments and appending new ones.
When validating the stream, I find it littered with EXT-X-DISCONTINUITY tags, making the stream unwatchable.
Implementation 2
First, combine the latest fragment with overall.mp4:
ffmpeg -i "concat:newfragment.mp4|existing.mp4" -c copy overall.mp4
Then pass the combination to ffmpeg for HLS segmentation:
ffmpeg -re -i overall.mp4 -ac 2 -r 20 -vcodec libx264 -b:v 318k -preset:v veryfast -acodec aac -strict -2 -b:a 32k -hls_time 2 -hls_list_size 3 -hls_allow_cache 0 -hls_base_url /Users/JosephKalash/Desktop/test/350/ -hls_segment_filename '350/fragment%03d.ts' -hls_flags delete_segments 350/index.m3u8
The concatenation is not perfect, and there are noticeable glitches where the fragments are supposed to be stitched together. The segmentation replaces older fragments, and the manifest is rewritten as if it were a brand-new HLS stream every time ffmpeg is called.
I cannot figure out how to get this to work properly.
Any ideas?
Solved by relying on the nginx-rtmp module, which turned out to be well suited to the above implementation.
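For anyone landing here, a minimal sketch of the nginx-rtmp side (paths and stream names are placeholders): the module receives one continuous RTMP stream and maintains the HLS segments and playlist itself, which avoids the discontinuity and re-segmentation problems described above.

rtmp {
    server {
        listen 1935;
        application live {
            live on;
            # let the module handle segmentation and the m3u8 playlist
            hls on;
            hls_path /tmp/hls;
            hls_fragment 2s;
            hls_playlist_length 6s;
        }
    }
}

Each incoming MP4 fragment can then be pushed into that stream with something like ffmpeg -re -i fragmentX.mp4 -c copy -f flv rtmp://localhost/live/stream, and clients play /tmp/hls/stream.m3u8 served over HTTP.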