I am using ffmpeg to convert a series of images and an audio file to video.
This happens in parallel, i.e., for each set of images and its audio file, I spawn a separate ffmpeg process.
For a single request, the ffmpeg response time is around 20 seconds, which degrades to around 80 seconds (average) at 1000 requests. I am using a throttle limit of 30 concurrent processes.
The preset is veryfast. Can anyone suggest ways to improve ffmpeg performance?
Update:
Please find below configuration used:
CPU: 16 cores * 3.00 GHz
RAM: 32 GB
OS: Windows Server 2012
ffmpeg command:
ffmpeg -f concat -i <path to text file containing images sequence> -i <path to audio file> -y -filter_complex_script <path to filter script file> -crf 22 -threads 2 -preset veryfast <output.mp4>
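One common way to keep a fixed number of encodes in flight is xargs -P. This is a minimal sketch of the throttling pattern only; jobs.txt and encode.sh are hypothetical placeholders, and echo stands in for the real encode wrapper so the pattern can be seen without invoking ffmpeg:

```shell
# Run at most 8 jobs concurrently (-P 8), one argument per job (-n 1).
# In real use, replace 'echo' with an encode.sh that runs the ffmpeg
# command above for the given job, and feed it a real job list.
printf 'job1\njob2\njob3\n' | xargs -P 8 -n 1 echo | sort
```

With -threads 2 per ffmpeg process, roughly 8 concurrent processes keeps a 16-core machine busy without heavy oversubscription; a throttle of 30 processes on 16 cores means constant context switching, which may explain part of the degradation.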
I'm running the following command using FFmpeg to extract an image every second from a UDP stream:
ffmpeg -i "udp://224.1.2.123:9001" -s 256x144 -vf fps=1 -update 1 test.jpg -y
This works well, but it takes about 5 seconds to actually start producing images. Is there any way to lower the startup time?
The UDP stream uses mpegts format and is encoded with H264/AAC.
Thanks!
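Much of that delay is typically spent probing the input before decoding starts. A sketch of the probe-limiting flags that usually reduce startup latency on mpegts inputs; the values here are untuned starting points, not settings verified against this stream:

```shell
# -probesize / -analyzeduration cap how much input ffmpeg analyzes before
# it starts decoding; -fflags nobuffer disables input buffering. Lowering
# these trades stream-detection robustness for faster startup.
ffmpeg -fflags nobuffer -probesize 500000 -analyzeduration 500000 \
       -i "udp://224.1.2.123:9001" -s 256x144 -vf fps=1 -update 1 -y test.jpg
```

Note that startup also depends on the source's key frame interval: decoding cannot begin until the first H264 keyframe arrives, and no receiver-side flag can shorten that wait.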
I have 1300 frames, and I convert them at 21fps. That should be over a minute of footage from my sequence of images, but the lossless command I'm using is producing an 18 second video out of the 1300 frames. Am I doing this wrong?
Command:
ffmpeg -framerate 21 -i Blots_%04d.0001_x2-standard-scale-2_00x.tif -c:v libx264rgb -crf 0 Blots.mp4
The video in Media Player Classic says it only draws 337 frames at 21fps.
I also tried the following, which results in the same file size and the same issue:
ffmpeg -r 21 -i Blots_%04d.0001_x2-standard-scale-2_00x.tif -c:v libx264rgb -crf 0 Blots.mp4
Turns out there was no problem with FFmpeg, but an error in my batch processing program, which processed 300 of the frames in one run and the rest in another, resulting in a different file naming scheme.
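For reference, the arithmetic matches that explanation: 1300 frames at 21 fps should run just over a minute, while the ~337 frames the player reports land near the 18-second file that was produced. A quick shell check:

```shell
# Expected duration in whole seconds for the full sequence: 1300 / 21
echo $((1300 / 21))        # ~61 s, just over a minute

# Duration implied by the 337 frames the player reports, in hundredths
# of a second: 337 / 21 fps
echo $((337 * 100 / 21))   # ~1604, i.e. about 16 s, close to the 18 s file
```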
I am running an ffmpeg command as a systemd service to capture a live RTSP stream and generate HLS chunks. The chunks are set to be 30 seconds long with the -hls_time option. When I run the command on the console myself it works fine, but when it runs from the service, the chunks, which are supposed to be 30 seconds long, come out at 7 or 8 seconds.
This is the command:
/usr/bin/ffmpeg -rtsp_flags prefer_tcp -i
"rtsp://192.168.1.16:554/user=admin&password=&channel=1&stream=1.sdp"
-acodec copy -vcodec copy -hls_time 30 -hls_list_size 10 -hls_flags append_list+delete_segments -f hls -use_localtime 1
-hls_segment_filename "/home/zurikato/video-backup/${FILENAME_FORMAT}_hls.ts"
/home/zurikato/video-backup/playlist.m3u8
I'm a beginner with ffmpeg and Linux services, so please bear with me if this is a simple matter.
Thanks in advance
When using -vcodec copy, you are at the mercy of the key frame interval of the incoming media: the HLS muxer can only cut a segment at a keyframe, so segments end up as long as the camera's keyframe spacing allows. There is nothing you can do on the server side unless you transcode the video stream.
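If transcoding is acceptable, a sketch of forcing a keyframe every 30 seconds so -hls_time 30 can actually be honored. The -force_key_frames expression is standard ffmpeg; the libx264 settings are assumptions to tune, and the RTSP URL is abbreviated:

```shell
# Re-encode video (audio still copied) and force a keyframe every 30 s,
# giving the HLS muxer a cut point exactly at each -hls_time boundary.
ffmpeg -rtsp_flags prefer_tcp -i "rtsp://..." \
       -acodec copy -vcodec libx264 -preset veryfast \
       -force_key_frames "expr:gte(t,n_forced*30)" \
       -hls_time 30 -hls_list_size 10 -hls_flags append_list+delete_segments \
       -f hls /home/zurikato/video-backup/playlist.m3u8
```

The trade-off is CPU: -vcodec copy costs almost nothing, while a live transcode must keep up with the stream in real time.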
I want to convert an MKV video to MP4 using ffmpeg, and for that I ran the command below in a terminal:
ffmpeg -y -i c38a4990774b3c23.mkv -c:v libx264 -c:a aac -r 25 -strict -2 -map_metadata -1 -movflags faststart -vf "crop=1920:800:0:4, scale=iw*min(426/iw\,240/ih):ih*min(426/iw\,240/ih), pad=426:240:(426-iw*min(426/iw\,240/ih))/2:(240-ih*min(426/iw\,240/ih))/2, setsar=sar=1" output.mp4
I have compiled ffmpeg with the --enable-pthread configuration option.
When I run this command on my personal PC with a 3.2 GHz quad-core CPU, it uses 60% of the overall CPU and encodes at 150 fps; but when I run it on a production server with eight 2.4 GHz dual-core CPUs (16 cores total), it uses at most 20% of the overall CPU and encodes at only 97 fps.
I have also tried ramdisk but I got no performance improvement.
I'm trying to set up a media processing server. I've done a lot of research on FFmpeg and wrote a command. The command is as follows.
ffmpeg -y -i "bbb_sunflower_2160p_60fps_normal.mp4" -c:v libx264 \
-threads 7 -profile:v main -preset ultrafast -vf scale=1920:-1 \
"process/video/1080p.mp4" -c:v libx264 -threads 7 -profile:v main \
-preset ultrafast -vf scale=1280:-1 "process/video/720p.mp4" -c:v \
libx264 -threads 7 -profile:v main -preset ultrafast -vf \
scale=854:-1 "process/video/480p.mp4" -vf fps=5/60 \
process/image/thumb_%d.jpg
This command works and runs correctly, but it is dirt slow. My server, which is dedicated to just running ffmpeg, has the following specs:
12-core Intel Xeon X5650 (Hyper-Threading enabled)
64 GB ECC DDR3 RAM
250 GB SSD Drive
But when I use this command, the server's CPU load hovers around 250-300%, whereas I would like it to be around 2,000% while processing the video. Currently it renders around 17 frames per second, which would take a very long time for a 10-minute video at 60 fps.
It's the scaler. The scaler in ffmpeg is single threaded, it is a bottleneck on a system with that many threads. Try running a different process for each output.
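One way to act on that advice is to split the single command into one process per output and run them concurrently, so each output gets its own scaler instance. A sketch using the paths from the question, with the per-output flags kept as written:

```shell
# One ffmpeg process per output, started in the background with '&'.
# Each process owns its own (single-threaded) scaler, so the three scale
# filters no longer queue behind one another. The cost is that the input
# is demuxed and decoded three times instead of once.
ffmpeg -y -i bbb_sunflower_2160p_60fps_normal.mp4 -c:v libx264 -profile:v main \
       -preset ultrafast -vf scale=1920:-1 process/video/1080p.mp4 &
ffmpeg -y -i bbb_sunflower_2160p_60fps_normal.mp4 -c:v libx264 -profile:v main \
       -preset ultrafast -vf scale=1280:-1 process/video/720p.mp4 &
ffmpeg -y -i bbb_sunflower_2160p_60fps_normal.mp4 -c:v libx264 -profile:v main \
       -preset ultrafast -vf scale=854:-1 process/video/480p.mp4 &
wait   # block until all three background encodes finish
```

Dropping the explicit -threads 7 and letting x264 pick its own thread count per process is also worth trying on a 24-thread machine.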
If you are running windows, try again with defender (and any other virus checker) disabled. It can make a huge difference.
Let us know the outcome please...
This worked for me on a Windows 10 machine (which then processed up to ten times faster) and is therefore a possible answer to the above problem. Clarification (of any sort) is not requested, but it would be good to know if it helped.
This is a very complicated command line with little to no useful information. For example, you're not providing FFmpeg's stdout/stderr (which contains lots of useful information). Possible causes:
video encoding is simply too slow (try 1 encode instead of 3, w/o screenshots)
maybe your bottleneck is audio (test with -an)
something else?
I'd encourage you to test simpler versions and provide stdout/stderr.
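A sketch of the kind of stripped-down test runs suggested above, using the input file from the question and changing one variable at a time:

```shell
# 1) Single encode, no thumbnails: is one output already slow on its own?
ffmpeg -y -i bbb_sunflower_2160p_60fps_normal.mp4 -c:v libx264 -preset ultrafast \
       -vf scale=1920:-1 process/video/1080p.mp4 2> encode.log

# 2) Same encode with audio disabled (-an): does audio change the speed?
ffmpeg -y -i bbb_sunflower_2160p_60fps_normal.mp4 -an -c:v libx264 -preset ultrafast \
       -vf scale=1920:-1 process/video/1080p_noaudio.mp4 2> encode_noaudio.log
```

Comparing the speed= figure ffmpeg prints in each log shows which change matters; if the single encode is already slow, the problem is the encode itself rather than the multi-output command.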