I am streaming a static png file with ffmpeg and it uses basically all my CPU. That seems a bit greedy to me, and even though I limited the frame rate on both the input and the output side, a huge fps is printed out.
w:\ffmpeg\bin>ffmpeg.exe -loop 1 -framerate 1 -i w:\colorbar2.png -r 10 -vcodec libx264 -pix_fmt yuv420p -r 10 -f mpegts udp://127.0.0.1:10001?pkt_size=1316
ffmpeg version N-68778-g5c7227b Copyright (c) 2000-2014 the FFmpeg developers
built on Dec 29 2014 22:12:54 with gcc 4.9.2 (GCC)
Input #0, png_pipe, from 'w:\colorbar2.png':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: png, pal8, 320x240 [SAR 3779:3779 DAR 4:3], 1 fps, 1 tbr, 1 tbn, 1 tbc
[libx264 @ 00000000002fb320] using SAR=1/1
[libx264 @ 00000000002fb320] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
[libx264 # 00000000002fb320] profile High, level 1.2
Output #0, mpegts, to 'udp://127.0.0.1:10001?pkt_size=1316':
Metadata:
encoder : Lavf56.16.102
Stream #0:0: Video: h264 (libx264), yuv420p, 320x240 [SAR 1:1 DAR 4:3], q=-1--1, 10 fps, 90k tbn, 10 tbc
Metadata:
encoder : Lavc56.19.100 libx264
Stream mapping:
Stream #0:0 -> #0:0 (png (native) -> h264 (libx264))
Press [q] to stop, [?] for help
frame=561310 fps=579 q=25.0 size= 144960kB time=15:35:25.80 bitrate= 21.2kbits/s dup=505179 drop=0
As you can see, the frame counter goes up quickly and fps=579 is reported on the last line. I am confused now: what does that fps mean, when the low frame rates are also shown above (output 10 fps, input 1 fps)?
What am I doing wrong, and how could I reduce the CPU load further, given that it's a static file being streamed?
Thanks!
ffmpeg attempts to decode and encode as fast as it can. Just because you set the output to be 10 frames per second does not mean that it will decode/encode in real time at 10 frames per second. The fps on the progress line is the encoding speed, not the output frame rate, and the dup counter climbs because the 1 fps input is duplicated to fill the 10 fps output.
Try the -re input option. From the ffmpeg CLI tool documentation:
Read input at native frame rate. Mainly used to simulate a grab device
or live input stream (e.g. when reading from a file). Should not be
used with actual grab devices or live input streams (where it can
cause packet loss). By default ffmpeg attempts to read the input(s)
as fast as possible. This option will slow down the reading of the
input(s) to the native frame rate of the input(s). It is useful for
real-time output (e.g. live streaming).
Example:
ffmpeg.exe -re -loop 1 -framerate 10 -i w:\colorbar2.png -c:v libx264 \
-tune stillimage -pix_fmt yuv420p -f mpegts udp://127.0.0.1:10001?pkt_size=1316
Related
I'm trying to use FFmpeg to create a real-time HEVC stream from a Decklink input. The goal is a high-quality 10-bit HDR stream.
The Decklink SDI input is fed 10-bit RGB, which ffmpeg handles well with the decklink option -raw_format rgb10, recognized by ffmpeg as 'gbrp10le'.
I have an Nvidia Pascal-based card, which supports 10-bit yuv444 (as 'yuv444p16le'), and when using '-c:v hevc_nvenc' the auto_scaler kicks in and converts to 'yuv444p16le', which I guess is the same conversion as giving '-pix_fmt yuv444p16le'.
This works very well at 1920x1080, but at 4096x2160 ffmpeg can't keep up with real-time 24 or 25 fps, and I get input buffer overruns.
The culprit seems to be the RGB->YUV conversion in ffmpeg's swscale, because:
When piping the Decklink 4K RGB input with '-c:v copy' straight to /dev/null, there are no problems with buffer overruns (a sketch of this test follows below),
And when feeding the Decklink YUV and giving '-raw_format yuv422p10' (no YUV444 input seems to be available for Decklink in ffmpeg), I get no overruns and everything works well in 4K, even if I set '-pix_fmt yuv444p16le'.
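A sketch of that copy test, assuming the same device name and capture options as in the example command further down:
ffmpeg -f decklink -raw_format rgb10 -i "Blackmagic Card 1" -c:v copy -f nut - > /dev/null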
Any ideas how I could accomplish 4K HEVC in NVENC with the 10-bit RGB signal from the Decklink? Is there a way to make NVENC accept and use the RGB data without first converting to YUV? Or is there maybe a way to convert gbrp10le->yuv444p16le with cuda or the scale_npp filter? I have compiled ffmpeg with npp and cuda, but I cannot figure out whether I can get it to work with RGB. Whenever I try '-vf "hwupload_cuda"', the auto_scaler kicks in and tries to convert to YUV on the CPU, which again creates overruns.
Another thing I guess could help is if there were a way to make the swscale CPU filter (or another suitable filter?) use multiple threads. Right now it seems to use only one thread at a time, maxing out at 99% on my Ryzen 3950X (3.5 GHz, 32 threads).
Example ffmpeg output:
$ ffmpeg -loglevel verbose -f decklink -raw_format rgb10 -i "Blackmagic Card 1" -c:v hevc_nvenc -preset medium -profile:v main10 -cbr 1 -b:v 20M -f nut - > /dev/null
--
Stream #0:1: Video: r210, 1 reference frame, gbrp10le(progressive), 4096x2160, 6635520 kb/s, 25 tbr, 1000k tbn, 1000k tbc
--
[graph 0 input from stream 0:1 @ 0x4166180] w:4096 h:2160 pixfmt:gbrp10le tb:1/1000000 fr:25000/1000 sar:0/1
[auto_scaler_0 @ 0x4168480] w:iw h:ih flags:'bicubic' interl:0
[format @ 0x4166080] auto-inserting filter 'auto_scaler_0' between the filter 'Parsed_null_0' and the filter 'format'
[auto_scaler_0 @ 0x4168480] w:4096 h:2160 fmt:gbrp10le sar:0/1 -> w:4096 h:2160 fmt:yuv444p16le sar:0/1 flags:0x4
[hevc_nvenc @ 0x4139640] Loaded Nvenc version 11.0
--
Stream #0:0: Video: hevc (Rext), 1 reference frame (HEVC / 0x43564548), yuv444p16le(tv, progressive), 4096x2160 (0x0), q=2-31, 2000 kb/s, 25 fps, 51200 tbn
--
[decklink @ 0x40f0900] Decklink input buffer overrun!:02.52 bitrate= 30471.3kbits/s speed=0.627x
Goal: in my script I want to check whether nvdec on my graphics card is available/functional.
I don't have any source video (H.264 / H.265) to use as input at this time intentionally, so I want to generate it.
It is also not necessary to use an encoder, because I do not need the output file.
I'm testing the exit code of the ffmpeg command ($?).
I use nvidia-smi to check the dec/enc load.
My attempt:
ffmpeg -y -hwaccel cuda -hwaccel_output_format cuda -c:v h264_cuvid -f lavfi -i testsrc="duration=5:size=1920x1080:rate=25" -c:v copy test.ts
output of my commands:
Input #0, lavfi, from 'testsrc=duration=5:size=1920x1080:rate=25':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: h264, rgb24, 1920x1080 [SAR 1:1 DAR 16:9], 25 tbn
Stream mapping:
Stream #0:0 -> #0:0 (h264 (h264_cuvid) -> wrapped_avframe (native))
Press [q] to stop, [?] for help
No information about the input framerate is available. Falling back to a default value of 25fps for output stream #0:0. Use the -r option if you want a different framerate.
Output #0, null, to 'pipe:':
Metadata:
encoder : Lavf58.65.101
Stream #0:0: Video: wrapped_avframe, rgb24, 1920x1080 [SAR 1:1 DAR 16:9], q=2-31, 200 kb/s, 25 fps, 25 tbn
Metadata:
encoder : Lavc58.119.100 wrapped_avframe
frame= 0 fps=0.0 q=0.0 Lsize=N/A time=00:00:00.00 bitrate=N/A speed= 0x
video:0kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Output file is empty, nothing was encoded (check -ss / -t / -frames parameters if used)
I tried adding -t 5 before test.ts but nothing changed.
The output ts file has zero size.
Once it works, I expect to append "-f null - 2>/dev/null" to the end of the command; the output file is only there for debugging purposes.
Thank you.
The lavfi testsrc source produces raw video frames, not an H.264 bitstream, so it cannot be decoded by h264_cuvid directly. You need to first generate the video with an H.264 encoder and then try decoding it separately afterwards.
ffmpeg -y -f lavfi -i "testsrc2=duration=5:size=1920x1080:rate=25" -c:v h264 test.ts
ffmpeg -c:v h264_cuvid -i test.ts -f null -
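Since the goal is to test the exit code ($?) from a script, the two steps above can be wrapped like this; just a sketch built from those commands (the messages are arbitrary):
# Generate a short H.264 clip, then try to decode it with nvdec (h264_cuvid) and check the exit status
ffmpeg -y -f lavfi -i "testsrc2=duration=5:size=1920x1080:rate=25" -c:v h264 test.ts 2>/dev/null
ffmpeg -c:v h264_cuvid -i test.ts -f null - 2>/dev/null
if [ $? -eq 0 ]; then
    echo "nvdec decode OK"
else
    echo "nvdec decode failed"
fi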
I would like to test my streaming infrastructure by generating an RTMP test video with a timestamp. It could look like that screen; the image itself doesn't matter. I'm after a working stream generated on the fly with a timestamp only. I intend to use the ffmpeg tool for that purpose. The command could look something like:
$ ffmpeg -i image.png \
-vf drawtext="fontfile=/Library/Fonts/Arial.ttf: \
timecode='00\:00\:00\:00': r=1: fontcolor=white: \
fontsize=24: box=1: boxcolor=black@0.5: \
boxborderw=5: x=(w-text_w)/2: y=(h-text_h)/2" \
-f flv rtmp://localhost/live/test
I do run locally a streaming server based on NGINX and its RTMP module.
However, the above command gives me the following error:
Input #0, png_pipe, from 'image.png':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: png, rgb24(pc), 768x576 [SAR 7874:7874 DAR 4:3], 25 tbr, 25 tbn, 25 tbc
Stream mapping:
Stream #0:0 -> #0:0 (png (native) -> flv1 (flv))
Press [q] to stop, [?] for help
[Parsed_drawtext_0 @ 0x7fb78450ece0] Using non-standard frame rate 1/1
Output #0, flv, to 'rtmp://localhost/live/test':
Metadata:
encoder : Lavf57.71.100
Stream #0:0: Video: flv1 (flv) ([2][0][0][0] / 0x0002), yuv420p, 768x576 [SAR 1:1 DAR 4:3], q=2-31, 200 kb/s, 25 fps, 1k tbn, 25 tbc
Metadata:
encoder : Lavc57.89.100 flv
Side data:
cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1
[flv @ 0x7fb785812a00] Failed to update header with correct duration.
[flv @ 0x7fb785812a00] Failed to update header with correct filesize.
frame= 1 fps=0.0 q=8.6 Lsize= 50kB time=00:00:00.00 bitrate=406016.0kbits/s speed=0.019x
video:49kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.451271%
The streaming server operates as expected. The problem is with the command. Would anyone be able to help me?
ffmpeg has testsrc you can use as a test source input stream:
ffmpeg -r 30 -f lavfi -i testsrc -vf scale=1280:960 -vcodec libx264 -profile:v baseline -pix_fmt yuv420p -f flv rtmp://localhost/live/test
-r, scaling, profile, etc. are just an example and can be omitted/played with. The point is using -i testsrc.
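To also get the timestamp overlay the question asks for, the drawtext filter from the original command can be applied to testsrc. A sketch, assuming the font path and RTMP URL from the question:
ffmpeg -f lavfi -i testsrc=size=768x576:rate=25 \
-vf "drawtext=fontfile=/Library/Fonts/Arial.ttf:timecode='00\:00\:00\:00':rate=25:fontcolor=white:fontsize=24:box=1:boxcolor=black@0.5:boxborderw=5:x=(w-text_w)/2:y=(h-text_h)/2" \
-vcodec libx264 -pix_fmt yuv420p -f flv rtmp://localhost/live/test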
I am using ffmpeg to convert high-quality videos to GIF; most of the videos are 60 fps and over 720p. But when I use the code below to convert a video to GIF, I get a very low fps for the GIF output.
#!/usr/bin/env bash
palette=/tmp/pallete.png
filter="fps=50,scale=480:-1:flags=lanczos"
ffmpeg -y -i test.mov -vf $filter,palettegen=stats_mode=diff $palette
ffmpeg -y -i test.mov -i $palette -lavfi "$filter [x]; [x][1:v] paletteuse" test.gif
Another issue I have noted: as the width increases (e.g. 720 instead of 480), I get even lower fps.
Here is an example output log; the output fps is lower than the assigned 50 fps:
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/tmp/201631203815.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf56.36.100
Duration: 00:00:05.48, start: 0.016000, bitrate: 1579 kb/s
Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 1334x1334, 1576 kb/s, 60.18 fps, 60 tbr, 1000k tbn, 50 tbc (default)
Metadata:
handler_name : VideoHandler
Input #1, png_pipe, from '/tmp/pallete.png':
Duration: N/A, bitrate: N/A
Stream #1:0: Video: png, rgba(pc), 16x16 [SAR 1:1 DAR 1:1], 25 tbr, 25 tbn, 25 tbc
Output #0, gif, to '/tmp/201631203815.gif':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf56.40.101
Stream #0:0: Video: gif, pal8, 480x480, q=2-31, 200 kb/s, 50 fps, 100 tbn, 50 tbc (default)
Metadata:
encoder : Lavc56.60.100 gif
Stream mapping:
Stream #0:0 (h264) -> fps
Stream #1:0 (png) -> paletteuse:palette
paletteuse -> Stream #0:0 (gif)
Press [q] to stop, [?] for help
frame= 275 fps= 32 q=-0.0 Lsize= 2480kB time=00:00:05.50 bitrate=3693.5kbits/s
How do I ensure that the output fps is always what's set by the user?
Any resource on this is highly appreciated.
UPDATE
I have also noticed that using a higher fps, e.g. filter="fps=90,scale=480:-1:flags=lanczos", has the effect of slowing down the GIF, like a slow-motion effect, and the output fps is still lower, at around 15 fps.
Setting the fps value explicitly gave the same lower fps output:
Results: frame= 346 fps= 24 q=-0.0 Lsize= 6506kB time=00:00:06.92 bitrate=7701.8kbits/s
This is not the output fps! It's the encoding speed. Most players don't properly play GIFs with an fps higher than 50. See the demo showing this behaviour.
I'm not experienced in making GIF files with FFmpeg, but as far as I know, the fps filter has an individual "fps" parameter for the actual framerate value, so I think it may not work correctly if you omit that.
Just to make sure the filter gets the correct value, you should explicitly set the fps value:
filter="fps=fps=50,scale=480:-1:flags=lanczos"
If that doesn't work, I'd try the regular "rate" option too:
ffmpeg -y -i test.mov -i $palette -lavfi "$filter [x]; [x][1:v] paletteuse" -r 50 test.gif
Otherwise, your console output looks good (it indicates the output will be 50 fps), so the phenomenon is a little bit mysterious.
Working Solution:
All you need to do is break the process into three individual steps and use the image2 demuxer's "-framerate" option.
First, let's generate the palette file:
ffmpeg -i <input_file> -filter_complex "scale=w=480:h=-1:flags=lanczos, palettegen=stats_mode=diff" palette.png
Secondly, break the video frames into image files:
ffmpeg -i <input_file> -r 50 -f image2 image_%06d.png
And finally, join said images into one GIF sequence:
(the important part here is the image2 demuxer's framerate option!)
ffmpeg -framerate 50 -i image_%06d.png -i palette.png -filter_complex "[0]scale=w=400:h=-1[x];[x][1:v] paletteuse" -pix_fmt rgb24 output.gif
Edit: Finally found the answer!
You need to use the image2 demuxer's -framerate option! (answer edited accordingly)
Alternative methods:
gifsicle - converts images to GIF, can set the frame delay
ImageMagick - can convert video to GIF directly, with excellent GIF quality control options.
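For reference, the ImageMagick route can assemble the PNG frames from step two into a GIF directly; a sketch, where -delay 2 (2/100 of a second per frame) corresponds to 50 fps:
convert -delay 2 -loop 0 image_*.png output.gif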
I am trying to convert various video formats to FLV using ffmpeg, but it seems that only some videos go through.
ffmpeg -i /var/www/tmp/91640.avi -ar 22050 -ab 32 -f flv /var/www/videos/91640.flv
Here is some debug info:
Seems stream 0 codec frame rate differs from container frame rate: 23.98 (65535/2733) -> 23.98 (5000000/208541)
Input #0, avi, from '/var/www/tmp/91640.avi':
Duration: 00:01:12.82, start: 0.000000, bitrate: 5022 kb/s
Stream #0.0: Video: mpeg4, yuv420p, 1280x528 [PAR 1:1 DAR 80:33], 23.98 tbr, 23.98 tbn, 23.98 tbc
Stream #0.1: Audio: ac3, 48000 Hz, 5.1, s16, 448 kb/s
WARNING: The bitrate parameter is set too low. It takes bits/s as argument, not kbits/s
Output #0, flv, to '/var/www/videos/91640.flv':
Stream #0.0: Video: flv, yuv420p, 1280x528 [PAR 1:1 DAR 80:33], q=2-31, 200 kb/s, 90k tbn, 23.98 tbc
Stream #0.1: Audio: adpcm_swf, 22050 Hz, 5.1, s16, 0 kb/s
Stream mapping:
Stream #0.0 -> #0.0
Stream #0.1 -> #0.1
Error while opening codec for output stream #0.1 - maybe incorrect parameters such as bit_rate, rate, width or height
Also, if I try to grab one frame and convert it to JPEG, I get an error as well:
ffmpeg -i /var/www/tmp/91640.avi -an -ss 00:00:03 -t 00:00:01 -r 1 -y /var/www/videos/91640.jpg
Debug info:
...
[mpeg4 @ 0x1d7d810]Invalid and inefficient vfw-avi packed B frames detected
av_interleaved_write_frame(): I/O error occurred
Usually that means that input file is truncated and/or corrupted.
I'm thinking that the image grab fails because the video conversion failed in the first place, though I'm not sure.
Any ideas what's going wrong?
Bits, not kbits
From your console output:
WARNING: The bitrate parameter is set too low. It takes bits/s as argument, not kbits/s
Use 32k, not just 32.
Only stereo or mono is supported
The encoder adpcm_swf only supports mono or stereo, so add -ac 2 as an output option. The console output would have suggested this if you were using a recent ffmpeg build.
Use -vframes 1 for single image outputs
Instead of -t 00:00:01 -r 1 use -vframes 1.
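Putting those fixes together, the two commands from the question would look something like this (a sketch, with paths kept from the question):
ffmpeg -i /var/www/tmp/91640.avi -ar 22050 -ab 32k -ac 2 -f flv /var/www/videos/91640.flv
ffmpeg -i /var/www/tmp/91640.avi -an -ss 00:00:03 -vframes 1 -y /var/www/videos/91640.jpg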
A better encoder
Instead of using the encoders flv and adpcm_swf, I recommend libx264 and libmp3lame:
ffmpeg -i input -vcodec libx264 -preset medium -crf 23 -acodec libmp3lame -ac 2 -ar 44100 -q:a 5 output.flv
-preset – Controls the trade-off between encoding speed and compression ratio. Use the slowest preset you have patience for: ultrafast, superfast, veryfast, faster, fast, medium, slow, slower, veryslow.
-crf – Constant Rate Factor. A lower value is a higher quality. Range is 0-51 for this encoder. 0 is lossless, 18 is roughly "visually lossless", 23 is default, and 51 is worst quality. Use the highest value that still gives an acceptable quality.
-q:a – Audio quality for libmp3lame. Range is 0-9 for this encoder. A lower value is a higher quality.
Also see
FFmpeg and x264 Encoding Guide
Encoding VBR (Variable Bit Rate) mp3 audio