ffmpeg and ffserver, rc buffer underflow?

I am attempting to set up a simple streaming server for a project. I have an AWS Linux machine that will be running ffserver. Currently, my config file looks like the following:
#Server Configs
HTTPPort 8090
HTTPBindAddress 0.0.0.0
MaxHTTPConnections 2000
MaxClients 1000
MaxBandwidth 1000
CustomLog -
#Create a Status Page
<Stream stat.html>
Format status
ACL allow localhost
ACL allow 255.255.255.255 #Allow everyone to view status, for now
</Stream>
#Creates feed, only allow from self
<Feed feed1.ffm>
File /tmp/feed1.ffm
FileMaxSize 50M
ACL allow 127.0.0.1
ACL allow <MY_PERSONAL_COMPUTER'S_PUBLIC_IP_HERE>
</Feed>
#Creates stream, allow everyone
<Stream tagLive.mpg>
Format mpeg
Feed feed1.ffm
VideoFrameRate 30
VideoSize 640x480
AudioSampleRate 44100
</Stream>
I then capture my webcam and send it to the server using the following command:
ffmpeg -f dshow
-i video="Webcam C170":audio="Microphone (Webcam C170)"
-b:v 1400k
-maxrate 2400k
-bufsize 1200k
-ab 64k
-s 640x480
-ac 1
-ar 44100
-y http://<AWS_SERVER_PUBLIC_DNS>:8090/feed1.ffm
When I run this, however, I get the following console output:
Guessed Channel Layout for Input Stream #0.1 : stereo
Input #0, dshow, from 'video=Webcam C170:audio=Microphone (Webcam C170)':
Duration: N/A, start: 12547.408000, bitrate: N/A
Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 640x480, 30 tbr, 10000k tbn, 30 tbc
Stream #0:1: Audio: pcm_s16le, 44100 Hz, 2 channels, s16, 1411 kb/s
Output #0, ffm, to '<AWS_SERVER_PUBLIC_DNS>:8090/feed1.ffm':
Metadata:
creation_time : 2017-04-26 14:55:27
encoder : Lavf57.25.100
Stream #0:0: Audio: mp2, 44100 Hz, mono, s16, 64 kb/s
Metadata:
encoder : Lavc57.24.102 mp2
Stream #0:1: Video: mpeg1video, yuv420p, 640x480, q=2-31, 64 kb/s, 30 fps, 1000k tbn, 30 tbc
Metadata:
encoder : Lavc57.24.102 mpeg1video
Side data:
unknown side data type 10 (24 bytes)
Stream mapping:
Stream #0:1 -> #0:0 (pcm_s16le (native) -> mp2 (native))
Stream #0:0 -> #0:1 (rawvideo (native) -> mpeg1video (native))
Press [q] to stop, [?] for help
[mpeg1video # 02e95180] rc buffer underflow
[mpeg1video # 02e95180] max bitrate possibly too small or try trellis with large lmax or increase qmax
[mpeg1video # 02e95180] rc buffer underflow
[mpeg1video # 02e95180] max bitrate possibly too small or try trellis with large lmax or increase qmax
[mpeg1video # 02e95180] rc buffer underflow
[mpeg1video # 02e95180] max bitrate possibly too small or try trellis with large lmax or increase qmax
[mpeg1video # 02e95180] rc buffer underflow
[mpeg1video # 02e95180] max bitrate possibly too small or try trellis with large lmax or increase qmax
[mpeg1video # 02e95180] rc buffer underflow
[mpeg1video # 02e95180] max bitrate possibly too small or try trellis with large lmax or increase qmax
[mpeg1video # 02e95180] rc buffer underflow
[mpeg1video # 02e95180] max bitrate possibly too small or try trellis with large lmax or increase qmax
[mpeg1video # 02e95180] rc buffer underflow
[mpeg1video # 02e95180] max bitrate possibly too small or try trellis with large lmax or increase qmax
[mpeg1video # 02e95180] rc buffer underflow
[mpeg1video # 02e95180] max bitrate possibly too small or try trellis with large lmax or increase qmax
[mpeg1video # 02e95180] rc buffer underflowtime=00:00:01.13 bitrate= 404.8kbits/s dup=13 drop=0 speed=2.22x
[mpeg1video # 02e95180] max bitrate possibly too small or try trellis with large lmax or increase qmax
[mpeg1video # 02e95180] rc buffer underflow
[mpeg1video # 02e95180] max bitrate possibly too small or try trellis with large lmax or increase qmax
[mpeg1video # 02e95180] rc buffer underflowtime=00:00:01.63 bitrate= 361.1kbits/s dup=13 drop=0 speed=1.61x
[mpeg1video # 02e95180] max bitrate possibly too small or try trellis with large lmax or increase qmax
[mpeg1video # 02e95180] rc buffer underflowtime=00:00:02.13 bitrate= 368.6kbits/s dup=13 drop=0 speed= 1.4x
[mpeg1video # 02e95180] max bitrate possibly too small or try trellis with large lmax or increase qmax
[mpeg1video # 02e95180] rc buffer underflowtime=00:00:02.66 bitrate= 344.1kbits/s dup=13 drop=0 speed=1.32x
[mpeg1video # 02e95180] max bitrate possibly too small or try trellis with large lmax or increase qmax
[mpeg1video # 02e95180] rc buffer underflowtime=00:00:03.16 bitrate= 331.1kbits/s dup=13 drop=0 speed=1.25x
[mpeg1video # 02e95180] max bitrate possibly too small or try trellis with large lmax or increase qmax
[mpeg1video # 02e95180] rc buffer underflow
[mpeg1video # 02e95180] max bitrate possibly too small or try trellis with large lmax or increase qmax
frame= 117 fps= 36 q=31.0 Lsize= 156kB time=00:00:03.86 bitrate= 330.5kbits/s dup=13 drop=0 speed= 1.2x
video:118kB audio:27kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 7.659440%
Exiting normally, received signal 2.
And on my viewer, I just get a black screen.
Is there something I'm missing? Searching led to nothing on "increasing qmax" or anything similar to what ffmpeg complained about. Similar questions have been asked here, but none have been answered.
Thanks in advance

You can set qmax and qmin in the server config (with an FFM feed, the encoding parameters come from ffserver.conf rather than from the ffmpeg command line, which is why your log shows the video stream at only 64 kb/s despite -b:v 1400k):
<Stream test_3840.flv>
...
VideoQMin 1
VideoQMax 15
...
</Stream>
More details can be found in this answer: https://stackoverflow.com/a/18566361/4010173
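Applied to your tagLive.mpg stream above, that could look roughly like this (a sketch, not a verified config; the VideoBitRate value is an assumption, chosen to roughly match the -b:v 1400k passed on the ffmpeg side, since the log shows the video being encoded at the 64 kb/s ffserver default, which is far too low for 640x480 at 30 fps and is what provokes the rc buffer underflow):
<Stream tagLive.mpg>
Format mpeg
Feed feed1.ffm
VideoFrameRate 30
VideoSize 640x480
VideoBitRate 1400   # kbit/s
VideoQMin 1
VideoQMax 15
AudioSampleRate 44100
</Stream>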

Related

Ffmpeg: 4K RGB->YUV realtime conversion

I'm trying to use FFmpeg to create a real-time HEVC stream from a Decklink input. The goal is a high-quality 10-bit HDR stream.
The Decklink SDI input is fed RGB 10 bits, which is well handled by ffmpeg with the decklink option -raw_format rgb10, which gets recognized by ffmpeg as 'gbrp10le'.
I have an Nvidia Pascal-based card, which supports 10-bit yuv444 (as 'yuv444p16le'), and when using '-c:v hevc_nvenc' the auto_scaler kicks in and converts to 'yuv444p16le', which I guess is the same conversion as giving '-pix_fmt yuv444p16le'.
This works very well at 1920x1080, but at 4096x2160 ffmpeg can't keep up with real-time 24 or 25 fps, and I get input buffer overruns.
The culprit seems to be the RGB->YUV conversion in ffmpeg's swscale, because:
When piping the Decklink 4K RGB input with '-c:v copy' straight to /dev/null, there are no buffer overruns.
And when feeding the Decklink YUV and giving '-raw_format yuv422p10' (no YUV444 input seems to be available for decklink in ffmpeg), I get no overruns and everything works well in 4K, even if I set '-pix_fmt yuv444p16le'.
Any ideas how I could accomplish 4K HEVC in NVENC with the 10-bit RGB signal from the Decklink? Is there a way to make NVENC accept and use the RGB data without first converting to YUV? Or is there maybe a way to convert gbrp10le -> yuv444p16le with the cuda or scale_npp filters? I have compiled ffmpeg with npp and cuda, but I cannot figure out whether I can get it to work with RGB. Whenever I try '-vf "hwupload_cuda"', the auto_scaler kicks in and tries to convert to YUV on the CPU, which again creates overruns.
Another thing that could help is a way to make the swscale CPU filter (or another suitable filter, if there is one) use multiple threads. Right now it seems to use only one thread at a time, maxing out at 99% on my Ryzen 3950X (3.5 GHz, 32 threads).
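For reference, the two checks described above might look roughly like this (a sketch; the card name is copied from the command below, and the exact output handling is an assumption):
# 1. Capture the 4K RGB feed and discard it without encoding: no overruns,
#    so the Decklink capture itself keeps up.
ffmpeg -f decklink -raw_format rgb10 -i "Blackmagic Card 1" -c:v copy -f null -
# 2. Feed the card's YUV output instead of RGB: the expensive
#    gbrp10le -> yuv444p16le swscale conversion is avoided and the 4K NVENC encode keeps up.
ffmpeg -f decklink -raw_format yuv422p10 -i "Blackmagic Card 1" -c:v hevc_nvenc -preset medium -profile:v main10 -cbr 1 -b:v 20M -f nut - > /dev/null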
Example ffmpeg output:
$ ffmpeg -loglevel verbose -f decklink -raw_format rgb10 -i "Blackmagic Card 1" -c:v hevc_nvenc -preset medium -profile:v main10 -cbr 1 -b:v 20M -f nut - > /dev/null
--
Stream #0:1: Video: r210, 1 reference frame, gbrp10le(progressive), 4096x2160, 6635520 kb/s, 25 tbr, 1000k tbn, 1000k tbc
--
[graph 0 input from stream 0:1 # 0x4166180] w:4096 h:2160 pixfmt:gbrp10le tb:1/1000000 fr:25000/1000 sar:0/1
[auto_scaler_0 # 0x4168480] w:iw h:ih flags:'bicubic' interl:0
[format # 0x4166080] auto-inserting filter 'auto_scaler_0' between the filter 'Parsed_null_0' and the filter 'format'
[auto_scaler_0 # 0x4168480] w:4096 h:2160 fmt:gbrp10le sar:0/1 -> w:4096 h:2160 fmt:yuv444p16le sar:0/1 flags:0x4
[hevc_nvenc # 0x4139640] Loaded Nvenc version 11.0
--
Stream #0:0: Video: hevc (Rext), 1 reference frame (HEVC / 0x43564548), yuv444p16le(tv, progressive), 4096x2160 (0x0), q=2-31, 2000 kb/s, 25 fps, 51200 tbn
--
[decklink # 0x40f0900] Decklink input buffer overrun!:02.52 bitrate= 30471.3kbits/s speed=0.627x

FFmpeg changes framerate to Variable when remuxing

I'm trying to change the .MKV container to .MP4 using FFmpeg without re-encoding the video stream:
ffmpeg -i input.mkv -c copy output.mp4
The input file has Constant framerate:
Frame rate mode: Constant
Frame rate : 30.000 fps
However, the output file got a variable framerate according to Mediainfo:
Frame rate mode : Variable
Frame rate : 30.000 fps
Minimum frame rate : 29.412 fps
Maximum frame rate : 30.303 fps
Total number of frames stays the same.
The output from ffmpeg:
Output #0, mp4, to 'input.mp4':
Metadata:
encoder : Lavf56.40.101
Stream #0:0: Video: h264 ([33][0][0][0] / 0x0021), yuv420p, 2560x1440 [SAR 1:1 DAR 16:9], q=2-31, 30 fps, 30 tbr, 16k tbn, 1k tbc (default)
Metadata:
DURATION : 00:05:00.766000000
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
frame= 9023 fps=0.0 q=-1.0 Lsize= 209201kB time=00:05:00.66 bitrate=5699.9kbits/s
video:209045kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.074948%
Is it possible to keep a constant frame rate in the output? I've tried -vsync and -r but they seem to be ignored when -c copy is set.
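One way to see where the jitter comes from is to dump the stored video timestamps of both files and compare them (a sketch; Matroska stores timestamps with a default precision of 1 ms, so a 30 fps stream typically ends up with alternating ~33/34 ms steps, which matches the 29.412/30.303 fps min/max reported above and survives a -c copy remux):
# Print the first video packet timestamps of the source and the remuxed file
ffprobe -v error -select_streams v:0 -show_entries packet=pts_time -of csv=p=0 input.mkv  | head
ffprobe -v error -select_streams v:0 -show_entries packet=pts_time -of csv=p=0 output.mp4 | head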

Can't get ffmpeg to stream webcam [closed]

I have a Raspberry Pi with ffmpeg installed and a Microsoft HD-3000 webcam attached.
I run the following command:
ffserver -f /etc/ffserver.conf & ffmpeg -framerate 21 -re -f video4linux2 -i /dev/video0 -f alsa -i sysdefault:CARD=HD3000 http://localhost:8090/feed1.ffm
and I get the following:
/etc/ffserver.conf:164: Setting default value for video bit rate tolerance = 16000. Use NoDefaults to disable it.
/etc/ffserver.conf:164: Setting default value for video rate control equation = tex^qComp. Use NoDefaults to disable it.
/etc/ffserver.conf:164: Setting default value for video max rate = 6229744. Use NoDefaults to disable it.
/etc/ffserver.conf:219: Setting default value for audio sample rate = 22050. Use NoDefaults to disable it.
/etc/ffserver.conf:219: Setting default value for audio channel count = 1. Use NoDefaults to disable it.
/etc/ffserver.conf:219: Setting default value for video bit rate tolerance = 64000. Use NoDefaults to disable it.
/etc/ffserver.conf:219: Setting default value for video rate control equation = tex^qComp. Use NoDefaults to disable it.
/etc/ffserver.conf:219: Setting default value for video max rate = 6369328. Use NoDefaults to disable it.
bind(port 8090): Address already in use
Wed Nov 29 13:17:49 2017 Could not start server
[video4linux2,v4l2 # 0x1a35630] The driver changed the time per frame from 1/21 to 1/10
Input #0, video4linux2,v4l2, from '/dev/video0':
Duration: N/A, start: 12116.079136, bitrate: 147456 kb/s
Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 1280x720, 147456 kb/s, 10 fps, 10 tbr, 1000k tbn, 1000k tbc
Guessed Channel Layout for Input Stream #1.0 : stereo
Input #1, alsa, from 'sysdefault:CARD=HD3000':
Duration: N/A, start: 1511961469.424072, bitrate: 1536 kb/s
Stream #1:0: Audio: pcm_s16le, 48000 Hz, stereo, s16, 1536 kb/s
[tcp # 0x1a44160] Connection to tcp://localhost:8090 failed (Connection refused), trying next address
Wed Nov 29 13:17:49 2017 127.0.0.1 - - [GET] "/feed1.ffm HTTP/1.1" 200 4175
[tcp # 0x1a63ca0] Connection to tcp://localhost:8090 failed (Connection refused), trying next address
[mpeg1video # 0x1a6ecb0] bitrate tolerance 21333 too small for bitrate 64000, overriding
[mpeg1video # 0x1a6ecb0] MPEG-1/2 does not support 3/1 fps
Stream mapping:
Stream #1:0 -> #0:0 (pcm_s16le (native) -> mp2 (native))
Stream #0:0 -> #0:1 (rawvideo (native) -> mpeg1video (native))
Stream #1:0 -> #0:2 (pcm_s16le (native) -> wmav2 (native))
Stream #0:0 -> #0:3 (rawvideo (native) -> msmpeg4v3 (msmpeg4))
Error while opening encoder for output stream #0:1 - maybe incorrect parameters such as bit_rate, rate, width or height
Wed Nov 29 13:17:49 2017 127.0.0.1 - - [POST] "/feed1.ffm HTTP/1.1" 200 0
[2]+ Exit 1 ffserver -f /etc/ffserver.conf
I use the default /etc/ffserver.conf.
I can't seem to figure out what the problem is.
MPEG-1/2 does not support 3/1 fps
This is the error that is causing the failure. Although you set a frame rate of 21, the driver appears to be changing it (the log shows the time per frame changing from 1/21 to 1/10).
From the ffmpeg side you have two options:
Use a different encoder other than mpeg1video/mpeg2video that can support arbitrary frame rates, or
If you want to keep using mpeg1video see the -r and/or -vsync options to properly deal with the output frame rate.
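A minimal sketch of the second option (the -r value is an assumption; any frame rate MPEG-1 accepts, such as 24 or 25, should do):
# Force an output frame rate that MPEG-1 supports, regardless of what the
# v4l2 driver actually delivers (10 fps in the log above)
ffmpeg -f video4linux2 -framerate 21 -i /dev/video0 \
       -f alsa -i sysdefault:CARD=HD3000 \
       -r 25 http://localhost:8090/feed1.ffm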
Note that ffserver is being planned for removal soon from FFmpeg. You may want to find an alternative.

ffmpeg streams video to local file but doesn't stream it to remote ffserver

I'm using ffmpeg to stream video from my webcam. I managed to save video to a file on my hard drive by typing:
ffmpeg -rtbufsize 1500M -f dshow -i video="SF Camera":audio="Microphone (Realtek High Definition Audio)" test1.webm
and it works: I see the test1.webm output and can play it back later.
However, when I type:
ffmpeg -rtbufsize 1500M -f dshow -i video="SF Camera":audio="Microphone (Realtek High Definition Audio)" http://10.172.180.195:8090/feed1.ffm
I get the following error:
Input #0, dshow, from 'video=SF Camera:audio=Microphone (Realtek High Definition
Audio)':
Duration: N/A, start: 739862.012000, bitrate: N/A
Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 640x480, 30 tbr,
10000k tbn, 30 tbc
Stream #0:1: Audio: pcm_s16le, 44100 Hz, 2 channels, s16, 1411 kb/s
http://10.172.180.195:8090/feed1.ffm: Invalid data found when processing input
My ffserver.conf file looks like this:
HTTPPort 8090 # Port to bind the server to
HTTPBindAddress 0.0.0.0
MaxHTTPConnections 2000
MaxClients 1000
MaxBandwidth 10000 # Maximum bandwidth per client
# set this high enough to exceed stream bitrate
CustomLog -
NoDaemon # Remove this if you want FFserver to daemonize after start
<Feed feed1.ffm> # This is the input feed where FFmpeg will send
File ./feed1.ffm # video stream.
FileMaxSize 1G # Maximum file size for buffering video
ACL allow 10.172.180.109 # Allowed IPs
</Feed>
<Stream test.webm> # Output stream URL definition
Feed feed1.ffm # Feed from which to receive video
Format webm
# Audio settings
AudioCodec vorbis
AudioBitRate 64 # Audio bitrate
# Video settings
VideoCodec libvpx
VideoSize 720x576 # Video resolution
VideoFrameRate 25 # Video FPS
AVOptionVideo flags +global_header # Parameters passed to encoder
# (same as ffmpeg command-line parameters)
AVOptionVideo cpu-used 0
AVOptionVideo qmin 10
AVOptionVideo qmax 42
AVOptionVideo quality good
AVOptionAudio flags +global_header
PreRoll 15
StartSendOnKey
VideoBitRate 400 # Video bitrate
</Stream>
<Stream status.html> # Server status URL
Format status
# Only allow local people to get the status
ACL allow localhost
ACL allow 192.168.0.0 192.168.255.255
</Stream>
<Redirect index.html> # Just an URL redirect for index
# Redirect index.html to the appropriate site
URL http://www.ffmpeg.org/
</Redirect>
and that's my problem. Does anyone know what might be wrong?
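As a first sanity check, it may be worth confirming that ffserver is actually up and that feed1.ffm is registered before ffmpeg connects (a sketch; the port and stream name are taken from the config above, and status.html is only reachable from localhost or 192.168.x.x per its ACL):
# Run on the server machine itself
curl http://localhost:8090/status.html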

What does the fps mean in the ffmpeg output?

I am streaming a static png file with ffmpeg and it uses basically all my CPU. That seems a bit greedy to me, and even though I limited the frame rate on both the input and output side, I am seeing a huge fps value printed out.
w:\ffmpeg\bin>ffmpeg.exe -loop 1 -framerate 1 -i w:\colorbar2.png -r 10 -vcodec libx264 -pix_fmt yuv420p -r 10 -f mpegts udp://127.0.0.1:10001?pkt_size=1316
ffmpeg version N-68778-g5c7227b Copyright (c) 2000-2014 the FFmpeg developers
built on Dec 29 2014 22:12:54 with gcc 4.9.2 (GCC)
Input #0, png_pipe, from 'w:\colorbar2.png':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: png, pal8, 320x240 [SAR 3779:3779 DAR 4:3], 1 fps, 1 tbr, 1 tbn, 1 tbc
[libx264 # 00000000002fb320] using SAR=1/1
[libx264 # 00000000002fb320] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
[libx264 # 00000000002fb320] profile High, level 1.2
Output #0, mpegts, to 'udp://127.0.0.1:10001?pkt_size=1316':
Metadata:
encoder : Lavf56.16.102
Stream #0:0: Video: h264 (libx264), yuv420p, 320x240 [SAR 1:1 DAR 4:3], q=-1--1, 10 fps, 90k tbn, 10 tbc
Metadata:
encoder : Lavc56.19.100 libx264
Stream mapping:
Stream #0:0 -> #0:0 (png (native) -> h264 (libx264))
Press [q] to stop, [?] for help
frame=561310 fps=579 q=25.0 size= 144960kB time=15:35:25.80 bitrate= 21.2kbits/s dup=505179 drop=0
As you can see, the frame counter goes up quickly and fps=579 is reported on the last line. I am confused: what does that fps mean, when much lower frame rates are listed above (output 10 fps, input 1 fps)?
What am I doing wrong, and how could I reduce the CPU load further, given that it is a static file being streamed?
Thanks!
ffmpeg attempts to decode and encode as fast as it can. Just because you set the output to be 10 frames per second does not mean that it will (de|en)code realtime at 10 frames per second.
Try the -re input option. From the ffmpeg cli-tool documentation:
Read input at native frame rate. Mainly used to simulate a grab device
or live input stream (e.g. when reading from a file). Should not be
used with actual grab devices or live input streams (where it can
cause packet loss). By default ffmpeg attempts to read the input(s)
as fast as possible. This option will slow down the reading of the
input(s) to the native frame rate of the input(s). It is useful for
real-time output (e.g. live streaming).
Example:
ffmpeg.exe -re -loop 1 -framerate 10 -i w:\colorbar2.png -c:v libx264 -tune stillimage -pix_fmt yuv420p -f mpegts udp://127.0.0.1:10001?pkt_size=1316
