I'm working on a project that requires taking RTSP links from YouTube and using ffmpeg to stream those videos to an RTMP server. My solution works, but it has some issues.
I'm using these settings:
-max_delay 0 -initial_pause 0 -rtsp_transport udp -i " + inputLink + " -vcodec libx264 -acodec mp3 -ab 48k -bit_rate 450 -r 25 -s 640x480 -f flv " + stream
inputLink is replaced with the RTSP link, and stream with the RTMP server URL.
This works, but here are the issues I'm having:
At the beginning of each video there is a big lag spike with lots of dropped frames, and then the video resyncs and plays normally.
Some videos crash ffmpeg with a "Conversion failed" message, with many frames dropped during the conversion/stream.
Right near the end of each video it starts lagging and dropping frames; in other words, no video ends normally, every one finishes by lagging out and dropping frames.
I've been struggling for a long time just to get this working, and now that I finally have, I just need to perfect it by taking care of those issues. If anyone has useful information about the rtsp_transport protocol and how to make it stream without issues, I would greatly appreciate it. Thanks!
You got some settings wrong.
-bit_rate 450: you asked for 450 bits per second; it's no wonder it drops a lot of frames! It should be 450k.
If you want a 450 kbps stream then use -ab 48k -vb 402k, where 402 = 450 - 48.
The flv format only supports certain audio sample rates. You also need to use -ar with one of the following values: 44100, 22050 or 11025.
ffmpeg -i rtsp://... -c:v libx264 -c:a mp3 -ab 48k -ar 44100 -vb 402k -r 25 -s 640x480 -f flv test.flv
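For the original RTMP use case, the same corrected settings would look something like this (the rtsp:// input and rtmp:// output URLs are placeholders for inputLink and stream):
ffmpeg -rtsp_transport udp -i rtsp://example/input -c:v libx264 -c:a mp3 -ab 48k -ar 44100 -vb 402k -r 25 -s 640x480 -f flv rtmp://example/live/streamkey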
I am trying to re-stream an MJPEG stream over DASH using ffmpeg.
I have an ESP32 camera module that outputs an MJPEG livestream at 192.168.2.128:81/stream (Arduino code here).
I can open this stream directly in the browser and see the video in realtime, but the camera only allows a single client at a time, while I need a multi-client solution.
What doesn't work
A solution I am currently trying to implement is to use a separate server (a Raspberry Pi) running Apache and ffmpeg to re-stream the MJPEG content using DASH:
ffmpeg -re -i http://192.168.2.128:81/stream -strict -2 -an -c:v copy -b:v 2000k -f dash -window_size 4 -extra_window_size 0 -min_seg_duration 2000000 -remove_at_exit 1 /var/www/html/out.mpd
I get no errors when executing this command on the server.
I then use this ffmpeg-dash.html to display the video in the browser.
This unfortunately fails; in Firefox the console reports the error:
[72][Stream] No streams to play.
followed by:
Cannot play media. No decoders for requested formats: video/mp4;codecs="mp4v.6c";width="640";height="480"
What does work
What is puzzling me is that the above code works fine if I replace the MJPEG livestream url with a sample .mkv file, so if I use
ffmpeg -re -i /var/www/html/video.mkv -strict -2 -an -c:v copy -b:v 2000k -f dash -window_size 4 -extra_window_size 0 -min_seg_duration 2000000 -remove_at_exit 1 /var/www/html/out.mpd
I can view the livestreamed sample video (video.mkv) without problems using the previously mentioned ffmpeg-dash.html file.
Furthermore, it seems that ffmpeg can read the MJPEG livestream without problems, since
ffmpeg -t 10 -i http://192.168.2.128:81/stream -filter:v fps=15 -c:v flv test.flv
successfully returns a 10-second clip of the livestream.
So to me it seems that the problem lies in how I combine the two. What am I missing? Is it even possible to stream MJPEG content over MPEG-DASH?
(I am new to this, sorry in advance for my ignorance)
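A note on the failing command: with -c:v copy the MJPEG bitstream is copied into the DASH segments unchanged, and the "No decoders for requested formats" error suggests the browser simply cannot decode MJPEG inside MP4. A hedged, untested sketch of a transcoding variant (assuming libx264 is available on the Pi):
ffmpeg -re -i http://192.168.2.128:81/stream -an -c:v libx264 -pix_fmt yuv420p -b:v 2000k -f dash -window_size 4 -extra_window_size 0 -min_seg_duration 2000000 -remove_at_exit 1 /var/www/html/out.mpd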
I am trying to make a reliable stream from my Icecast/Shoutcast servers to YouTube Live. The command that I use is:
ffmpeg -v verbose -framerate 30 -loop 1 -i /var/image.jpg -re -i http://127.0.0.1:4700/radio -c:v libx264 -preset ultrafast -b:v 2250k -maxrate 6000k -bufsize 6000k -c:a copy -ab 128k -s 1920x1080 -framerate 30 -g 60 -keyint_min 60 -f flv rtmp://a.rtmp.youtube.com/live2/xxx
As you can see, I am using the recommended bitrate for YouTube, inserting a keyframe every 2 seconds (-g 60 at 30 fps), and streaming at 30 frames per second.
The stream works, but after running for some time two things happen:
FFmpeg's speed falls from 1x to something like 0.998x
YouTube starts complaining that the video stream speed is slow, marks the quality as bad, and sometimes the video starts buffering.
Why is this happening? CPU load is normal and connectivity is fine (the stream runs on a dedicated server with a 1 Gb/s link).
Since in my example above I am streaming a single image as the stream's logo, I also tried generating a short 30-second video from that image and broadcasting that video instead, but that did not help either.
The command I used for conversion:
ffmpeg -framerate 30 -loop 1 -i /var/image.jpg -c:v libx264 -preset ultrafast -tune stillimage -b:v 2250k -minrate 2250k -maxrate 6000k -bufsize 6000k -framerate 30 -g 60 -keyint_min 60 -t 30 out4.mp4
And broadcast with
ffmpeg -stream_loop -1 -i out4.mp4 -re -i http://127.0.0.1:4700/radio -c:v copy -c:a copy -framerate 30 -g 60 -keyint_min 60 -f flv rtmp://a.rtmp.youtube.com/live2/xxx
The ffmpeg version is 4.1.1.
Are you sure that your original stream is really keeping up with the wall-clock?
Depending on how it's encoded, it can get heavily skewed. This ultimately leads to buffer underruns (or overruns if it's too fast) and to the player complaining/skipping.
Can you try to dump several hours' worth of the stream to a file and then stream that file with FFmpeg? If that works, it's a strong indication that your original stream's timing (sample rate) is off.
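A hedged sketch of that test, assuming the mount serves MP3 (-c copy avoids re-encoding, so the dump preserves the original stream's timing):
ffmpeg -i http://127.0.0.1:4700/radio -c copy -t 04:00:00 /tmp/radio_dump.mp3
Then substitute /tmp/radio_dump.mp3 for the http input in the streaming command and see whether YouTube still complains.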
Getting the sample rate right is why professional/expensive sound cards use high-precision quartz-crystal-controlled oscillators. Purely virtual processing (e.g. files being encoded into a stream) can easily get skewed, especially inside virtual machines. Cheap USB sound cards are also among the worst offenders in terms of frequency accuracy and stability.
FFmpeg might have an option to deal with too slow input. Keywords could be 'padding' or 'missing samples'.
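One candidate I know of (verify against your FFmpeg version) is the aresample filter's async option, which stretches or trims audio to match its timestamps and fills gaps with silence. Note that it requires re-encoding the audio, so -c:a copy would have to become something like:
-af aresample=async=1 -c:a aac -b:a 128k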
YouTube's "...buffer..." error is not a buffer issue on your PC; it simply means the data you are sending to YouTube is too small.
1) Note that [-preset ultrafast] and [-preset fast] do not make a big difference.
2) Change your broadcast command: raise [-b:v 2250k] to [-b:v 15000k], and set the fps to 12 with the [-r 12] option.
It's gonna be:
ffmpeg -stream_loop -1 -i out4.mp4 -re -i http://127.0.0.1:4700/radio -preset fast -r 12 -framerate 30 -g 60 -video_track_timescale 1000 -b:v 15000k -f flv rtmp://a.rtmp.youtube.com/live2/xxx
I hope this will be good for you !!(^v^)Y
Hi everyone.
I'm trying to use FFmpeg to record video and 3 audio sources and generate 3 different video files: each file should contain the same video stream but a different audio stream. The problem is that I get audio-sync issues. The first audio stream is synced perfectly, but the second one lags by 1 second and the third by about 2 seconds.
I've made a few tests so far, and it seems the root cause is the initialization time of the video/audio devices: one device is already recording while the next is still being opened, and so on. I tried changing the input device order; the audio streams still had the same issue, but where the 2nd and 3rd audio streams were previously some time ahead of the video, after reordering they began to lag behind it (audio for the same event appears with some delay). So this test confirms my theory about device initialization times.
But the question remains: why is the first audio stream synchronized properly while the other 2 are not? And how could I overcome this issue? Any workarounds and ideas are highly appreciated.
Here is the FFmpeg command I'm using and its output.
ffmpeg.exe -f dshow -video_size 1920x1080 -i video="Logitech HD Webcam C615"
-f dshow -i audio="Microphone (HD Webcam C615)"
-f dshow -i audio="Microphone Array (Realtek High Definition Audio)"
-filter_complex "[1:a]volume=1[a1];[2:a]volume=1[a2]"
-vf scale=h=1080:force_original_aspect_ratio=decrease
-vcodec libx264 -pix_fmt yuv420p -crf 23 -preset ultrafast
-acodec aac -vbr 5 -threads 0 -map v:0 -map [a1] -map [a2]
-f tee "[select=\'v,a:0\']C:/Users/vshevchu/Desktop/123/111/111_jjj1.avi|[select=\'v,a:1\']C:/Users/vshevchu/Desktop/123/111/111_jjj2.avi"
OUTPUT
PS: Actually, the issue is exactly the same when I'm not using the "tee" muxer but writing all the audio streams to one container. So "tee" isn't a suspect.
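If the initialization-delay theory holds, one hedged experiment is -itsoffset, which shifts the timestamps of the input file that follows it. The offsets below are placeholders whose sign and magnitude would have to be measured, and -itsoffset is known to be unreliable with some dshow inputs:
ffmpeg.exe -f dshow -video_size 1920x1080 -i video="Logitech HD Webcam C615"
-itsoffset -1.0 -f dshow -i audio="Microphone (HD Webcam C615)"
-itsoffset -2.0 -f dshow -i audio="Microphone Array (Realtek High Definition Audio)"
(rest of the command unchanged)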
I am trying to implement HLS using FFmpeg for transcoding + segmenting but have been facing a couple of issues that have been bugging me for the past week.
Issue
The web server currently receives live MP4 fragments being recorded on-the-go and needs to take care of transcoding and segmentation.
As MP4 fragments are received, they need to be encoded and then segmented. If I run a segmenter (be it ffmpeg or Apple's mediastreamsegmenter), every MP4 fragment is treated as a VOD by itself, and I'm not able to integrate them as part of a larger live-event implementation.
I thought of a solution where, every time I receive an MP4 fragment, I first use ffmpeg to concatenate it with the previous ones to form the larger MP4 that I then pass on to be segmented for HLS. That did not work either, because the entire stream has to be re-segmented every time and the existing TS fragments replaced by new ones that are similar yet shifted in time.
Implementation 1
ffmpeg -re -i fragmentX.mp4 -b:v 118k -b:a 32k -vcodec copy -preset:v veryfast -acodec aac -strict -2 -ac 2 -f mpegts -y fragmentX.ts
I manage the m3u8 manifest on my own, deleting old fragments and appending new ones.
When validating the stream, I find it stacked with EXT-X-DISCONTINUITY tags, making the stream unwatchable.
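For comparison, a hand-managed live manifest should look roughly like this (a sketch; segment names and durations are placeholders), with a single advancing EXT-X-MEDIA-SEQUENCE and no discontinuity tags between segments:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:2
#EXT-X-MEDIA-SEQUENCE:120
#EXTINF:2.000,
fragment120.ts
#EXTINF:2.000,
fragment121.ts
#EXTINF:2.000,
fragment122.ts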
Implementation 2
First, combine the latest fragment with overall.mp4:
ffmpeg -i "concat:newfragment.mp4|existing.mp4" -c copy overall.mp4
Then pass the combination to ffmpeg for HLS segmentation:
ffmpeg -re -i overall.mp4 -ac 2 -r 20 -vcodec libx264 -b:v 318k -preset:v veryfast -acodec aac -strict -2 -b:a 32k -hls_time 2 -hls_list_size 3 -hls_allow_cache 0 -hls_base_url /Users/JosephKalash/Desktop/test/350/ -hls_segment_filename '350/fragment%03d.ts' -hls_flags delete_segments 350/index.m3u8
Concatenation is not perfect and there are noticeable glitches where the fragments are supposed to be stitched together. Segmentation replaces older fragments, and the manifest is rewritten as if it were a brand-new HLS stream every time ffmpeg is called.
I cannot figure out how to get this to work properly.
Any ideas?
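One hedged observation on Implementation 2 that may explain the stitching glitches: the concat: protocol only works for formats that can be concatenated at the byte level (such as MPEG-TS), not for MP4. For MP4, the concat demuxer is the documented route, roughly:
echo "file 'existing.mp4'" > list.txt
echo "file 'newfragment.mp4'" >> list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy overall.mp4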
Solved by relying on the nginx-rtmp module, which turned out to be well suited to the above implementation.
I am trying to support the recording of webcam video on our website, which I then need to transcode to MP4 and WebM to support HTML5 playback. I have ffmpeg 1.2 installed on our server, and have the whole process running fairly well.
The one problem I do have, though, is transcoding FLV to MP4: it is unacceptably slow, e.g. an 8-second FLV takes about 2.5 minutes to transcode!
The ffmpeg command I am using is:
ffmpeg -y -i webcam.flv -c:a libfaac -ac 2 -b:a 64k -ar 44100 -c:v libx264 \
-b:v 350k webcam.mp4
There are so many ffmpeg params, I am a bit lost as to the best way forward with this issue. You can download a test flv from here:
dropbox.com/s/hhd6uhdiuhk800w/webcam.flv
By comparison, transcoding to WebM takes about 5 seconds:
ffmpeg -y -i webcam.flv -c:a libvorbis -ac 2 -b:a 64k -ar 44100 -c:v libvpx \
-b:v 350k -metadata:s:v:0 rotate=0 webcam.webm
OK, I found the answer. I had a closer look at the ffmpeg output and noticed:
[mp4 @ 0xa0060c0] Frame rate very high for a muxer not efficiently supporting it.
Please consider specifying a lower framerate, a different muxer or -vsync 2
Doh. So I added "-vsync 2" as the last parameter before the output file and it worked like a charm, taking the transcoding time down to about 10 secs! Very happy.
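For reference, the full command with the fix applied:
ffmpeg -y -i webcam.flv -c:a libfaac -ac 2 -b:a 64k -ar 44100 -c:v libx264 \
-b:v 350k -vsync 2 webcam.mp4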
Working out "generalised" ffmpeg settings for all types of A/V input still seems like black magic to me...