I'm having a hard time finding a simple solution to showcase the SRT streaming protocol with FFmpeg. The one article I've found jumps through multiple hoops to set up a stream. Is there no way to do a simple sender/receiver setup like in the old days with UDP?
Sender:
ffmpeg -i myfile.mp4 -vcodec libx264 -crf 12 -f mpegts udp://192.168.1.5:1234
Receiver:
ffplay udp://192.168.1.5:1234
Your ffmpeg needs to be compiled with --enable-libsrt to support the SRT protocol. See the output of ffmpeg -protocols to determine if it supports SRT.
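For example, a quick check on a Unix-like shell:
# prints the srt entry if this build supports the protocol
ffmpeg -protocols 2>/dev/null | grep srt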
Untested examples:
# stream copy
ffmpeg -re -i input.mp4 -c copy -f mpegts srt://192.168.1.5:1234
# re-encode
ffmpeg -re -i input.mp4 -c:v libx264 -b:v 4000k -maxrate 4000k -bufsize 8000k -g 50 -f mpegts srt://192.168.1.5:1234
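A matching receiver, analogous to the old UDP ffplay line, could look like this (also untested; the senders above default to caller mode, so the receiving side has to listen):
# run this on 192.168.1.5 before starting the sender
ffplay 'srt://192.168.1.5:1234?mode=listener'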
See FFmpeg Protocols Documentation: SRT.
I also miss comprehensive documentation and explanation of how to use SRT. However, here is a minimal example I use:
Sender which acts as a listener and waits for connections:
ffmpeg -i test.mp4 -c:v libx264 -f mpegts 'srt://:40052?mode=listener&latency=20000000'
Receiver which acts as a caller:
ffmpeg -i 'srt://192.168.1.45:40052?mode=caller' -c copy output.mkv
Dear experts of the wonderful ffmpeg utility! Please tell me, if anyone knows:
I want to make a 24/7 stream on YouTube of music from looped video and audio tracks.
I do it like this:
ffmpeg -loglevel info -stream_loop -1 -y -re \
-i video.mp4 \
-f concat -safe 0 -i playlist.txt \
-c:v libx264 -preset veryfast -b:v 3000k -maxrate 3000k -bufsize 6000k \
-framerate 25 -video_size 1280x720 -vf "format=yuv420p" -g 50 -shortest -strict experimental \
-c:a aac -b:a 128k -ar 44100 \
-f flv rtmp://localhost/live/my-stream
i.e. video.mp4 loops continuously while the MP3s from playlist.txt play one after another.
With this, everything is OK and everything works. But I also want to show the title of the currently playing track.
As on some YouTube radio streams, for example. With cover art it would be perfect!
Any ideas how this can be implemented?
I know that it is possible to display text through drawtext. You can output text from a file, which you can update yourself separately. But how do I get the data of the currently playing file? ffmpeg does not give out such information, only stream parameters: fps, frame rate, and so on. Or is it still possible to get it somehow?
Or are there better and easier ways?
Thanks in advance for your help!
You can use ffprobe to extract metadata from the files.
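A minimal sketch, assuming a helper script of yours tracks which file is current (current.mp3 and title.txt are placeholder names):
# read the title tag of the current track into a text file
ffprobe -v quiet -show_entries format_tags=title -of default=noprint_wrappers=1:nokey=1 current.mp3 > title.txt
drawtext can then re-read that file on every frame; extend the -vf chain of your streaming command along these lines:
-vf "format=yuv420p,drawtext=textfile=title.txt:reload=1:x=20:y=20:fontsize=24:fontcolor=white"
Two caveats: with reload=1 the file should be replaced atomically (write to a temp file, then rename), and builds without fontconfig also need an explicit fontfile= option.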
ffmpeg -f avfoundation -i "1:0" -vf "crop=1920:1080:0:0" -pix_fmt yuv420p -y -r 30 -c:a aac -b:a 128k -f flv rtmp://RTMP_SERVER:RTMP_PORT/STREAM_KEY
Hello guys, the above command works pretty well. It records the computer's audio/video. But what I want to do is feed in a repeating video or image (png/jpeg/gif), so that there is no live video from the computer, just that image on the stream together with the audio.
How would you go about doing this?
Also, if you know any programming interfaces that can do this same thing, please give suggestions. Because I would rather not use a CLI.
I think you should be able to achieve this by using -loop and some -map options. I can't test with avfoundation myself, but something like this works for me:
ffmpeg -loop 1 -i image.png -i file_to_take_audio_from.mp4 -vf "scale=1920:1080" -pix_fmt yuv420p -r 30 -c:a aac -b:a 128k -map 0:v -map 1:a -shortest output.mp4
Replace -i file_to_take_audio_from.mp4 with -f avfoundation -i "1:0" and output.mp4 with -f flv rtmp://RTMP_SERVER:RTMP_PORT/STREAM_KEY.
Also you might be able to skip -vf if the image has correct resolution.
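Substituted in full, that would look something like this (untested, with RTMP_SERVER, RTMP_PORT, and STREAM_KEY as placeholders as above):
ffmpeg -loop 1 -i image.png -f avfoundation -i "1:0" -vf "scale=1920:1080" -pix_fmt yuv420p -r 30 -c:a aac -b:a 128k -map 0:v -map 1:a -f flv rtmp://RTMP_SERVER:RTMP_PORT/STREAM_KEY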
Hope that helps!
Use none or no value at all (:0) for the video device index and provide a secondary input:
ffmpeg -f avfoundation -i :0 -i image.png ...
There's a loop option for images such as animated GIFs and -stream_loop for input streams.
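Put together, an untested sketch (image.png is a placeholder; audio is captured from device 0, the looped image becomes the video track):
ffmpeg -f avfoundation -i :0 -loop 1 -i image.png -map 1:v -map 0:a -c:v libx264 -pix_fmt yuv420p -r 30 -c:a aac -b:a 128k -f flv rtmp://RTMP_SERVER:RTMP_PORT/STREAM_KEY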
You can use the FFmpeg APIs directly instead of CLI.
I want to capture video+audio from a DirectShow device (a webcam) and stream it to an RTMP server. That part is no problem. But I also want to be able to see a preview of it. After a lot of searching, someone suggested piping the output to ffplay with the tee muxer, but I couldn't make it work. Here is my command for streaming to the RTMP server; how should I change it?
ffmpeg -rtbufsize 8196k -framerate 25 -f dshow -i video="Microsoft® LifeCam Studio(TM)":audio="Desktop Microphone (Microsoft® LifeCam Studio(TM))" -vcodec libx264 -acodec aac -strict -2 -b:v 1024k -b:a 128k -ar 48000 -s 720x576 -f flv "rtmp://ip-address-of-my-server/live/out"
Here is the final command I used, and it works.
ffmpeg -rtbufsize 8196k -framerate 25 -f dshow -i video="Microsoft® LifeCam Studio(TM)":audio="Desktop Microphone (Microsoft® LifeCam Studio(TM))" -vcodec libx264 -acodec aac -strict -2 -f tee -map 0:v -map 0:a "[f=flv]rtmp://ip-address-and-path|[f=nut]pipe:" | ffplay pipe:
The core command for those running ffmpeg on a Unix-compatible system (e.g. macOS, BSD, GNU/Linux) is really quite simple: redirect, or "pipe", one of ffmpeg's outputs to ffplay. The main problem here is that ffmpeg cannot autodetect the media format (or container) if the output doesn't have a recognizable file extension such as .avi or .mkv.
Therefore you should specify the format with the option -f. You can list the available choices for option -f with the ffmpeg -formats command.
In the following GNU/Linux command example, we record from an input source named /dev/video0 (possibly a webcam). The input source can also be a regular file.
ffmpeg -i /dev/video0 -f matroska - filename.mkv | ffplay -i -
A less ambiguous way of writing this for non-Unix users would be to use the special output specifier pipe.
ffmpeg -i /dev/video0 -f matroska pipe:1 filename.mkv | ffplay -i pipe:0
The above commands should be enough to produce a preview. But to make sure that you get the video and audio quality you want, you also need to specify, among other things, the audio and video codecs.
ffmpeg -i /dev/video0 -c:v copy -c:a copy -f matroska - filename.mkv | ffplay -i -
If you choose a slow codec like Google's AV1, you'd still get a preview, but one that stutters.
I am having trouble with ffmpeg.
I receive an RTSP stream from a grabbing device (a camera) and restream it to RTMP (YouTube Live).
I want to keep a copy of the stream on my computer, so I write to a local file at the same time.
I use this command :
ffmpeg -y -i 'RTSP_SOURCE' -c:v copy -c:a libvo_aacenc -map 0:v -bsf:v dump_extra -fflags +genpts -flags +global_header -movflags +faststart \
-map_metadata 0 -metadata title= -f tee -filter_complex aevalsrc=0 '[f=mp4]/tmp/backup.mp4|[f=mpegts]/tmp/backup.ts|[f=flv]rtmp://a.rtmp.youtube.com/live2/STREAM_ID'
The problem is that when there are disconnections, ffmpeg exits and stops recording.
Is there any flag or option that tells ffmpeg to continue recording to the local files even when there is no internet?
Thank you very much for your help =)
You can try:
ffmpeg -f tee "[onfail=ignore] ...
More description is available here.
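Applied to the command above, a sketch (onfail=ignore goes on the slave output that may fail, here the YouTube one):
ffmpeg -y -i 'RTSP_SOURCE' -c:v copy -c:a libvo_aacenc -f tee -map 0 '[f=mp4]/tmp/backup.mp4|[f=mpegts]/tmp/backup.ts|[f=flv:onfail=ignore]rtmp://a.rtmp.youtube.com/live2/STREAM_ID'
With onfail=ignore, a failure on that slave only closes that output; the local recordings keep going.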
I am using ffmpeg to convert any avi/wmv videos to flv.
My trouble is that the resulting flv quality is quite poor: it shows big pixelated blocks.
I tried some -b parameters, with no good results:
ffmpeg -i 1268459654.wmv -ar 22050 -ab 32 -f flv -s 640x480 x.flv
ffmpeg -i 1268459654.wmv -ar 22050 -ab 32 -f flv -s 640x480 -b 500k x.f4v
I also tried
ffmpeg -i 1268459654.wmv -vcodec libx264 -s 360x240 x.mp4
and got: "Unknown encoder 'libx264'"
Any solution for that?
libx264 does not come pre-installed (licensing issues, I believe) if you've installed ffmpeg via yum/RPM. You'll need to download the source and compile it yourself with libx264 enabled. Here's a command line I've used in the past with decent results; personally, I would also consider the MP4 container over the dated FLV format.
ffmpeg -i (file) -acodec libfaac -ab 44k -vcodec libx264 -vpre normal -crf 30 -threads 0 output.mp4
Make note of the "-vpre normal", as you should have some presets available under:
/usr/share/ffmpeg/libx264-normal.ffpreset or similar.
More details on compiling from source.
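For reference, the relevant build step is usually something like this (a sketch; it assumes the libx264 development headers are already installed, and the exact configure flags vary by distro):
./configure --enable-gpl --enable-libx264
make
make install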