I am very new to ffmpeg, and I want to ask about ffmpeg -f sdl: what does "force format" mean, and what is sdl?
I'm asking because of what happens when I try
ffmpeg -i udp://192.168.1.124:12345 -f h264 "test"
ffmpeg -i udp://192.168.1.124:12345 -f sdl "test"
If I use -f h264, it seems to just receive the stream and save it to the file "test".
However, if I use -f sdl, it does not store anything to a file; instead, it brings up a video player window and starts playing the video directly.
This behavior doesn't seem to have anything to do with forcing a format (-f), though?
I couldn't find the documentation on https://ffmpeg.org/ffmpeg-formats.html. Can someone help with this? Thanks.
FFmpeg can write data to structured containers (AVI/MP4/MKV..), raw streams (H264/M4V..), or devices (Decklink/SDL..). All of these are specified using -f. For file-based output, the format is usually inferred from the extension, so -f can be omitted.
SDL2 is the library used by ffplay to render textures to the display and also to catch keyboard/mouse input events. If ffmpeg was compiled along with ffplay, SDL2 is linked into the ffmpeg binary as well. But with ffmpeg, only video is rendered.
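For example, a minimal sketch (input.mp4 and the window title are placeholder names):
ffmpeg -i input.mp4 -f sdl "SDL output"
This decodes input.mp4 and renders it in an SDL window titled "SDL output" instead of writing a file, which matches what you observed. The sdl output device is documented under https://ffmpeg.org/ffmpeg-devices.html rather than ffmpeg-formats.html, which is why you couldn't find it there.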
Related
I'm trying to download a live stream (not a file) coming from a live camera feed available at the following website: http://www.dot.ca.gov/video/.
I used Wireshark to sniff the TCP packets and was able to extract the RTMP parameters, but I wasn't able to use them with FFmpeg/VLC to download or play the stream (I guess I didn't construct the URL correctly).
For example, for the camera feed available here, I got the following parameters:
swfUrl: http://www.dot.ca.gov/research/its/StrobeMediaPlayback.swf
pageUrl: http://www.dot.ca.gov/d4/d4cameras/ct-cam-pop-N17_at_Saratoga_Rd.html
tcUrl: rtmp://wzmedia.dot.ca.gov:1935/D4
Play: E37_at_Lakeville_Rd.stream
Is there a chance someone is familiar with this and can help with understanding how I can use the above for downloading the stream?
Thanks a lot! Yaniv
ffmpeg -re -i "rtmp://wzmedia.dot.ca.gov:1935/D4" -acodec copy -vcodec libx264 -f flv -y ~/save_stream.flv
"-i " means infile and "-y" means overwrite output files.
you can use ffmpeg -h to see it.
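If you also want to use the play path from the sniffed parameters, the usual RTMP URL construction is rtmp://host:port/app/playpath, so something like this may work for the camera in the question (untested against this server):
ffmpeg -i "rtmp://wzmedia.dot.ca.gov:1935/D4/E37_at_Lakeville_Rd.stream" -acodec copy -vcodec copy -f flv -y ~/save_stream.flv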
How my stream is working right now:
Input:
A switcher program that captures the camera and screenshots and composes different layouts. One of the windows from this software is the one used as the input in the ffmpeg command line.
Output:
- Facebook (example)
- Youtube (example)
At the beginning, I thought it might be better to create two separate ffmpeg processes to stream independently to each output. The problem was that it uses too much CPU.
The answer to that was to encode once and copy the result to the different outputs. OK, great, it solves the CPU problem, but what if one of the outputs fails? Both fail.
I'm trying to encode once for two outputs so that if one of the outputs becomes unavailable, the other keeps going.
Anybody have any idea to solve it?
Thanks!
I found the solution following what @LordNeckbeard said.
Here is sample code to:
Save a local file
Stream to your server
Stream to Facebook server
Each stream is independent of the others and will try to recover itself every second if something goes wrong. If the internet connection drops, it keeps saving locally and resumes when access comes back; if a destination server is not available yet, it restarts that stream when the server comes back:
-i ... -f tee "[onfail=ignore]'C:\Users\blablabla.mp4'|
[f=fifo:fifo_format=flv:drop_pkts_on_overflow=1:attempt_recovery=1:recovery_wait_time=1]rtmp://yourServer...|
[f=fifo:fifo_format=flv:drop_pkts_on_overflow=1:attempt_recovery=1:recovery_wait_time=1]rtmp://facebook..."
An example using the tee muxer with the onfail option that also outputs a local file:
ffmpeg -i input -map 0 -c:v libx264 -c:a aac -maxrate 1000k -bufsize 2000k -g 50 -f tee "[f=flv:onfail=ignore]rtmp://facebook|[f=flv:onfail=ignore]rtmp://youtube|[f=segment:strftime=1:segment_time=60]local_%F_%H-%M-%S.mkv"
Also see:
FFmpeg Documentation: tee muxer
FFmpeg Documentation: segment muxer
FFmpeg Wiki: Encoding for Streaming Sites
I have some h264 video in an MPEG transport stream, and I suspect at certain points in the video it switches from 1080i/50Hz to 1080p/25Hz. I'd like to prove that using some video analysis tool. Can ffmpeg (or similar) print out such detailed decoding info? I've tried ffmpeg with "-loglevel debug", but it prints no more info about the actual decoding.
ffprobe is a far easier solution and is included with FFmpeg:
$ ffprobe -show_frames -i input.mp4
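If you only care about the interlacing information, you can (assuming an ffprobe recent enough to support -show_entries) limit the output to the relevant per-frame fields:
$ ffprobe -select_streams v -show_entries frame=width,height,interlaced_frame,top_field_first -i input.ts
A switch from 1080i/50Hz to 1080p/25Hz should show up as interlaced_frame flipping from 1 to 0.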
Sorted it. In the end I put a few printfs into the ffmpeg source to get the info I needed.
ffmpeg handles RTMP streaming as input or output, and it's working well.
I want to stream some videos (a dynamic playlist managed by a Python script) to an RTMP server, and I'm currently doing something quite simple: streaming my videos one by one with FFmpeg to the RTMP server. However, this causes a connection break every time a video ends, and the stream only comes back up when the next video begins.
I would like to stream these videos continuously, without any connection breaks, so the stream can be viewed correctly.
I use this command to stream my videos one by one to the server:
ffmpeg -re -y -i myvideo.mp4 -vcodec libx264 -b:v 600k -r 25 -s 640x360 \
-filter:v yadif -ab 64k -ac 1 -ar 44100 -f flv \
"rtmp://mystreamingserver/app/streamName"
I have looked for workarounds on the internet for many days, and I found some people talking about using a named pipe as input to ffmpeg. I tried it, and it didn't work well, since ffmpeg not only closes the RTMP stream when a new video comes, but also exits itself.
Is there any way to do this (stream a dynamic playlist of videos with ffmpeg to an RTMP server without connection breaks)?
Update (as I can't delete the accepted answer): the proper solution is to implement a custom demuxer, similar to the concat one. There's currently no other clean way. You have to get your hands dirty and code!
Below is an ugly hack. This is a very bad way to do it, just don't!
The solution uses the concat demuxer and assumes all your source media files use the same codec. The example is based on MPEG-TS but the same can be done for RTMP.
Make a playlist file holding a huge list of entry points for your dynamic playlist, with the following format:
file 'item_1.ts'
file 'item_2.ts'
file 'item_3.ts'
[...]
file 'item_[ENOUGH_FOR_A_LIFETIME].ts'
These files are just placeholders.
Make a script that keeps track of your current playlist index and creates symbolic links on-the-fly for current_index + 1, as in the sketch after these examples:
ln -s /path/to/what/to/play/next.ts item_1.ts
ln -s /path/to/what/to/play/next.ts item_2.ts
ln -s /path/to/what/to/play/next.ts item_3.ts
[...]
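A minimal sketch of such a script (the index file location and the pick_next_clip helper are hypothetical placeholders for your own playlist logic):
#!/bin/bash
# rotate_playlist.sh - keep the next symlink one step ahead of playback
index_file=/tmp/playlist_index
index=$(cat "$index_file" 2>/dev/null || echo 0)
next=$((index + 1))
# pick_next_clip is a placeholder: it should print the path of the clip to play next
ln -sfn "$(pick_next_clip)" "item_${next}.ts"
echo "$next" > "$index_file"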
Start playing:
ffmpeg -f concat -i playlist.txt -c copy -f mpegts udp://<ip>:<port>
Get chased and called names by an angry system administrator
You need to create two playlist files, and at the end of each file, specify a link to the other file.
list_1.txt
ffconcat version 1.0
file 'item_1.mp4'
file 'list_2.txt'
list_2.txt
ffconcat version 1.0
file 'item_2.mp4'
file 'list_1.txt'
Now all you need is to dynamically change the contents of the next playlist file.
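A minimal sketch of that update step in shell (the script name and arguments are hypothetical; file names follow the example above):
#!/bin/bash
# update_next.sh <list_to_rewrite> <next_clip>
# e.g. while list_1.txt is playing: ./update_next.sh list_2.txt item_3.mp4
other_list=$([ "$1" = "list_1.txt" ] && echo list_2.txt || echo list_1.txt)
cat > "$1" <<EOF
ffconcat version 1.0
file '$2'
file '$other_list'
EOF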
You can pipe your loop to a buffer, and from this buffer you pipe to your streaming instance.
In shell it would look like:
#!/bin/bash
for i in *.mp4; do
ffmpeg -hide_banner -nostats -i "$i" -c:v mpeg2video \
[proper settings] -f mpegts -
done | mbuffer -q -c -m 20000k | ffmpeg -hide_banner \
-nostats -re -fflags +igndts \
-thread_queue_size 512 -i pipe:0 -fflags +genpts \
[proper codec setting] -f flv rtmp://127.0.0.1/live/stream
Of course, you can use any kind of loop, including looping through a playlist.
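For instance, a hedged sketch of the producer side reading clip paths from a playlist file (playlist.txt is an assumed name, one path per line; [proper settings] as above, consumer side unchanged):
while read -r clip; do
  ffmpeg -hide_banner -nostats -i "$clip" -c:v mpeg2video \
  [proper settings] -f mpegts -
done < playlist.txt | mbuffer -q -c -m 20000k | ffmpeg ...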
I found that mpeg2video is a bit more stable than x264 for the intermediate stream.
I don't know why, but a minimum of 2 threads for the mpeg2video compression works better.
The intermediate encoding needs to be faster than the output frame rate, so that new input arrives quickly enough.
Because the timestamps are not continuous across clips, we have to ignore them (+igndts) and generate new ones (+genpts) for the output.
The buffer size needs to be big enough to give the loop enough time to fetch the new clip.
Here is a Rust-based solution that uses this technique: ffplayout
It uses a JSON playlist format. The playlist is dynamic, in the sense that you can always edit the current playlist to change tracks or add new ones.
Very late answer, but I recently ran into the exact same issue as the poster above.
I solved this problem by using OBS and the OBS websockets plugin.
First, set up your RTMP streaming app as you have it now, but stream to a LOCAL RTMP server.
Then have OBS load this stream as a VLC source layer, with the local RTMP stream as the source.
Then (in your app), using the OBS websockets plugin, have your VLC source switch to a static black video or PNG file when a video ends, and switch back to the RTMP stream once the next video starts. This prevents the RTMP stream from stopping when a video ends. OBS will go black during the short transition, but the final OBS RTMP output will never stop.
There is surely a way to do this by manually setting up an intermediate RTMP server that pushes to a final RTMP server, but I find using OBS easier, with little overhead.
I hope this helps others; this solution has been working incredibly well for me.
I've got a Sony network camera (SNC-RZ25N) that I am trying desperately to get data from in some meaningful format. The documentation says it sends MPEG-4 raw data, but is not more specific than that. I can capture a segment of the stream using curl ( http://techhead.biz/media/tsv.m4v ) and it will play using VLC and ffplay (though it plays too fast in ffplay).
After a day and a half of tinkering, I just discovered that I cannot use ffmpeg to convert this stream directly. For one, the only way ffmpeg accepts piped data as input (that I'm aware of) is in the 'yuv4mpegpipe' format.
I tried piping to ffmpeg using 'm4v' as the specified format, but it seems to want to read the entire stream before it begins processing.
Anyone know how I can do this? Using commandline tools? Open source libraries in ANY programming language? Simpler solutions are preferred, but any working solution would be great.
It appears mplayer can play your m4v file over HTTP, and at least with your sample file this works:
mkfifo /tmp/fifo
mplayer -benchmark -vo yuv4mpeg:file=/tmp/fifo http://techhead.biz/media/tsv.m4v
ffmpeg -f yuv4mpegpipe -i /tmp/fifo -vcodec libx264 -vpre libx264-hq /tmp/foo.mp4
(-benchmark tells mplayer to ignore frame duration, might or might not be needed)
Alternatively, with just mencoder:
mencoder -o /tmp/foo.avi -of avi -ovc x264 -x264encopts bitrate=250 http://techhead.biz/media/tsv.m4v
Finally, if you don't actually need H.264, you could just put the existing MPEG-ES data in whatever container format you need; MP4Box might be able to do this, and ffmpeg and mencoder can if they support the output format.
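For example, a hedged MP4Box remux sketch (assuming the capture really is a plain MPEG-4 elementary stream; the frame rate is a guess, since a raw stream carries no container timing, which may also explain the too-fast playback in ffplay):
MP4Box -add tsv.m4v -fps 25 tsv.mp4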