I have 1-5 input streams, each uploading on a slightly different time offset.
With rtmp and ffmpeg, I can reliably encode a single stream into an HLS playlist that plays seamlessly on iOS, my target delivery platform.
I know that you can accept multiple input streams into ffmpeg, and I want to switch between the input streams to create a consistent, single, seamless output.
So I want to switch between
rtmp://localhost/live/stream1 .. rtmp://localhost/live/stream5 on a regular interval. Sometimes there will be multiple streams, and sometimes there won't.
Is there any way for ffmpeg to rotate between input streams while generating an HLS playlist? My goal is to avoid running duplicate instances of ffmpeg for server cost reasons, and I think connecting disparately encoded input streams for playback would be difficult if not impossible.
Switching on each segment is the ideal behavior, but I also need to keep the streams in time sync. Is this possible?
Switching live stream inputs can cause delays due to the initial connection time and buffering (rtmp_buffer).
There's no straightforward way to do it with ffmpeg. Being an open source project, you could add the functionality yourself; it shouldn't be very complicated if all your inputs share the same codecs, number of tracks, frame sizes, etc.
Some people have suggested using other software to do the switching, such as MLT, or using filters such as zmq (ZeroMQ) to make ffmpeg accept commands at runtime.
One way to do it would be to re-stream the sources as MPEG-TS on a local port and use the local address as the input of the command that outputs the HLS:
Stream switcher (60 s of each stream, one at a time) - you can make a script with your own logic; these commands are for illustrative purposes, and a sketch of such a script follows them:
ffmpeg -re -i rtmp://.../stream1 -t 60 -f mpegts udp://127.0.0.1:10000
ffmpeg -re -i rtmp://.../stream2 -t 60 -f mpegts udp://127.0.0.1:10000
[...]
ffmpeg -re -i rtmp://.../stream5 -t 60 -f mpegts udp://127.0.0.1:10000
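A minimal sketch of such a switcher script, assuming all five streams stay reachable and share the same codecs so that -c copy is safe (real logic would need to probe for and skip dead streams):
#!/bin/bash
# Rotate through the five inputs forever, pushing 60 seconds of each
# to the local MPEG-TS relay port.
while true; do
  for i in 1 2 3 4 5; do
    ffmpeg -re -i "rtmp://localhost/live/stream$i" -t 60 -c copy -f mpegts udp://127.0.0.1:10000
  done
done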
Use the local address as the source for the HLS stream - ffmpeg will wait for input if there is none and fix up your DTS/PTS, but you will probably introduce some delay on each switch:
ffmpeg -re -i udp://127.0.0.1:10000 /path/to/playlist.m3u8
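A hedged variation of that last command with explicit HLS options (the values are assumptions to tune for your latency needs; keep -c copy only if the switcher already delivers HLS-compatible codecs such as H.264/AAC, otherwise drop it to re-encode):
ffmpeg -re -i udp://127.0.0.1:10000 -c copy -f hls -hls_time 4 -hls_list_size 6 -hls_flags delete_segments /path/to/playlist.m3u8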
I am making a datamoshing program in C++, and I need to find a way to remove one frame from a video (specifically, the p-frame right after a sequence jump) without re-encoding the video. I am currently using H.264 but would like to be able to do this with VP9 and AV1 as well.
I have one way of going about it, but it doesn't work for one frustrating reason (mentioned later). I can turn the original video into two intermediate videos - one with just the i-frame before the sequence jump, and one with the p-frame that was two frames later. I then create a concat.txt file with the following contents:
file video.mkv
file video1.mkv
And run ffmpeg -y -f concat -i concat.txt -c copy output.mp4. This produces the expected output, although it is of course not as efficient as I would like, since it requires creating intermediate files and reading the .txt file from disk (performance is very important in this project).
But worse yet, I couldn't generate the intermediate videos with ffmpeg; I had to use Avidemux. I tried all sorts of variations on ffmpeg -y -ss 00:00:00 -i video.mp4 -t 0.04 -codec copy video.mkv, but that command really bugs out on videos only 1-2 frames long, while it works on longer videos without a problem. My best guess is that there is some internal check to ensure the output video is not corrupt (which, unfortunately, is exactly what I want it to be!).
Maybe there's a way to do it this way that gets around that problem, or better yet, a more elegant solution to the problem in the first place.
Thanks!
If you know the PTS, byte offset, or packet index of the target frame, then you can use the noise bitstream filter. This is codec-agnostic.
ffmpeg -copyts -i input -c copy -enc_time_base -1 -bsf:v:0 noise=drop=eq(pos\,11291) out
This will drop the packet from the first video stream stored at offset 11291 in the input file. See other available variables at http://www.ffmpeg.org/ffmpeg-bitstream-filters.html#noise
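For example, an assumed variant that drops by packet index instead of byte offset (n is the packet-index variable listed among the documented expression variables; the index 42 here is arbitrary):
ffmpeg -copyts -i input -c copy -enc_time_base -1 -bsf:v:0 noise=drop=eq(n\,42) out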
I have a live HLS stream [https://82-80-192-30.vidnt.com/ipbc_IPBCchannel11LVMRepeat/definst/IPBCchannel11LVM_3.stream/playlist.m3u8] and I want to convert it to MPEG-DASH.
What is the best practice?
The stream is already H.264/AAC, therefore I understand I do not need to re-encode and I just need to transmux.
What should I use?
ffmpeg? mp4box?
Notes:
I used nginx-rtmp-module (https://github.com/ut0mt8/nginx-rtmp-module/) in order to create DASH from RTMP stream according to this tutorial: https://isrv.pw/html5-live-streaming-with-mpeg-dash
But nginx-rtmp-module accepts only RTMP streams as input, and it did not work for me with an HLS stream.
I used ffmpeg in order to create DASH from the m3u8 as follows:
ffmpeg -i https://82-80-192-30.vidnt.com/ipbc_IPBCchannel11LVMRepeat/_definst_/IPBCchannel11LVM_3.stream/playlist.m3u8 -strict -2 -min_seg_duration 2000 -window_size 5 -extra_window_size 5 -use_template 1 -use_timeline 1 -f dash out.mpd
But this is very limited: I can't control the segment duration. The min_seg_duration parameter of ffmpeg does not work well for me, and in any case it sets the minimum duration, while I want to cap the maximum duration of each segment (segments come out at ~10 seconds, while I need them at ~2-4 seconds, as I'm playing live).
Firstly it is worth saying that if you can avoid doing this you will be saving yourself a whole lot of work!
Most devices and clients these days can play both HLS and DASH streams, so the usual approach is to add any extra functionality needed in your app or client.
If you do have to convert server side, then it's worth being aware that while HLS streams typically used TS segments in the past, support for fragmented MP4 has recently become available within the HLS ecosystem.
If you have TS video streams then you will need to do a conversion along the lines you outline above with ffmpeg.
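A sketch of such a conversion that also addresses the segment-duration complaint above, assuming a reasonably recent ffmpeg in which the dash muxer's seg_duration option (in seconds) has replaced min_seg_duration; note that with -c copy the segment cuts still fall on keyframes, so the source GOP length puts a floor on how short the segments can be:
ffmpeg -i https://82-80-192-30.vidnt.com/ipbc_IPBCchannel11LVMRepeat/_definst_/IPBCchannel11LVM_3.stream/playlist.m3u8 -c copy -seg_duration 4 -window_size 5 -extra_window_size 5 -use_template 1 -use_timeline 1 -f dash out.mpd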
If you have fragmented MP4 then you should actually have the correct format already and may find you just have to create the manifest file so DASH can access the fragmented mp4 streams.
All the above assumes that your content is not encrypted, or that you don't have to support encryption. If it is encrypted, you may not be able to convert the media at all, or you may have to encrypt the media differently for some streams than for others, as most currently deployed Windows and Chrome devices and browsers use a slightly different encryption approach (a different AES mode) than Apple devices.
I am attempting to use ffmpeg to record an HLS livestream described by input.m3u8. input.m3u8 contains a number of different bitrate streams: input_01.m3u8, input_02.m3u8, ..., which contain the actual MPEG-TS segmented video files. Frequently the number and quality of the available streams vary. I am trying to make this an automated process so that my co-workers can use it, but I need ffmpeg to always select the best available stream from the input.m3u8 file. Can anybody point me in the right direction on this?
Currently I use:
ffmpeg -n -i "http://path/input_0x.m3u8" -c copy "%path%\%FileName%"
where %path% and %FileName% are defined by the batch file calling ffmpeg, and I manually look up the best bitrate stream.
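One avenue I have not verified: ffmpeg's documentation says its default stream selection picks the video stream with the highest resolution, so pointing it at the master playlist might remove the manual lookup - a sketch (the URL is a placeholder):
ffmpeg -n -i "http://path/input.m3u8" -c copy "%path%\%FileName%"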
Can someone tell me what server-side technology (perhaps ffmpeg) one could use in order to:
1) display this full-screen live-streaming video:
http://aolhdshls-lh.akamaihd.net/i/gould_1#134793/master.m3u8
2) and overlay it in the lower-right corner with a live video coming from a webRTC video-chat stream?
3) and send that combined stream into a new m3u8 live-stream
4) Note that it needs to be a server-side solution - I cannot launch multiple video players in this case (the resulting stream has to be passed to smart TVs, which only have one video decoder at a time)
The closest example I've found so far is this article:
https://trac.ffmpeg.org/wiki/Create%20a%20mosaic%20out%20of%20several%20input%20videos
Which isn't really live, nor is it really doing overlays.
Any advice is greatly appreciated.
Let me clarify what you want in this case:
Input video is HLS streaming plus a webRTC feed: what about delay? Is latency an important thing in your work?
Overlaying one video on another: this requires decoding the input video, filtering it, and encoding it again, so it needs a lot of CPU resources - even more if the input video is 1080p.
Re-packaging as a new HLS stream: you must set quite a few encoding options to make sure the TS fragments work well; the most important are the GOP size and the TS duration.
You also need a web server to serve the m3u8 index file; you can use nginx or Apache.
What I give you in this answer is an ffmpeg command line which overlays an image on the input HLS stream and re-makes the TS segments.
The following command line will do what you want in steps 1 to 3:
ffmpeg \
-re -i "http://aolhdshls-lh.akamaihd.net/i/gould_1#134793/master.m3u8" \
-i "[OVERLAY_IMAGE].png" \
-filter_complex "[0:v][1:v]overlay=main_w-overlay_w:main_h-overlay_h[output]" \
-map "[output]" -map 0:a -c:v libx264 -c:a aac -strict -2 \
-f ssegment -segment_list out.list out%03d.ts
This is a basic command line that overlays the image on your input HLS stream, then creates the TS segments and the index file (the overlay expression pins the image to the lower-right corner).
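If the lower-right overlay has to be the live webRTC feed itself rather than a static image, a hedged sketch, assuming the chat feed has been re-published at an RTMP address (rtmp://localhost/live/chat is hypothetical, and the scale and 10-pixel margins are arbitrary choices):
ffmpeg \
-re -i "http://aolhdshls-lh.akamaihd.net/i/gould_1#134793/master.m3u8" \
-i "rtmp://localhost/live/chat" \
-filter_complex "[1:v]scale=320:-1[pip];[0:v][pip]overlay=main_w-overlay_w-10:main_h-overlay_h-10[output]" \
-map "[output]" -map 0:a -c:v libx264 -g 50 -c:a aac -strict -2 \
-f ssegment -segment_time 4 -segment_list out.list out%03d.ts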
I don't have much more experience with HLS, so it may work without any tuning options, but you should probably tune it for your use case; you will also have to search a little for a web server to serve the m3u8, but that won't be hard.
The GOP size (-g) and the segment duration (-segment_time) will, as I said, be the key points of your tuning.
I'm using FFmpeg to stream raw PCM data from an internet radio stream, which I then run through some processing.
FFmpeg buffers around 10 seconds before sending any output to stdout, and I've been trying to get it to send data at more frequent intervals, so I can process it in smaller chunks.
I've looked at various FFmpeg command line options, but could not find one that will decrease the internal buffering used.
Looking at the various format options, I've tried -fflags nobuffer and -avioflags direct on both input and output, to no avail.
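For reference, a minimal sketch of the kind of invocation I mean (the URL and the PCM parameters are placeholders):
ffmpeg -fflags nobuffer -avioflags direct -i "http://example.com/radio" -f s16le -ar 44100 -ac 2 pipe:1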
Thanks.