FFmpeg live streaming for Media Source Extensions (MSE)

I am trying to implement live video streaming from an RTSP stream to a web page with Media Source Extensions (MSE), using FFmpeg.
Expected system diagram: RTSP stream → FFmpeg → TCP → web server → WebSocket → web page (MSE).
I know this task can be done with HLS or WebRTC, but HLS has high latency and WebRTC is very hard to implement.
I want to capture the RTSP stream with FFmpeg, split it into ISO BMFF (ISO/IEC 14496-12) chunks in "live mode", and send them over TCP to my web server, which relays the chunks to the web page over a WebSocket. On the web page I append each chunk to the source buffer with sourceBuffer.appendBuffer(new Uint8Array(chunk)) and the video plays in streaming mode.
The problem is in the first step with FFmpeg. I can easily split an RTSP stream into segments like this:
ffmpeg -i test.mp4 -map 0 -c copy -f segment -segment_time 2 -reset_timestamps 1 output_%03d.mp4
but I can't redirect the output to tcp://127.0.0.1 or pipe:1; if I understand correctly, the segment muxer does not work with pipes. For comparison, I can easily send video frames as JPEGs over TCP with image2pipe, catching the ff d9 end-of-image bytes in the TCP stream and splitting it into JPEG images:
ffmpeg -i rtsp://127.0.0.1:8554 -f image2pipe tcp://127.0.0.1:7400
How can I split an RTSP stream into ISO BMFF chunks and send them to a web page for playback with Media Source Extensions? Or is there another way to prepare an RTSP stream with FFmpeg for playback via MSE? Maybe I don't correctly understand how MSE works and how to prepare video for it.

...which relays the chunks to the web page over a WebSocket.
You don't need WebSockets. It's easier than that. In fact, you don't need Media Source Extensions either.
Your server should stream the data from FFmpeg over a regular HTTP response. Then, you can do something like this in your web page:
<video src="https://stream.example.com/output-from-ffmpeg" preload="none"></video>
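On the FFmpeg side, a rough sketch of what could feed that HTTP response (untested, using the placeholder RTSP URL from the question): remux the camera stream to Matroska on STDOUT and have your server copy those bytes straight into the response body:
ffmpeg -i rtsp://127.0.0.1:8554 -c copy -f matroska pipe:1
Matroska has no finalization step, so it can be written to a pipe. Whether -c copy is enough depends on the browser's codec support; you may need to re-encode (e.g. to VP8/VP9) for some browsers.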
How can I split an RTSP stream into ISO BMFF chunks and send them to a web page for playback with Media Source Extensions?
You need to implement a thin application server-side to receive the data piped from FFmpeg's STDOUT, and then relay it to the client. I've found it easier to use WebM/Matroska for this, because you won't have to deal with the moov atom and whatnot.
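If you do want to stay with MSE and ISO BMFF, here is a hedged sketch (again untested, with the question's placeholder URL): the mp4 muxer can write fragmented MP4 to a pipe when told not to produce a seekable file, which sidesteps the problem you hit with -f segment:
ffmpeg -i rtsp://127.0.0.1:8554 -c copy -f mp4 -movflags frag_keyframe+empty_moov+default_base_moof pipe:1
With empty_moov the moov atom is written up front, and each subsequent moof/mdat pair is a self-contained fragment that you can relay over your WebSocket and pass to appendBuffer, provided the SourceBuffer was created with a matching codec string.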

Related

Convert m3u8 (HLS) to mpd (MPEG-DASH)

I have a live HLS stream [https://82-80-192-30.vidnt.com/ipbc_IPBCchannel11LVMRepeat/definst/IPBCchannel11LVM_3.stream/playlist.m3u8] and I want to convert it to MPEG-DASH.
What is the best practice?
The stream is already H.264/AAC, therefore I understand I do not need to re-encode; I just need to transmux.
What should I use?
ffmpeg? mp4box?
Notes:
I used nginx-rtmp-module (https://github.com/ut0mt8/nginx-rtmp-module/) to create DASH from an RTMP stream according to this tutorial: https://isrv.pw/html5-live-streaming-with-mpeg-dash
But nginx-rtmp-module accepts only RTMP streams as input, so it did not work for me with an HLS stream.
I used ffmpeg to create DASH from the m3u8 as follows:
ffmpeg -i https://82-80-192-30.vidnt.com/ipbc_IPBCchannel11LVMRepeat/_definst_/IPBCchannel11LVM_3.stream/playlist.m3u8 -strict -2 -min_seg_duration 2000 -window_size 5 -extra_window_size 5 -use_template 1 -use_timeline 1 -f dash out.mpd
But this is very limited: I can't control the segment duration. The min_seg_duration parameter of ffmpeg does not work very well for me, and it also only sets the minimum duration, while I want to limit the maximum duration of each segment (segments come out at ~10 seconds, while I need ~2-4 seconds since I'm playing live).
Firstly it is worth saying that if you can avoid doing this you will be saving yourself a whole lot of work!
Most devices and clients these days can play both HLS and DASH streams, so the usual approach is to add any extra functionality needed in your app or client.
If you do have to convert server side, then it's worth being aware that while HLS streams typically used TS segments in the past, support for fragmented MP4 has recently become available within the HLS ecosystem.
If you have TS video streams then you will need to do a conversion along the lines you outline above with ffmpeg.
If you have fragmented MP4 then you should already have the correct format, and you may find you just have to create the manifest file so DASH can access the fragmented MP4 streams.
All the above assumes that your content is not encrypted, or that you don't have to support encryption. If it is encrypted, you may not be able to convert the media, or you may have to encrypt the media differently for some streams than for others, as most currently deployed Windows and Chrome devices and browsers use a slightly different encryption approach (a different AES mode) than Apple devices.
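On the segment-duration complaint specifically: in newer FFmpeg builds the dash muxer replaced min_seg_duration with seg_duration (given in seconds), and since segments can only be cut on keyframes, forcing a matching keyframe interval tends to keep segments close to the target. A hedged, untested sketch along those lines:
ffmpeg -i https://82-80-192-30.vidnt.com/ipbc_IPBCchannel11LVMRepeat/_definst_/IPBCchannel11LVM_3.stream/playlist.m3u8 -strict -2 -force_key_frames "expr:gte(t,n_forced*2)" -seg_duration 2 -window_size 5 -extra_window_size 5 -use_template 1 -use_timeline 1 -f dash out.mpd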

Extract frames as images from an RTMP stream in real-time

I am streaming short videos (4 or 5 seconds), encoded in H.264 at 15 fps in VGA quality, from different clients to a server using RTMP, which produces an FLV file. I need to analyse the frames from the video as images as soon as possible, so I need the frames to be written as PNG images as they are received.
Currently I use Wowza to receive the streams, and I have tried using the transcoder API to access the individual frames and write them to PNGs. This partially works, but there is about a one-second delay before the transcoder starts processing, and when the stream ends Wowza flushes its buffers, causing the last second not to get transcoded, meaning I can lose the last 25% of the video frames. I have tried to find a workaround, but Wowza say that it is not possible to prevent the buffer from getting flushed. It is also not the ideal solution: there is a one-second delay before I start getting frames, and I have to re-encode the video when using the transcoder, which is computationally expensive and unnecessary for my needs.
I have also tried piping a video in real-time to FFmpeg and getting it to produce the PNG images but unfortunately it waits until it receives the entire video before producing the PNG frames.
How can I extract all of the frames from the stream as close to real-time as possible? I don’t mind what language or technology is used as long as it can run on a Linux server. I would be happy to use FFmpeg if I can find a way to get it to write the images while it is still receiving the video or even Wowza if I can find a way not to lose frames and not to re-encode.
Thanks for any help or suggestions.
Since you linked this question from the Red5 user list, I'll add my two cents. You may certainly grab the video frames on the server side, but the issue you'll run into is transcoding from H.264 into PNG. The easiest way would be to use ffmpeg / avconv after getting the VideoData object. Here is a post that gives some details about getting the VideoData: http://red5.5842.n7.nabble.com/Snapshot-Image-from-VideoData-td44603.html
Another option is on the player side using one of Dan Rossi's FlowPlayer plugins: http://flowplayer.electroteque.org/snapshot
I finally found a way to do this with FFmpeg. The trick was to disable audio, use a different FLV metadata analyser, and reduce the duration that FFmpeg waits before processing. My FFmpeg command now starts like this:
ffmpeg -an -flv_metadata 1 -analyzeduration 1 ...
This starts producing frames within a second of receiving input from a pipe, so it writes the streamed frames pretty close to real-time.
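For reference, a fuller sketch of what such a command might look like end to end (the pipe input and the output pattern are my assumptions, not part of the original answer):
ffmpeg -an -flv_metadata 1 -analyzeduration 1 -f flv -i pipe:0 -f image2 frame_%04d.png
Here -an drops the audio, -flv_metadata 1 tells the FLV demuxer to allocate streams from the onMetaData array instead of probing the stream, and -analyzeduration 1 (in microseconds) keeps the input analysis window as short as possible so frames are emitted almost immediately.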

I cannot use AV_CODEC_ID_MPEG2TS in ffmpeg

I want to use FFmpeg to convert a raw YUV video file into a TS stream video file. So I do this in my code:
avcodec_find_encoder(AV_CODEC_ID_MPEG2TS);
But when I run it ,it occurs that:
[NULL @ 0x8832020] No codec provided to avcodec_open2()
If I change AV_CODEC_ID_MPEG2TS to AV_CODEC_ID_MPEG2VIDEO, it works well and generates an mpg file that plays well too. So I want to ask: why can't I use AV_CODEC_ID_MPEG2TS?
I'm also looking into streaming a file with FFmpeg, so I'm not sure about this, but it is what I understand:
MPEG-TS (Transport Stream) is not a codec; it is an encapsulation (container) method. So you have to encode the stream with some codec (I'm not sure if you can choose any codec) and then you can encapsulate it with MPEG-TS before transmitting it over the network.
If you don't need to transmit the stream over the network, maybe you don't need MPEG-TS.
I hope this is helpful!
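As a hedged illustration of that split on the command line (the frame size and rate are assumptions, since raw YUV carries no header): mpeg2video is the codec, while MPEG-TS is selected as the output container format:
ffmpeg -f rawvideo -pix_fmt yuv420p -s 640x480 -r 25 -i input.yuv -c:v mpeg2video -f mpegts out.ts
The split is the same in the C API: avcodec_find_encoder(AV_CODEC_ID_MPEG2VIDEO) selects the codec, while avformat_alloc_output_context2(&oc, NULL, "mpegts", "out.ts") selects the container.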
Look here: ffmpeg doxygen

FFmpeg live stream overlay issues: when any one of the streams is lost, the other streams get stuck

What we have done so far:
We have a video chat client which has a set of 9 video streams (users) with the H.264 codec, using Adobe FMS. Now, using FFmpeg, we are able to combine these streams into one stream using the overlay (video) and amix (audio) filters. We are able to send the single combined stream to a live streaming service. The stream of the active speaker is shown at a bigger size using FFmpeg's scale filter.
The command is as follows:
ffmpeg -i "rtmp://localhost/live/mystream" -i "rtmp://localhost/live/mystream2 " -i "rtmp://localhost/live/mystream3 "-filter_complex"nullsrc=size=300x300 [b1];[0:v] setpts=PTS-STARTPTS,scale=100x100 [s1];[1:v] setpts=PTS-STARTPTS,scale=200x200 [s2];[2:v]setpts=PTS-STARTPTS,scale=100x100 [s3];[b1][s1] overlay=shortest=1 [b1+s1];[b1+s1][s2] overlay=shortest=1 [b1+s2];
[b1+s2][s3] overlay=shortest=1:x=100" out.mp4
We need help with the following two major issues; any help would be appreciated.
1. Whenever the active speaker changes, the stream of that user should be shown at a bigger size. Is this possible without restarting the FFmpeg process?
2. Right now, if one of the 9 streams stops, the FFmpeg process crashes.

Save Live Video Stream To Local Storage

Problem:
I have to save live video stream data which comes as RTP packets from an RTSP server.
The data comes in two formats: MPEG-4 and H.264.
I do not want to encode/decode the input stream, just write it to a file which is playable with the proper codecs.
Any advice?
Best wishes
History:
My solutions and their problems:
First attempt: FFmpeg
I used the FFmpeg library to get the audio and video RTP packets. But in order to write the packets I have to use av_write_frame, which seems to involve decoding/encoding. Also, when I set the output format to MP4 (av_guess_format("mp4", NULL, NULL)), the output file is unplayable.
[Anyway, FFmpeg has poor documentation; it is hard to find out what is wrong.]
Second attempt: DirectShow
Then I decided to use DirectShow. I found an RTSP source filter, then a mux and a file writer, and created a single graph:
RTSP Source --> MPEG MUX --> File Writer
It worked, but the problem is that the output file is not playable if the graph is not stopped; if something happens, for example the graph crashes, the output file is not playable.
Also, I am able to write H.264 data, but the video is completely unplayable.
The MP4 file format has an index that is required for correct playback, and the index can only be created once you've finished recording. So any solution using MP4 container files (and other indexed files) is going to suffer from the same problem. You need to stop the recording to finalise the file, or it will not be playable.
One solution that might help is to break the graph up into two parts, so that you can keep recording to a new file while stopping the current one. There's an example of this at www.gdcl.co.uk/gmfbridge.
If you try the GDCL MP4 multiplexor, and you are having problems with H264 streams, see the related question GDCL Mpeg-4 Multiplexor Problem
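Coming back to the FFmpeg attempt, a hedged workaround sketch (the RTSP URL is a placeholder): choose a container that needs no finalization step, so the file stays playable even if recording dies part-way. MPEG-TS is the classic choice, and fragmented MP4 achieves the same within an MP4 container:
ffmpeg -rtsp_transport tcp -i rtsp://camera.example/stream -c copy -f mpegts out.ts
ffmpeg -rtsp_transport tcp -i rtsp://camera.example/stream -c copy -movflags frag_keyframe+empty_moov out.mp4
Both keep -c copy, so nothing is decoded or re-encoded, which matches the original requirement.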
