How to use ffmpeg for streaming mp4 via websocket

I've written a sample in Node.js which streams some input to the client over a WebSocket connection in mp4 format. On the client side, the mp4 packages are appended to a MediaSource buffer.
This runs fine, but only if the client gets the stream from the beginning, with the first package. Another client can't play the current stream, because it won't get the stream from the beginning.
I tried (trial & error) saving the first package ffmpeg sends and sending it at the start of a new connection, followed by the current stream. Then the MediaSource buffer breaks with an encoding error.
Here is the ffmpeg command:
-i someInput -g 59
-vcodec libx264 -profile:v baseline
-f mp4 -movflags empty_moov+omit_tfhd_offset+frag_keyframe+default_base_moof
-reset_timestamps 1
-
The part "empty_moov+omit_tfhd_offset+frag_keyframe+default_base_moof" should make the Streampackages independent in putting the moovatom at the beginning of each part and sizing the parts in 59 frames each by keyframe, so I don't get it why I can't view the Stream beginning after the start.

The output of that command is not a 'stream' per se. It is a series of concatenated fragments. Each fragment must be received in its entirety; if a partial fragment is received, it will confuse the parser to the point where it cannot identify the start of the next fragment. In addition, the first fragment output is called an initialization fragment. This initialization fragment must be sent to the client first; after that, any fragment can be played. Hence it must be cached by the server.
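Roughly, the server side could cache and replay that initialization fragment like this (a minimal Node.js sketch assuming the ws package; the input, port, and box handling are simplified, and the ffmpeg arguments mirror the command in the question):

const { spawn } = require('child_process');
const { WebSocketServer, WebSocket } = require('ws');

const wss = new WebSocketServer({ port: 8080 });

const ffmpeg = spawn('ffmpeg', [
  '-i', 'someInput',
  '-g', '59',
  '-vcodec', 'libx264', '-profile:v', 'baseline',
  '-f', 'mp4',
  '-movflags', 'empty_moov+omit_tfhd_offset+frag_keyframe+default_base_moof',
  '-'
]);

let pending = Buffer.alloc(0);   // bytes not yet assembled into a complete box
let initBoxes = [];              // ftyp, moov ... collected until moov arrives
let initSegment = null;          // cached initialization fragment for late joiners

ffmpeg.stdout.on('data', (chunk) => {
  pending = Buffer.concat([pending, chunk]);

  // Walk complete top-level MP4 boxes: 4-byte big-endian size + 4-byte type.
  // (64-bit "largesize" boxes are not handled in this sketch.)
  while (pending.length >= 8) {
    const size = pending.readUInt32BE(0);
    if (size < 8 || pending.length < size) break;   // wait for the rest of the box
    const box = pending.subarray(0, size);
    const type = box.toString('ascii', 4, 8);
    pending = pending.subarray(size);

    if (!initSegment) {
      initBoxes.push(box);
      if (type === 'moov') initSegment = Buffer.concat(initBoxes);
    } else {
      // moof/mdat boxes: relay to every connected client. For robustness you
      // would only admit new clients at a fragment (moof) boundary.
      for (const client of wss.clients) {
        if (client.readyState === WebSocket.OPEN) client.send(box);
      }
    }
  }
});

wss.on('connection', (socket) => {
  // A late joiner must receive the initialization fragment first.
  if (initSegment) socket.send(initSegment);
});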

Related

FFmpeg live streaming for Media Source Extensions (MSE)

I'm trying to implement live video streaming from an RTSP stream to a webpage with Media Source Extensions (MSE), using FFmpeg.
Expected system diagram.
I know this task can be done with HLS or WebRTC, but HLS has a large delay and WebRTC is very hard to implement.
I want to catch the RTSP stream with FFmpeg, split it into ISO BMFF (ISO/IEC 14496-12) chunks in "live mode", and send them to my web server over TCP, where I restream the chunks to the webpage over WebSocket. In the webpage I append each chunk to the buffer with sourceBuffer.appendBuffer(new Uint8Array(chunk)) and the video plays in streaming mode.
The problem is in the first step: with ffmpeg I can easily split the RTSP stream into segments like this
ffmpeg -i test.mp4 -map 0 -c copy -f segment -segment_time 2 -reset_timestamps 1 output_%03d.mp4
but I can't redirect the output to tcp://127.0.0.1 or pipe:1; if I understood correctly, the segment muxer doesn't work with pipes. For comparison, I can easily send video frames as JPEGs over TCP with image2pipe, catching the ff d9 bytes in the TCP stream and splitting it into JPEG images.
ffmpeg -i rtsp://127.0.0.1:8554 -f image2pipe tcp://127.0.0.1:7400
How can I split an RTSP stream into ISO BMFF chunks to send to the webpage for playback with Media Source Extensions? Or is there another way to prepare an RTSP stream with FFmpeg for playback in MSE? Maybe I haven't correctly understood how MSE works and how video must be prepared for playback.
...where I restream the chunks to the webpage over WebSocket.
You don't need WebSockets. It's easier than that. In fact, you don't need Media Source Extensions either.
Your server should stream the data from FFmpeg over a regular HTTP response. Then, you can do something like this in your web page:
<video src="https://stream.example.com/output-from-ffmpeg" preload="none"></video>
How can I split an RTSP stream into ISO BMFF chunks to send to the webpage for playback with Media Source Extensions?
You need to implement a thin server-side application to receive the data piped from FFmpeg's STDOUT and relay it to the client. I've found it easier to use WebM/Matroska for this, because you won't have to deal with the moov atom and whatnot.
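A minimal Node.js sketch of that relay, using WebM as suggested so the moov atom isn't an issue (the RTSP address, port, and encoder settings are assumptions):

const http = require('http');
const { spawn } = require('child_process');

http.createServer((req, res) => {
  if (req.url !== '/output-from-ffmpeg') {
    res.statusCode = 404;
    res.end();
    return;
  }

  res.writeHead(200, { 'Content-Type': 'video/webm' });

  // Re-encode the RTSP input to VP8/Opus in WebM and write it to STDOUT.
  const ffmpeg = spawn('ffmpeg', [
    '-i', 'rtsp://127.0.0.1:8554',            // assumption: your RTSP source
    '-c:v', 'libvpx', '-deadline', 'realtime', '-b:v', '1M',
    '-c:a', 'libopus',
    '-f', 'webm', '-'
  ]);

  // Relay FFmpeg's STDOUT straight into the HTTP response.
  ffmpeg.stdout.pipe(res);
  req.on('close', () => ffmpeg.kill('SIGKILL'));
}).listen(8000);

The video element shown above can then point at that endpoint (here, port 8000 and the /output-from-ffmpeg path are placeholders).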

Keep FFMPEG processing if input fails

I'm trying to save a stream to a video file. If the input stream goes down, FFmpeg automatically stops encoding, but I want those seconds during which the input is down to still appear in the file (as black frames or by freezing the last frame).
What I have tried:
ffmpeg -i udp://x.x.x.x:y -c:v copy output.mp4
I wonder if it is possible to keep writing the mp4 file even if the input goes down.
You need to code a special application for this.
It will take the input (re-encoding it if necessary) and output to ffmpeg.
In that application, you can check whether the source is offline or not and act accordingly.
The crucial thing here is that the PCR values must remain continuous, which is why this kind of thing is hard to do or code in general. But it can be done.
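A minimal Node.js sketch of that idea, going only as far as detecting the outage (the UDP port is an assumption; generating filler frames with continuous PCR values is the hard part and is only marked here, not implemented):

const dgram = require('dgram');
const { spawn } = require('child_process');

// ffmpeg reads the relayed MPEG-TS from STDIN instead of the UDP address.
const ffmpeg = spawn('ffmpeg', ['-f', 'mpegts', '-i', 'pipe:0', '-c:v', 'copy', 'output.mp4']);

const sock = dgram.createSocket('udp4');
let lastPacket = Date.now();

sock.on('message', (pkt) => {
  lastPacket = Date.now();
  ffmpeg.stdin.write(pkt);          // relay the TS packets unchanged
});

setInterval(() => {
  if (Date.now() - lastPacket > 2000) {
    // Source considered offline: this is where pre-encoded black/freeze frames
    // would be fed into ffmpeg.stdin, with timestamps/PCR kept continuous.
    console.warn('input stalled - filler frames would be injected here');
  }
}, 500);

sock.bind(1234);                    // assumption: the port of your udp://x.x.x.x:y source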

How to merge multiple audio/video stream

I have recordings of each side of a video call. Each side records only its own audio/video. I want to merge/sync them so that they look like a recording of the complete call.
I have the time at which each side started its recording, e.g. the first from 0, the second from 50 seconds after the first one started.
I found the ffmpeg library and am trying to do this with it. So far, I am only able to map them into a single file in which they both start at the same time. The problem is that when the input streams have different durations, the library raises a memory-related error.
ffmpeg -i 1.mp4 -i 2.mp4 -filter_complex "[0:v][1:v]hstack[t]; [0:a][1:a]amerge=inputs=2[b]" -map "[t]" -map "[b]" out.mp4
I see the following error:
Error while filtering=28.0 size= 166kB time=00:00:06.58 bitrate= 207.1kbits/s speed=3.26x
Failed to inject frame into filter network: Cannot allocate memory
Error while processing the decoded data for stream #1:1
Since both streams start at the same time, their audio/video don't sync.
Cutting the longer stream, merging it with the shorter one, and then finally concatenating doesn't look like a good option to me.
Can you please suggest how to achieve this without losing audio/video?
Thanks,
R.
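One possible way to express the 50-second offset, sketched in Node.js (the filenames and offset come from the question; tpad pads the later video with black frames and adelay pads its audio with silence so both sides start together):

const { spawn } = require('child_process');

const offsetSec = 50;                 // the second side started 50 s after the first
const offsetMs = offsetSec * 1000;

const filter =
  `[1:v]tpad=start_duration=${offsetSec}[v1];` +    // black frames until side 2 joins
  `[1:a]adelay=${offsetMs}|${offsetMs}[a1];` +      // matching silence (one value per channel)
  `[0:v][v1]hstack=inputs=2[t];` +
  // amerge stops at the shortest input; amix=inputs=2:duration=longest is an
  // alternative if the padded durations still differ.
  `[0:a][a1]amerge=inputs=2[b]`;

spawn('ffmpeg', [
  '-i', '1.mp4', '-i', '2.mp4',
  '-filter_complex', filter,
  '-map', '[t]', '-map', '[b]', '-ac', '2',
  'out.mp4'
], { stdio: 'inherit' });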

ffmpeg - switch rtmp streams into a single encoded output?

I have 1-5 input streams, each uploading at a slightly different time offset.
With rtmp and ffmpeg, I can reliably encode a single stream into an HLS playlist that plays seamlessly on iOS, my target delivery platform.
I know that you can accept multiple input streams into ffmpeg, and I want to switch between the input streams to create a consistent, single, seamless output.
So I want to switch between
rtmp://localhost/live/stream1 .. rtmp://localhost/live/stream5 on a regular interval. Sometimes there will be multiple streams, and sometimes there won't.
Is there any way for ffmpeg to rotate between input streams while generating an HLS playlist? My goal is to avoid running duplicate instances of ffmpeg for server cost reasons, and I think connecting disparately encoded input streams for playback would be difficult if not impossible.
Switching on each segment is the ideal behavior, but I also need to keep the streams in time sync. Is this possible?
Switching live stream inputs can cause delays due to the initial connection time and buffering (rtmp_buffer).
There's no straightforward way to do it with ffmpeg. Being an open source project, you can add the functionality yourself. It shouldn't be very complicated if all your inputs share the same codecs, number of tracks, frame sizes, etc.
Some people have suggested using other software to do the switching, such as MLT, or using filters such as zmq (ZeroMQ) to make ffmpeg accept commands.
One way to do it would be to re-stream the sources as mpegts on a local port and use the local address as the input of the command that outputs the HLS:
Stream switcher (60 s of each stream, one at a time) - you can make a script with your own logic; this is for illustrative purposes:
ffmpeg -re -i rtmp://.../stream1 -t 60 -f mpegts udp://127.0.0.1:10000
ffmpeg -re -i rtmp://.../stream2 -t 60 -f mpegts udp://127.0.0.1:10000
[...]
ffmpeg -re -i rtmp://.../stream5 -t 60 -f mpegts udp://127.0.0.1:10000
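For example, the switcher loop could be scripted in Node.js like this (stream URLs as in the question; each source is pushed to the local UDP address for 60 s, then the next one takes over):

const { spawn } = require('child_process');

const sources = [
  'rtmp://localhost/live/stream1',
  'rtmp://localhost/live/stream2',
  'rtmp://localhost/live/stream3',
  'rtmp://localhost/live/stream4',
  'rtmp://localhost/live/stream5'
];

let i = 0;

function pushNext() {
  const src = sources[i % sources.length];
  i += 1;
  // Mirrors the commands above: read in real time, stop after 60 s, send
  // MPEG-TS to the local port the HLS command listens on.
  const ff = spawn('ffmpeg', [
    '-re', '-i', src,
    '-t', '60',
    '-f', 'mpegts', 'udp://127.0.0.1:10000'
  ], { stdio: 'inherit' });
  ff.on('exit', pushNext);            // switch to the next source when this one ends
}

pushNext();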
Use the local address as the source for the HLS stream - it'll wait for input if there's none and fix your DTS/PTS, but you will probably introduce some delay when switching:
ffmpeg -re -i udp://127.0.0.1:10000 /path/to/playlist.m3u8

Extract frames as images from an RTMP stream in real-time

I am streaming short videos (4 or 5 seconds) encoded in H264 at 15 fps in VGA quality from different clients to a server using RTMP, which produces an FLV file. I need to analyse the frames from the video as images as soon as possible, so I need the frames to be written as PNG images as they are received.
Currently I use Wowza to receive the streams, and I have tried using the transcoder API to access the individual frames and write them to PNGs. This partially works, but there is about a second of delay before the transcoder starts processing, and when the stream ends Wowza flushes its buffers, causing the last second not to get transcoded, which means I can lose the last 25% of the video frames. I have tried to find a workaround, but Wowza say that it is not possible to prevent the buffer from getting flushed. It is also not the ideal solution because of the 1 second delay before I start getting frames, and because I have to re-encode the video when using the transcoder, which is computationally expensive and unnecessary for my needs.
I have also tried piping a video in real-time to FFmpeg and getting it to produce the PNG images but unfortunately it waits until it receives the entire video before producing the PNG frames.
How can I extract all of the frames from the stream as close to real-time as possible? I don’t mind what language or technology is used as long as it can run on a Linux server. I would be happy to use FFmpeg if I can find a way to get it to write the images while it is still receiving the video or even Wowza if I can find a way not to lose frames and not to re-encode.
Thanks for any help or suggestions.
Since you linked this question from the red5 user list, I'll add my two cents. You can certainly grab the video frames on the server side, but the issue you'll run into is transcoding from h.264 into PNG. The easiest way would be to use ffmpeg / avconv after getting the VideoData object. Here is a post that gives some details about getting the VideoData: http://red5.5842.n7.nabble.com/Snapshot-Image-from-VideoData-td44603.html
Another option is on the player side using one of Dan Rossi's FlowPlayer plugins: http://flowplayer.electroteque.org/snapshot
I finally found a way to do this with FFmpeg. The trick was to disable audio, use a different FLV metadata analyser, and reduce the duration FFmpeg waits before it starts processing. My FFmpeg command now starts like this:
ffmpeg -an -flv_metadata 1 -analyzeduration 1 ...
This starts producing frames within a second of receiving input from the pipe, so it writes the streamed frames pretty close to real-time.
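For reference, a minimal Node.js sketch of that setup (the file stream stands in for the live pipe, and the frames directory is assumed to exist):

const fs = require('fs');
const { spawn } = require('child_process');

// The flags before -i are the ones from the command above; image2 writes each
// decoded frame to a numbered PNG as soon as it is available.
const ffmpeg = spawn('ffmpeg', [
  '-an', '-flv_metadata', '1', '-analyzeduration', '1',
  '-i', 'pipe:0',
  '-f', 'image2', 'frames/frame-%04d.png'
]);

// Replace this file stream with whatever delivers the live FLV bytes.
fs.createReadStream('input.flv').pipe(ffmpeg.stdin);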
