ffmpeg: how to mix 2 RTP streams that individually start and stop randomly?

I want to do this:
ffmpeg listens to 2 incoming RTP streams and continuously sends them, mixed together, to a single outgoing RTP stream.
However, the 2 incoming streams start and stop randomly and independently of each other, because they carry live audio: they don't send silence, they stop sending when there is silence and start again when there is audio. ffmpeg seems to error out in this situation.
I have this:
ffmpeg -i rtp://0.0.0.0:11000 -i rtp://0.0.0.0:12000 -filter_complex amix -f rtp rtp://10.10.10.10:13000
This is what happens with ffmpeg:
It waits for the audio to start and then sends it to the output. But when an input stops sending, I get this error:
rtp://0.0.0.0:11000 Unknown error
rtp://0.0.0.0:12000 Unknown error
and it crashes.
How can I keep it active even when one or the other input isn't sending, and how can I prevent it from crashing?
If ffmpeg outputs silence all the time when it doesn't receive anything, that would be acceptable too.
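One approach that is sometimes suggested (a sketch, untested against your exact setup) is to add a permanent silent input from anullsrc and mix it in, so the filtergraph always has at least one live input and keeps producing output. The sample rate and channel layout below are assumptions; match them to your actual streams:

```shell
# Mix a permanent silent bed with the two RTP inputs so the output never ends.
# duration=longest keeps the mix running as long as any input (anullsrc is
# infinite); use_wallclock_as_timestamps helps with gaps in the RTP timeline.
ffmpeg \
  -use_wallclock_as_timestamps 1 -i rtp://0.0.0.0:11000 \
  -use_wallclock_as_timestamps 1 -i rtp://0.0.0.0:12000 \
  -f lavfi -i anullsrc=r=48000:cl=stereo \
  -filter_complex "[0:a][1:a][2:a]amix=inputs=3:duration=longest:dropout_transition=0" \
  -f rtp rtp://10.10.10.10:13000
```

Caveat: this keeps the output alive across inputs that end cleanly, but an RTP input that merely stalls without signalling EOF can still block the filtergraph while ffmpeg waits for its next frame.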

Related

How to decouple between ffmpeg and rtp server?

I have an ffmpeg-based worker that handles video-generation jobs at very high throughput.
Long videos need to be streamed while being generated.
For that purpose, I have introduced a WebRTC server, Janus-Gateway, with its streaming plugin, and set the application's output to an rtp:// endpoint on that server (ffmpeg can send a single stream using the RTP protocol).
To avoid buffering problems, the streaming is done with ffmpeg's -re option, which makes ffmpeg read input at its native rate, i.e. it slows encoding down to simulate live streaming.
[ffmpeg-based app] (#1)--> [rtp://janus:port # webrtc server] (#2)--> [webrtc subscribers]
How can I continue processing video jobs at high throughput while streaming the results at real-time speed? I need somehow to decouple ffmpeg output (stage #1) so that consumers at stage #2 get streams at natural playback speed.
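One way to get that decoupling (a sketch under assumed filenames; job-output.ts is hypothetical) is to split the pipeline into two processes: the worker encodes at full speed into an intermediate file, and a separate ffmpeg process streams that file at real-time speed:

```shell
# Stage 1: generate as fast as possible (no -re) into a streamable container.
# MPEG-TS is used here because it can be read while still being written.
ffmpeg -i job-input.mp4 -c:v libx264 -f mpegts job-output.ts &

# Stage 2: an independent process paces delivery at real-time speed,
# reusing the rtp://janus:port endpoint from the diagram above.
ffmpeg -re -i job-output.ts -c copy -f rtp rtp://janus:port
```

This way stage 1 throughput is no longer throttled by -re; only the delivery leg runs at playback speed. Reading a still-growing file works for append-only containers like MPEG-TS, but not for formats that finalize an index at the end (e.g. plain MP4).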

AWS Lambda: Stream a response body?

I would like to create a Lambda function that converts a video on the fly and streams the output to the user as it's being converted. This is to avoid any initial wait time.
I know how to do the video conversion part using ffmpeg and stdout, but am unclear if Lambda can even stream output, or if it always needs the full response body to be complete.
Is it possible to stream out a Response Body in Lambda? Any Python examples?

YouTube API liveBroadcastContent field incorrect for stream

When I hit the video list endpoint of the YouTube API (https://www.googleapis.com/youtube/v3/videos) and pass a stream's id in the "id" field, I always get back the following result for the liveBroadcastContent field:
"liveBroadcastContent": "none"
I am relying on this field to determine whether or not a video is a stream. But if this field does not return "live" for a stream which is live, I can't determine if the video is a stream. It is worth noting that the request is sent within a few minutes of the stream starting, so that may have something to do with it.
Is there a more reliable way to find out whether a YouTube video is a stream?
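One more robust signal (a sketch, assuming a videos.list response requested with part=snippet,liveStreamingDetails) is the liveStreamingDetails part of the video resource, which is present for broadcasts even when snippet.liveBroadcastContent lags behind or has reverted to "none". The sample items below are hypothetical illustrations of the response shape:

```python
# Classify a videos.list item as a live stream using more than
# snippet.liveBroadcastContent, which can lag shortly after a broadcast starts.

def is_stream(video_item: dict) -> bool:
    """True if the video is (or was) a live broadcast."""
    # Any video carrying liveStreamingDetails was set up as a broadcast,
    # even when liveBroadcastContent currently reads "none".
    if "liveStreamingDetails" in video_item:
        return True
    state = video_item.get("snippet", {}).get("liveBroadcastContent")
    return state in ("live", "upcoming")

# Hypothetical items illustrating the two cases:
live_now = {
    "snippet": {"liveBroadcastContent": "none"},
    "liveStreamingDetails": {"actualStartTime": "2023-01-01T00:00:00Z"},
}
plain_upload = {"snippet": {"liveBroadcastContent": "none"}}
```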

HTTP2 push after serving content

Is it possible to push content to the client after the requested content has already been served?
This Wikipedia article explains the sequence of frames as follows:
Server receives HEADERS frame asking for index.html in stream 3...
Server sends a PUSH_PROMISE for styles.css and a PUSH_PROMISE for script.js, again in stream 3...
Server sends a HEADERS frame in stream 3 for responding to the request for index.html.
Server sends DATA frame(s) with the contents of index.html, still in stream 3.
Server sends HEADERS frame for the response to styles.css in stream 4
Server sends HEADERS frame for the response to script.js in stream 6.
Server sends DATA frames for the contents of styles.css and script.js, using their respective stream numbers.
I was wondering if, for example, I could keep open stream 3 and after I sent the DATA frame(s) for index.html and afterwards send PUSH_PROMISE frames.
Thanks for any responses :)
Is it possible to push content to the client after the requested content has already been served?
I believe the answer is 'no', based on 6.6. PUSH_PROMISE in RFC 7540. Here's the relevant quote (emphasis mine):
PUSH_PROMISE frames MUST only be sent on a peer-initiated stream
that is in either the "open" or "half-closed (remote)" state. The
stream identifier of a PUSH_PROMISE frame indicates the stream it
is associated with. If the stream identifier field specifies the
value 0x0, a recipient MUST respond with a connection error
(Section 5.4.1) of type PROTOCOL_ERROR.
Back to your question:
I was wondering if, for example, I could keep open stream 3 and after I sent the DATA frame(s) for index.html and afterwards send PUSH_PROMISE frames.
Here's something that I believe you could do, along those lines: send all DATA frames for stream 3 but withhold the END_STREAM flag, thus keeping the stream open (which means the client would still be waiting for content). Then send the PUSH_PROMISE frames, and afterwards send an empty (zero-length) DATA frame with END_STREAM set on stream 3. I can't think of a scenario where that would be useful, however.

why http/2 stream id must be ascending?

in RFC 7540 section 5.1.1. (https://www.rfc-editor.org/rfc/rfc7540#section-5.1.1), it specifies as following:
The identifier of a newly established stream MUST be numerically greater than all streams that the initiating endpoint has opened or reserved.
I searched a lot on Google, but no one has explained why the stream ID must be in ascending order. I don't see how the protocol benefits from this rule. From my point of view, out-of-order stream IDs should work just as well, if the server simply treats the stream ID as an opaque identifier used to distinguish HTTP/2 requests.
Could anyone explain the exact reason for this requirement in the specification?
Thanks a lot!
Strictly ascending stream IDs are an easy way to make them unique (per connection), and it's super-easy to implement.
Choosing - like you say - "out of order" stream IDs is potentially more complicated, as it requires avoiding clashes, and it potentially consumes more resources, since you would have to remember all the stream IDs that are in use.
Apart from that simplicity, ascending stream IDs also make the GOAWAY mechanism work: the "last stream identifier" carried by a GOAWAY frame only cleanly separates processed streams from safe-to-retry streams if IDs are monotonically increasing. From RFC 7540:
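The uniqueness argument is visible in how implementations typically allocate IDs (a sketch; the class name is mine, but the odd/even split and the increment follow RFC 7540 §5.1.1): with ascending IDs, uniqueness is a single counter rather than a set of every ID ever used.

```python
# Per-connection stream-ID allocation as HTTP/2 implementations do it.
# Clients use odd IDs, servers even; strictly ascending IDs make
# uniqueness a one-counter affair.

class StreamIdAllocator:
    def __init__(self, client_side: bool):
        # First client-initiated stream is 1, first server-initiated is 2.
        self._next = 1 if client_side else 2

    def next_stream_id(self) -> int:
        stream_id = self._next
        self._next += 2  # skip the peer's parity, stay strictly ascending
        return stream_id
```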
6.8. GOAWAY
The GOAWAY frame (type=0x7) is used to initiate shutdown of a
connection or to signal serious error conditions. GOAWAY allows an
endpoint to gracefully stop accepting new streams while still
finishing processing of previously established streams. This enables
administrative actions, like server maintenance.
There is an inherent race condition between an endpoint starting new
streams and the remote sending a GOAWAY frame. To deal with this
case, the GOAWAY contains the stream identifier of the last peer-
initiated stream that was or might be processed on the sending
endpoint in this connection. For instance, if the server sends a
GOAWAY frame, the identified stream is the highest-numbered stream
initiated by the client.
Once sent, the sender will ignore frames sent on streams initiated by
the receiver if the stream has an identifier higher than the included
last stream identifier. Receivers of a GOAWAY frame MUST NOT open
additional streams on the connection, although a new connection can
be established for new streams.
If the receiver of the GOAWAY has sent data on streams with a higher
stream identifier than what is indicated in the GOAWAY frame, those
streams are not or will not be processed. The receiver of the GOAWAY
frame can treat the streams as though they had never been created at
all, thereby allowing those streams to be retried later on a new
connection.
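The retry logic the RFC describes can be sketched in a few lines (the function name is mine); note that the single `>` comparison is only meaningful because stream IDs ascend, so everything above the last stream identifier is guaranteed unprocessed:

```python
# Decide which in-flight requests are safe to retry on a new connection
# after receiving GOAWAY(last_stream_id), per RFC 7540 section 6.8.

def retryable_streams(in_flight_ids, last_stream_id):
    """Streams the GOAWAY sender never processed; safe to retry elsewhere."""
    return sorted(sid for sid in in_flight_ids if sid > last_stream_id)
```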
