TokBox/OpenTok/Vonage Streaming mixing Howto

Is it possible to combine multiple audio streams with the TokBox/Vonage API?
I have two or three WebRTC streams being published to the OpenTok cloud and would like to have the streams combined so that listeners receive a single mixed stream.
Thanks.

OpenTok Developer Advocate here.
It sounds like you're looking for our Live Streaming Broadcasts feature. This type of broadcast lets you share an HTTP Live Streaming (HLS) stream or an RTMP stream with a large number of viewers. The HLS or RTMP stream is a single video composed of the individual streams published to the OpenTok session.
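For reference, here is a minimal sketch (in Go, since no SDK was specified) of starting a composed HLS broadcast for an existing session through the OpenTok REST API. The API key, session ID, and project JWT are placeholders, and the endpoint path and payload shape follow my reading of the broadcast REST docs, so double-check them against the current documentation.

```go
// Minimal sketch: start an HLS broadcast for an existing OpenTok session via the REST API.
// All credentials below are placeholders you must supply yourself.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	apiKey := "YOUR_API_KEY"       // placeholder
	sessionID := "YOUR_SESSION_ID" // placeholder
	jwt := "YOUR_PROJECT_JWT"      // placeholder: project-scoped JWT for the X-OPENTOK-AUTH header

	body, _ := json.Marshal(map[string]interface{}{
		"sessionId": sessionID,
		"outputs": map[string]interface{}{
			"hls": map[string]interface{}{}, // request a composed HLS output
		},
	})

	url := fmt.Sprintf("https://api.opentok.com/v2/project/%s/broadcast", apiKey)
	req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("X-OPENTOK-AUTH", jwt)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The response JSON includes the broadcast id and, once available, the composed HLS URL.
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(out))
}
```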

Related

Can I read an encoded stream from a URL with WebRTC

I'm trying to stream the video of my C++ 3D application (similar to streaming a game).
I have encoded an H.264 video stream with the ffmpeg library (i.e. internally to my application) and can push it to a local address, e.g. rtp://127.0.0.1:6666, which can be played by VLC or another player (locally).
I'm not particularly wedded to H.264 at this point, or RTP. I could send as SRTP if that would help.
I'd like to use WebRTC to set up a connection across different machines, but can't see in the examples how to make use of this pre-existing stream - the video and audio examples are understandably focused on getting data from devices like connected web cams, or the display.
Is what I'm thinking feasible? I.e. ideally I'd just point WebRTC at my rtp://127.0.0.1:6666 address and that would be the video stream source.
I am also writing out an SDP file, which can be read by VLC; could I use this in a similar way?
As noted in the comment below, there is an example out there using Go to weave some magic that enables an RTP stream to be shown in a browser via WebRTC.
I am trying to find a more "standard" way to set the source of a video track in WebRTC to be the URL of an encoded stream. If there isn't one, that is valuable information to me too, as I can change tack and use a WebRTC library to send frames directly.
Unfortunately, FFmpeg doesn't support WebRTC output; it lacks support for ICE and DTLS-SRTP.
You will need to use an RTP -> WebRTC bridge. I wrote rtp-to-webrtc, which can do this. You can do this with lots of different WebRTC clients/servers!
If you have a particular language/paradigm you prefer, I'm happy to provide examples for it.
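To make that concrete, below is a compressed sketch of the rtp-to-webrtc idea using the Pion WebRTC library (which, as far as I know, is what that example builds on). It listens on the same rtp://127.0.0.1:6666 address the question mentions and forwards the RTP packets into a browser-facing video track; the copy/paste base64 signaling on stdin/stdout is just a stand-in for whatever signaling channel you actually use, and the H.264 codec choice is an assumption taken from the question.

```go
// Sketch of an RTP -> WebRTC bridge with github.com/pion/webrtc/v3.
// ffmpeg keeps pushing RTP to 127.0.0.1:6666; we relay those packets into a WebRTC track.
package main

import (
	"bufio"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"net"
	"os"

	"github.com/pion/webrtc/v3"
)

func main() {
	// 1. Bind the address ffmpeg streams to (rtp://127.0.0.1:6666).
	listener, err := net.ListenUDP("udp", &net.UDPAddr{IP: net.ParseIP("127.0.0.1"), Port: 6666})
	if err != nil {
		panic(err)
	}
	defer listener.Close()

	// 2. Create a PeerConnection and a local track that accepts raw RTP packets.
	pc, err := webrtc.NewPeerConnection(webrtc.Configuration{})
	if err != nil {
		panic(err)
	}
	videoTrack, err := webrtc.NewTrackLocalStaticRTP(
		webrtc.RTPCodecCapability{MimeType: webrtc.MimeTypeH264}, "video", "ffmpeg")
	if err != nil {
		panic(err)
	}
	if _, err = pc.AddTrack(videoTrack); err != nil {
		panic(err)
	}

	// 3. Minimal copy/paste signaling: read the browser's offer, print our answer.
	offer := webrtc.SessionDescription{}
	decode(readLine(), &offer)
	if err = pc.SetRemoteDescription(offer); err != nil {
		panic(err)
	}
	answer, err := pc.CreateAnswer(nil)
	if err != nil {
		panic(err)
	}
	gatherComplete := webrtc.GatheringCompletePromise(pc)
	if err = pc.SetLocalDescription(answer); err != nil {
		panic(err)
	}
	<-gatherComplete
	fmt.Println(encode(pc.LocalDescription()))

	// 4. Pump RTP from the UDP socket straight into the WebRTC track.
	buf := make([]byte, 1600)
	for {
		n, _, err := listener.ReadFrom(buf)
		if err != nil {
			panic(err)
		}
		if _, err = videoTrack.Write(buf[:n]); err != nil {
			panic(err)
		}
	}
}

func readLine() string {
	s := bufio.NewScanner(os.Stdin)
	s.Scan()
	return s.Text()
}

func decode(in string, obj *webrtc.SessionDescription) {
	b, err := base64.StdEncoding.DecodeString(in)
	if err != nil {
		panic(err)
	}
	if err = json.Unmarshal(b, obj); err != nil {
		panic(err)
	}
}

func encode(obj *webrtc.SessionDescription) string {
	b, err := json.Marshal(obj)
	if err != nil {
		panic(err)
	}
	return base64.StdEncoding.EncodeToString(b)
}
```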

HTML5 live streaming with alpha channel

We are trying to live stream a video file with an alpha channel. Adaptive streaming would be great, but it's not a must. Streaming is pretty new territory for us.
We found out that WebM (VP9) seems to be the only format on the web that supports an alpha channel. We tried using the nginx-rtmp-module as the streaming server (MPEG-DASH) and broadcast the file with ffmpeg, but the alpha channel was lost, probably because RTMP required us to broadcast the video as FLV, which doesn't support alpha.
Does anyone have experience streaming RGBA video on the web? Getting to know compatible commercial solutions would be interesting too. My next approach would be to try Icecast. There is not much information online apart from this article, where streaming a WebM directly seems to be possible.

HTTP Live Streaming: Fragmented MP4 or MPEG-TS?

I have an IP camera which sends out a live stream in RTSP over UDP, and I want to display this stream in the browser and have it work on the major browsers and on mobile (both iOS and Android). To achieve this I want to convert the stream to HTTP Live Streaming (HLS) on the server before sending it to the client. Now I've read that not very long ago Apple added support for fragmented MP4 (fMP4) as a format for the stream, whereas normally the stream would be sent in MPEG-TS format. fMP4 is also the format that MPEG-DASH supports, and MPEG-DASH might be the industry standard in a few years.
Now my question is: what are the advantages and disadvantages of fMP4 and MPEG-TS?
EDIT: According to the technical notes for HLS from Apple, live streams must be encoded as MPEG-TS streams (https://developer.apple.com/library/content/technotes/tn2224/_index.html#//apple_ref/doc/uid/DTS40009745-CH1-ENCODEYOURVARIANTS). Is there a reason for this or is this information outdated?
fMP4 is likely to replace TS as a standard. It has less overhead and is required for HEVC, but the main advantage is compatibility with DASH - i.e. you can generate both HLS and DASH using the same files, which helps with compute and storage costs. For your particular use case, HLS TS probably has more coverage (due to old devices and players) than HLS fMP4, but HLS+DASH fMP4 is what I would choose.

Live Transcoding & Streaming

My client has a requirement where he needs me to transcode a source file into a proxy with a unique burn-in on it per playback.
For the proxy I will be using ffmpeg, nothing fancy, but ideally the users can play back the file while it is being transcoded, since transcoding may take up to several minutes to complete.
Another restriction is that the player does not support HLS or other live streaming options and can only accept MP4s as a source.
Any ideas/suggestions would be great.
It seems you have conflicting requirements. MP4 is VERY poorly suited for live streaming. It is 'possible' to create a fake moov and have the player perform byte-range requests, but it is very inefficient. You really need a player or platform that supports streaming formats such as fMP4 (fragmented MP4/DASH), HLS, TS, FLV, RTMP, WebRTC, etc.
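If the player can at least handle fragmented MP4 delivered over plain HTTP, one workable compromise is to pipe ffmpeg's fragmented output straight into the HTTP response while transcoding is still running. The Go sketch below shows the idea; the input path and burn-in text are placeholders, and whether it works depends entirely on the player tolerating a progressively delivered fMP4.

```go
// Sketch: serve a per-playback proxy as fragmented MP4 while ffmpeg is still transcoding.
// The key pieces are "-movflags frag_keyframe+empty_moov" (so the moov is not deferred
// to the end of the file) and streaming stdout instead of writing a finished file.
package main

import (
	"io"
	"log"
	"net/http"
	"os/exec"
)

func main() {
	http.HandleFunc("/proxy.mp4", func(w http.ResponseWriter, r *http.Request) {
		cmd := exec.Command("ffmpeg",
			"-i", "source.mov",                      // placeholder source file
			"-vf", "drawtext=text='user 42'",        // placeholder per-playback burn-in
			"-c:v", "libx264", "-preset", "veryfast",
			"-c:a", "aac",
			"-movflags", "frag_keyframe+empty_moov", // fragmented MP4, playable as it grows
			"-f", "mp4", "pipe:1")                   // write MP4 to stdout

		stdout, err := cmd.StdoutPipe()
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		if err := cmd.Start(); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}

		w.Header().Set("Content-Type", "video/mp4")
		// Copy transcoder output to the client as it is produced.
		if _, err := io.Copy(w, stdout); err != nil {
			log.Println("copy ended:", err)
		}
		cmd.Process.Kill() // stop ffmpeg if the client went away early
		cmd.Wait()
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```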

Live WebRTC streams (getUserMedia) to DASH using WebM

I'm trying to understand the feasibility of a live streaming solution.
I want to grab WebRTC streams (audio and video), send them to a server, and transform them into chunks to send to an HTML5 video tag or a DASH player using the WebM container (VP8 and Opus codecs).
I also looked into ffmpeg, ffserver and gstreamer but...
My question is: how do I feed in the WebRTC streams (live) and transform them into HTTP chunks (live, DASH-compatible)?
Anyone achieved something like this?
