
Is there an application-agnostic signaling protocol?
The use case is this. We have an open-source library for a multi-agent system that supports several application-layer protocols (in the OSI sense). At the moment HTTP, XMPP, and ZeroMQ are supported, for example. We would like to add high-bandwidth real-time streaming possibilities. It is logical to use RTP for that.
So, to recapitulate, we already have a connection to the other party that we can use for signalling. We want to negotiate only a new channel for data communication.
However, the current signaling standards all seem to be tied to their application. These current "standards" seem to be SIP, RTSP, and Jingle. They all seem to use RTP or SRTP on the application layer, and UDP on the transport layer. See e.g. XEP-0167.
The only thing we want to negotiate is another connection to that party that can be used for data transmission. In the Session Description Protocol all kinds of stuff about media shows up, optional phone numbers, etc. If someone can point at a signaling protocol that is meant to be application-agnostic, that would be great!

I'm a big fan of XMPP and I think you'll get what you need with it. However, since you already have HTTP as well, I want to mention that PubSubHubbub can also be used for that!
The current version of the protocol applies to any MIME type that can be transported over HTTP, so that would work.
In practice it's just a webhooks API, which makes it easy to use and to scale via load balancing.
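For illustration, a PubSubHubbub (0.4) subscription is just a form-encoded POST to the hub. A minimal TypeScript sketch follows; the hub, topic, and callback URLs are placeholders:

// Sketch of a PubSubHubbub 0.4 subscription request; all URLs are placeholders.
async function subscribe(): Promise<void> {
  const body = new URLSearchParams({
    "hub.mode": "subscribe",
    "hub.topic": "https://publisher.example.com/feed",
    "hub.callback": "https://subscriber.example.com/webhook",
  });
  // The hub accepts with 202 and then verifies the subscription by sending a
  // GET to the callback with a hub.challenge value that must be echoed back.
  await fetch("https://hub.example.com/", { method: "POST", body });
}

After that, content notifications arrive as plain HTTP POSTs to the callback, which is what makes it easy to load-balance.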

Is there an application-agnostic signaling protocol?
Yes, there are lots, and you already mention a number of them, such as XMPP, SIP, and RTSP. You could also add the brand-new WebRTC protocol to the list.
We would like to add high-bandwidth real-time streaming possibilities. It is logical to use RTP for that.
Yes. RTP is lightweight and, as its name suggests, was designed for carrying real-time traffic. It's also popular, so you will be able to find numerous existing implementations.
The only thing we want to negotiate is another connection to that party that can be used for data transmission. In the Session Description Protocol all kinds of stuff about media shows up, optional phone numbers, etc. If someone can point at a signaling protocol that is meant to be application-agnostic, that would be great!
I'm not sure what you mean here. The Session Description Protocol (SDP) is a standard way to describe the media capabilities of a device. It's commonly used in SIP and RTSP (and XMPP has something equivalent); however, it's separate from those protocols, and if you don't want to use it you are free to come up with your own way of describing media.
You may be getting overwhelmed by some of the SDP examples, and they can indeed get very complicated when there are multiple streams and codecs offered. However, an SDP payload can also be very simple; below is an SDP example for an RTSP server offering a single MJPEG video stream.
v=0
o=- - 0 IN IP4 0.0.0.0
s=-
t=0 0
m=video 0 RTP/AVP 26
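And since you already have a signaling connection, nothing stops you from skipping SDP entirely and defining your own minimal channel description to send over that connection. A sketch of what that could look like; every field name here is invented for illustration:

// Hypothetical minimal offer sent over the existing signaling connection
// (XMPP, HTTP, ZeroMQ, ...); the shape is invented for illustration.
interface ChannelOffer {
  transport: "RTP/AVP" | "SRTP"; // profile for the new data channel
  address: string;               // where the offerer will listen
  port: number;
  payloadType: number;           // RTP payload type, e.g. 26 for MJPEG
}

declare const signalingConnection: { send(msg: string): void }; // assumed API

const offer: ChannelOffer = {
  transport: "RTP/AVP",
  address: "192.0.2.10",
  port: 5004,
  payloadType: 26,
};
signalingConnection.send(JSON.stringify(offer));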

If you just need a signalling protocol that is system- and application-agnostic, XMPP is the way to go.

Related

SIP communication with WebSocket (WebRTC)

SIP (Session Initiation Protocol) does not understand WebSocket, so we need a SIP proxy, which is basically a translator between SIP and WebSocket.
I am following this architecture for SIP handshaking over WebSocket. I have a few questions.
Which SIP proxy should be used to make audio and video calls? In the gateway-to-SIP module I am using Asterisk. How can Asterisk be used for video calls, and is there a codec available for video? Please share some useful links.
Your kind answers will be highly appreciated.
Check out http://jssip.net. They provide a JavaScript API which uses SIP over WebSocket on the client side, and they also have a SIP proxy and server (it also works with Asterisk and Kamailio). They are the authors of RFC 7118, "The WebSocket Protocol as a Transport for the Session Initiation Protocol (SIP)".
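As a sketch of what the browser side looks like with JsSIP (the server URL and credentials below are placeholders; consult the JsSIP documentation for the full set of options):

import JsSIP from "jssip";

// Placeholder SIP-over-WebSocket server and credentials.
const socket = new JsSIP.WebSocketInterface("wss://sip.example.com:7443");
const ua = new JsSIP.UA({
  sockets: [socket],
  uri: "sip:alice@example.com",
  password: "secret",
});
ua.start();

// Audio+video call; Asterisk or Kamailio handles the legacy SIP side.
ua.call("sip:bob@example.com", {
  mediaConstraints: { audio: true, video: true },
});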
That's only one way to do it. There are many ways.
You have to distinguish between the signaling path and the media path.
On the signaling path, you have to choose a signaling protocol and a corresponding transport protocol. A browser can use WebSocket for transport and SIP for the protocol as far as signaling is concerned. On the legacy side, you need SIP over UDP, so there is a need to change the transport of the signaling, not the signaling protocol itself.
On the media path, you have two problems: the encryption and the codec. Encryption is mandatory in WebRTC and not in SIP. You need a B2BUA to make the transition between both worlds.
On the codec side, you either choose a codec that overlaps between both worlds, or you have to transcode. The use of a media server seems mandatory here. If you have multiple parties in a conference, you will need to mix the audio and compose the video to send to legacy SIP, in which case your media server should be an MCU.
Eventually, you also have a discovery and identity problem. During the original handshake, SIP expects a user ID and a domain (which is either a DNS entry or a fixed IP), while WebRTC uses ICE. Here again, it is very likely that you need a B2BUA to bridge both worlds.
Asterisk/Kamailio/FreeSWITCH are likely to handle most of the above for the simple cases (one-to-one, audio). For anything complicated, you're on your own. You might want to look at respoke.io, which was made by Digium, the company behind Asterisk.

WebRTC vs Websockets: If WebRTC can do Video, Audio, and Data, why do I need Websockets? [closed]

So I'm looking to build a chat app that will allow video, audio, and text. I spent some time researching into Websockets and WebRTC to decide which to use. Since there are plenty of video and audio apps with WebRTC, this sounds like a reasonable choice, but are there other things I should consider?
Feel free to share your thoughts.
Things like:
Due to being new, WebRTC is available only in some browsers, while WebSockets seems to be available in more browsers.
Scalability - WebSockets uses a server for the session, while WebRTC seems to be p2p.
Multiplexing/multiple chatrooms - used in Google+ Hangouts; I'm still viewing demo apps on how to implement this.
Server - WebSockets needs RedisSessionStore or RabbitMQ to scale across multiple machines.
WebRTC is designed for high-performance, high quality communication of video, audio and arbitrary data. In other words, for apps exactly like what you describe.
WebRTC apps need a service via which they can exchange network and media metadata, a process known as signaling. However, once signaling has taken place, video/audio/data is streamed directly between clients, avoiding the performance cost of streaming via an intermediary server.
WebSocket, on the other hand, is designed for bi-directional communication between client and server. It is possible to stream audio and video over WebSocket (see here for example), but the technology and APIs are not inherently designed for efficient, robust streaming in the way that WebRTC is.
As other replies have said, WebSocket can be used for signaling.
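A minimal sketch of that pattern with the standard browser APIs; the signaling server URL and the JSON message shapes are assumptions, since WebRTC deliberately leaves them up to you:

// WebRTC signaling over WebSocket; URL and message format are made up here.
const signaling = new WebSocket("wss://signal.example.com");
const pc = new RTCPeerConnection();

// Forward our ICE candidates to the peer through the signaling server.
pc.onicecandidate = ({ candidate }) => {
  if (candidate) signaling.send(JSON.stringify({ type: "candidate", candidate }));
};

signaling.onmessage = async (event) => {
  const msg = JSON.parse(event.data);
  if (msg.type === "offer") {
    await pc.setRemoteDescription(msg.offer);
    const answer = await pc.createAnswer();
    await pc.setLocalDescription(answer);
    signaling.send(JSON.stringify({ type: "answer", answer }));
  } else if (msg.type === "candidate") {
    await pc.addIceCandidate(msg.candidate);
  }
};

Once the offer/answer and candidate exchange completes, media and data flow peer-to-peer and the WebSocket is only needed for renegotiation.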
I maintain a list of WebRTC resources; I strongly recommend you start by looking at the 2013 Google I/O presentation about WebRTC.
WebSockets use the TCP protocol.
WebRTC is mainly UDP.
Thus the main reason for using WebRTC instead of WebSockets is latency.
With WebSocket streaming you will have either high latency or choppy playback with low latency. With WebRTC you may achieve low latency and smooth playback, which is crucial for VoIP communications.
Just try testing these technologies with some network loss, e.g. 2%. You will see high delays in the WebSocket stream.
WebSockets:
Ratified IETF standard (RFC 6455) with support across all modern browsers and even legacy browsers using the web-socket-js polyfill.
Uses an HTTP-compatible handshake and default ports, making it much easier to use with existing firewall, proxy and web server infrastructure.
Much simpler browser API. Basically one constructor with a couple of callbacks (see the sketch below).
Client/browser to server only.
Only supports reliable, in-order transport because it is built on TCP. This means packet drops can delay all subsequent packets.
WebRTC:
Just beginning to be supported by Chrome and Firefox. MS has proposed an incompatible variant. The DataChannel component is not yet compatible between Firefox and Chrome.
WebRTC is browser to browser in ideal circumstances but even then almost always requires a signaling server to setup the connections. The most common signaling server solutions right now use WebSockets.
Transport layer is configurable with application able to choose if connection is in-order and/or reliable.
Complex and multilayered browser API. There are JS libs to provide a simpler API but these are young and rapidly changing (just like WebRTC itself).
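To illustrate the "one constructor with a couple of callbacks" point from the WebSockets list, this is essentially the entire browser-side API (the URL is a placeholder):

// More or less the whole browser WebSocket API; the URL is a placeholder.
const ws = new WebSocket("wss://echo.example.com");
ws.onopen = () => ws.send("hello");
ws.onmessage = (event) => console.log("received:", event.data);
ws.onerror = (event) => console.error("socket error", event);
ws.onclose = () => console.log("closed");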
WebRTC or WebSockets? Why not use both.
When building a video/audio/text chat, WebRTC is definitely a good choice since it uses peer-to-peer technology, and once the connection is up and running, you do not need to pass the communication via a server (unless you are using TURN).
When setting up the WebRTC communication you have to involve some sort of signaling mechanism. WebSockets could be a good choice here, but WebRTC is the way to go for the video/audio/text info. Chat rooms are handled in the signaling.
But, as you mention, not every browser supports WebRTC, so WebSockets can sometimes be a good fallback for those browsers.
Security is one aspect you missed.
With WebSockets the data has to go via a central web server which typically sees all the traffic and can access it.
With WebRTC the data is end-to-end encrypted and does not pass through a server (except that TURN servers are sometimes needed, but they have no access to the body of the messages they forward).
Depending on your application, this may or may not matter.
If you are sending large amounts of data, the saving in cloud bandwidth costs due to WebRTC's P2P architecture may be worth considering too.
Comparing WebSocket and WebRTC is unfair.
WebSocket is built on top of TCP. Unlike with raw TCP, message boundaries can be detected from the header information of a WebSocket frame.
Typically, WebRTC makes use of WebSocket. The signaling for WebRTC is deliberately not defined; it is up to the service provider what kind of signaling to use. It may be SIP, HTTP, JSON or any text or binary message.
The signaling messages can be sent and received using WebSocket.
WebRTC provides the peer-to-peer connection.
Before a peer-to-peer connection can be created, a handshaking process is required to establish it.
And WebSockets can play the role of that handshaking channel.
WebSocket and WebRTC can be used together: WebSocket as the signaling channel for WebRTC, and WebRTC as the video/audio/data channel. WebRTC normally runs over UDP, but it can also go through a TURN relay, and TURN relays support TCP, HTTP, and HTTPS as well.
Many projects use WebSocket and WebRTC together.

Interoperability of SIP/H.323/IAX2

I am curious to know if interoperability exists between those three protocols. For example, can a call that originated on SIP go through an H.323 network? An article or book link about this topic would be much appreciated. Thanks.
SIP, H.323 and IAX2 are all different protocols and are not directly interoperable. That is, you cannot connect a SIP phone to an H.323 device and make a call.
The problems these protocols solve are all similar (e.g. making a voice or video call). Protocol converters and other devices (like gateways) are available and can do the conversion.
You may also have to transcode the audio and video data from one codec to another, but you may also have to do that on a SIP-to-SIP or H.323-to-H.323 call.
Many PBXes and softswitches support both SIP and H.323; Asterisk supports all three (SIP, H.323 and IAX2).

WebSockets, UDP, and benchmarks

HTML5 WebSockets currently use a form of TCP communication. However, for real-time games, TCP just won't cut it (and that is a great reason to use some other platform, like native). As I probably need UDP to continue the project, I'd like to know if the specs for HTML6 or whatever will support UDP?
Also, are there any reliable benchmarks for WebSockets that would compare the WS protocol to a low-level, direct socket protocol?
On a LAN, you can get round-trip times for messages over WebSocket of 200 microseconds (from browser JS to WebSocket server and back), which is similar to raw ICMP pings. On a MAN it's around 10 ms, on a WAN (over residential ADSL to a server in the same country) around 30 ms, and so on up to around 120-200 ms via 3.5G. The point is: WebSocket adds virtually no latency to the latency you will get anyway, based on the network.
The wire level overhead of WebSocket (compared to raw TCP) is between 2 octets (unmasked payload of length < 126 octets) and 14 octets (masked payload of length > 64k) per message (the former numbers assume the message is not fragmented into multiple WebSocket frames). Very low.
For a more detailed analysis of WebSocket wire-level overhead, please see this blog post - this includes analysis covering layers beyond WebSocket also.
More so: with a WebSocket implementation capable of streaming processing, you can (after the initial WebSocket handshake) start a single WebSocket message and frame in each direction and then send up to 2^63 octets with no overhead at all. Essentially this renders WebSocket a fancy prelude for raw TCP. Caveat: intermediaries may fragment the traffic at their own discretion. However, if you run WSS (that is, secure WS = TLS), no intermediaries can interfere, and there you are: raw TCP, with an HTTP-compatible prelude (the WS handshake).
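To make the 2-to-14-octet figure concrete, here is a small sketch that computes the RFC 6455 frame header size for a given payload:

// Computes the WebSocket frame header size (RFC 6455) for one message.
function frameHeaderSize(payloadLen: number, masked: boolean): number {
  let size = 2;                          // FIN/opcode byte + mask-bit/len7 byte
  if (payloadLen > 65535) size += 8;     // 64-bit extended payload length
  else if (payloadLen >= 126) size += 2; // 16-bit extended payload length
  if (masked) size += 4;                 // 32-bit masking key (client-to-server)
  return size;
}

console.log(frameHeaderSize(100, false));  // 2  -> unmasked, payload < 126
console.log(frameHeaderSize(70000, true)); // 14 -> masked, payload > 64k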
WebRTC uses RTP (= UDP based) for media transport but needs a signaling channel in addition (which can be WebSocket, for example). RTP is optimized for loss-tolerant real-time media transport. "Real-time games" often means transferring not media, but things like player positions. WebSocket will work for that.
Note: WebRTC transport can be over plain RTP, or secured when over SRTP. See "RTP profiles" here.
I would recommend developing your game using WebSockets on a local wired network and then moving to the WebRTC Data Channel API once it is available. As @oberstet correctly notes, WebSocket average latencies are basically equivalent to raw TCP or UDP, especially on a local network, so it should be fine for your development phase. The WebRTC Data Channel API is designed to be very similar to WebSockets (once the connection is established), so it should be fairly simple to integrate once it is widely available.
Your question implies that UDP is probably what you want for a low latency game and there is truth to that. You may be aware of this already since you are writing a game, but for those that aren't, here is a quick primer on TCP vs UDP for real-time games:
TCP is an in-order, reliable transport mechanism and UDP is best-effort. TCP will deliver all the data that is sent, in the order that it was sent. UDP packets are delivered as they arrive, may be out of order, and may have gaps (on a congested network, UDP packets are dropped before TCP packets). TCP sounds like a big improvement, and it is for most types of network traffic, but those features come at a cost: a delayed or dropped packet causes all the following packets to be delayed as well (to guarantee in-order delivery).
Real-time games generally can't tolerate the type of delays that can result from TCP sockets, so they use UDP for most of the game traffic and have mechanisms to deal with dropped and out-of-order data (e.g. adding sequence numbers to the payload data). It's not such a big deal if you miss one position update of the enemy player, because a couple of milliseconds later you will receive another position update (and probably won't even notice). But if you don't get position updates for 500 ms and then suddenly get them all at once, that results in terrible gameplay.
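As an illustration of the sequence-number trick just mentioned, a receiver can simply discard updates that arrive late; the message shape and the applyPosition function are invented for this sketch:

// Drop stale position updates using a sequence number (shape invented here).
interface PositionUpdate { seq: number; x: number; y: number; }

declare function applyPosition(x: number, y: number): void; // assumed game code

let lastSeq = -1;
function onUpdate(msg: PositionUpdate): void {
  if (msg.seq <= lastSeq) return; // older than what we already applied: drop it
  lastSeq = msg.seq;
  applyPosition(msg.x, msg.y);
}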
All that said, on a local wired network, packets are almost never delayed or dropped and so TCP is perfectly fine as an initial development target. Once the WebRTC Data Channel API is available then you might consider moving to that. The current proposal has configurable reliability based on retries or timers.
Here are some references:
WebRTC Introduction
WebRTC FAQ
WebRTC Data Channel Proposal
To make a long story short, if you want to use TCP for multiplayer games, you need to use what we call adaptive streaming techniques. In other words, you need to make sure that the amount of real-time data sent to synchronize the game world among the clients is governed by the currently available bandwidth and latency for each client.
Dynamic throttling, conflation, delta delivery, and other mechanisms are adaptive streaming techniques, which don't magically make TCP as efficient as UDP, but make it usable enough for several types of games.
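As a sketch of the conflation idea: only the latest state per entity survives until the next send, so a slow link never queues up stale updates (the transport and flush rate here are stand-ins):

// Conflation sketch: keep only the latest update per entity, flush on a timer.
declare function send(data: string): void; // assumed transport, e.g. WebSocket

const pending = new Map<string, unknown>();

function queueUpdate(entityId: string, state: unknown): void {
  pending.set(entityId, state); // newer state silently replaces the stale one
}

setInterval(() => {             // the flush rate could adapt to bandwidth
  for (const [id, state] of pending) {
    send(JSON.stringify({ id, state }));
  }
  pending.clear();
}, 50);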
I tried to explain these techniques in an article: Optimizing Multiplayer 3D Game Synchronization Over the Web (http://blog.lightstreamer.com/2013/10/optimizing-multiplayer-3d-game.html).
I also gave a talk on this topic last month at HTML5 Developer Conference in San Francisco. The video has just been made available on YouTube: http://www.youtube.com/watch?v=cSEx3mhsoHg
There's no UDP support for WebSockets (there really should be); however, you can apparently use WebRTC's RTCDataChannel API for UDP-like communication. There's a good article here:
http://www.html5rocks.com/en/tutorials/webrtc/datachannels/
RTCDataChannel actually uses SCTP which has configurable reliability and ordered delivery. You can get it to act like UDP by telling it to deliver messages unordered, and setting the maximum number of retransmits to 0.
I haven't tried any of this though.
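For reference, that configuration looks like this with the RTCDataChannel API as currently specified (peer connection setup and signaling omitted):

// UDP-like data channel: unordered delivery and no retransmissions.
const pc = new RTCPeerConnection();
const channel = pc.createDataChannel("game", {
  ordered: false,    // allow out-of-order delivery
  maxRetransmits: 0, // never retransmit lost messages
});
channel.onmessage = (event) => console.log("got:", event.data);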
I'd like to know if the specs for HTML6 or whatever will support UDP?
WebSockets won't. One of the benefits of WebSockets is that it piggybacks on the existing HTTP connection. This means that to proxies and firewalls WebSockets looks like HTTP, so they don't get blocked.
It's likely that arbitrary UDP connections will never be part of any web specification because of security concerns. The closest thing to what you're after will likely come as part of WebRTC and its associated JSEP protocol.
are there any reliable benchmarks ... that .. compare the WS protocol to a low-level, direct socket protocol?
Not that I'm aware of. I'm going to go out on a limb and predict WebSockets will be slower ;)

What is the difference between WebRTC and WebSockets for low level data communication

I am trying to understand the difference between WebRTC and WebSockets so that I can better understand which scenario calls for what. I am curious about the broad idea of two parties (mainly web based, but potentially one being a dedicated server application) talking to each other.
Assumption:
Clearly, with regard to ad-hoc networks, WebRTC wins, as it natively supports the ICE protocol/method.
Questions:
Regarding direct communication between two known parties in-browser, if I am not relying on sending multimedia data, and I am only interested in sending integer data, does WebRTC give me any advantages over WebSockets other than data encryption?
Regarding a dedicated server speaking to a browser-based client, which platform gives me an advantage? I would need to code a WebRTC server (is this possible outside the browser?), or I would need to code a WebSocket server (a quick Google search makes me think this is possible).
There is one significant difference: WebSockets works via TCP, WebRTC works via UDP.
In fact, WebRTC is the SRTP protocol with some additional features like STUN, ICE, and DTLS, plus internal VoIP features such as an adaptive jitter buffer, AEC, and AGC.
So, WebSockets is designed for reliable communication. It is a good choice if you want to send any data that must be sent reliably.
When you use WebRTC, the transmitted stream is unreliable: some packets can get lost in the network. That is bad if you send critical data, for example for financial processing; the same property is ideal when you send an audio or video stream, where some frames can be lost without any noticeable quality issue.
If you want to send data over a WebRTC data channel, you should have some forward error correction algorithm to restore data if a data frame was lost in the network.
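As a sketch of the simplest such scheme: XOR parity over a group of packets lets the receiver rebuild any single lost packet in the group. This is a toy illustration, not production FEC:

// Toy XOR-parity FEC: one parity packet per group recovers a single loss.
function xorParity(packets: Uint8Array[]): Uint8Array {
  const parity = new Uint8Array(packets[0].length); // assumes equal lengths
  for (const p of packets) {
    for (let i = 0; i < p.length; i++) parity[i] ^= p[i];
  }
  return parity;
}

// If exactly one packet of the group was lost, XOR-ing the parity with the
// packets that did arrive reproduces the missing packet.
function recoverLost(received: Uint8Array[], parity: Uint8Array): Uint8Array {
  return xorParity([...received, parity]);
}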
WebRTC specifies media transport over RTP, which can work P2P under certain circumstances. In any case, to establish a WebRTC session you will need a signaling protocol as well, and for that WebSocket is a likely choice. In other words: unless you want to stream real-time media, WebSocket is probably a better fit.
Question 1: Yes. The DataChannel part of WebRTC gives you advantages in this case, because it allows you to create a peer-to-peer channel between browsers to send and receive any raw data you want. WebSockets forces you to use a server to connect both parties.
Question 2: Like I said in the previous response, WebSockets are better if you want server-client communication, and there are many implementations to do this (e.g. jWebSocket). Adding support to a server for establishing a connection over a WebRTC DataChannel may take you some days of life and health. :)
