How do Wayland clients communicate with the server?

I know that the X Window System protocol was designed to run over a network, and that this is how clients communicate with the X server. Wayland, by contrast, seeks to remove this reliance on the network.
My question is, how are Wayland clients supposed to communicate with the compositor? What is the medium for the protocol messages?

From the Wayland protocol documentation (https://wayland.freedesktop.org/docs/html/ch04.html#sect-Protocol-Wire-Format):

The protocol is sent over a UNIX domain stream socket
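
To make that concrete, here is a minimal sketch (in Python) of what "connecting to the compositor" amounts to: open the UNIX socket at $XDG_RUNTIME_DIR/$WAYLAND_DISPLAY and exchange the wire-format messages described at the link above. The opcode and message layout follow that documentation; error handling is omitted.

    import os
    import socket
    import struct

    # The compositor listens at $XDG_RUNTIME_DIR/$WAYLAND_DISPLAY
    # ($WAYLAND_DISPLAY defaults to "wayland-0").
    path = os.path.join(os.environ["XDG_RUNTIME_DIR"],
                        os.environ.get("WAYLAND_DISPLAY", "wayland-0"))

    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(path)

    # Wire format (see the link above): each message starts with a 32-bit
    # object id, then a 32-bit word whose upper half is the message size and
    # lower half the opcode. Here: wl_display (id 1), get_registry (opcode 1),
    # one uint argument (the new registry id, 2) -> 12 bytes total.
    # "<" assumes a little-endian host; the protocol uses host byte order.
    sock.sendall(struct.pack("<III", 1, (12 << 16) | 1, 2))

    # The compositor answers with wl_registry.global events on object 2.
    obj_id, size_op = struct.unpack("<II", sock.recv(8))
    print("event from object", obj_id,
          "opcode", size_op & 0xFFFF, "size", size_op >> 16)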

Related

Stream real-time video from a local IP to a browser on an external network using WebSocket/WebRTC with a Raspberry Pi 3B+

Anybody here with some experience in websockets and webRTC using TURN/STUN servers?
Requirement:
Send a real-time video feed from a local IP to a browser on an external network; I need some help implementing this via a Raspberry Pi 3B+. My camera source is an Android device, and using third-party apps I am able to generate the video feed over the local network. Using the same app I can stream via YouTube Live, but I get a latency of about 2 seconds in ultra-low-latency mode with DVR enabled, and I am trying to reduce the latency of the stream.
Q1. Do semi-public TURN servers provide a one-to-one peer connection, or can anyone just jump onto the URL, view, and override what I am streaming? Please also suggest a few service providers.
Just for information: there would be 1-2 user browsers connected at most.
Q2. Do I need a Janus gateway to send WebRTC/websocket data to the TURN/STUN server? My Raspberry Pi is connected to a different network, and I cannot port-forward due to carrier constraints.
Q3. Do I need both STUN and TURN servers, and do I even need WebRTC instead of websockets to send my video stream over the internet? Are websockets not sufficient?
Q4. Since we are not implementing this over the local network, do we need to install coTURN on the Raspberry Pi too?
Q5. Is there an Android app that can publish the camera data to a websocket/WebRTC server with a public ws URL?
Any help would be much appreciated.
Q1. TURN servers relay media. They do this by allocating, for every connecting peer, a relay port in the range 49152–65535. This relay port is then used to transmit the media to the second peer. The peers know which relay ports to use automatically, since this is part of the ICE gathering process. To get back to your question: other peers cannot write to that relay port; it is one-to-one with handshakes, so there is no chance of someone else overwriting your stream.
Q2. You definitely do not need a Janus gateway just to use TURN. TURN and STUN will probably work fine for NAT traversal without port forwarding.
Q3. You need at least a TURN server (ideally, you want one STUN server and one TURN server). STUN will work in most cases, but it will fail where firewalls or complicated NATs block inbound UDP connections; TURN is the fallback for those cases.
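If the Pi side ends up scripted in Python, a sketch of how one STUN and one TURN server would both be configured, using the aiortc library, might look like this (the hostnames and credentials are placeholders, not real servers):

    # pip install aiortc
    from aiortc import RTCConfiguration, RTCIceServer, RTCPeerConnection

    # One STUN and one TURN entry: ICE tries direct and server-reflexive
    # candidates first, and falls back to the TURN relay only when those fail.
    # Hostnames and credentials below are placeholders.
    config = RTCConfiguration(iceServers=[
        RTCIceServer(urls="stun:stun.example.org:3478"),
        RTCIceServer(urls="turn:turn.example.org:3478",
                     username="demo-user",
                     credential="demo-pass"),
    ])
    pc = RTCPeerConnection(configuration=config)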
Do you need WebRTC? For just streaming video, it depends on the use case. A sequence of images can be transmitted over websockets, which handle Blobs fine, but you won't get a fluid, high-fps and high-resolution video stream this way. And of course, I know of no usable way to transmit audio over a websocket.
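As a rough illustration of that images-over-websocket approach (not a recommendation), a Python sender using the websockets package could push JPEG frames as binary messages; the endpoint URL and frame source are placeholders:

    # pip install websockets
    import asyncio
    import websockets

    async def stream_frames(uri, frame_source):
        # frame_source is a placeholder for whatever yields JPEG bytes
        # (e.g. frames grabbed from the camera feed).
        async with websockets.connect(uri) as ws:
            for jpeg_bytes in frame_source:
                await ws.send(jpeg_bytes)      # one binary message per frame
                await asyncio.sleep(1 / 15)    # crude ~15 fps pacing

    # asyncio.run(stream_frames("ws://relay.example.org:8765/feed", frames))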
Q4. The Raspberry Pi is a peer that transmits media. Peers do not need a local TURN server installation; you only need one TURN server (which should not be behind a NAT, and will probably run on some web server). The TURN server is a separate instance.
EDIT
For your private testing and development purposes, you may use https://numb.viagenie.ca/ . I don't know much about commercial TURN server hosts, except that some exist. For someone who owns a v-server or root server, installing coTURN may be an option; this tutorial might be helpful. To check whether the server is working, I also found this snippet to be very useful.
END EDIT
Q5. There is no Android app that publishes WebRTC streams to a ws URL, because websocket messages are used by WebRTC only for signalling (that is, telling peers their host candidates: the IP addresses and ports learned during the ICE gathering process, including the TURN and STUN IP/port combinations).
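To illustrate what the websocket does carry in a WebRTC setup, a signalling payload relaying one ICE candidate might look roughly like this (the candidate line is illustrative, not captured from a real session):

    import json

    # A typical signalling payload: one ICE candidate, serialized as JSON and
    # relayed over the websocket to the remote peer.
    candidate_msg = json.dumps({
        "type": "candidate",
        "candidate": "candidate:1 1 udp 1677729535 203.0.113.7 52891 typ srflx",
        "sdpMid": "0",
        "sdpMLineIndex": 0,
    })
    # The media itself never touches the websocket; once both peers have
    # exchanged candidates, it flows peer-to-peer (or via the TURN relay).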

What exactly is an X11 channel?

In all the X11 documentation I've found so far, something like this is written:
Communication between server and clients is done by exchanging packets over a channel. The connection is established by the client (how the client is started is not specified in the protocol). (from wikipedia)
I haven't been able to find out what exactly this channel is. A network channel, for example? Is it on a port? Is it a memory map? Any help is appreciated.
The term 'channel' is intentionally vague, as it can be a local socket, a remote connection (such as SSH), a named pipe, or any other method that allows bidirectional client/server communication. Which is to say, a 'channel' is simply a connection between two points that facilitates the exchange of data.
When performing X11 forwarding over SSH (e.g. what ssh -X user@host sets up), the channel is the SSH connection. See the SSH man page, for example:
$ man ssh
X11 connections and arbitrary TCP/IP ports can also be forwarded over the secure channel.
or per the x.org documentation:
The communications channel between an X client and server is full-duplex: either side can send a message to the other at any time. This is canonically implemented over a TCP/IP socket interface, though other communications channels are often used, including Unix domain sockets, named pipes and shared memory. The channel must provide a reliable, ordered byte stream---the X protocol provides no mechanism for reordering or resending packets.
X11 supports multiple forms of communication between client and server. These so-called channels can be TCP sockets, UNIX sockets, or one of a number of other network mechanisms, such as DECnet or Token Ring. TCP and UNIX sockets are really the only ones used today.
The X server is a process that has access to the graphics hardware, keyboard, and mouse. Any application that produces graphics on the computer screen is called a client. Usually, a workstation has one X server running, and multiple X clients. The applications (clients) need to connect to the X server via a TCP socket (identified by IP address and port number, conventionally 6000 plus the display number) or via a UNIX socket (identified by a file name, e.g. /tmp/.X11-unix/X0).
If both server and clients run on the same system, they usually connect through the UNIX socket. However, one of the great features of X11 is that server and clients do not have to reside on the same system; they can instead connect over the network via TCP sockets. This allows us to run applications on different computers on the network and bring their graphics output to a single screen. (A single application may also connect to multiple X servers and distribute graphics content across multiple screens.)
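
To make the "channel" tangible, here is a minimal Python sketch of how a client would open it, choosing between the UNIX socket and TCP based on DISPLAY (the X11 connection setup bytes that follow are omitted):

    import os
    import socket

    # DISPLAY like ":0" means the local server (UNIX socket);
    # "host:0" means TCP to port 6000 + display number.
    display = os.environ.get("DISPLAY", ":0")
    host, _, rest = display.partition(":")
    number = int(rest.split(".")[0])

    if host in ("", "unix"):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(f"/tmp/.X11-unix/X{number}")               # UNIX socket channel
    else:
        sock = socket.create_connection((host, 6000 + number))  # TCP channel

    # From here the client sends the X11 connection setup request; the
    # "channel" is nothing more than this reliable, ordered byte stream.
    print("connected via", sock.family.name)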

SIP communication with WebSocket (WebRTC)

SIP (Session Initiation Protocol) does not understand WebSocket, so we need a SIP proxy, which is basically a translator between SIP and WebSocket.
I am following this architecture for SIP handshaking over WebSocket. I have a few questions:
Which SIP proxy should be used to make audio and video calls? In the gateway-to-SIP module I am using Asterisk; how can Asterisk be used for video calls, and is there any codec available for video calls? Please share some useful links.
Your kind answers will be highly appreciated.
Check out http://jssip.net. They provide a JavaScript API that uses SIP over WebSocket on the client side, and they also have a SIP proxy and server (it also works with Asterisk and Kamailio). They are the authors of RFC 7118, "The WebSocket Protocol as a Transport for the Session Initiation Protocol (SIP)".
That's only one way to do it; there are many ways.
You have to distinguish between the signaling path and the media path.
On the signaling path, you have to choose a signaling protocol and a corresponding transport. A browser can use WebSocket as the transport and SIP as the protocol as far as signaling is concerned. The legacy SIP side expects SIP over UDP, so there is a need to change the transport of the signaling, not the signaling protocol itself.
On the media path, you have two problems: encryption and codecs. Encryption is mandatory in WebRTC but not in SIP, so you need a B2BUA to make the transition between both worlds.
On the codec side, you either choose a codec that both worlds support, or you have to transcode; the use of a media server seems mandatory here. If you have multiple parties in a conference, you will need to mix the audio and compose the video before sending it to the legacy SIP side, in which case your media server should be an MCU.
Finally, you also have a discovery and identity problem. During the original handshake, SIP expects a user ID and a domain (which is either a DNS entry or a fixed IP), while WebRTC uses ICE. Here again, it is very likely that you will need a B2BUA to bridge the two worlds.
Asterisk/Kamailio/FreeSWITCH are likely to handle most of the above for the simple cases (one-to-one, audio only). For anything more complicated, you're on your own. You might want to look at respoke.io, which was made by Digium, the company behind Asterisk.

Connecting to MindWave through COM port?

Has anyone here tried connecting to the MindWave device through the COM port associated with the Bluetooth device? Is it better to just connect through the ThinkGear Connector? My target language is Ruby, and I was thinking of using the serial-port library.
If you connect to the serial port directly, you will have to parse binary data, which is exactly what the ThinkGear Connect socket protocol (TGSP) exists to spare you (everything related is here). TGSP is a JSON-based protocol, so dealing with it in Ruby is a piece of cake (below is an extract from the documentation at the link above):
ThinkGear Socket Protocol (TGSP) is a JSON-based protocol for the
transmission and receipt of ThinkGear brainwave data between a client
and a server. TGSP was designed to allow languages and/or frameworks
without a standard serial port API (e.g. Flash and most scripting
languages) to easily integrate brainwave-sensing functionality through
socket APIs.
So I'd suggest you use the ThinkGear socket protocol rather than parse the stream yourself (I assume you are not running Ruby on an embedded device; if you are, you will definitely need to parse data from the TGAM chip by hand). But don't forget that you will need the ThinkGear Connector software up and running everywhere you want your code to run.
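As a sketch of that flow (shown in Python for brevity; the same steps translate directly to Ruby's TCPSocket), assuming the ThinkGear Connector is listening on its documented default of 127.0.0.1:13854:

    import json
    import socket

    # Connect to the ThinkGear Connector (127.0.0.1:13854 is the default
    # documented by the TGSP spec).
    sock = socket.create_connection(("127.0.0.1", 13854))

    # Ask for JSON output; raw samples are disabled to keep the stream light.
    sock.sendall(json.dumps({"enableRawOutput": False,
                             "format": "Json"}).encode())

    buf = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        buf += chunk
        # One JSON object per record, delimited by \r per the TGSP docs.
        while b"\r" in buf:
            record, _, buf = buf.partition(b"\r")
            if record.strip():
                data = json.loads(record)
                print(data.get("poorSignalLevel"), data.get("eSense"))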

Does a websocket only broadcast data to all connected clients, instead of sending to a particular client?

I am new to websockets. While reading about them, I have not been able to find answers to some of my doubts. I would appreciate it if someone could clarify the following:
Does a websocket only broadcast data to all connected clients, instead of sending it to a particular client? All the examples I tried (mainly chat apps) send data to all clients. Is it possible to change this?
How does it work for clients located behind NAT (behind a router)?
Since the client-server connection always remains open, how will a large number of connections affect server performance?
Since I want all my clients to get real-time updates, they all need to be connected to the server, so how should I handle the client connection limit?
NOTE: My client is not a web browser but a desktop application.
No, websockets are not only for broadcasting; you send messages to specific clients. When you broadcast, you just send the same message to all connected clients, but you can equally send different messages to different clients, for example in a game session (see the sketch after this answer).
The clients connect to the server and initialise the connections, so NAT is not a problem.
It's good to use a scalable server, e.g. an event-driven server (such as Node.js) that doesn't use a separate thread for each connection, or an Erlang server with lightweight processes (a good choice for a game server).
This should not be a problem if you use a good server OS (e.g. Linux), but it may be a limitation if your server runs a desktop version of Windows (which may be limited to, e.g., 200 connections).
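As a sketch of the first point, here is a minimal server using the Python websockets package (recent versions, where the handler takes a single connection argument); the client registry keyed by id() is a stand-in for real authentication:

    # pip install websockets
    import asyncio
    import websockets

    clients = {}  # connection id -> websocket

    async def handler(ws):
        client_id = id(ws)        # placeholder for a real authenticated identity
        clients[client_id] = ws
        try:
            async for message in ws:
                # Unicast: answer only the sender...
                await ws.send(f"you said: {message}")
                # ...broadcast is just a loop over the registry.
                for other in clients.values():
                    if other is not ws:
                        await other.send(f"{client_id}: {message}")
        finally:
            del clients[client_id]

    async def main():
        async with websockets.serve(handler, "0.0.0.0", 8765):
            await asyncio.Future()   # run forever

    # asyncio.run(main())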