Is there any limitation on Sinch instant messaging, e.g. total size or frequency? Can I send JSON?
Also, how instant is it? Is it implemented with WebSockets, WebRTC, or a hybrid? Thanks!
We recommend that you only send messages that are at most a couple of kilobytes in size. You can send JSON in the message itself. There's no frequency limit.
How instant it is depends on the network connection on each side; the messages are transmitted over HTTP.
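For illustration, a minimal sketch of the "send JSON in the message" idea: build the JSON, serialize it to a string, and keep it under a couple of kilobytes before handing it to the SDK. The send(...) helper below is only a placeholder for whatever send method the Sinch SDK exposes, not its actual API.

    import org.json.JSONObject;
    import java.nio.charset.StandardCharsets;

    public class JsonPayloadExample {

        // Placeholder: stands in for the real Sinch SDK send call.
        static void send(String recipientId, String body) {
            // messageClient.send(...) would go here
        }

        public static void main(String[] args) {
            String body = new JSONObject()
                    .put("type", "order_update")
                    .put("orderId", 42)
                    .put("status", "shipped")
                    .toString();

            // Keep the payload to a couple of kilobytes, as recommended above.
            int sizeBytes = body.getBytes(StandardCharsets.UTF_8).length;
            if (sizeBytes > 2 * 1024) {
                throw new IllegalArgumentException("Payload too large: " + sizeBytes + " bytes");
            }

            send("alice", body);
        }
    }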
I have the below requirements with respect to HTTP/2.
1) While initiating a client-side HTTP connection, I should be able to set the MAX_CONCURRENT_STREAMS supported by the HTTP/2 server, and handle failure conditions accordingly.
2) Get the stream ID of a stream and assign priorities.
I checked the OkHttp client and the Java 11 HTTP client, but couldn't find any way to achieve this.
Please let me know if there is any way to achieve these.
If you need to deal with low-level details of the HTTP/2 protocol, you can use Jetty's HTTP2Client.
Note that it's the server that decides the max number of concurrent streams it can support, and the client cannot modify that value.
The client can send to the server the max number of concurrent streams it supports, but that number refers to the pushed streams that the server can send to the client.
Using the HTTP2Client APIs you will have easy access to the stream ID, and you will be able to send PRIORITY frames to the server to assign (and modify) priorities for requests.
This is a simple example of how to use HTTP2Client.
You can find more examples in this directory.
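As a rough sketch of what the low-level API looks like (Jetty 9.4-era class names; they differ slightly in Jetty 10+), the snippet below opens a session, creates a stream with a PRIORITY hint attached to its HEADERS frame, and reads the stream ID. Treat it as an outline under those assumptions rather than a verified drop-in example.

    import java.net.InetSocketAddress;
    import java.util.concurrent.TimeUnit;

    import org.eclipse.jetty.http.HttpFields;
    import org.eclipse.jetty.http.HttpURI;
    import org.eclipse.jetty.http.HttpVersion;
    import org.eclipse.jetty.http.MetaData;
    import org.eclipse.jetty.http2.api.Session;
    import org.eclipse.jetty.http2.api.Stream;
    import org.eclipse.jetty.http2.client.HTTP2Client;
    import org.eclipse.jetty.http2.frames.DataFrame;
    import org.eclipse.jetty.http2.frames.HeadersFrame;
    import org.eclipse.jetty.http2.frames.PriorityFrame;
    import org.eclipse.jetty.util.Callback;
    import org.eclipse.jetty.util.FuturePromise;

    public class Http2ClientSketch {
        public static void main(String[] args) throws Exception {
            HTTP2Client client = new HTTP2Client();
            client.start();

            // Open an HTTP/2 session (cleartext h2c here; pass an SslContextFactory for TLS).
            FuturePromise<Session> sessionPromise = new FuturePromise<>();
            client.connect(new InetSocketAddress("localhost", 8080),
                    new Session.Listener.Adapter(), sessionPromise);
            Session session = sessionPromise.get(5, TimeUnit.SECONDS);

            // Note: the server announces SETTINGS_MAX_CONCURRENT_STREAMS; the client must respect it.

            // Build a GET request and attach a PRIORITY hint to the HEADERS frame.
            MetaData.Request request = new MetaData.Request("GET",
                    new HttpURI("http://localhost:8080/resource"), HttpVersion.HTTP_2, new HttpFields());
            PriorityFrame priority = new PriorityFrame(0 /* parent stream */, 16 /* weight */, false);
            HeadersFrame headers = new HeadersFrame(request, priority, true);

            FuturePromise<Stream> streamPromise = new FuturePromise<>();
            session.newStream(headers, streamPromise, new Stream.Listener.Adapter() {
                @Override
                public void onHeaders(Stream stream, HeadersFrame frame) {
                    System.out.println("Response headers on stream " + stream.getId());
                }

                @Override
                public void onData(Stream stream, DataFrame frame, Callback callback) {
                    System.out.println("Received " + frame.getData().remaining() + " bytes");
                    callback.succeeded();
                }
            });

            Stream stream = streamPromise.get(5, TimeUnit.SECONDS);
            System.out.println("Stream ID: " + stream.getId());

            Thread.sleep(1000); // crude wait for the response in this sketch
            client.stop();
        }
    }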
When a server broadcasts the same info to multiple clients over WebSocket connections, my idea is that some clients will receive the information sooner (supposing the transmit time is the same for all clients), because after all, the data going out of the server is "serial".
Or is there something I'm missing? Can it depend on the implementation of the WebSocket broadcast?
How can, for example, a FOREX server be sure that all the clients receive the information about completed exchanges at the same time?
There's never a guaranteed way that all the clients will receive the data at the same time.
Even if the data were sent at the same time (for example, using UDP broadcasting rather than a WebSocket connection), clients suffer from different network latency and routing, so the data would still arrive at different times.
For WebSockets, the server itself will always send the data to some clients before it's sent to other clients...
...but this doesn't mean the data will arrive in the same order. Network latency, connectivity issues, intermediary performance and other uncontrollable concerns might make it so the data that was sent first arrives last. It's impossible to control.
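To make the "serial" point concrete, here is a bare-bones broadcast loop using the standard javax.websocket API: the loop writes to one session at a time, so the last client in the collection is always handed the data after the first one, before network latency even comes into play.

    import java.io.IOException;
    import java.util.Set;
    import javax.websocket.Session;

    public class Broadcaster {

        /**
         * Sends the same message to every connected session, one after another.
         * Even if every send took zero time on the wire, session N would still be
         * written after session N-1; delivery time then varies further with each
         * client's latency and routing.
         */
        public static void broadcast(Set<Session> sessions, String message) {
            for (Session session : sessions) {
                if (!session.isOpen()) {
                    continue;
                }
                try {
                    session.getBasicRemote().sendText(message);    // blocking, serial write
                    // session.getAsyncRemote().sendText(message); // async variant: still queued per client
                } catch (IOException e) {
                    // Log or drop broken connections; don't let one client stall the loop.
                }
            }
        }
    }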
I'm building a WebSocket server; for testing purposes, I'd like Chrome, Firefox, or any other browser to send a message fragmented so I can test my implementation.
I've even tried sending 100 KB of text data, and the FIN flag is always set to 1 and the opcode is TEXT.
Is there a way to manually trigger fragmented frames? Any client out there with more flexibility?
The JavaScript WebSocket API does not expose this option. I recently ran into the same frustration when a more modern browser (a Chromium derivative) was unpredictably sending fragmented WebSocket frames.
For testing, I rolled my own TCP client that sends pre-calculated fragmented WebSocket frames. Not ideal, but it got the job done, and AFAIK there's no alternative yet.
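If you go the same route, the frame layout itself is simple (RFC 6455): the first fragment carries the TEXT opcode with FIN=0, the last fragment carries the CONTINUATION opcode with FIN=1, and client-to-server frames must be masked. Here is a sketch of the byte-packing, assuming the HTTP Upgrade handshake has already been completed on the socket and each fragment's payload fits in a single length byte (< 126 bytes).

    import java.io.OutputStream;
    import java.nio.charset.StandardCharsets;
    import java.security.SecureRandom;

    public class FragmentedFrameSender {
        private static final SecureRandom RANDOM = new SecureRandom();

        /** Builds one client-to-server frame (masked, payload < 126 bytes for simplicity). */
        static byte[] frame(boolean fin, int opcode, byte[] payload) {
            byte[] mask = new byte[4];
            RANDOM.nextBytes(mask);

            byte[] frame = new byte[2 + 4 + payload.length];
            frame[0] = (byte) ((fin ? 0x80 : 0x00) | opcode);   // FIN bit + opcode
            frame[1] = (byte) (0x80 | payload.length);          // MASK bit + 7-bit length
            System.arraycopy(mask, 0, frame, 2, 4);
            for (int i = 0; i < payload.length; i++) {
                frame[6 + i] = (byte) (payload[i] ^ mask[i % 4]); // apply masking key
            }
            return frame;
        }

        /** Sends "Hello, world!" as two fragments over an already-upgraded connection. */
        static void sendFragmented(OutputStream out) throws Exception {
            byte[] part1 = "Hello, ".getBytes(StandardCharsets.UTF_8);
            byte[] part2 = "world!".getBytes(StandardCharsets.UTF_8);

            out.write(frame(false, 0x1, part1)); // FIN=0, opcode TEXT
            out.write(frame(true, 0x0, part2));  // FIN=1, opcode CONTINUATION
            out.flush();
        }
    }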
How does Twilio get to send so many messages via SMS? I am thinking about making my own service for my company for internal use, but I am trying to figure out how they manage such a large volume while still remaining afloat. Are they using some sort of connection to a large set of phones and automagically sending the messages from their actual devices? Wouldn't their service provider frown upon that kind of volume?
They are most likely using the SMPP protocol to send SMS messages directly to the operators' services. SMPP is a protocol widely used for sending mass (bulk) SMS messages between a third party and an operator.
Excerpt from Wikipedia:
The protocol is based on pairs of request/response PDUs (protocol data units, or packets) exchanged over OSI layer 4 (TCP session or X.25 SVC3) connections. PDUs are binary encoded for efficiency. Data exchange may be synchronous, where each peer waits for a response for each PDU being sent, and asynchronous [...]
See full Wikipedia article: Short Message Peer-to-Peer
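To illustrate the "binary encoded" part: every SMPP PDU starts with a fixed 16-octet header of four big-endian 32-bit integers (command_length, command_id, command_status, sequence_number). The sketch below encodes an enquire_link PDU, which has no body, so the whole PDU is just the header.

    import java.io.ByteArrayOutputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;

    public class SmppPduHeader {
        static final int ENQUIRE_LINK = 0x00000015; // command_id for enquire_link

        /** Encodes a body-less SMPP PDU: a 16-byte header only. */
        static byte[] encodeEnquireLink(int sequenceNumber) throws IOException {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bytes); // writes big-endian ints

            out.writeInt(16);              // command_length: total PDU size incl. header
            out.writeInt(ENQUIRE_LINK);    // command_id
            out.writeInt(0);               // command_status: always 0 in requests
            out.writeInt(sequenceNumber);  // sequence_number: correlates request/response

            return bytes.toByteArray();
        }
    }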
I am trying to understand the difference between WebRTC and WebSockets so that I can better understand which scenario calls for what. I am curious about the broad idea of two parties (mainly web based, but potentially one being a dedicated server application) talking to each other.
Assumption:
Clearly, with regard to ad-hoc networks, WebRTC wins, as it natively supports the ICE protocol/method.
Questions:
Regarding direct communication between two known parties in-browser, if I am not relying on sending multimedia data and am only interested in sending integer data, does WebRTC give me any advantages over WebSockets other than data encryption?
Regarding a dedicated server speaking to a browser-based client, which platform gives me an advantage? I would need to code a WebRTC server (is this possible outside a browser?), or I would need to code a WebSocket server (a quick Google search makes me think this is possible).
There is one significant difference: WebSockets works over TCP, while WebRTC works over UDP.
In fact, WebRTC is the SRTP protocol with some additional features like STUN, ICE, and DTLS, and internal VoIP features such as an adaptive jitter buffer, AEC, AGC, etc.
So, WebSockets is designed for reliable communication. It is a good choice if you want to send any data that must be sent reliably.
When you use WebRTC, the transmitted stream is unreliable; some packets can get lost in the network. That is bad if you are sending critical data, for example for financial processing, but it is perfectly acceptable when you are sending an audio or video stream, where some frames can be lost without any noticeable quality issues.
If you want to send data over a WebRTC data channel, you should have some forward error correction algorithm to restore the data if a data frame is lost in the network.
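One simple form of forward error correction is an XOR parity packet: for every group of equal-length packets you also send their XOR, and if exactly one packet of the group is lost, the receiver can rebuild it from the parity plus the surviving packets. A sketch of the idea, not tied to any particular WebRTC library:

    public class XorParityFec {

        /** Computes the XOR parity packet for a group of equal-length packets. */
        static byte[] parity(byte[][] packets) {
            byte[] parity = new byte[packets[0].length];
            for (byte[] packet : packets) {
                for (int i = 0; i < parity.length; i++) {
                    parity[i] ^= packet[i];
                }
            }
            return parity;
        }

        /** Recovers a single lost packet: XOR the parity with every packet that did arrive. */
        static byte[] recover(byte[][] receivedPackets, byte[] parity) {
            byte[] missing = parity.clone();
            for (byte[] packet : receivedPackets) {
                for (int i = 0; i < missing.length; i++) {
                    missing[i] ^= packet[i];
                }
            }
            return missing;
        }
    }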
WebRTC specifies media transport over RTP, which can work P2P under certain circumstances. In any case, to establish a WebRTC session you will also need a signaling protocol, and for that WebSocket is a likely choice. In other words: unless you want to stream real-time media, WebSocket is probably a better fit.
Question 1: Yes. The DataChannel part of WebRTC gives you advantages in this case, because it allows you to create a peer-to-peer channel between browsers to send and receive any raw data you want. WebSockets force you to use a server to connect both parties.
Question 2: Like I said in the previous response, WebSockets are better if you want server-client communication, and there are many implementations to do this (e.g. jWebSocket). Adding support to a server for establishing a connection with a WebRTC DataChannel may take you some days of life and health. :)
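For comparison, the server side of a WebSocket setup really is just a few lines with the standard Java WebSocket API (JSR 356, shown here instead of jWebSocket specifically); the endpoint below simply echoes whatever it receives.

    import javax.websocket.OnMessage;
    import javax.websocket.server.ServerEndpoint;

    // Deploy in any JSR 356 container (Tomcat, Jetty, ...); clients connect to ws://host/echo.
    @ServerEndpoint("/echo")
    public class EchoEndpoint {

        @OnMessage
        public String onMessage(String message) {
            // Returning a String sends it straight back to the client that sent the message.
            return message;
        }
    }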