I am trying to understand why the WebSocket protocol is faster than HTTP. Some of the points that I came across were that 1) the overhead of headers is reduced after the handshake, since WebSocket frames carry minimal headers, and 2) WebSockets maintain a full-duplex connection with the server (meaning they can send and receive data simultaneously).
I don't quite get the second point. I only find jargon on the internet. Can someone explain what having a true full-duplex connection means?
It means support for data transmission between two points in both directions at exactly the same time. As opposed to half duplex, or to simplex. Did you try Wikipedia?
When performing AJAX requests, I have always tried to do as few as possible since there is an overhead to each request having to open the http connection to send the data. Since a websocket connection is constantly open, is there any cost outside of the obvious packet bandwidth to sending a request?
For example: over the space of 1 minute, a client will send 100 KB of data to the server. Assuming the client does not need a response to any of these requests, is there any advantage to queuing packets and sending them in one big burst vs. sending them as they are ready?
In other words, is there an overhead to the stopping and starting data transfer for a connection that is constantly open?
I want to make a multiplayer browser game as real time as possible, but I don't want to find that hundreds of tiny requests per minute, compared to a larger consolidated request, are causing the server additional stress. I understand that if the client needs a response it will be slower, as there is a lot of waiting from the back and forth. I will consider this and only consolidate when it is appropriate. The more small requests per minute, the better the user experience, but I don't know what toll it will have on the server.
You are correct that a webSocket message will have lower overhead for a given message transmission than sending the same message via an Ajax call because the webSocket connection is already established and because a webSocket message has lower overhead than an HTTP request.
First off, there's always less overhead in sending one larger transmission vs. sending lots of smaller transmissions. That's just the nature of TCP. Every TCP packet gets separately processed and acknowledged so sending more of them costs a bit more overhead. Whether that difference is relevant or significant and worth writing extra code for or worth sacrificing some element of your user experience (because of the delay for batching) depends entirely upon the specifics of a given situation.
Since you've described a situation where your client gets the best experience if there is no delay and no batching of packets, then it seems that what you should do is not implement the batching and test out how your server handles the load with lots of smaller packets when it gets pretty busy. If that works just fine, then stay with the better user experience. If you have issues keeping up with the load, then seriously profile your server and find out where the main bottleneck to performance is (you will probably be surprised about where the bottleneck actually is as it is often not where you think it will be - that's why you have to profile and measure to know where to concentrate your energy for improving the scalability).
FYI, due to the implementation of Nagle's algorithm in most TCP stacks, the TCP stack itself does small amounts of batching for you if you are sending multiple requests fairly closely spaced in time or if sending over a slower link.
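(As an aside: if you ever decide that latency matters more than packet count, most socket APIs let you switch that stack-level coalescing off via the TCP_NODELAY option. A minimal Python illustration:)

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Disable Nagle's algorithm: small writes go out immediately instead of
# being held back while earlier data is still unacknowledged.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
```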
It's also possible to implement a dynamic system where as long as your server is able to keep up, you keep with the smaller and more responsive packets, but if your server starts to get busy, you start batching in order to reduce the number of separate transmissions.
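If you do end up needing that batching fallback, here is a minimal sketch of the idea. send_ws is a stand-in for whatever send call your WebSocket library exposes, and the flush interval is an arbitrary assumed value, not a recommendation:

```python
import json
import time

class MessageBatcher:
    """Send small updates immediately while the server keeps up,
    but queue and coalesce them into one message when it is busy."""

    def __init__(self, send_ws, flush_interval=0.05):
        self.send_ws = send_ws              # stand-in for your library's send()
        self.flush_interval = flush_interval
        self.queue = []
        self.last_flush = time.monotonic()
        self.server_busy = False            # toggle from your own load feedback

    def send(self, update):
        if not self.server_busy:
            self.send_ws(json.dumps(update))      # small, responsive messages
            return
        self.queue.append(update)
        if time.monotonic() - self.last_flush >= self.flush_interval:
            self.flush()

    def flush(self):
        if self.queue:
            self.send_ws(json.dumps(self.queue))  # one larger transmission
            self.queue.clear()
        self.last_flush = time.monotonic()
```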
This question is for an indication/hunch. I realize that it may have been discussed before and that there is no good, scientific answer; nevertheless I seek experienced/qualified opinions, as there are no definite answers to be found. An indication will be valuable as a clue, hence I ask the community to allow a bit of fuzziness.
Background:
Consider a very-large-area 3D simulation
with n participants (peers, people behind NAT) distributed over multiple cities.
where each participant is seen as one "moving object" in the simulation (hence each moving object is owned by a peer).
where each peer shall see all other moving objects correctly (i.e. positional updates are needed).
(The entire simulation is larger, so we now focus on one single blob, and consider it to be the entire "world").
Scale:
World/blob size 10x10 kilometers (almost flat world).
Object size: Length max 10 meters
(We omit things like occlusion, optimisations, balancing etc. Assume that everything there is needs to be seen and updated.)
The nature of "moving object":
it is physically/positionally restless (compare to a boat in big waves).
its movement must be synced to all peers (but individual syncs do not need to be simultaneous with other syncs).
if X sees one but does not own it, it will behave well (deterministically, by X's local physics calculation) for maybe 1 second, but after that it will diverge (due to different frame rates) and needs a positional update (a UDP packet) from its owner.
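That "behaves well for maybe 1 second" part is essentially dead reckoning; a minimal sketch, where the (timestamp, position, velocity) layout is an assumption made purely for illustration:

```python
def extrapolate(last_update, now):
    """Dead-reckon a remote object's position from its last known state.
    last_update is assumed to be a (timestamp, position, velocity) tuple."""
    t0, pos, vel = last_update
    dt = now - t0
    # Acceptable for roughly a second; beyond that the divergence described
    # above sets in and a fresh positional update from the owner is needed.
    return [p + v * dt for p, v in zip(pos, vel)]
```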
From a peer's point of view:
He needs to update n-1 other peers
He needs to receive updates from n-1 other peers
The positional updates are the critical ones, so focus only on those. One update is ca 20-30 doubles, ca. 200 bytes. Consider UDP only.
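For concreteness, a sketch of what one such ~200-byte update could look like on the wire; the field layout (one object id plus 24 doubles of state) is invented for illustration:

```python
import socket
import struct

# Invented layout: object id plus 24 doubles of state (timestamp, position,
# velocity, orientation, angular velocity, spare slots).
UPDATE_FORMAT = "!I24d"                       # 4 + 24*8 = 196 bytes
assert struct.calcsize(UPDATE_FORMAT) == 196

def send_update(sock, addr, object_id, state):
    # state is a sequence of 24 floats describing the moving object
    packet = struct.pack(UPDATE_FORMAT, object_id, *state)
    sock.sendto(packet, addr)                 # one UDP datagram per update

def recv_update(sock):
    data, addr = sock.recvfrom(2048)
    object_id, *state = struct.unpack(UPDATE_FORMAT, data)
    return addr, object_id, state

# Example with a made-up peer address:
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# send_update(sock, ("203.0.113.5", 9000), 42, [0.0] * 24)
```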
As I see it, there are two options. The first one is serverless, where everything works solely on peer-to-peer communication. The second one is having a server (one, for now) in the middle.
1. Serverless, p2p
Each peer must talk with many other peers. One problem is that "Nagle'ing" is useless. The first reason is that all endpoints are different; the second is that the local data changes from frame to frame, and there is no point in accumulating multiple frames' data to send in a larger packet, more sparsely. The oldest frames' data would be outdated. An advantage, however, is not being dependent on a server.
2. Server-supported
Each peer sends its info to a high-performance, high-bandwidth server which is able to better receive and distribute it to all peers, at a fast rate. Similarly, any peer would receive all peers' data from one endpoint only, the server.
Naturally, each peer runs a game loop.
Question: Hopefully based on some kind of experience, what would you throw as a maximum functional number of peers for case 1, case 2? Thx.
It is difficult to quantify, but for such a level of all-to-all synchronization I would recommend centralized control.
In p2p mode each peer would send n-1 and receive n-1 packets each pseudo-round. In centralized mode they would still receive n-1 packets, but would send only 1, spending less time on this task. So centralized mode seems to be more scalable.
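A quick back-of-the-envelope calculation shows the asymmetry; the peer count and update rate below are assumed values, not recommendations:

```python
n = 50            # assumed number of peers
rate = 20         # assumed positional updates per second per object
size = 200        # bytes per update, as given in the question

# p2p: every peer uploads its own state to each of the n-1 others
p2p_upload_per_peer = (n - 1) * rate * size        # 196,000 B/s, about 190 KiB/s

# centralized: every peer uploads its state once; the server fans it out
client_upload = rate * size                        # 4,000 B/s per peer
server_upload = n * (n - 1) * rate * size          # 9,800,000 B/s, about 9.3 MiB/s

print(p2p_upload_per_peer, client_upload, server_upload)
```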
A server can check whether update messages are consistent before delivering them. In p2p, each peer would have to deal with unstable or disconnected peers, which could be better managed by a server.
In centralized mode the update time has to be chosen more carefully, because clients are more susceptible to higher latencies, as each packet has to travel to the server and then back to the clients. Choosing the best server for the clients is one thing to consider.
Combining packets could make the information traverse the network faster, but as outdated data is an issue, try to make sure each packet is as small as possible; transmission time is smaller in this case.
I am aggregating connections by going through a packet dump collected using tcpdump. My code is in Ruby.
The code will differentiate between connections using the 4-tuple (SrcIP, SrcPort, DstIP, DstPort).
Now, if connections are between the same machines, having the same IPs and the same ports, then they are differentiated by the following method:
1. If the time between the connections is more than 2 hours, then it's a new connection.
2. If we have already seen a FIN or an RST, then the new packet is from a new connection.
3. If the number of SYNs is more than two (one in each direction), then the connection is a new connection.
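Roughly, those three rules amount to something like the following sketch (the flow/pkt field names are invented for illustration; my real code is in Ruby, but the logic is the same):

```python
TWO_HOURS = 2 * 60 * 60   # seconds

def starts_new_connection(flow, pkt):
    """flow holds the state seen so far for one 4-tuple; pkt is the next packet."""
    if pkt.time - flow.last_time > TWO_HOURS:            # rule 1
        return True
    if flow.saw_fin or flow.saw_rst:                     # rule 2
        return True
    if flow.syn_count + (1 if pkt.is_syn else 0) > 2:    # rule 3
        return True
    return False
```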
The situation I am not able to address is the following:
A new connection between the same two hosts (having the same 4-tuple) happened within 2 hours, and tcpdump dropped the previous RST or FIN packets and also dropped 2 or more SYN packets from both connections. In that case none of the above conditions that I have set will work, and the only information that remains is the time of the new set of packets, the Seq Nos, the Ack Nos and the data size. Just using this information, could I figure out if the connection is a new one or an old one?
I tried to see if there is a pattern in the sequence numbers, or between the SeqNo and the AckNo, but none seems definitive.
Because TCP (primarily) uses a sliding acknowledgement window, the SeqNo and AckNo will be monotonically increasing fields -- until they wrap around due to integer overflow.
Also, the SeqNo from one direction of traffic corresponds to the AckNo of the other direction of traffic, providing another invariant that you can check.
One complicating factor is that the SeqNo are initially chosen to be random to reduce the likelihood of man in the middle attacks; so, a new session with otherwise identical parameters might pick initial sequence numbers that are larger than the previously visible sequence numbers, and confuse your algorithms.
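A minimal sketch of that monotonicity check in Python, using 32-bit serial-number comparison to handle the wrap-around (and, per the caveat above, a new connection whose random ISN happens to land ahead of the old range will still slip past it):

```python
SEQ_MOD = 2 ** 32

def seq_at_or_after(a, b):
    """True if 32-bit sequence number a is at or after b, allowing wrap-around."""
    return (a - b) % SEQ_MOD < SEQ_MOD // 2

def looks_like_new_connection(prev_seq, prev_ack, seq, ack):
    # Within one connection, each direction's SeqNo/AckNo should only move
    # forward; a value that moves backwards relative to what was last seen
    # hints that the 4-tuple has been reused by a new connection.
    return not seq_at_or_after(seq, prev_seq) or not seq_at_or_after(ack, prev_ack)
```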
How does one determine which of the peers you are connected to has the fastest connection (upload rate)?
Does the actual connection speed of the peer determine who is fastest, or will the peer who needs the most chunks end up uploading the fastest because fewer people are downloading from him?
I want to write an algorithm which takes all the peers in the peer list returned from the tracker and determines which peers are closest, using a ping and timing the response, or some other way.
Thanks
A ping (ICMP echo request/reply) will give you the latency of a peer, but not the available bandwidth the peer has. You want the bandwidth since TCP is good at doing bandwidth*delay products and figuring out how to make a connection fast, even if it roundtrips a satellite.
What you do is to connect to all of them. Having 40 peers connected is not uncommon. And then you decide upon which to unchoke based on their current rates towards you (until you become a seeder). It also has to be fairly dynamic, since available bandwidth changes over time. The best advice I can give is to read
http://www.bittorrent.org/bittorrentecon.pdf
which gives the general idea of how to implement the economics. But many clients do different things than the paper, so reading code is another option.
So: You want to measure bandwidth, not latency. Hence, ping is the wrong tool for the job. Measuring bandwidth is most easily done by tracking the rate at which you send packets to a peer.
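A minimal sketch of that kind of rate tracking, using a rolling window per peer (the window length and the number of unchoke slots are arbitrary assumed values):

```python
import time
from collections import defaultdict, deque

class RateTracker:
    """Track recent bytes exchanged with each peer and report rolling rates."""

    def __init__(self, window=10.0):
        self.window = window                  # seconds of history to keep
        self.samples = defaultdict(deque)     # peer -> deque of (time, nbytes)

    def record(self, peer, nbytes):
        now = time.monotonic()
        q = self.samples[peer]
        q.append((now, nbytes))
        while q and q[0][0] < now - self.window:   # drop samples outside the window
            q.popleft()

    def rate(self, peer):
        return sum(n for _, n in self.samples[peer]) / self.window   # bytes/s

    def best_peers(self, k=4):
        # The k peers with the highest recent rate, e.g. candidates to unchoke.
        return sorted(self.samples, key=self.rate, reverse=True)[:k]
```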
I think that the choking/unchoking algorithm and selecting peers to unchoke is one of the hardest parts to get right in a client. It is best solved with pen, paper and brain, not by sitting in front of the computer writing code.
I've been doing some research on Nagle's algorithm out of idle curiosity. I understand the basic concept behind it (TCP packets contain a significant amount of overhead, especially when dealing with small payloads), but I'm not sure I grok the implementation.
I was reading this article on Wikipedia, but I'm still unclear on how it works. Let's take the example of a Telnet connection. The connection is established and I begin typing. Let's say I type three characters (cat, for example) and hit return. Now we're talking cat\r\n which is still only 5 bytes. I'd think this would not get sent until we queue up enough bytes to send - and yet, it does get sent immediately (from a user perspective), since cat is immediately executed upon hitting return.
I think I have a fundamental misunderstanding here on how the algorithm works, specifically regarding the bit where "if there is unconfirmed data still in the pipe, enqueue, else send immediately."
The data gets sent immediately only if the server has already acknowledged all previous data from you (or this is your first send on the connection). So, as the server gets busier and slower to acknowledge, in order to avoid swamping it with too many packets, the data gets queued up, to at most a full packet size, before getting sent.
So whether data gets sent immediately or not can only be determined in the context of previous messages, if any.
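In rough pseudocode, the rule being described looks something like this; it is a simplified sketch of the idea from RFC 896, not a real TCP stack:

```python
def nagle_send(buffer, new_data, mss, unacked_bytes, send):
    """Simplified Nagle decision: small data goes out immediately only when
    nothing sent earlier is still waiting to be acknowledged."""
    buffer += new_data
    if len(buffer) >= mss:
        send(buffer[:mss])         # a full-size segment is always sent
        buffer = buffer[mss:]
    elif buffer and unacked_bytes == 0:
        send(buffer)               # nothing in flight: 'cat\r\n' goes out now
        buffer = b""
    # otherwise keep buffering until the outstanding data has been ACKed
    return buffer
```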
Read this post; it is quite in-depth and clarified a lot of things for me.