Reading the SPDY whitepaper at http://dev.chromium.org/spdy/spdy-whitepaper, it seems like supporting it will improve my HTTP latency. However, I'm not clear on a few of the claims.
1) "Because HTTP can only fetch one resource at a time (HTTP pipelining helps, but still enforces only a FIFO queue), a server delay of 500 ms prevents reuse of the TCP channel for additional requests." -- Where did this 500ms number come from?
2) "We discovered that SPDY's latency savings increased proportionally with increases in packet loss rates, up to a 48% speedup at 2%." -- But doesn't putting all the requests on a single TCP connection mean that congestion control will slow down all your requests whereas is you had multiple connections, 1 TCP stream would slow down but others would not?
3) "[With pipelining] any delays in the processing of anything in the stream (either a long request at the head-of-line or packet loss) will delay the entire stream." -- This implies that packet loss would not delay the entire stream using SPDY. Why not?
The 500ms reference is simply an example; the number could be 50ms or 5s, but the point is the same: HTTP forces FIFO processing, which results in inefficient use of the underlying TCP connection. As the paper notes, pipelining can help in theory, but in practice pipelining is not used because many intermediaries break when you turn it on. Hence, you're stuck with the worst-case scenario: full RTT + server processing time, and FIFO ordering.
Re packet loss: yup, you're exactly right. One of the downsides of using a single connection is that in the case of packet loss, the throughput of the entire connection is cut in half, as opposed to halving just one of the N connections in flight. Having said that, there are also some benefits! For example, when you saturate a single connection, you get much faster recovery due to triple duplicate ACKs (fast retransmit), plus a potentially much wider congestion window to begin with. Because most HTTP transfers are relatively small (tens of KB), it is not unusual for many connections to terminate before they even exit the slow-start phase!
Re pipelining: a lost packet would still delay the stream - that's TCP. The win is in eliminating head-of-line blocking at the HTTP layer, which enables a lot more, and a lot smarter, optimization by the browser, followed by some of the wins I described above.
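For a hands-on feel of the multiplexing benefit, here is a minimal Go sketch (the URLs are hypothetical, and it assumes the server supports HTTP/2 over TLS, which Go's default HTTPS transport negotiates automatically): all of the concurrent requests below share one TCP connection instead of opening several.

package main

import (
    "fmt"
    "io"
    "net/http"
    "sync"
)

func main() {
    // Go's default transport speaks HTTP/2 over TLS when the server offers it,
    // so these requests are multiplexed over a single TCP connection.
    urls := []string{
        "https://example.com/a.css", // hypothetical resources
        "https://example.com/b.js",
        "https://example.com/c.png",
    }

    var wg sync.WaitGroup
    for _, u := range urls {
        wg.Add(1)
        go func(u string) {
            defer wg.Done()
            resp, err := http.Get(u)
            if err != nil {
                fmt.Println(u, err)
                return
            }
            defer resp.Body.Close()
            n, _ := io.Copy(io.Discard, resp.Body)
            fmt.Printf("%s: %d bytes via %s\n", u, n, resp.Proto) // resp.Proto reports "HTTP/2.0" when multiplexed
        }(u)
    }
    wg.Wait()
}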
#GroovyDotCom: Here's some hands-on proof of HTTP2's (SPDY's) performance benefits:
http://www.httpvshttps.com/
As a test, I wrote a little tool to test the LAN connection between two PCs.
It is a client/server model that just sends as many UDP packets as it can and on the other side I read everything I can.
To max out my resources, I start a goroutine for every core my machine has.
Sending, receiving and measuring speed works, but when I get to high throughput (500+ Mb/s), the receiving end becomes completely unresponsive.
If I throttle the connection, I don't have any problems.
Also, my CPU maxes out just one core (although I used runtime.GOMAXPROCS(0) and start receiving in runtime.NumCPU goroutines).
I uploaded the code to GitHub over here: https://github.com/femot/lanbench
If I change the client to run locally, the problem does not occur. It only happens if I start the client from another PC (although the measured speed also tops out at 650 Mb/s).
Your server is limited first by the delta channel with a buffer of 100. I'm sure that at any significant packet rate you will be overwhelming that loop.
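One way around that bottleneck, as a sketch only (I haven't matched this to your repo, so the names, port, and buffer size are illustrative): have each receiver goroutine bump an atomic byte counter instead of sending a value down a channel per packet, and let a single ticker goroutine sample it.

package main

import (
    "fmt"
    "net"
    "sync/atomic"
    "time"
)

func main() {
    // Every received packet just bumps an atomic counter,
    // so the hot receive loop never blocks on a channel.
    var bytesReceived int64

    conn, err := net.ListenUDP("udp", &net.UDPAddr{Port: 9001}) // port is arbitrary
    if err != nil {
        panic(err)
    }
    defer conn.Close()

    go func() {
        buf := make([]byte, 2048)
        for {
            n, _, err := conn.ReadFromUDP(buf)
            if err != nil {
                return
            }
            atomic.AddInt64(&bytesReceived, int64(n))
        }
    }()

    // A single sampler reports throughput once per second.
    ticker := time.NewTicker(time.Second)
    defer ticker.Stop()
    var last int64
    for range ticker.C {
        cur := atomic.LoadInt64(&bytesReceived)
        fmt.Printf("%.1f Mb/s\n", float64(cur-last)*8/1e6)
        last = cur
    }
}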
This isn't a very good benchmark, since your packet rate is going to be a limiting factor more so than bandwidth. You're specifically only testing how fast Go can send and receive 1024-byte UDP datagrams.
Regardless of how many goroutines you start, the IO is all going through the network poller in a single thread. If you can't saturate your link with a single core, you're going to need multiple processes, or you need to do this in another language.
How to measure the actual bandwidth between server and client, to decide how much real-time data to send?
My server sends real-time data to clients, 30 times per second. If the server has too much data, it prioritises data chunks and throws away anything that doesn't fit into the available bandwidth, because that data will be invalidated next tick anyway. Data is sent over a reliable channel (20%) and an unreliable channel (80%) (both UDP based, but if TCP as a reliable channel can provide any benefit, please let me know). Data is highly latency-sensitive. The server often (but not always!) has more data than available bandwidth. It's critical to send as much data as possible, but not more than the available bandwidth, to avoid packet drops or higher latency.
Server and client are custom applications so can implement any algorithm/protocol.
My main problem is how to keep track of available bandwidth. Also any statistical info about typical bandwidth jitter would be helpful (servers are in a cloud, clients are home users, worldwide).
At the moment I'm thinking how to utilize:
latency info from the reliable channel. It should correlate with bandwidth, because if latency grows this can (!) mean retransmission is involved as a result of packet drops, so the server must lower the data rate.
the amount of data received by the client on the unreliable channel during a time frame, especially if that amount is lower than what the server sent.
if current latency is close to or below the lowest recorded one, bandwidth can be increased
The problem is that this approach is too complicated and involves a lot of "heuristics", like what the step to increase/decrease bandwidth should be, etc.
Looking for any advice from people who have dealt with a similar problem in the past, or just any bright ideas.
The first symptom of trying to use more bandwidth than you actually have will be increased latency, as you fill up the buffers between the sender and whatever the bottleneck is. See https://en.wikipedia.org/wiki/Bufferbloat. My guess is that if you can successfully detect increased latency as you start to fill up the bandwidth and back off then you can avoid packet loss.
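As an illustration of that back-off idea, here is a minimal Go sketch of an AIMD-style controller driven by measured latency; the type name, thresholds, and step sizes are all invented for the example, not taken from any real implementation.

package main

import (
    "fmt"
    "time"
)

// RateController is an illustrative AIMD-style controller driven by measured latency:
// additive increase while latency stays near the baseline, multiplicative decrease
// once latency climbs (a sign that buffers along the path are filling).
type RateController struct {
    RateBps    float64       // current send budget, bytes per second
    MaxRateBps float64       // upper bound for the budget
    baseRTT    time.Duration // lowest RTT observed so far ("empty buffers" proxy)
}

// OnRTTSample adjusts the rate after each latency measurement.
// The 50% threshold, 0.8 factor, and 5 KB/s step are made-up tuning knobs.
func (c *RateController) OnRTTSample(rtt time.Duration) {
    if c.baseRTT == 0 || rtt < c.baseRTT {
        c.baseRTT = rtt
    }
    if rtt > c.baseRTT+c.baseRTT/2 {
        c.RateBps *= 0.8 // back off: latency is well above baseline
    } else {
        c.RateBps += 5 * 1024 // probe for more bandwidth
        if c.RateBps > c.MaxRateBps {
            c.RateBps = c.MaxRateBps
        }
    }
}

func main() {
    c := &RateController{RateBps: 100 * 1024, MaxRateBps: 10 * 1024 * 1024}
    for _, rtt := range []time.Duration{40 * time.Millisecond, 41 * time.Millisecond, 90 * time.Millisecond} {
        c.OnRTTSample(rtt)
        fmt.Printf("rtt=%v rate=%.0f B/s\n", rtt, c.RateBps)
    }
}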
I wouldn't underestimate TCP - people have spent a lot of time tuning its congestion avoidance to get a reasonable amount of the available bandwidth while still being a good network citizen. It may not be easy to do better.
On the other hand, a lot will depend on the attitude of the intermediate nodes, which may treat UDP differently from TCP. You may find that under load they either prioritize or discard UDP. Also some networks, especially with satellite links, may use https://en.wikipedia.org/wiki/TCP_acceleration without you even knowing about it. (This was a painful surprise for us - we relied on the TCP connection failing and keep-alive to detect loss of connectivity. Unfortunately the TCP accelerator in use maintained a connection to us, pretending to be the far end, even when connectivity to the far end had in fact been lost).
After some research, the problem has a name: Congestion Control, or Congestion Avoidance Algorithm. It's quite a complicated topic and there's a lot of material about it. TCP's congestion control has evolved over time and is a really good one. There are other protocols that implement it, e.g. UDT or SCTP.
My (network) clients send 50 to 100 KB data packets every 200ms to my server. There are up to 300 clients. The server sends nothing to the clients. The server (dedicated) and the clients are on a LAN. How can I tune the TCP configuration for better performance? The server is on Windows Server 2003 or 2008, the clients on Windows 2000 and up.
E.g. TCP window size - does changing this parameter help? Anything else? Any special socket options?
[EDIT]: actually in different modes packets can be up to 5MB
I did a study on this a couple of years ago with 1700 data points. The conclusion was that the single best thing you can do is configure an enormous socket receive buffer (e.g. 512k) at the receiver. Do that to the listening socket, so it will be inherited by the accepted sockets and thus already be set while they are handshaking. That in turn allows TCP window scaling to be negotiated during the handshake, which lets the client know about a window size > 64k. The enormous window size basically lets the client transmit at the maximum possible rate, subject only to congestion avoidance rather than closed receive windows.
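For what it's worth, here is roughly how that could be done in Go on Linux (the original study presumably used the native socket API, and the port and buffer size below are just examples): the buffer is set on the listening socket via ListenConfig.Control so accepted connections inherit it before the handshake.

package main

import (
    "context"
    "net"
    "syscall"
)

func main() {
    // Set a large SO_RCVBUF on the *listening* socket so every accepted
    // connection inherits it before the TCP handshake, allowing window
    // scaling to be negotiated with the larger window.
    lc := net.ListenConfig{
        Control: func(network, address string, c syscall.RawConn) error {
            var serr error
            err := c.Control(func(fd uintptr) {
                // 512 KB receive buffer (Linux-flavoured syscall constants).
                serr = syscall.SetsockoptInt(int(fd), syscall.SOL_SOCKET,
                    syscall.SO_RCVBUF, 512*1024)
            })
            if err != nil {
                return err
            }
            return serr
        },
    }

    ln, err := lc.Listen(context.Background(), "tcp", ":8080") // port is arbitrary
    if err != nil {
        panic(err)
    }
    defer ln.Close()

    for {
        conn, err := ln.Accept()
        if err != nil {
            return
        }
        // Handle the accepted connection; it already carries the large buffer.
        go func(c net.Conn) { defer c.Close() }(conn)
    }
}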
What OS?
IPv4 or v6?
Why so large a dump; why can't it be broken down?
Assuming a solid, stable, low bandwidth-delay product, you can adjust things like in-flight sizing, initial window size, and MTU (depending on the data, IP version, and mode [TCP/UDP]).
You could also round-robin or balance inputs so you have less interrupt time from the NIC; binding is an option as well.
5MB per packet? That's a pretty poor design. I would think it'd lead to a lot of segment retransmissions, and a LOT of kernel/stack memory being used in sequence reconstruction / retransmits (accept wait time, etc.).
(Is that even possible?)
Since all clients are in LAN, you might try enabling "jumbo frames" (need to run a netsh command for that, would need to google for the precise command, but there are plenty of how-tos).
On the application layer, you could use TransmitFile, which is the Windows sendfile equivalent and which works very well under Windows Server 2003 (it is artificially rate-limited under "non server", but that won't be a problem for you). Note that you can use a memory mapped file if you generate the data on the fly.
As for tuning parameters, increasing the send buffer will likely not give you any benefit, though increasing the receive buffer may help in some cases because it reduces the likelihood of packets being dropped if the receiving application does not handle the incoming data fast enough. A bigger TCP window size (registry setting) may help, as this allows the sender to send out more data before having to block until ACKs arrive.
Yanking up the program's working set quota may be worth considering; it costs you nothing and may be an advantage, since the kernel needs to lock pages when sending them. Being allowed to have more pages locked might make things faster (or might not, but it won't hurt either; the defaults are ridiculously low anyway).
We are using a business Ethernet connection (3Mbit upload, 3Mbit download) and trying to understand issues with our tested bandwidth speeds. When uploading a large file we sustain 340 KB/s; downloading, we sustain 340 KB/s. However, when we run these transfers simultaneously, the two transfer speeds rise and fall erratically, with an average speed for both at around 250 KB/s. We're using a Hatteras HN404 CPi and we've bypassed the router (plugged a machine directly into the Hatteras; set the NIC to full-duplex).
Is this expected? Should a max upload interfere with a max download on this type of Internet connection?
Are you sure the bottleneck is your connection?
Do you also see this behavior when the simultaneous upload and download are occurring on different systems, or only when one system is handling both the upload and download?
If the problem goes away when independent machines are doing the work, the bottleneck is likely closer to the hard drive.
This sounds expected from my experience with lower end lines. On a home line, I've found that traffic shaping and changing buffer sizes can be a huge help.
TCP/IP without any unusual traffic shaping will favor the most aggressive traffic at the expense of everything else. In your case, this means responses to the outgoing ACKs and such for the download will be delayed or maybe even dropped. See if your HN404 supports class based queuing or something similar and try it out.
Yes, it is expected. This is symptomatic of any case in which you have a throttled or capped connection: if you saturate your uplink it will affect your downlink, and vice versa.
This is because your connection's rate-limiting impacts TCP acknowledgement packets (ACKs) and disrupts the normal "balance" of how these packets flow.
This is very thoroughly described on this page about Cable Modem Troubleshooting Tips, although it is not limited to cable modems:
If you saturate your cable modem's upload cap with an upload, the ACK packets of your download will have to queue up waiting for a gap between the congested upload data packets. So your ACKs will be delayed getting back to the remote download server, and it will therefore believe you are on a very slow link, and slow down the transmission of further data to you.
So how do you avoid this? The best way is to implement some sort of traffic-shaping or QoS (Quality of Service) on individual sessions to limit them to a maximum throughput based on a percentage of your total available bandwidth.
For example, on my home network I have it set up so that no outbound connection can utilize more than 67% (2/3) of my 192Kbps uplink. That means any single outbound session can only utilize 128Kbps, therefore protecting my downlink speed by preventing the uplink from becoming saturated.
In most cases you are able to perform this kind of traffic-shaping based on any available criteria such as source ip, destination ip, protocol, port, time of day, etc.
It appears that I was wrong about the simultaneous transfer speeds. The 250KB/s speeds up and down were miscalculated by the transfer program (it seems to have been showing an inflated average speed). Apparently the business Ethernet (in this case an XO circuit provisioned by Speakeasy) only supports 3Mbit total, not 3Mbit up AND 3Mbit down (for 6Mbit total). So if I am transferring up and down at the same time, in theory I should only have 1.5Mbit each way, or 187.5KB/s at the maximum (if there were zero overhead).
Suppose you have a program which reads from a socket. How do you keep the download rate below a certain given threshold?
At the application layer (using a Berkeley socket style API) you just watch the clock, and read or write data at the rate you want to limit at.
If you only read 10kbps on average, but the source is sending more than that, then eventually all the buffers between it and you will fill up. TCP/IP allows for this, and the protocol will arrange for the sender to slow down (at the application layer, probably all you need to know is that at the other end, blocking write calls will block, nonblocking writes will fail, and asynchronous writes won't complete, until you've read enough data to allow it).
At the application layer you can only be approximate - you can't guarantee hard limits such as "no more than 10 kb will pass a given point in the network in any one second". But if you keep track of what you've received, you can get the average right in the long run.
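Here is a rough Go sketch of that clock-watching approach (the package and function names and the chunk size are arbitrary); it reads, then sleeps just long enough to keep the long-run average at or below the limit.

// Package ratelimit sketches the "watch the clock" approach described above.
package ratelimit

import (
    "io"
    "time"
)

// CopyRateLimited copies from src to dst at roughly limitBytesPerSec.
// It is approximate: it only controls how fast we drain the socket;
// TCP flow control then pushes back on the sender once the buffers fill.
func CopyRateLimited(dst io.Writer, src io.Reader, limitBytesPerSec int) error {
    buf := make([]byte, 16*1024) // chunk size is arbitrary
    start := time.Now()
    var total int64
    for {
        n, err := src.Read(buf)
        if n > 0 {
            if _, werr := dst.Write(buf[:n]); werr != nil {
                return werr
            }
            total += int64(n)
            // How long the transfer should have taken so far at the limit.
            expected := time.Duration(float64(total) / float64(limitBytesPerSec) * float64(time.Second))
            if ahead := expected - time.Since(start); ahead > 0 {
                time.Sleep(ahead) // we're ahead of schedule: sleep it off
            }
        }
        if err == io.EOF {
            return nil
        }
        if err != nil {
            return err
        }
    }
}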
Assuming a TCP/IP-based network transport, packets are sent in response to ACK packets going the other way.
By limiting the rate of packets acknowledging receipt of the incoming packets, you will in turn reduce the rate at which new packets are sent.
It can be a bit imprecise, so it's probably best to monitor the downstream rate and adjust the response rate adaptively until it falls inside a comfortable threshold. (This will happen really quickly, however - you send dozens of ACKs a second.)
It is like limiting a game to a certain number of FPS.
extern int FPS;                        /* target frame rate */
....
timePerFrameinMS = 1000 / FPS;         /* time budget per frame, in milliseconds */
while (1) {
    time = getMilliseconds();          /* pseudocode helper: current time in ms */
    DrawScene();
    time = getMilliseconds() - time;   /* how long this frame actually took */
    if (time < timePerFrameinMS) {
        sleep(timePerFrameinMS - time); /* pseudocode: sleep off the rest of the frame budget, in ms */
    }
}
This way you make sure that the game refresh rate will be at most FPS.
In the same manner DrawScene can be the function used to pump bytes into the socket stream.
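For example, a Go version of that pacing loop applied to a socket might look like this (the package and function names, chunk size, and slice length are placeholders):

// Package pacing adapts the frame-rate loop above to a socket: write one chunk,
// then sleep off the remainder of that chunk's time slice.
package pacing

import (
    "net"
    "time"
)

// PacedWrite sends data over conn in fixed time slices. With chunk = 10*1024 and
// slice = 100*time.Millisecond it would average roughly 100 KB/s.
func PacedWrite(conn net.Conn, data []byte, chunk int, slice time.Duration) error {
    for off := 0; off < len(data); off += chunk {
        start := time.Now()
        end := off + chunk
        if end > len(data) {
            end = len(data)
        }
        if _, err := conn.Write(data[off:end]); err != nil {
            return err
        }
        if spent := time.Since(start); spent < slice {
            time.Sleep(slice - spent) // burn the rest of this slice, like the FPS loop
        }
    }
    return nil
}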
If you're reading from a socket, you have no control over the bandwidth used - you're reading the operating system's buffer of that socket, and nothing you say will make the person writing to the socket write less data (unless, of course, you've worked out a protocol for that).
All that reading slowly would do is fill up the buffer, and cause an eventual stall on the network end - but you have no control of how or when this happens.
If you really want to read only so much data at a time, you can do something like this:
ReadFixedRate() {
    while (Data_Exists()) {
        t = GetTime();
        ReadBlock();                    /* read one block's worth of data */
        while (t + delay > GetTime()) {
            Delay();                    /* wait out the rest of this block's time slot */
        }
    }
}
wget seems to manage it with the --limit-rate option. Here's the relevant part of the man page:
Note that Wget implements the limiting by sleeping the appropriate amount of time after a network read that took less time than specified by the rate. Eventually this strategy causes the TCP transfer to slow down to approximately the specified rate. However, it may take some time for this balance to be achieved, so don't be surprised if limiting the rate doesn't work well with very small files.
As others have said, the OS kernel is managing the traffic and you are simply reading a copy of the data out of kernel memory. To roughly limit the rate of just one application, you need to delay your reads of the data and allow incoming packets to buffer up in the kernel, which will eventually slow the acknowledgment of incoming packets and reduce the rate on that one socket.
If you want to slow all traffic to the machine, you need to adjust the sizes of your incoming TCP buffers. On Linux, you would effect this change by altering the values in /proc/sys/net/ipv4/tcp_rmem (read memory buffer sizes) and the other tcp_* files.
To add to Branan's answer:
If you voluntarily limit the read speed at the receiver end, eventually queues will fill up at both ends. Then the sender will either block in its send() call or return from send() with a sent length less than the expected length passed to it.
If the sender is not prepared to deal with this case by sleeping and retrying what has not fit into the OS buffers, you will end up with connection issues (the sender may detect this as an error) or lost data (the sender may unknowingly discard data that did not fit into the OS buffers).
Set small socket send and receive buffers, say 1k or 2k, such that the bandwidth*delay product = the buffer size. You may not be able to get it small enough over fast links.
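As a rough worked example (the numbers are illustrative): to hold a connection to about 1 Mbit/s over a 16 ms RTT path, the buffer works out to 125,000 B/s * 0.016 s = 2,000 bytes, i.e. the ~2k suggested above. A Go sketch of applying that (package and function names are invented):

// Package bdpcap sketches sizing socket buffers to the bandwidth-delay product.
package bdpcap

import (
    "net"
    "time"
)

// CapConnRate sets the kernel socket buffers to bandwidth x delay, so TCP itself
// cannot run much faster than targetBytesPerSec over a path with the given RTT.
// Example: 125000 B/s (1 Mbit/s) * 16ms RTT = 2000 bytes.
// Note the OS may round the requested size up, so the cap is approximate.
func CapConnRate(conn *net.TCPConn, targetBytesPerSec int, rtt time.Duration) error {
    bdp := int(float64(targetBytesPerSec) * rtt.Seconds())
    if err := conn.SetReadBuffer(bdp); err != nil {
        return err
    }
    return conn.SetWriteBuffer(bdp)
}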