Simulate slow speed for TCP sockets in Windows - windows

I'm building an application that uses TCP sockets to communicate. I want to test how it behaves under slow-speed conditions.
There are similar questions on this site, but as I understand it, they deal with HTTP traffic or are about Linux. My traffic is not HTTP, just ordinary TCP sockets, and the OS is Windows.
I tried using Fiddler's Modem Speed setting, but it didn't work; it seems to apply only to HTTP connections.

While it is true that you probably want to invest in an extensive set of unit tests, you can also simulate various network conditions using VMWare Workstation:
You will have to install a virtual machine for testing, set up bridged networking (so the VM can access your real network), and copy your code to the VM.
After that you can start changing the settings and see how your application performs.
NetLimiter can also be used, but it has fewer options (in your case, packet loss is very interesting to test, and it is not available in NetLimiter).

There is an excellent utility for Windows, clumsy, that can do throttling and much more:
https://jagt.github.io/clumsy/

I think you're taking the wrong approach here.
You can achieve everything that you need with some well-designed unit tests. All of the things that a slow network link causes can be simulated in a unit test environment under controlled conditions.
Things that your code MUST handle to deal with "slow" links are just things that you should be dealing with anyway, including:
The correct handling of fragmented messages. All of your network reading code needs to correctly assume that each read will return between 1 byte and the size of your read buffer. You should never assume that you'll get complete 'messages', as TCP knows nothing of your concept of messages (see the read-loop sketch at the end of this answer).
TCP flow control causing either your synchronous sends to fail with some form of 'try later' error or your async sends to succeed and potentially use an uncontrolled amount of resources (see here for more details). Note that this can happen even on 'fast' links if you are sending faster than the receiver is consuming.
Timeouts - again this isn't limited to "slow" links. All of your timeout handling code should be robust and tested. You may want to make sure that any read timeout is based on any read completing rather than reading a complete message in x time. You may be getting your data at a slow rate but whilst you're still getting data the link is alive.
Connection failure - again not something specific to "slow" links. You need to know how you deal with connections being reset at any time.
In summary, nothing you can achieve by running your client and server on a simulated slow network cannot be achieved with a decent set of unit tests, and everything that you would want to test on such a link could affect any of your connections, at any link speed.
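To make the fragmented-messages point concrete, here is a minimal read-loop sketch in Python that never assumes recv() returns a whole message. The 4-byte length prefix and the function names are illustrative assumptions, not something from the question; the same structure applies in any language:

    import socket
    import struct

    def recv_exact(sock, n):
        """Read exactly n bytes, accepting that each recv() may return
        anywhere from 1 byte up to the amount requested."""
        chunks = []
        remaining = n
        while remaining > 0:
            chunk = sock.recv(remaining)
            if not chunk:                      # peer closed the connection
                raise ConnectionError("connection closed mid-message")
            chunks.append(chunk)
            remaining -= len(chunk)
        return b"".join(chunks)

    def recv_message(sock):
        """Reassemble one length-prefixed message (4-byte big-endian header),
        however TCP happens to fragment it on the wire."""
        (length,) = struct.unpack("!I", recv_exact(sock, 4))
        return recv_exact(sock, length)

In a unit test, the same functions can be driven by a fake socket whose recv() hands back one byte at a time, which exercises exactly the fragmentation behaviour a slow link would produce.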

Related

How to improve RPC data throughput over high latency network

I am working on client-server software using Microsoft RPC (over TCP) as the communication method. We sometimes transfer files from the client to the server. This works fine in local networks. Unfortunately, when we have a high latency, even a very wide bandwidth does not give a decent transfer speed.
Based on a Wireshark log, the RPC layer sends a bunch of fragments, then waits for an ACK from the server before sending more, and this causes the latency to dominate the transfer time. I am looking for a way to tell RPC to send more packets before pausing.
The issue seems to be essentially the same as with a too-small TCP window, but there might be an RPC-specific fragment window at work here, since Wireshark does not show the TCP-level window being full. iPerf connection tests with a small window do give those window-full warnings, and a speed similar to the RPC transfer. With larger window sizes, the iPerf transfer is three times faster than the RPC one, even with a reasonable (40 ms) latency.
I did find some mentions of an RPC fragment window on Microsoft's site (https://msdn.microsoft.com/en-us/library/gg604601.aspx) and in an RPC document (http://pubs.opengroup.org/onlinepubs/9629399/chap12.htm, search for window_size), but these seem to concern only connectionless (UDP) RPC. Additionally, they mention an RPC "fack" message, and I observed only regular TCP-level ACKs in the log.
My conclusion is that either the RPC layer is using a stupidly low TCP window, or it is limiting the number of fragment packets it sends at a time by some internal logic. Either way, I need to make it send more between ACKs. Is there some way to do this?
I could of course just transfer the file over multiple simultaneous connections, but that seems more like a work-around than a solution.
PS. I know RPC is not really designed for file transfer, but this is a legacy application and the RPC pipe deals with authentication and whatnot, so keeping the file transfer there would be best, at least for now.
PPS. I guess that if the answer to this question is a configuration option, this would be better suited for SuperUser, but an API setting would be ideal, which is why I posted this here.
I finally found a way to control this. This Microsoft documentation page, Configuring Computers for RPC over HTTP, contains registry settings that control the window sizes RPC uses, at least when it is used in conjunction with RPC over HTTP.
The two most relevant settings were:
HKLM\Software\Microsoft\Rpc\ClientReceiveWindow: DWORD
Making this higher (a few MB, specified in bytes) on the client machine made the download to the client much faster.
HKLM\Software\Microsoft\Rpc\InProxyReceiveWindow: DWORD
Making this higher on the server machine made the upload faster.
The downside of these options is that they are global. The first one will affect all RPC clients on the client machine and the latter will affect all RPC over HTTP proxying on the server. This may have serious caveats, but a tenfold speed increase is nothing to be scoffed at, either.
Still, setting these on a per-connection basis would be much better.
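For reference, a small sketch of applying these values programmatically (Windows only, requires administrator rights; the 4 MB figure is an arbitrary example for illustration, not a recommended value):

    import winreg

    RPC_KEY = r"Software\Microsoft\Rpc"

    def set_rpc_window(value_name, size_bytes):
        # Creates/updates a DWORD value under HKLM\Software\Microsoft\Rpc.
        with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, RPC_KEY, 0,
                                winreg.KEY_SET_VALUE) as key:
            winreg.SetValueEx(key, value_name, 0, winreg.REG_DWORD, size_bytes)

    # On the client machine (speeds up downloads to the client):
    set_rpc_window("ClientReceiveWindow", 4 * 1024 * 1024)
    # On the server machine (speeds up uploads through the RPC/HTTP proxy):
    # set_rpc_window("InProxyReceiveWindow", 4 * 1024 * 1024)

The same can of course be done with regedit or reg add; as noted above, both values are global, so change them with care.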

Getting specific errors when TCP connections disconnect in Windows

I'm trying to improve the usefulness of the error reporting in a server I am working on. The server uses TCP sockets, and it runs on Windows.
The problem is that when a TCP link drops due to some sort of network failure, the error code that I can get from WSARecv() (or the other Windows socket APIs) is not very descriptive. For most network hiccups, I get either WSAECONNRESET (10054) or WSAETIMEDOUT (10060). But there are about a million things that can cause both of these: the local machine is having a problem, the remote machine or process is having a problem, some intermediate router has a problem, etc. This is a problem because the server operator doesn't have a definitive way to investigate the problem, because they don't necessarily even know where the problem is, or who might be responsible.
At the IP level, it's a different story. If the server operator happens to have a network sniffer attached when something bad happens, it's usually pretty easy to sort out what went wrong. For instance, if an intermediate router sent an ICMP unreachable, the router that sent it will put its IP address in there, and that's usually enough to track it down. Put another way, Windows killed the connection for a reason, probably because it got a specific packet that had a specific problem.
However, a large number of failures are experienced unexpectedly in the field. It is not realistic to always have a network sniffer attached to a production server. There needs to be a way to track down problems that happen only rarely, intermittently, or randomly.
How can I solve this problem programmatically?
Is there a way to get Windows to cough up a more specific error message? Is there some easy way to capture and mine recent Windows events (perhaps the ones Microsoft Network Monitor uses)? One way I've "solved it" before is to keep dumpcap (from Wireshark) running in ring-buffer mode and force it to stop capturing when a bad event happens, so that I can mine the capture later.
I'm also open to the possibility that this is not the right way to solve this problem. For instance, perhaps there is some special Windows mode that can be turned on to cause it to log useful information, that a network administrator could use to track this down after-the-fact.
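For what it's worth, the dumpcap ring-buffer trick mentioned above can be wired up from code. A rough Python sketch (the interface name, sizes and paths are placeholders; dumpcap must be on the PATH):

    import subprocess

    capture = subprocess.Popen([
        "dumpcap",
        "-i", "Ethernet",           # capture interface (placeholder)
        "-b", "filesize:10240",     # roll capture files at roughly 10 MB
        "-b", "files:5",            # keep a ring of the 5 most recent files
        "-w", r"C:\captures\ring.pcapng",
    ])

    def on_bad_socket_event():
        # Called when recv()/WSARecv reports something like WSAECONNRESET:
        # stopping the capture freezes the ring buffer so the packets leading
        # up to the failure can be mined later.
        capture.terminate()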

Monitoring office internet connection for drop outs in Ruby

I am looking for a simple way to monitor our office internet connection for drop-outs. A secondary pipe dream is to also monitor for other 'dodgy' behaviour: packet loss, jitter, etc. But the primary goal is to watch for dropped connections. Pinging Google every second is great for keeping an eye on latency, but we have had a few temporary blips which caused hell with a few streaming services yet did not affect connection latency. The IT department also sometimes decides to block outgoing ICMP traffic, which doesn't help with the humble ping tool's efforts.
If this is not something available already via an open source, freeware or commercial tool, ideally I would like to be able to come up with something in Ruby (or, if forced, .NET) which will open a 'long' TCP connection to an arbitrary web server on port 80 (i.e. I don't want to have to write something keeping a socket open on a hosted server) and have the program detect and alert the guys in the office if the connection drops out in a "bad" way. With my attempts using Ruby Socket (http://www.ruby-doc.org/stdlib-1.9.3/libdoc/socket/rdoc/Socket.html) I've had trouble extracting an accurate error code here; ideally I want to isolate actual network connectivity issues from the usual connection timeouts. On a timeout, I'll want to restart the connection silently, but on a real drop out, I'll flash something big and obvious up on screen to alert the guys in the office.
I've spent most of the day googling for examples of this kind of monitoring and trying to hack something together but it seems that it is not a common request. 99% of results are forum posts ending with me being authoritatively informed that speedtest.net will do everything I need. My own attempts have all proven futile - no matter which way I've tried, whenever I seem to be getting somewhere even the most basic drop out test (unplugging the network cable from my laptop!) fails to be detected.
Is this something trivial, and if so could anyone point me in the right direction please? Or am I in for a world of pain? (This has been my general experience whenever I've tried to do anything with network programming in the past...)
Alternatively, is there anything pre-written (free, commercial, open source all fine) which will do just this?
Thanks!
Smokeping might do what you want. Nagios might as well.
http://oss.oetiker.ch/smokeping/
http://www.nagios.org/
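If you do want to roll your own, here is a rough sketch of the probing idea from the question, written in Python for brevity (the same structure maps onto Ruby's Socket/TCPSocket API). The host, port and interval are placeholders, and a real monitor would need proper alerting and logging:

    import socket
    import time

    HOST, PORT, INTERVAL = "example.com", 80, 5

    def probe():
        try:
            with socket.create_connection((HOST, PORT), timeout=3):
                return "ok"
        except socket.timeout:
            return "timeout"        # slow but alive: retry quietly
        except OSError as exc:
            # refused / reset / unreachable etc. - a "bad" failure
            return "down ({})".format(exc.errno)

    while True:
        status = probe()
        if status.startswith("down"):
            print("ALERT: connection dropped:", status)  # flash something big here
        time.sleep(INTERVAL)

Note that a single long-lived idle connection will not necessarily notice a dead link by itself; you either need TCP keepalives enabled on the socket or periodic small probes like the sketch above.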

How slow are TCP sockets compared to named pipes on Windows for localhost IPC?

I am developing a TCP Proxy to be put in front of a TCP service that should handle between 500 and 1000 active connections from the wild Internet.
The proxy is running on the same machine as the service, and is mostly-transparent. The service is for the most part unaware of the proxy, the only exception being the notification of the real remote IP address of the clients.
This means that, for every inbound open TCP socket, there are two more sockets on the server: the second of the pair in the Proxy, and the one on the real service behind the proxy.
The send and recv window sizes on the two Proxy sockets are set to 1024 bytes.
What are the performance implications of this? How slow is this configuration? Should I put some effort into changing the service to use Named Pipes (or another IPC mechanism), or is a localhost TCP socket, for the most part, an efficient IPC mechanism?
The merge of the two apps is not an option. Right now we are stuck with the two process configuration.
EDIT: The reason for having two separate processes on the same hardware is 100% economics. We have one server only, and we are not planning on getting more (no money).
The TCP service is a legacy software in Visual Basic 6 which grew beyond our expectations. The proxy is C++. We don't have the time, money nor manpower to rewrite and migrate the VB6 code to a modern programming environment.
The proxy is our attempt to mitigate a specific performance issue on the service, a DDoS attack we are getting from time to time.
The proxy is open source, and here is the project source code.
It will be the same (or at least not measurably different). Winsock is smart enough to know if it's talking to a socket on the same host and, in that case, it will short-circuit pretty much everything below IP and copy data directly buffer-to-buffer. In terms of named pipes vs. sockets, if you need to potentially be able to communicate to different machines ever in the future, choose sockets. If you know for a fact that you'll never need to do that, pick whichever one your developers are most familiar or most comfortable with.
For anyone who comes to read this later, I want to add some findings that answer the original question.
For a utility we are developing, we have a networking class that can use either named pipes or TCP with the same calls.
Here is a typical loopback file transfer on our test system:
TCP/IP Transfer time: 2.5 Seconds
Named Pipes Transfer time: 3.1 Seconds
Now, if you go outside the machine and connect to a remote computer on your network the performance for named pipes is much worse:
TCP/IP Transfer time: 12 Seconds
Named Pipes Transfer time: 2.5 Minutes (Yes Minutes!)
I realize that this is just one system (Windows 7), but I think it is a good indicator of how slow named pipes can be... and it seems like TCP is the way to go.
I know this topic is very old, but it was still relevant for me, and maybe others will look at this in the future as well.
I implemented IPC between Excel (VBA) and another process on the same machine, both via a TCP connection as well as via Named Pipes.
In a quick performance test, I submitted a message that consisted of 26 bytes from the client (Excel) to the server (not Excel), and waited for the reply message from the other process (which consisted of 12 bytes in the example).
I executed this a ton of times in a loop and measured the average execution time.
With TCP on localhost (Windows 7, no fastpath), one "conversation" (request + reply) took around 300-350 microseconds. Sending data in particular was quite slow (sending the 26 bytes took around 200 microseconds via TCP).
With Named Pipes, one conversation took around 60 microseconds on average - so a LOT faster.
I'm not entirely sure why the difference was so large. The corporate environment I tested this in has a strict firewall, packet inspection and what not, so I THINK this may have been because even the localhost-based TCP connection went through security measures that slowed it down significantly, while the named pipe traffic likely did not.
TL;DR: In my case, Named Pipes were around 5-6 times faster than TCP for small messages (I have not tested with bigger ones yet).
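For anyone who wants to reproduce this kind of measurement, here is a minimal localhost round-trip sketch in Python (the port, iteration count and 26/12-byte sizes mirror the description above and are otherwise arbitrary; it assumes each small message arrives in a single read, which is fine for a toy benchmark but not for production code). Disabling Nagle's algorithm with TCP_NODELAY is often worth testing for small request/reply traffic:

    import socket
    import time

    HOST, PORT, ITERATIONS = "127.0.0.1", 50007, 10_000
    REQUEST, REPLY = b"x" * 26, b"y" * 12

    def run_server():
        with socket.create_server((HOST, PORT)) as srv:
            conn, _ = srv.accept()
            with conn:
                conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
                while True:
                    data = conn.recv(len(REQUEST))
                    if not data:            # client disconnected
                        break
                    conn.sendall(REPLY)

    def run_client():
        with socket.create_connection((HOST, PORT)) as sock:
            sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
            start = time.perf_counter()
            for _ in range(ITERATIONS):
                sock.sendall(REQUEST)
                sock.recv(len(REPLY))
            elapsed = time.perf_counter() - start
            print("avg round trip: {:.1f} us".format(elapsed / ITERATIONS * 1e6))

Run run_server() in one process and run_client() in another; an equivalent named-pipe client/server can then be timed with the same loop for a like-for-like comparison.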
http://msdn.microsoft.com/en-us/library/aa178138(v=sql.80).aspx
Let me sum it up for you. If you are worried about performance, then use TCP/IP. But if you have a really fast network and you're not worried about performance, then Named Pipes would be "neat" in that they might save you some code.
Not to mention, if you stick to TCP then you will have something that can be scaled, and even load balanced when the time comes.
Cheers,
In the scenario you describe, the local TCP connections are very unlikely to be a bottleneck. They will introduce some overhead, of course, but this should be negligible unless your CPU is already running hot.
At a guess, if your server's CPU usage is normally below 50% or so (with the proxy in place) it isn't worth worrying about minimizing the overhead associated with the local TCP connections.
If CPU usage is regularly above 80% you should probably be doing some profiling. I'd start by comparing the CPU load (or, better still, the performance, if you can measure it meaningfully) when the proxy is in place to when it isn't. Unless the proxy is doing some complicated processing, the overhead associated with the extra TCP connections is probably a significant fraction of the total overhead introduced by the proxy, so that should give you at least an order-of-magnitude estimate of how much you'd gain by using a more efficient form of IPC.
What is the reason to have a proxy on the SAME machine? Just curious.
Anyway:
There are several methods for IPC; TCP/IP and named pipes are comparable in speed and complexity. If you really want something that scales well and has almost no overhead, use shared memory. It is best used in combination with a lock-free algorithm for advancing the pointers (or use one buffer for each reader (the proxy/the service) and writer (the service/the proxy)).
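As a rough illustration of the shared-memory suggestion (using Python's multiprocessing.shared_memory purely for brevity; a real proxy/service pair would still need the ring-buffer indices or per-direction buffers described above for synchronisation):

    from multiprocessing import shared_memory

    # Writer side (e.g. the proxy) creates a named segment:
    shm = shared_memory.SharedMemory(name="proxy_to_service", create=True, size=64 * 1024)
    shm.buf[:5] = b"hello"

    # Reader side (e.g. the service) attaches to the same name in another process:
    #   shm = shared_memory.SharedMemory(name="proxy_to_service")
    #   data = bytes(shm.buf[:5])

    shm.close()
    shm.unlink()   # remove the segment once both sides are done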

Measure performance of a Web Server

Which tools can be used to measure performance of a webserver?
To test a webserver, you can use Apache JMeter.
To see where the bottleneck is, you have to flood your server application.
ApacheBench (ab) can do this. Here is a tool to get the server's HTTP response code (ab just says there is an HTTP error) and to automate test runs:
dsec.com/source/ab.c.txt
This program also gives useful tips about how to configure Linux and Windows (TCP/IP system options) to get the best possible performance.
It always depends on the setup.
Depending on the application there can be different bottlenecks.
Sometimes it's the CPU, sometimes the database connections, sometimes the sockets, sometimes the hard disk, etc.
The most common practice is to use siege (a simple command-line tool), increase the number of concurrent connections, and see how many transactions per second go through.
Throughput will increase as you add connections until an optimum is reached, then it will slowly decrease.
You can produce a set of URLs that are randomly accessed, maybe biased, and/or send random data, request random ids, etc. to simulate more "real" clients.
Whether this is relevant depends completely on your application.
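As a toy illustration of ramping up concurrency and watching transactions per second (for real measurements use siege, ab or JMeter as mentioned above; the URL is a placeholder):

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL, REQUESTS_PER_LEVEL = "http://localhost:8080/", 200

    def fetch(_):
        with urllib.request.urlopen(URL) as resp:
            resp.read()

    for concurrency in (1, 5, 10, 25, 50, 100):
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=concurrency) as pool:
            list(pool.map(fetch, range(REQUESTS_PER_LEVEL)))
        elapsed = time.perf_counter() - start
        print("{:>3} workers: {:.1f} req/s".format(concurrency, REQUESTS_PER_LEVEL / elapsed))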
