I have a high-load application which runs in IIS on Windows Server 2012. How can I determine how many simultaneous TCP connections Windows can handle, and how can I increase that number? I have already changed TcpNumConnections and MaxUserPort to their maximum allowed values, and TcpTimedWaitDelay to its minimum. This helped increase the throughput of the VM, but is there anything else I can configure for better performance? Does it depend on SynAttackProtection or something else? And what should the TCP autotuninglevel be for maximum performance?
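For reference, a hedged sketch of where those settings live: the registry values are the ones already named in the question, and the netsh commands are the Server 2012-era equivalents. The numbers shown are illustrative, not recommendations for your environment.

    :: Registry values mentioned above (take effect after a reboot; defaults vary by Windows version)
    reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v MaxUserPort /t REG_DWORD /d 65534 /f
    reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpTimedWaitDelay /t REG_DWORD /d 30 /f

    :: Receive window auto-tuning: "normal" is the default and usually the right choice for throughput
    netsh int tcp set global autotuninglevel=normal

    :: On Server 2012 the ephemeral (outbound) port range can also be widened directly
    netsh int ipv4 set dynamicport tcp start=10000 num=55535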
Related
In k6, I'm observing more failed requests in my performance test execution with dial tcp: i/o timeout. Please suggest any fine-tuning I may have missed in k6.
With lower concurrency, say 225 users, there are no issues, but when I increase the user count to 300 I face this issue. I'm using a MacBook for the test execution.
This error indicates that your server under test isn't able to keep up with TCP connection attempts from k6, which is usually a hint that you're reaching the performance limits of what your server is able to deliver.
At this point you would have to tweak your server settings or increase app performance to reach the levels you're aiming for. One sanity check you can do on the server side is to confirm that the maximum number of file descriptors (which includes network sockets) is sufficient for your test; see ulimit, sketched below.
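For example, a minimal check on a typical Linux server (the 65535 value is just an illustration; pick a limit that fits your test):

    # Check the current soft limit on open file descriptors (often only 1024)
    ulimit -n

    # Raise it for the current shell session (requires a sufficient hard limit)
    ulimit -n 65535

    # To make it persistent, add entries to /etc/security/limits.conf, e.g.:
    #   appuser  soft  nofile  65535
    #   appuser  hard  nofile  65535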
It is a Spring Boot website deployed on a Linux server. We use JMeter to do the load test.
We simulate 500 users visiting the website index page simultaneously. The index page is very simple HTML with no database connection, so each request is a very short connection.
After about 2 minutes, JMeter starts to throw timeout exceptions as below.
I guess this is because the website has reached its capacity and is running out of connections.
My question is: why does the website reach its capacity 2 minutes after JMeter starts? If its TCP connection capacity is 1000, I would expect it to hit 1000 very soon after JMeter starts, not after 2 minutes.
Besides, I see many TCP connections in TIME_WAIT status on the Linux server. I guess this may be related to the connection timeouts?
Edit: Some think it is running out of ports. Some think it is running out of connections. And some think it is running out of processing threads (e.g. What does this message java.net.ConnectException/Connection timed out mean in the log.jtl file of JMeter?). I don't know which one is the exact reason...
Most probably this is due to the underlying Linux TCP/IP kernel stack configuration, as per the Linux TCP/IP tuning for scalability article:
By default, a connection is supposed to stay in the TIME_WAIT state for twice the msl. Its purpose is to make sure any lost packets that arrive after a connection is closed do not confuse the TCP subsystem (the full details of this are beyond the scope of this article, but ask me if you’d like details). The default msl is 60 seconds, which puts the default TIME_WAIT timeout value at 2 minutes. Which means you’ll run out of available ports if you receive more than about 400 requests a second, or if we look back to how nginx does proxies, this actually translates to 200 requests per second. Not good for scaling.
So double-check the timeouts along with the maximum number of ports/sockets/files on the Linux server; my expectation is that the aforementioned parameters need to be tuned for high loads.
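As an illustration, these are the standard kernel sysctls usually involved; the exact values below are placeholders, not recommendations for your host:

    # Widen the ephemeral port range available for outbound connections
    sysctl -w net.ipv4.ip_local_port_range="1024 65535"

    # Allow TIME_WAIT sockets to be reused for new outbound connections
    sysctl -w net.ipv4.tcp_tw_reuse=1

    # Larger accept backlog for listening sockets
    sysctl -w net.core.somaxconn=4096

    # Raise the system-wide file descriptor ceiling
    sysctl -w fs.file-max=200000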
It's also good practice to have monitoring of baseline OS health metrics in place (CPU, RAM, network, disk, swap usage, etc.). You can use, for example, the JMeter PerfMon Plugin or the JMeter SSHMon Listener for this.
Is there a limit to the number of HTTP ports on a machine? I have a Windows application that uses .NET Remoting. Each instance of the application exposes a remote object on load, through an HTTP channel with port 0 (so that the port can be decided dynamically). In a multi-user environment, will there be a limit to the number of HTTP ports?
Thanks in Advance!
Yes, there will be a limit to the number of ports available, which is 65535 minus the number of ports already in use by existing services (for example, SMTP [25], HTTPS [443], SQL Server [1433], etc.).
So on a typical Windows server, a finger-in-the-air calculation would be 65535 - 1024 (the well-known service ports <= 1024, which are considered out of bounds) - another 10-20 or so for possible other applications (SQL Server, MySQL, Oracle, etc.). This would leave around 64490 ports available.
However, will you really be running 64000 instances of your server?
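The question is about .NET Remoting, but the port-0 behaviour is the same at the socket level on any platform: the OS hands each listener a free ephemeral port. A minimal sketch of that mechanism, in Java only because that is the language used elsewhere in these threads, not the questioner's stack:

    import java.net.ServerSocket;

    public class EphemeralPortDemo {
        public static void main(String[] args) throws Exception {
            // Port 0 asks the OS to pick a free ephemeral port; each instance gets its own
            try (ServerSocket server = new ServerSocket(0)) {
                System.out.println("OS assigned port: " + server.getLocalPort());
            }
        }
    }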
I am new to Comet, and have two questions:
I think Comet will cause the TCP connection between client and server to stay open longer (than a normal request/response); will this reduce server performance? (The server has a limit on the number of TCP connections.)
And sometimes the nature of the device or network can prevent an application from maintaining a long-lived TCP connection to a server. How does Comet avoid this issue?
On Linux (epoll) or BSD (kqueue), you can have hundreds of thousands of idle connections without a performance penalty (except memory usage). The same is not true on other systems, which hit the wall much earlier: because of the limited pool of Windows handles allocated for this purpose in the kernel, your applications will suffer (unless you invest in an 'unlimited' Windows Server license).
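To illustrate, on the JVM a single thread can park a large number of idle connections behind a Selector, which is backed by epoll on Linux and kqueue on BSD/macOS. A minimal sketch (port 8080 is arbitrary):

    import java.net.InetSocketAddress;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;

    public class IdleConnectionServer {
        public static void main(String[] args) throws Exception {
            Selector selector = Selector.open();            // epoll on Linux, kqueue on BSD/macOS
            ServerSocketChannel listener = ServerSocketChannel.open();
            listener.bind(new InetSocketAddress(8080));
            listener.configureBlocking(false);
            listener.register(selector, SelectionKey.OP_ACCEPT);

            while (true) {
                selector.select();                          // blocks; idle connections cost no CPU here
                for (SelectionKey key : selector.selectedKeys()) {
                    if (key.isAcceptable()) {
                        SocketChannel client = listener.accept();
                        if (client != null) {
                            client.configureBlocking(false);
                            // Parked until the client sends data; a Comet server would push from here
                            client.register(selector, SelectionKey.OP_READ);
                        }
                    }
                }
                selector.selectedKeys().clear();
            }
        }
    }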
Proxy servers notably (and low-end routers too) will cut idle connections after a short delay, but the usual workaround is to use connection keep-alives.
Hope it helps.
Does anyone know the maximum number of concurrent TCP/IP connections on Windows XP SP3? I am trying to load test a machine and would like to know the maximum number of TCP connections that can be opened by an application (in my case, a Java application) towards that machine.
Note that you may often be limited by the number of outbound connections supported on the client machine rather than by the number of concurrent connections possible on the target machine. See this Socket Bind Error question for how to tune MAX_USER_PORT to enable more outbound connections from the machine running the tests.
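Since the load generator here is a Java application, one empirical way to find the client-side ceiling is simply to keep opening connections until the OS refuses. A rough sketch (the host name and port are placeholders):

    import java.net.InetSocketAddress;
    import java.net.Socket;
    import java.util.ArrayList;
    import java.util.List;

    public class ConnectionLimitProbe {
        public static void main(String[] args) {
            List<Socket> sockets = new ArrayList<>();
            try {
                while (true) {
                    Socket s = new Socket();
                    // Placeholder target; point it at the machine under test
                    s.connect(new InetSocketAddress("target-host", 8080), 5000);
                    sockets.add(s);
                }
            } catch (Exception e) {
                // Typically fails once client-side ephemeral ports (MaxUserPort) are exhausted
                System.out.println("Opened " + sockets.size() + " connections before: " + e);
            }
        }
    }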
I found some very useful information here:
http://smallvoid.com/article/winnt-tcpip-max-limit.html