What is the theoretical maximum number of open TCP connections allowed on a Windows server [closed] - windows

On my Windows server I have a port open, listening for incoming TCP connections from multiple clients. Is there a limit to the number of unique clients that can concurrently establish a socket connection on that open port? The thread What is the theoretical maximum number of open TCP connections that a modern Linux box can have discusses how the number of socket connections is limited by the allowed file descriptors on Unix platforms. Is there such a limitation on the latest available Windows servers? If so, how do I go about changing that limit?

Based on an answer by an MSFT employee:
It depends on the edition: Web and Foundation editions have connection limits, while Standard, Enterprise, and Datacenter do not.
Though, as Harry mentions in another answer, there is a setting for open TCP connections with a limit of a bit over 16M.
But while you could technically have a huge number of connections, practical issues limit the number:
- each connection is identified by the server address and port together with the client address and port. In theory even the number of connections two machines can have between them is enormous, but in practice the server listens on a single port, which constrains the count. Also, a single client rarely opens thousands of connections, though behind NAT it may look that way (a back-of-the-envelope sketch follows this list)
- the server can only handle so much data and so many packets per second, so high-speed transfers or floods of small packets can drive the practical number down
- the network hardware might not be able to handle all the traffic coming in
- the server has to allocate memory for each connection, which again limits the number
- what the server does also matters: is it a real-time game server, or a system delivering chess moves between players who ponder for 15 minutes at a time?
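To put a rough number on the first point above, here's a back-of-the-envelope sketch in Python (IPv4, one listening socket on the server; the real ceiling is far lower for the practical reasons just listed):

```python
# Each TCP connection is identified by the 4-tuple
# (server IP, server port, client IP, client port).
# With the server's IP and port fixed, the theoretical ceiling is:
client_ips = 2 ** 32              # every possible IPv4 address
client_ports = 65_535             # usable ports per client (1-65535)
print(client_ips * client_ports)  # ~2.8e14 connections -- theory only
```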

In addition to the licensing and practical limits outlined in Sami's answer, there is in fact a configurable limit to the number of simultaneous open connections, determined by the TcpNumConnections setting. The default value is also the maximum, which is just shy of 16M.
(The linked documentation is for Windows 2003. The corresponding documentation for later versions of Windows Server does not appear to exist. However, I can't find anything to suggest that the setting has been removed.)
In practice, however, you are likely to run into the practical issues outlined in Sami's answer long before you hit this. (Unless the system administrator has manually changed the setting, of course.)
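If you want to check whether the setting has been changed on a given machine, here is a minimal sketch using Python's winreg module, assuming the value lives at the registry path documented for Windows 2003 (the value is normally absent unless an administrator has created it):

```python
import winreg  # Windows only

KEY = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
        value, _ = winreg.QueryValueEx(key, "TcpNumConnections")
        print(f"TcpNumConnections = {value:#x}")
except FileNotFoundError:
    print("TcpNumConnections not set; the default (0xFFFFFE, just shy of 16M) applies")
```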

Related

My hosting provider doesn't offer IPv6, only IPv4, and it appears to be slowing down my website [closed]

I am doing as much as I can to reduce the download time of the pages on my website, and one of the free tests I tried complained about my website not being IPv6-ready, or something like that. My hosting provider only supports IPv4.
Should I change my hosting provider?
Is not being IPv6-ready actually a problem?
Just in case, this is the page I am trying to improve; any suggestions will be highly appreciated:
http://www.hypnosisgate.com/past-life-regressions.html
Will the lack of IPv6 be a short-term problem for you? Probably not.
But the internet is moving towards IPv6. Take a look at what Google sees. At the time of writing, the graph has just peaked at 12.5%, meaning that 1/8 of the internet has IPv6 connectivity. Because IPv4 addresses have run out [1], many of those users will be behind some central NAT device. Analysis from Facebook has shown that customers who can load their news feed over IPv6 see a 20% to 40% shorter load time than those forced to use IPv4.
So for a certain group of your users, making your website accessible over IPv6 will give them a significant speed boost, and that group will grow very quickly. Even if you don't want to bother with IPv6 right now, it will become important soon.
Because of all this, when choosing a vendor (whether for equipment, services, or hosting) it might not be very wise to pick one that doesn't support IPv6. At the very least, make sure they are working on it and can give you a timeline in the contract. Otherwise you might later need to migrate to a different vendor that does properly support IPv6.
[1] You can buy some on the market, and some regions have a few addresses reserved for newcomers, but for all practical purposes IPv4 has run out.
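If you want a quick way to see whether a host already publishes IPv6 addresses, here's a minimal sketch using Python's standard library (the hostname is just the one from the question):

```python
import socket

def has_ipv6(host: str) -> bool:
    """Return True if the host resolves to at least one IPv6 (AAAA) address."""
    try:
        return len(socket.getaddrinfo(host, 80, socket.AF_INET6)) > 0
    except socket.gaierror:
        return False

print(has_ipv6("www.hypnosisgate.com"))
```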
IPv4 vs IPv6 should not have any significant impact on the delivery speed of your content. However, a quick page speed analysis shows you have a few simple fixes that would speed up the site:
https://developers.google.com/speed/pagespeed/insights/?url=http%3A%2F%2Fwww.hypnosisgate.com%2Fpast-life-regressions.html

How do I capacity test a WebSocket server? [closed]

I am looking to capacity test my WebSocket server but don't really know where to start.
I am able to write an AI that will send messages to simulate usage, but how would I make 100, 500, 1000, etc. concurrent connections?
I had a similar problem a little while ago when I had to load test thousands of connections against a server using the socket.io library. I was not able to find any off-the-shelf solutions to do this, so in the end I built my own test using Node.js and a few for loops.
The advantage of Node is that you can pretty much copy and paste the client-side JavaScript into your server code, so it's simple to simulate the client; then you only need to open multiple connections to generate load. It's a quick and easy way to run the JavaScript required to establish the socket connection (assuming this is how you connect to your socket).
The gotcha I hit was that running more than 600 listeners tended to max out the CPU on my Node box, but a little bit of AWS magic solved that.
Another issue is reporting results. There's not really any concept of response time with a socket connection, at least not in the classic sense, so it's hard to know when things are going wrong - at least from the client-side perspective. But by monitoring the server we were able to see when connections failed and when resources started to get scarce, and this was enough for us to benchmark how many connections it could support.
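If you'd rather not roll your own Node harness, the same idea works in a few lines of Python. This is a minimal connection-flood sketch using asyncio and the third-party websockets package; the URI, message, and hold time are placeholders for your own setup:

```python
import asyncio
import websockets  # pip install websockets

URI = "ws://localhost:8080"   # placeholder: your WebSocket endpoint
N_CONNECTIONS = 500           # try 100, 500, 1000, ...

async def client(i: int) -> None:
    async with websockets.connect(URI) as ws:
        await ws.send(f"hello from client {i}")
        await ws.recv()           # wait for one response
        await asyncio.sleep(60)   # hold the connection open under load

async def main() -> None:
    results = await asyncio.gather(
        *(client(i) for i in range(N_CONNECTIONS)),
        return_exceptions=True,   # failed connections land here instead of raising
    )
    failures = sum(isinstance(r, Exception) for r in results)
    print(f"{N_CONNECTIONS - failures} connected, {failures} failed")

asyncio.run(main())
```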
Autobahn Testsuite was designed to meet that need but the performance section of the tool still says "Under Development".
You could use JMeter for this purpose and get the WebSocket sampler plug-in from here: http://github.com/maciejzaleski/JMeter
For that many connections (1000) you might need more than one agent machine to achieve your task. These don't necessarily have to be dedicated servers; you could deploy agents on a few workstations (developers'/testers' machines) and use them for your test. You could limit the impact by scheduling test execution to run out of hours.
The JMeter plugin has severe limitations on the number of concurrent users; it worked well only up to ~450 users. I then tried the Artillery library (https://artillery.io/docs/testing_websockets.html), but it also has restrictions on loops in its WebSocket package.

Difference between MPI and TCP/IP [closed]

I have some confusion regarding MPI, sockets, and TCP/IP.
Are all three of these communication protocols that can make use of different interconnects like InfiniBand and Ethernet, or are they something else?
Sorry for the trouble if the question sounds naive, but I really get confused by these three terms.
TCP/IP is a family of networking protocols. IP is the lower-level protocol that's responsible for getting packets of data from place to place across the Internet. TCP sits on top of IP and adds virtual circuit/connection semantics. With IP alone you can only send and receive independent packets of data that are not organized into a stream or connection. It's possible to use virtually any physical transport mechanism to move IP packets around. For local networks it's usually Ethernet, but you can use anything. There's even an RFC specifying a way to send IP packets by carrier pigeon.
Sockets is a semi-standard API for accessing the networking features of the operating system. Your program can call various functions named socket, bind, listen, connect, etc., to send/receive data, connect to other computers, and listen for connections from other computers. You can theoretically use any family of networking protocols through the sockets API--the protocol family is a parameter that you pass in--but these days you pretty much always specify TCP/IP. (The other option that's in common use is local Unix sockets.)
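To make that API surface concrete, here is a minimal sketch in Python (whose socket module wraps those same calls); running server and client in one process is a simplification for illustration:

```python
import socket

# Server side: socket -> bind -> listen, as described above.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # TCP over IPv4
server.bind(("127.0.0.1", 5000))
server.listen(1)

# Client side (normally a separate program): socket -> connect -> send.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 5000))
client.sendall(b"ping")

conn, addr = server.accept()  # accept() returns a new per-connection socket
print(conn.recv(4))           # b'ping'
for s in (conn, client, server):
    s.close()
```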
MPI is an API for passing messages among processes running on a cluster of servers. MPI is higher level than both TCP/IP and sockets. It can theoretically use any family of networking protocols, and if it's using TCP/IP or some other family that's supported by the sockets API, then it probably uses the sockets API to communicate with the operating system.
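For a flavor of how much higher-level MPI is, here is a minimal sketch using the mpi4py bindings (assuming an MPI implementation and mpi4py are installed); launch it with something like mpiexec -n 2 python hello_mpi.py:

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()  # this process's ID within the job

if rank == 0:
    comm.send({"payload": 42}, dest=1, tag=0)  # sends arbitrary Python objects
    print("rank 0 sent a message")
elif rank == 1:
    msg = comm.recv(source=0, tag=0)
    print(f"rank 1 received {msg}")
```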
If the purpose behind your question is to decide how you should write a parallel programming application, you should probably not be looking at TCP/IP or sockets as those things are going to be much lower level than you want. You'll probably want to look at something like MPI or any of the PGAS languages like UPC, Co-array Fortran, Global Arrays, Chapel, etc. They're going to be far easier to use than essentially writing your own networking layer.
When you use one of these higher level libraries, you get lots of nice abstractions like collective operations, remote memory access, and other features that make it easier to just write your parallel code instead of dealing with all of the OS stuff underneath. It also makes your code portable between different machines/architectures.

Performance of IPX/SPX and TCP/IP [closed]

I understand that IPX and SPX both provide connection services similar to TCP/IP: IPX is similar to IP and SPX is similar to TCP, hence my interest.
How does the performance of IPX/SPX exceed that of TCP on a LAN?
If its LAN performance is superior to TCP's, why is IPX/SPX not used on LANs?
I searched the internet and landed on a few links, but they did not convey clear reasons for this - http://en.wikipedia.org/wiki/IPX/SPX . Any ideas?
IPX was optimized for LANs. For one thing, IPX addresses are formed from an Ethernet MAC address plus a 32-bit network ID. This design allowed "zero configuration" of IPX nodes in most cases: just plug the computer in and it's on the network. IPv6 with stateless autoconfiguration has the same property, btw.
SPX (the analogue of TCP) was also highly optimized for LANs. For example, it used per-packet NAKs instead of TCP's per-octet ACKs, without any explicit window management. That allowed file servers to be very simple: just spew file contents onto the Ethernet at top speed. If a client misses a packet, you re-read it from disk/cache and re-send it.
In contrast, with TCP you have to buffer all unacknowledged data and, after a lost packet, re-send everything in the send buffer (unless you use the selective acknowledgment feature).
However, IPX was not suitable for WANs at all. For example, it couldn't cope with different frame sizes: two networks with different frames (say, Ethernet and Ethernet with jumbo frames) couldn't interoperate without a proxy server or some form of encapsulation.
Additionally, packet reordering is ubiquitous on WANs, but it plays hell with SPX (at least with Novell's implementation), causing a lot of spurious NAKs.
And of course, IPX addresses were not hierarchical, so they were poorly suited to routing. The network ID could in theory be used for this, but even large IPX/SPX deployments were never complex enough to develop a rich routing infrastructure.
Right now, IPX is interesting only as a historical curiosity and for maintaining a small number of very legacy systems.
You're missing a critical distinction between SPX/IPX and TCP/IP. TCP/IP is the basis of the Internet. SPX/IPX is not.
SPX/IPX was an interesting protocol, but is now of interest only within a given corporation.
It's often the case in the real world that something technically superior loses due to business reasons. Consider Betamax video tape format vs. VHS. Betamax was considered technically superior, yet you can't buy a Betamax recorder today except maybe on eBay. One may argue that Windows won over Macintosh, despite the fact that the MacOS user interface was much nicer, due entirely to business decisions (mainly the decision by Apple not to permit clones).
Similarly, issues far beyond the control of Xerox destroyed SPX/IPX as a viable protocol - HTTP runs over TCP/IP, not over SPX/IPX. HTTP rules the world, therefore TCP/IP rules the world.
SPX/IPX has been left as an exercise for the reader.
BTW, I've been talking about SPX/IPX as though they were a Xerox protocol - not quite. They are a Novell protocol, but based on the Xerox Network Systems protocols. Interestingly, I found nothing about this protocol on the websites of either Xerox or Novell.
Also, see the Wikipedia article on IPX/SPX.
The disadvantage of the TCP/IP protocol stack is lower speed than IPX/SPX. However, the TCP/IP stack is now also used in local networks to simplify the negotiation between local- and wide-area network protocols, and it is currently considered the main stack in the most common operating systems.
IPX/SPX can coexist on a LAN with TCP/IP. PCs that wish to be isolated from the web can still share files and printers by using IPX and not loading TCP. This is more secure than any firewall and second only to cutting the wires.
IPX/SPX performed better than TCP/IP back in the day, on systems where you could compare the two. That is no longer true, since TCP got all the developer effort from about 1993 onwards because of HTTP.
Essentially, IPX/SPX was obsoleted by TCP/IP, and so it is no longer relevant. Maintaining two sets of protocols is too much effort for network operators, so the less capable one dies out. Eventually this will happen to IPv4.

How Much Traffic Can Shared Web Hosting Take? [closed]

I have a cheap shared hosting plan with Reliablesite.net ($5/month).
I've been making a small site I want to start promoting in a few weeks, and I was going to road-test it by hosting it on the shared plan I already have.
My issue is that I don't know at what point I should move on to clustered or dedicated hosting.
Questions
1. What pageviews/day can a shared hosting plan be expected to handle?
2. What can standard shared database servers take without choking up, or without me getting rude emails from my hosting provider?
In my experience, a shared hosting environment like Reliablesite.com can take around 10,000-20,000 unique users per day, or 100,000-200,000 pageviews/day. That number can vary depending on your site. For optimization, it is important to reduce the number of DB queries (I keep it to at most 6-7 per page render) and to program carefully. Using ASP.NET MVC gave a nice performance improvement for me, but a well-written WebForms app can perform well too. If you are using some other tech stack, like PHP/MySQL, I don't know the numbers.
When you exceed those numbers, you will have enough money from Google AdSense to move to a VPS or dedicated plan.
Just to add something regarding page-render/DB-query performance: a stored procedure or query that returns multiple result sets is a great way to reduce the number of DB requests!
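To illustrate the multiple-result-set idea, here is a sketch using the DB-API nextset() call as exposed by, for example, pyodbc; the connection string, tables, and columns are made up, and batching two SELECTs in one execute depends on your driver and database:

```python
import pyodbc  # any DB-API driver that supports nextset() works similarly

conn = pyodbc.connect("DSN=mydb")  # placeholder connection string
cur = conn.cursor()

# One round trip to the database instead of two:
cur.execute(
    "SELECT id, title FROM posts WHERE author_id = ?;"
    "SELECT id, body FROM comments WHERE author_id = ?;",
    (42, 42),
)
posts = cur.fetchall()  # first result set
cur.nextset()           # advance to the second result set
comments = cur.fetchall()
```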
Traffic usually is not a problem on shared hosting. The only problems you may encounter are RAM and CPU restrictions. But if your application is written correctly, it can operate well within these limitations.
Hints:
- use a memory profiler to debug and optimize your web application
- use a CDN for storing media files
If you need numbers: a properly written web application that uses a CDN for media files can handle at least 10k unique visitors per day on shared hosting.
It would be best if you ask your provider these questions. Every provider is going to be different.
Generally what happens is that the provider can handle the requests, but they'll simply shut down your site once it reaches a certain threshold.
It also depends on the amount of bandwidth you have opted for and how much traffic you are expecting. My blog is on shared hosting; 4k visitors was once my daily maximum, and I didn't feel any difference in performance. Don't worry unless your site appears on the front page of Digg or some high-traffic websites link to your site.
I have been using MySQL on shared hosting for a while, mainly on informational websites that get at most 300 visits per day. What I have found is that the hosting was barely sufficient to support more than 3 or 4 people on the website at one time without it almost crashing.
Theoretically, I think shared hosting with most services could efficiently support about 60 users per hour at most, if your users all came one or two at a time. That works out to about 1,500 users in one day. This is unlikely in practice, however, because many users tend to be online at certain times of the day, and shared servers often get sloppy due to abuse from others on the server.
I have heard from reliable sources that some VPS hosting at 40-50 dollars per month has supported 500,000 hits per month. I'm not sure what those websites' configurations were, though; I doubt the sites ran many dynamic DB queries, or possibly they were simply static.
One other thing that is common on shared hosting is splitting the file hosting from the database hosting. Sometimes your files will appear online just fine, but the database that runs your actual website will lag badly due to abuse from your neighbors.
I suggest ensuring that your application is ready for large amounts of traffic: even on a super-duper web server, a badly written app will lose you potential clients. Some of the easiest optimizations for an existing web app are reducing the number of DB connections, so read up on caching and partial caching.
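As a tiny illustration of the caching idea, here is a sketch using Python's functools.lru_cache; the slow query is simulated, and a real app would usually want time-based expiry on top of this:

```python
import time
from functools import lru_cache

@lru_cache(maxsize=256)
def get_page_fragment(slug: str) -> str:
    """Stand-in for an expensive DB query; results are cached per slug."""
    time.sleep(0.5)  # simulate a slow database round trip
    return f"<div>content for {slug}</div>"

get_page_fragment("home")  # slow: hits the "database"
get_page_fragment("home")  # fast: served from the in-process cache
```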
