My hosting provider doesn't offer IPv6, only IPv4, and it appears to be slowing down my website [closed]

I am doing as much as I can to reduce the download time of the pages on my website, and one of the free tests I tried complained about my website not being IPv6 ready, or something like that. My hosting provider only supports IPv4.
Should I change my hosting provider?
Is not being IPv6 ready actually a problem?
Just in case, this is the page that I am trying to improve and any suggestions will be highly appreciated:
http://www.hypnosisgate.com/past-life-regressions.html

Will lack of IPv6 be a short-term problem for you? Probably not.
But the internet is moving towards IPv6. Take a look at what Google sees. At the time of writing, the graph has just peaked at 12.5%. That means that roughly 1/8 of the internet has IPv6 connectivity. Because IPv4 addresses have run out [1], many of those users will be behind some central NAT device. Analysis from Facebook has shown that users who can load their news feed over IPv6 see a 20% to 40% shorter load time than those forced to use IPv4.
So for a certain group of your users, making your website accessible over IPv6 will give them a significant speed boost, and that group of users will grow very quickly. Even if you don't want to bother with IPv6 right now, it will become important soon.
Because of all this, when choosing a vendor (whether for equipment, services or hosting) it might not be very wise to pick one that doesn't support IPv6. At least make sure they are working on it and can give you a timeline in the contract. Otherwise you might later need to migrate to a different vendor that does properly support IPv6.
[1] You can buy some on the market, and some regions have a few addresses reserved for newcomers, but for all practical purposes the supply has run out.
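As a side note, if you want a quick way to check whether a hostname already publishes an IPv6 (AAAA) record, a minimal sketch using only Python's standard socket module might look like the following (the hostname is just the one from the question; this only checks DNS, not whether the server actually answers over IPv6):

```python
import socket

def has_ipv6_address(hostname):
    """Return True if the hostname resolves to at least one IPv6 (AAAA) address."""
    try:
        # Ask the resolver specifically for IPv6 results.
        results = socket.getaddrinfo(hostname, None, socket.AF_INET6)
    except socket.gaierror:
        # No AAAA record, or the name does not resolve at all.
        return False
    return len(results) > 0

if __name__ == "__main__":
    host = "www.hypnosisgate.com"
    print(host, "has an IPv6 address:", has_ipv6_address(host))
```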

IPv4 vs IPv6 should not have any significant impact on the delivery speed of your content. However, a quick page speed analysis shows you have a few simple fixes that would speed up the site.
https://developers.google.com/speed/pagespeed/insights/?url=http%3A%2F%2Fwww.hypnosisgate.com%2Fpast-life-regressions.html

Related

What is the theoretical maximum number of open TCP connections allowed on a Windows server [closed]

On my Windows server I have a port open, listening for incoming TCP connections from multiple clients. Is there a limit to the number of unique clients that can concurrently establish a socket connection on that open port on my Windows server? One of the threads, What is the theoretical maximum number of open TCP connections that a modern Linux box can have, talks about the number of socket connections being limited by the allowed file descriptors on Unix platforms. Is there such a limitation on the latest available Windows servers? If so, how do I go about changing that limit?
Based on an answer by an MSFT employee:
It depends on the edition: Web and Foundation editions have connection limits, while Standard, Enterprise, and Datacenter do not.
Though, as Harry mentions in another answer, there is a setting for open TCP connections that has a limit of a bit over 16M.
But while you could technically have a very large number of connections, there are practical issues that limit the number:
Each connection is identified by the server address and port as well as the client address and port. Theoretically, even the number of connections two machines can have between them is very large, but usually the server uses a single port, which limits the number. Also, a single client would rarely open thousands of connections, but in NAT cases it may look that way (see the sketch after this list).
The server can only handle a certain amount of data and packets per second, so high-speed data transfers or lots of small packets may drive the number down.
The network hardware might not be able to handle all the traffic coming in.
The server has to allocate memory for each connection, which again limits the number.
What the server does also matters: is it a real-time game server, or a system delivering chess moves between people who think about their moves for 15 minutes at a time?
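To make the first point above concrete, here is a back-of-the-envelope sketch in Python (theoretical upper bounds only, ignoring all the practical limits listed here) of how the client side of the 4-tuple bounds the count for a single listening port:

```python
# Each TCP connection is identified by (server IP, server port, client IP, client port).
# With one listening socket, the server IP and port are fixed, so the theoretical ceiling
# is the number of distinct (client IP, client port) pairs.

client_ips = 2 ** 32        # every possible IPv4 client address
client_ports = 2 ** 16 - 1  # usable ports per client address (1-65535)

print(f"Theoretical connections to one listening port: {client_ips * client_ports:,}")
print(f"From a single client address (e.g. one NAT box): {client_ports:,}")
```

The second number is why a busy NAT gateway can look like a single client opening thousands of connections.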
In addition to the licensing and practical limits outlined in Sami's answer, there is in fact a configurable limit to the number of simultaneous open connections, determined by the TcpNumConnections setting. The default value is also the maximum, which is just shy of 16M.
(The linked documentation is for Windows 2003. The corresponding documentation for later versions of Windows Server does not appear to exist. However, I can't find anything to suggest that the setting has been removed.)
In practice, however, you are likely to run into the practical issues outlined in Sami's answer long before you hit this. (Unless the system administrator has manually changed the setting, of course.)
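If you want to check whether the setting has been changed on a particular machine, a minimal sketch using Python's standard winreg module might look like this (it assumes the value lives under the classic Tcpip\Parameters key described in the Windows 2003 documentation, and that an absent value means the default applies):

```python
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"

def read_tcp_num_connections():
    """Return the configured TcpNumConnections value, or None if it is not set."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        try:
            value, _value_type = winreg.QueryValueEx(key, "TcpNumConnections")
            return value
        except FileNotFoundError:
            # The value is usually absent, in which case the default
            # (which is also the maximum, just shy of 16M) applies.
            return None

if __name__ == "__main__":
    configured = read_tcp_num_connections()
    if configured is None:
        print("TcpNumConnections not set; the default/maximum applies.")
    else:
        print("TcpNumConnections =", configured)
```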

Difference between MPI and TCP/IP [closed]

I have some confusion regarding MPI, sockets and TCP/IP.
Are all three of these communication protocols that can make use of different interconnects like InfiniBand and Ethernet, or are they something else?
Sorry for the trouble if the question sounds naive, but I really get confused by these three terms.
TCP/IP is a family of networking protocols. IP is the lower-level protocol that's responsible for getting packets of data from place to place across the Internet. TCP sits on top of IP and adds virtual circuit/connection semantics. With IP alone you can only send and receive independent packets of data that are not organized into a stream or connection. It's possible to use virtually any physical transport mechanism to move IP packets around. For local networks it's usually Ethernet, but you can use anything. There's even an RFC specifying a way to send IP packets by carrier pigeon.
Sockets is a semi-standard API for accessing the networking features of the operating system. Your program can call various functions named socket, bind, listen, connect, etc., to send/receive data, connect to other computers, and listen for connections from other computers. You can theoretically use any family of networking protocols through the sockets API--the protocol family is a parameter that you pass in--but these days you pretty much always specify TCP/IP. (The other option that's in common use is local Unix sockets.)
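To illustrate those calls, here is a minimal sketch of a TCP exchange through the sockets API in Python (the address and port are arbitrary; the server and client would normally run in separate processes):

```python
import socket

HOST, PORT = "127.0.0.1", 50007  # arbitrary local address and port for the example

def run_server_once():
    # socket -> bind -> listen -> accept -> recv/send
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, addr = srv.accept()      # blocks until a client connects
        with conn:
            data = conn.recv(1024)     # read up to 1024 bytes
            conn.sendall(data)         # echo them back

def run_client():
    # socket -> connect -> send/recv
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"hello over TCP/IP")
        print(cli.recv(1024))
```

Note that the protocol family (AF_INET, i.e. IPv4 TCP/IP here) is just an argument to the same generic API.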
MPI is an API for passing messages among processes running on a cluster of servers. MPI is higher level than both TCP/IP and sockets. It can theoretically use any family of networking protocols, and if it's using TCP/IP or some other family that's supported by the sockets API, then it probably uses the sockets API to communicate with the operating system.
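For comparison, here is a minimal sketch of the same kind of exchange at the MPI level, assuming the mpi4py bindings are installed and the script is launched with something like mpirun -n 2 python script.py:

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()  # this process's ID within the MPI job

if rank == 0:
    # Rank 0 sends a Python object to rank 1; no sockets, addresses or ports in sight.
    comm.send({"payload": "hello over MPI"}, dest=1, tag=0)
elif rank == 1:
    msg = comm.recv(source=0, tag=0)
    print("rank 1 received:", msg)
```

Whether the bytes travel over TCP/IP, shared memory or InfiniBand is decided by the MPI implementation, not by your code.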
If the purpose behind your question is to decide how you should write a parallel programming application, you should probably not be looking at TCP/IP or sockets as those things are going to be much lower level than you want. You'll probably want to look at something like MPI or any of the PGAS languages like UPC, Co-array Fortran, Global Arrays, Chapel, etc. They're going to be far easier to use than essentially writing your own networking layer.
When you use one of these higher level libraries, you get lots of nice abstractions like collective operations, remote memory access, and other features that make it easier to just write your parallel code instead of dealing with all of the OS stuff underneath. It also makes your code portable between different machines/architectures.

Performance of IPX/SPX and TCP/IP [closed]

I understand that IPX and SPX both provide connection services similar to TCP/IP. Here, IPX is similar to IP and SPX is similar to TCP, so I am eager to learn more about this.
How does the performance of IPX/SPX exceed that of TCP/IP on a LAN?
Why is IPX/SPX not used on LANs alone if its performance is superior to that of TCP/IP in the LAN case?
I searched the internet and found a few links, but they did not seem to give clear reasons for this: http://en.wikipedia.org/wiki/IPX/SPX. Any ideas?
IPX was optimized for LANs. For one thing, IPX addresses are formed from an Ethernet MAC address and a 32-bit network ID. This design allowed "zero configuration" of IPX nodes in most cases: just plug the computer in and it's on the network. IPv6 with stateless autoconfiguration has the same property, by the way.
SPX (the analogue of TCP) was also highly optimized for LANs. For example, it used per-packet NAKs instead of TCP's per-octet ACKs, without any explicit window management functions. That allowed file servers to be very simple: just spew file contents onto the Ethernet at top speed. If a client misses a packet, you can re-read it from disk/cache and re-send it.
In contrast, with TCP you have to buffer all the unacknowledged data and, after a lost packet, re-send everything in the send buffer (unless you use the selective acknowledgment feature).
However, IPX was not suitable for WANs at all. For example, it couldn't cope with different frame sizes; i.e. two networks with different frames (say, Ethernet and Ethernet with jumbo frames) couldn't interoperate without a proxy server or some form of encapsulation.
Additionally, packet reordering is ubiquitous on WANs, but it plays hell with SPX (at least with Novell's implementation), causing a lot of spurious NAKs.
And of course, IPX addresses were not hierarchical, so they were not well suited for routing. The network ID could in theory have been used for this, but even large IPX/SPX deployments were not complex enough to develop a rich routing infrastructure.
Right now, IPX is interesting only as a historical curiosity and for maintaining a small number of VERY legacy systems.
You're missing a critical distinction between SPX/IPX and TCP/IP. TCP/IP is the basis of the Internet. SPX/IPX is not.
SPX/IPX was an interesting protocol, but is now of interest only within a given corporation.
It's often the case in the real world that something technically superior loses due to business reasons. Consider Betamax video tape format vs. VHS. Betamax was considered technically superior, yet you can't buy a Betamax recorder today except maybe on eBay. One may argue that Windows won over Macintosh, despite the fact that the MacOS user interface was much nicer, due entirely to business decisions (mainly the decision by Apple not to permit clones).
Similarly, issues far beyond the control of Xerox destroyed SPX/IPX as a viable protocol - HTTP runs over TCP/IP, not over SPX/IPX. HTTP rules the world, therefore TCP/IP rules the world.
SPX/IPX has been left as an exercise for the reader.
BTW, I've been talking about SPX/IPX as though they were a Xerox protocol; not quite. They are a Novell protocol, but based on the Xerox Network System protocols. Interestingly, I found nothing about this protocol on the web site of either Xerox or Novell.
Also, see the Wikipedia article on IPX/SPX.
A disadvantage of the TCP/IP protocol stack is its lower speed compared to IPX/SPX. However, the TCP/IP stack is now also used in local networks to simplify the negotiation between local and wide-area network protocols, and it is currently considered the main stack in the most common operating systems.
IPX/SPX can coexist on a LAN with TCP/IP. PCs that wish to be isolated from the web can still share files and printers by using IPX and not loading TCP. This is more secure than any firewall and second only to cutting the wires.
IPX/SPX performed better than TCP/IP back in the day, on systems where you could compare the two. That is no longer true since TCP got all the developer effort from about 1993 onwards because of HTTP.
Essentially, IPX/SPX was obsoleted by TCP/IP, and so it is no longer relevant. Maintaining two sets of protocols is too much effort for network operators, so the less capable one dies out. Eventually this will happen to IPv4.

How Much Traffic Can Shared Web Hosting Take? [closed]

I have a cheap shared hosting plan with Reliablesite.net ($5/month).
I've been making a small site that I want to start promoting in a few weeks, and I was going to road-test it by hosting it on the shared plan I already have.
My issue is that I don't know at what point I should move on to clustered or dedicated hosting.
Questions
What pageviews per day can a shared hosting plan be expected to handle?
What can standard shared database servers take without choking up or me getting rude emails from my hosting provider?
In my experience, a shared hosting environment like Reliablesite.com can take around 10,000-20,000 unique users per day, or 100,000-200,000 pageviews per day. That number can vary depending on your site. For optimization, it is important to reduce the number of DB queries (I keep it to a maximum of 6-7 per page render) and to be careful when programming. Using ASP.NET MVC gave a nice performance improvement for me, but a well-written WebForms app can perform well too. If you are using some other tech stack, like PHP/MySQL, I don't know the numbers.
When you exceed those numbers, you will have enough money from Google AdSense to move to a VPS or dedicated plan.
Just to add something regarding page render / DB query performance: using a stored procedure or query that returns multiple result sets is a great way to reduce the number of DB requests!
Traffic usually is not a problem on shared hosting. The only problems you may encounter are RAM and CPU restrictions. But if your application is written correctly, it can operate well within these limitations.
Hints:
use a memory profiler to debug and optimize your web application
use a CDN for storing media files
If you need some numbers, a properly written web application that uses a CDN for storing media files can handle at least 10k unique visitors per day on shared hosting.
It would be best if you ask your provider these questions. Every provider is going to be different.
Generally what happens is that the provider can handle the requests, but they'll simply shut down your site once it reaches a certain threshold.
It also depends on the amount of bandwidth you have opted for and how much traffic you are expecting. My blog is on shared hosting; at one point 4k visits was my maximum in a day and I didn't feel any difference in performance. Don't worry unless your site appears on the front page of Digg or some high-traffic websites link to your site.
I have been using MySQL on shared hosting for a while, mainly on informational websites that have gotten at most 300 visits per day. What I have found is that the hosting was barely sufficient to support more than 3 or 4 people on the website at one time without it almost crashing.
Theoretically, I think shared hosting with most services could efficiently support about 60 users per hour at most, if your users all came one or two at a time. That works out to about 1,500 users in one day. This is highly unlikely, however, because a lot of users tend to be online at certain times of the day, and you also have to factor in that shared servers often get sloppy due to abuse from others on the server.
I have heard from reliable sources that some VPS hosting at 40-50 dollars per month has supported 500,000 hits per month. I'm not sure what the websites' configurations were, though; I doubt the sites ran many dynamic DB queries, or possibly they were simply static.
One other thing that is common on shared hosting is splitting the file hosting from the database hosting. Sometimes your files will appear online just fine, but the database that runs your actual website will lag badly due to abuse from your neighbors.
I suggest ensuring that your application is ready for large amounts of traffic: even if you are on a super-duper web server, if your app is badly written you will lose potential clients. One of the easiest optimizations for an existing web app is to reduce the number of DB connections, so read up on caching and partial caching.
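As a small illustration of that last point, here is a minimal caching sketch in Python; the query function and the 60-second lifetime are made up for illustration, but the idea is that repeated page renders reuse a cached result instead of hitting the database every time:

```python
import time
from functools import lru_cache

def expensive_db_query():
    """Stand-in for a real database call (hypothetical; replace with your own)."""
    time.sleep(0.5)  # simulate query latency
    return [("1", "First article"), ("2", "Second article")]

@lru_cache(maxsize=1)
def _cached_articles(cache_bucket):
    # cache_bucket changes every 60 seconds, so the cached result expires naturally.
    return expensive_db_query()

def get_articles():
    return _cached_articles(int(time.time() // 60))

# The first call pays the query cost; calls within the next minute come from memory.
print(get_articles())
print(get_articles())
```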

Why is p2p web hosting not widely used? [closed]

We can see the growth of systems using peer to peer principles.
But there is an area where peer to peer is not (yet) widely used: web hosting.
Several projects have already been launched, but there is no big solution that would let users use and contribute to a peer-to-peer web hosting.
I don't mean closed projects (like Google Web Hosting, which uses Google's resources, not users'), but open projects, where each user contributes to the global web hosting by making their resources (CPU, bandwidth) available.
I can think of several advantages of such a system:
automatic load balancing
better locality
no storage limits
free
So why is such a system not yet widely used?
Edit
I think the "97.2%, plz seed!!" problem occurs because not all users seed all the files. But if a system is built where all users contribute equally to all the content, this problem no longer occurs. Peer-to-peer storage systems (like Wuala) are reliable thanks to that.
The problem of proprietary code is pertinent, as is the fact that a user might not know which content (possibly "bad") he is hosting.
I'll add another problem: the latency, which may be higher than with a dedicated server.
Edit 2
The confidentiality of code and data can be achieved by encryption. For example, with Wuala, all files are encrypted, and I think there is no known security breach in this system (but I might be wrong).
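To illustrate the client-side encryption idea (files are encrypted before they ever leave the owner's machine, so hosting peers only ever store ciphertext), here is a minimal sketch using the third-party cryptography package; this is only an analogy, not a description of how Wuala actually works:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# The owner generates and keeps the key; hosting peers never see it.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"contents of a file to be hosted on the p2p network"
ciphertext = cipher.encrypt(plaintext)  # this is all a hosting peer would store

# Only someone holding the key can recover the original.
assert cipher.decrypt(ciphertext) == plaintext
```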
It's true that seeders would have few benefits, if any. But it would keep people from being dependent on web hosting companies. And such a decentralized way of hosting websites is closer to the original idea of the internet, I think.
This is basically what Freenet is:
Freenet is free software which lets you publish and obtain information on the Internet without fear of censorship. To achieve this freedom, the network is entirely decentralized and publishers and consumers of information are anonymous. Without anonymity there can never be true freedom of speech, and without decentralization the network will be vulnerable to attack.
[...]
Users contribute to the network by giving bandwidth and a portion of their hard drive (called the "data store") for storing files. Unlike other peer-to-peer file sharing networks, Freenet does not let the user control what is stored in the data store. Instead, files are kept or deleted depending on how popular they are, with the least popular being discarded to make way for newer or more popular content. Files in the data store are encrypted to reduce the likelihood of prosecution by persons wishing to censor Freenet content.
The biggest problem is that it's slow, both in transfer speed and (mainly) in latency. Even if you can get lots of people with decent upload throughput, it'll still never be as quick as a dedicated server or two. The speed is fine for what Freenet is (publishing data without fear of censorship), but not for hosting your website.
A bigger problem is that the content has to be static files, which rules out its use for the majority of high-traffic websites. To serve dynamic data, each peer would have to execute code (scary), and would probably have to retrieve data from a database (which would be another big delay, again because of the latency).
I think "cloud computing" is about as close to P2P web hosting as we'll see for the time being.
P2P website hosting is not yet widely used, because the companion technology allowing higher upstream rates for individual clients is not yet widely used, and this is something I want to look into*.
What is needed for this is called wireless mesh networking, which should allow the average user to utilise the full upstream speed their router is capable of, rather than just whatever some profiteering ISP rations out to them, while routers relay information between one another so that it eventually reaches its target.
In order to host a website P2P, a sort of combination of technology is required between wireless mesh communication, multiple-redundancy RAID storage, torrent sharing, and some kind of encryption key hierarchy that allows various users different abilities to change the data that is being transmitted, allowing something dynamic such as a forum to be hosted. The system would have to be self-updating to incorporate the latter, probably by time-stamping all distributed data packets.
There may be other possible catalysts that would cause the widespread use of p2p hosting, but I think anything that returns the physical architecture of hardware actually wiring up the internet back to its original theory of web communication is a good candidate.
Of course as always, the main reason this has not been implemented yet is because there is little or no money in it. The idea will be picked up much faster if either:
Someone finds a way to largely corrupt it towards consumerism
Router manufacturers realise there is a large demand for WiMesh-ready routers
There is a global paradigm shift away from the profit motive and towards creating things only to benefit all of humanity by creating abundance and striving for optimum efficiency
*see p2pint dot darkbb dot com if you're interested in developing this concept
For our business I can think of 2 reasons not to use peer hosting:
Responsiveness. Peer-hosted solutions are often reliable because of the massive number of shared resources, but they are also notoriously unstable, so the browsing experience will be intermittent.
Proprietary data/code. If I've written custom logic for my site, I don't want everyone on the network having access to it. You also run into privacy issues with customer data.
If I were to donate some of my PCs CPU and bandwidth to some p2p web hosting service, how could I be sure that it wouldn't end up being used to serve child porn or other similarly disgusting content?
How many times have you seen "97.2%, please seed!!" for any random torrent?
Just imagine the havoc if even a small portion of the web became unavailable in this way.
It sounds like this idea would add a lot of cost to the individual seeder (bandwidth) without a lot of benefit.
