In what ways do more RAM and processing power on my server make my website faster? - web-hosting

I understand that the speed at which a website loads depends on many things; however, I'm interested to know how I can positively impact load speed by increasing the specifications of my dedicated server:
Does this allow my server to handle more requests?
Does this reduce roundtrips?
Does this decrease server response time?
Does this allow my server to generate pages on Wordpress faster?

In short: yes-ish, no, yes-ish, and yes-ish. Taking each question in turn:
Does this allow my server to handle more requests?
Requests come in and are essentially put into a queue until the system has time to handle them. By increasing system resources, that queue might be processed faster, and it might be configured to handle more requests simultaneously, so... yes-ish (note: this is very generalized).
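To make the queueing intuition concrete, here is a toy sketch in Python (the numbers are purely illustrative, and the worker count stands in for the CPU/RAM headroom the question asks about):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def handle_request(i):
    time.sleep(0.1)  # simulated per-request work
    return i

def serve(n_requests, n_workers):
    # Requests beyond n_workers wait in the executor's queue,
    # much like connections queuing up behind a busy server.
    start = time.time()
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        list(pool.map(handle_request, range(n_requests)))
    return time.time() - start

print(serve(20, 2))  # few workers: the queue drains slowly
print(serve(20, 8))  # more "resources": the same load finishes sooner
```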
Does this reduce roundtrips?
No, your application design is the only thing that affects this. If your application makes a request to the server, it makes a request (i.e., a "round trip"). Increasing your server resources does not, in turn, decrease the number of requests your application makes.
Does this decrease server response time?
Yes, see the first explanation. It can often decrease response times for the same reasons given there. However, network latency and other factors outside the realm of the server can affect total response-processing times.
Does this allow my server to generate pages on Wordpress faster?
Again, see the first explanation. This can help your server generate pages faster by throwing more power at the processes that generate them. However, factors outside the server still apply.
For performance, the two highest-value target areas (assuming you don't have tons and tons of traffic, which most sites do not) are reducing database reads and caching. Caching covers various areas: data caching on the server, page-output caching, browser caching for content, images, and so on. If you're experiencing less-than-desirable performance, this is usually a good place to start.
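As a rough illustration of page-output caching, here is a minimal in-process TTL sketch in Python (render_page is an invented placeholder for whatever expensive work builds the page; a real WordPress setup would more likely use a caching plugin or memcached/Redis):

```python
import time

_cache = {}  # url -> (expires_at, html)

def cached_page(url, ttl=60):
    entry = _cache.get(url)
    now = time.time()
    if entry and entry[0] > now:
        return entry[1]  # cache hit: no DB reads, no re-rendering
    html = render_page(url)  # cache miss: do the expensive work once
    _cache[url] = (now + ttl, html)
    return html

def render_page(url):
    # Invented placeholder for real page generation (DB queries, templates).
    return f"<html><body>Content for {url}</body></html>"
```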

Related

Is it normal for a site size to fluctuate?

I'm curious about something I've noticed while tracking the site speed of a Shopify site in Pingdom.
The site has been unchanged for a few days, but the size of the site goes up and down by very small amounts. The number of requests also goes up and down by small amounts. Is it likely that Pingdom is just slightly off occasionally?
The times below are recorded every 30min.
I am sure your shop uses some JavaScript to provide external services that support it. That is the likely source of the "variance" in page size: some services are dynamic in what they return as a payload, so you'd see that reflected in the page size, assuming the measurement includes all attached behaviours. As for the variance in requests, that is probably because the faster a response, the more pings can be sent and processed, so slower responses should see fewer requests. Kind of makes sense, right?

Is combining REST API calls to reduce # requests worth doing?

My server used to handle bursts of 700+ users, and now it is failing at around 200 users.
(Users connect to the server almost simultaneously after clicking a push message.)
I think this is due to a change in how the requests are made.
Back then, the web server collected all the information into a single HTML response.
Now, each section in a page makes its own REST API request, resulting in probably 10+ more requests.
I'm considering making an API endpoint to aggregate those requests for the pages users open when they click a push notification.
Another solution I'm thinking of is caching those frequently used REST API responses.
Is it a good idea to combine API calls to reduce the number of requests?
It is generally a good idea to reduce API calls. The optimal solution is to get all the necessary data in one go, without any unused information.
This results in less traffic, fewer requests (and less load) on the server, lower RAM and CPU usage, and fewer concurrent DB operations.
Caching is also a great choice. You can consider both caching the entire response and caching separate parts of it.
A combined API response means that there will be just one response, which will reduce the pre-execution time (where the app is loading everything), but will increase the processing time, because it's doing everything in one thread. This will result in less traffic, but a slightly slower response time.
From the user's perspective, this means that if you combine everything, the page will load more slowly, but when it does, it will load up entirely.
It's a matter of finding the balance.
As for whether it's worth doing: it depends on your set-up. You should measure the start-up time of the application and the execution time, and do the math.
Another thing you should consider is the amount of time this might require. There is also the option of increasing server power, such as creating a clustered cache and using a load balancer to split the load. Compare the time needed for each approach and work from there.
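A minimal sketch of the aggregation idea, assuming an async Python backend (the section names and latencies are made up; in a real app each fetch would call a database or an internal service):

```python
import asyncio

# Each of these stands in for what used to be a separate REST call
# made by the client for one section of the page.
async def fetch_profile(user_id):
    await asyncio.sleep(0.05)  # simulated backend latency
    return {"name": f"user-{user_id}"}

async def fetch_notifications(user_id):
    await asyncio.sleep(0.05)
    return {"unread": 3}

async def fetch_feed(user_id):
    await asyncio.sleep(0.05)
    return {"items": ["a", "b", "c"]}

async def aggregate(user_id):
    # One aggregated endpoint: the server fans out to the sub-fetches
    # concurrently and returns a single payload, so the client makes
    # one request instead of three.
    profile, notifications, feed = await asyncio.gather(
        fetch_profile(user_id),
        fetch_notifications(user_id),
        fetch_feed(user_id),
    )
    return {"profile": profile, "notifications": notifications, "feed": feed}

print(asyncio.run(aggregate(42)))
```

Fanning out to the sub-fetches concurrently on the server also softens the "doing everything in one thread" cost mentioned above.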

What's the benefit of the client-server model of memcached?

As I understand it, the benefit of using memcached is to shorten the access time to information stored in the database by caching it in memory. But isn't the time overhead of the client-server model, based on a network protocol (e.g. TCP), also considerable? My guess is that it might actually be worse, as network access is generally slower than local hardware access. What am I getting wrong?
Thank you!
It's true that caching won't address network transport time. However, what matters to the user is the overall time from request to delivery. If this total time is perceptible, then your site does not seem responsive. Appropriate use of caching can improve responsiveness, even if your overall transport time is out of your control.
Also, caching can be used to reduce overall server load, which will essentially buy you more cycles. Consider the case of a query whose response is the same for all users - for example, imagine that you display some information about site activity or status every time a page is loaded, and this information does not depend on the identity of the user loading the page. Let's imagine also that this information does not change very rapidly. In this case, you might decide to recalculate the information every minute, or every five minutes, or every N page loads, or something of that nature, and always serve the cached version. In this case, you're getting two benefits. First, you've cut out a lot of repeated computation of values that you've decided don't really need to be recalculated, which takes some load off your servers. Second, you've ensured that users are always getting served from the cache rather than from computation, which might speed things up for them if the computation is expensive.
Both of those could, in the right circumstances, lead to improved performance from the user's perspective. But of course, as with any optimization, you need benchmarks, and you should optimize against measured data rather than your perceptions of what ought to be correct.
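As a minimal sketch of the cached-status pattern described above, assuming a memcached daemon on localhost:11211 and the pymemcache client (compute_site_status is an invented placeholder):

```python
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))  # assumes a local memcached daemon

def compute_site_status():
    # Invented placeholder for the expensive, user-independent query.
    return b"42 users online"

def get_site_status():
    status = cache.get("site_status")  # one network hop, cheap vs. the query
    if status is None:
        status = compute_site_status()
        # expire=300: recompute at most once every five minutes,
        # as in the scenario sketched above.
        cache.set("site_status", status, expire=300)
    return status
```

Even though the get/set calls cross the network (the overhead the question asks about), they are typically far cheaper than recomputing the value, and the cache can be shared by every web server in the pool.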

What is the best way to handle a lot of images from a shared hosting plan?

I have a shared hosting plan and am designing a single page site which will include a slideshow. The browser typically limits the number of simultaneous requests to a single domain. I don't expect a lot of traffic, but I would like the traffic I do receive to have fast load times. I may be able to add unlimited subdomains, but does that really affect the speed for the customer considering they are probably the only one polling my server and all subdomains point to the same processor? I have already created two versions of every image, one for the slideshow, and one for larger format via AJAX request, but the lag times are still a little long for my taste. Any suggestions?
Before you contrive a bunch of subdomains to maximize parallel connections, you should profile your page load behavior so you know where most of the time is being spent. There might be easier and more rewarding optimizations to make first.
There are several tools that can help with this; use all of them:
https://developers.google.com/speed/pagespeed/
http://developer.yahoo.com/yslow/
http://www.webpagetest.org/
Some important factors to look at are cache optimization and image compression.
If you've done all those things, and you are sure that you want to use multiple (sub)domains, then I would recommend using a content delivery network (CDN) instead of hosting the static files (images) on the same shared server. You might consider Amazon's CloudFront service. It's super easy to set up, and reasonably priced.
Lastly, don't get carried away with too many (sub)domains, because each host name will require a separate DNS lookup; find a balance.
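If, after profiling, you do go the multiple-(sub)domain route, the usual trick is to map each image to a host deterministically so the same image always comes from the same hostname and stays cacheable. A sketch in Python (the hostnames are placeholders):

```python
import zlib

# Placeholder static hosts; each extra hostname buys more parallel
# connections but costs an extra DNS lookup, per the note above.
HOSTS = ["img1.example.com", "img2.example.com"]

def image_url(path):
    # Hash the path so a given image always maps to the same host;
    # otherwise browsers would re-download it once per hostname.
    host = HOSTS[zlib.crc32(path.encode()) % len(HOSTS)]
    return f"https://{host}/{path.lstrip('/')}"

print(image_url("slides/photo-01.jpg"))
```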

Is performance worse when putting the database on a dedicated server?

I heard that one way to scale your system is to use different machines for the web server and the database server, and even to use multiple instances of each type of server.
I wonder how this could improve performance over the one-server-for-everything model. Aren't there bottlenecks in the connections between those servers? Moreover, you have to worry about synchronization when accessing the database server from different web servers.
If your infrastructure is small enough, then yes, one server for everything is (probably) the best way to do things. However, when your size starts to require more than one server, scaling up your single box can become much more expensive than having multiple cheaper servers. Multiple servers also give you more failure tolerance (if one server goes down, the others can take over). As for synchronizing data: on the database side that is usually achieved by clustering or replication; on the application side it can be achieved with the likes of memcached or saving to disk; and web servers themselves don't really need to be synchronized. Network bottlenecks on a local network (as your servers would be to one another) are negligible.
Having numerous servers may appear to be an attractive solution, but one problem that often occurs is the latency arising from communication between the servers. Even with fiber interconnects, it will be slower than if everything resides on the same server. Of course, in a single-server solution, if one server application does a lot of work, it may starve the DB application of needed CPU resources.
Another issue that may turn up is SANs. Proponents of SANs will say they are just as fast as locally attached storage, but the purpose of SANs is to cut storage costs. Even if the SAN were to use the same high-performance disks as the local solution (wiping out the cost savings), you would still have a slower connection and more simultaneous users to contend with on the SAN.
Conventional wisdom has it that a DB should be SQL-based with normalized data. It is worthwhile to spend some time weighing the pros and cons (yes, SQL has cons) against each other.
Since "time-immemorial" (at least the last twenty years) indifferent programmers have overloaded servers with stuff they are too lazy to implement in the client. Indifferent (or ignorant) architects allow this practice to continue. End result: sluggish c/s implementations which are close to useless. Tripling the server park is a desperate "week-before-delivery" measure which - at best - results in a marginal performance increase. Often you lose performance instead.
DBs should not be bothered with complex requests involving multiple tables; simple requests filtered by the client are the way to go.
One thing to try might be to put the framework/SOAP handling on one server and let it send binary requests to the DB server, which answers with binary responses (making sense of a SOAP request is very CPU-intensive and not something you want to leave to the DB application, which will be more or less choked anyway). This way SOAP throttles only one part of the environment (the interface to users and other framework clients), and the rest of the interfaces will be as efficient as they can be (binary).
Another thing, if the application allows it, is to put a cache front-end on the DB application. The purpose of this cache is to do as much of the repetitive work as possible without involving the DB itself. This way the DB is left handling fewer but (perhaps) more complicated requests instead of doing everything.
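A bare-bones sketch of such a cache front-end (in-process Python purely for illustration; db_fetch_product is an invented placeholder for the real data-access call):

```python
from functools import lru_cache

CALLS = {"db": 0}

def db_fetch_product(product_id):
    CALLS["db"] += 1  # count how often the DB is actually hit
    return {"id": product_id, "name": f"product-{product_id}"}

# The cache front-end: identical repeat requests are answered from
# memory, so the DB sees each distinct product at most once.
@lru_cache(maxsize=10_000)
def get_product(product_id):
    return db_fetch_product(product_id)

for _ in range(100):
    get_product(7)
print(CALLS["db"])  # -> 1: the DB handled one request, not 100
```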
Oh, and don't let clients send SQL statements directly to the DB. You'd be surprised at the junk a DB has to contend with.
