My server used to handle bursts of 700+ users, but it now starts failing at around 200 users.
(Users connect to the server almost simultaneously after tapping a push notification.)
I think the regression comes from a change in how the requests are made.
Previously, the web server collected all the information server-side and returned it in a single HTML response.
Now each section of a page makes its own REST API request, resulting in probably 10+ requests per page load.
I'm considering building an API endpoint that aggregates those requests for the pages users open when they tap a push notification.
Another solution I'm thinking of is caching the most frequently used REST API responses.
Is it a good idea to combine API calls to reduce the number of requests?
It is generally a good idea to reduce API calls. The optimal solution is to get all the necessary data in one go, without any unused information.
This results in less traffic, fewer requests (and less load) on the server, lower RAM and CPU usage, and fewer concurrent DB operations.
Caching is also a great choice. You can consider both caching the entire response and caching individual parts of it.
A combined API response means there is just one request and one response, which reduces the pre-execution time (where the app is loading everything), but increases the processing time on the server, because all the work is done within a single request. The result is less traffic but a slightly slower individual response.
From the user's perspective, the page takes a little longer to respond, but when it does, it loads completely in one go.
It's a matter of finding the balance.
As for whether it's worth doing - it depends on your setup. Measure the application's start-up time and the execution time, and do the math.
Another thing to consider is how much development time this will require. There is also the option of scaling up the server side, e.g. adding a clustered cache and using a load balancer to split the load. Compare the effort needed for each approach and work from there.
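If you do go the aggregation route, the new endpoint can fan out to the existing per-section lookups on the server and return one combined payload. Here is a minimal sketch in ASP.NET Web API style - the controller, route, and service names are made up for illustration, assuming each section's data is already available behind some service call:

```csharp
using System.Threading.Tasks;
using System.Web.Http;

// Placeholder section services; in reality these would wrap whatever your
// existing per-section endpoints call today.
public static class ProfileService { public static Task<object> GetAsync(string id) => Task.FromResult<object>(new { id }); }
public static class FeedService    { public static Task<object> GetAsync(string id) => Task.FromResult<object>(new { id }); }
public static class NoticeService  { public static Task<object> GetAsync(string id) => Task.FromResult<object>(new { id }); }

public class LandingPageController : ApiController
{
    // One call returns everything the push-notification landing page needs.
    [HttpGet]
    [Route("api/landing-page/{userId}")]
    public async Task<IHttpActionResult> Get(string userId)
    {
        // Fetch the sections concurrently on the server instead of 10+ client round trips.
        var profile = ProfileService.GetAsync(userId);
        var feed    = FeedService.GetAsync(userId);
        var notices = NoticeService.GetAsync(userId);
        await Task.WhenAll(profile, feed, notices);

        return Ok(new { profile = profile.Result, feed = feed.Result, notices = notices.Result });
    }
}
```

The client then makes a single request per push-notification page instead of one per section, while the per-section endpoints can stay in place for everything else.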
I'm curious about something I've noticed while tracking the site speed of a Shopify site in Pingdom.
The site has been unchanged for a few days, but the size of the site goes up and down by very small amounts. The number of requests also goes up and down by small amounts. Is it likely that Pingdom is just slightly off occasionally?
The times below are recorded every 30 minutes.
I am sure your shop uses some JavaScript to pull in external services that support it. That is likely the source of the "variance" in page size: some services are dynamic in what they return as a payload, so the reported page size will vary if it includes all attached behaviours. As for the variance in the number of requests, that is probably because the faster a response is, the more requests can be sent and processed within the measurement window, so slower responses should show fewer requests. Kind of makes sense, right?
I understand that the speed at which a website loads depends on many things; however, I'm interested to know how I can positively impact load speed by increasing the specifications of my dedicated server:
Does this allow my server to handle more requests?
Does this reduce roundtrips?
Does this decrease server response time?
Does this allow my server to generate pages on Wordpress faster?
yes-ish
no
yes-ish
yes-ish
Does this allow my server to handle more requests?
Requests come in and are essentially put into a queue until the system has enough resources to handle them. By increasing system resources, that queue might be processed faster, and it might be configured to handle more requests simultaneously, so... yes-ish (note: this is very generalized).
Does this reduce roundtrips?
No, your application design is the only thing that affects this. If your application makes a request to the server, it makes a request (i.e., a "round trip"). Increasing your server resources does not reduce the number of requests your application makes.
Does this decrease server response time?
Yes, see the first explanation. It can often decrease response times for the same reasons given there. However, network latency and other factors outside the realm of the server can affect the complete response processing time.
Does this allow my server to generate pages on Wordpress faster?
Again, see the first explanation. This can help your server generate pages faster by throwing more power at the processes that generate them. However, factors outside the server still apply.
For performance, the two highest-value target areas (assuming you don't have tons and tons of traffic, which most sites do not) are reducing database reads and caching. Caching covers various areas: data caching on the server, page output caching, browser caching for content, images, etc. If you're experiencing less-than-desirable performance, this is usually a good place to start.
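To make the caching part concrete, here is a rough sketch of page output caching and browser caching headers, shown in ASP.NET MVC terms only because it is compact; on a WordPress stack the same effect usually comes from a caching plugin or web-server rules. The controller, durations, and file paths are illustrative:

```csharp
using System;
using System.Web;
using System.Web.Mvc;

public class HomeController : Controller
{
    // Page output caching: the rendered page is reused for 5 minutes
    // instead of being regenerated for every request.
    [OutputCache(Duration = 300, VaryByParam = "none")]
    public ActionResult Index()
    {
        return View();
    }

    // Browser caching: tell clients they may keep this image for a day,
    // so repeat visits don't even hit the server for it.
    public ActionResult Logo()
    {
        Response.Cache.SetCacheability(HttpCacheability.Public);
        Response.Cache.SetMaxAge(TimeSpan.FromDays(1));
        return File(Server.MapPath("~/Content/logo.png"), "image/png");
    }
}
```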
As I understand it, the benefit of using memcached is to shorten access time to information stored in the database by caching it in memory. But isn't the overhead of its client-server model over a network protocol (e.g. TCP) also considerable? My guess is that it might actually be worse, since network access is generally slower than local hardware access. What am I getting wrong?
Thank you!
It's true that caching won't address network transport time. However, what matters to the user is the overall time from request to delivery. If this total time is perceptible, then your site does not seem responsive. Appropriate use of caching can improve responsiveness, even if your overall transport time is out of your control.
Also, caching can be used to reduce overall server load, which will essentially buy you more cycles. Consider the case of a query whose response is the same for all users - for example, imagine that you display some information about site activity or status every time a page is loaded, and this information does not depend on the identity of the user loading the page. Let's imagine also that this information does not change very rapidly. In this case, you might decide to recalculate the information every minute, or every five minutes, or every N page loads, or something of that nature, and always serve the cached version. In this case, you're getting two benefits. First, you've cut out a lot of repeated computation of values that you've decided don't really need to be recalculated, which takes some load off your servers. Second, you've ensured that users are always getting served from the cache rather than from computation, which might speed things up for them if the computation is expensive.
Both of those could - in the right circumstances - lead to improved performance from the user's perspective. But of course, as with any optimization, you need benchmarks, and you should optimize against measured data rather than your perception of what ought to be correct.
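A tiny sketch of that "recompute every few minutes, always serve from cache" pattern, using an in-process cache for brevity (with memcached the get/set calls would go over the network instead; the method and key names are invented):

```csharp
using System;
using System.Runtime.Caching;

public static class SiteStatus
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    // The same value is served to every user and recomputed at most once per 5 minutes.
    public static int ActiveUserCount()
    {
        var cached = Cache.Get("active-users");
        if (cached != null)
            return (int)cached;                     // cheap path taken by almost every request

        var value = QueryActiveUserCountFromDb();   // the expensive, user-independent query
        Cache.Set("active-users", value, DateTimeOffset.Now.AddMinutes(5));
        return value;
    }

    private static int QueryActiveUserCountFromDb()
    {
        // Placeholder for the real database query.
        return 0;
    }
}
```

The same shape works with memcached: the extra network hop on a cache hit is usually far cheaper than rerunning the query for every user.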
All the images in my site are served by a controller action. This action checks to see if the image exists on disk and can do some manipulations (resizing etc.).
Considering that a page may contain 50 or so images, resulting in 50 requests, is this a good candidate for using ASP.NET MVC Async controllers?
You should read measure-the-performance-of-async-controllers first.
From the article:
Asynchronous requests are useful, but only if you need to handle more concurrent requests than you have worker threads. Otherwise, synchronous requests are better just because they let you write simpler code.
(There is an exception: ASP.NET dynamically adjusts the worker thread pool between its minimum and maximum size limits, and if you have a sudden spike in traffic, it can take several minutes for it to notice and create new worker threads. During that time your app may have only a handful of worker threads, and many requests may time out. The reason I gradually adjusted the traffic level over a 30-minute period was to give ASP.NET enough time to adapt. Asynchronous requests are better than synchronous requests at handling sudden traffic spikes, though such spikes don’t happen often in reality.)
Even if you use asynchronous requests, your capacity is limited by the capacity of any external services you rely upon. Obvious, really, but until you measure it you might not realise how those external services are configured.
It’s not shown on the graph, but if you have a queue of requests going into ASP.NET, then the queue delay affects all requests – not just the ones involving expensive I/O. This means the entire site feels slow to all users. Under the right circumstances, asynchronous requests can avoid this site-wide slowdown by not forcing the other requests to queue so much.
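For reference, an async image action in the newer Task-based style might look roughly like this. ResizeIfNeeded and the folder path are placeholders, and whether it actually helps depends on the thread-pool pressure described in the quotes above:

```csharp
using System.IO;
using System.Threading.Tasks;
using System.Web.Mvc;

public class ImageController : Controller
{
    public async Task<ActionResult> Show(string name)
    {
        // No path validation here; this is just a sketch.
        var path = Server.MapPath("~/App_Data/images/" + name);
        if (!System.IO.File.Exists(path))
            return HttpNotFound();

        byte[] bytes;
        using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read,
                                           FileShare.Read, 4096, useAsync: true))
        using (var ms = new MemoryStream())
        {
            await stream.CopyToAsync(ms);   // the worker thread is released while the read is pending
            bytes = ms.ToArray();
        }

        bytes = ResizeIfNeeded(bytes);      // placeholder for your resizing/manipulation
        return File(bytes, "image/jpeg");
    }

    private byte[] ResizeIfNeeded(byte[] original)
    {
        return original;
    }
}
```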
I use Visual Studio Team System 2008 Team Suite for load testing my web application (it uses ASP.NET MVC).
Load pattern: Constant (this means I have a constant number of virtual users the whole time).
I specify a configuration of 1000 users to analyze the performance of my web application under real stress conditions. I run the same load test multiple times while making changes to my application.
But while analyzing the load test results I see a strange dependency: when the average page response time gets larger, the requests-per-second value increases too! And vice versa: when the average page response time is smaller, the requests-per-second value is smaller. This does not reproduce when the number of users is small (5-50 users).
How can you explain such results?
Perhaps there is a misunderstanding of the term Requests/Sec here. Requests/Sec, as I understand it, is just a representation of how many requests the test is pushing into the application (not the number of requests completed per second).
If you look at it that way, this might make sense.
A high Requests/Sec will cause a higher Avg. Response Time (due to a bottleneck somewhere, i.e. CPU-bound, memory-bound or IO-bound).
So as your Requests/Sec goes up and you have tons of objects in memory, the memory comes under pressure, triggering garbage collection that slows down your response time.
Or as your Requests/Sec goes up and your CPU gets hammered, requests might have to wait for CPU time, making your response time higher.
Or as your Requests/Sec goes up and your SQL is not tuned properly, blocking and deadlocking occur, making your response time higher.
These are just examples of why you might see this correlation. You might have to track it down further in terms of CPU, memory usage and IO (network, disk, SQL, etc.).
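The Visual Studio load test can collect performance counters for you, but if you want a quick standalone check on the server while a run is in progress, something like the following (using the standard Windows counter names) will show whether CPU or memory is the squeeze:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class CounterSampler
{
    static void Main()
    {
        // Standard Windows performance counters for total CPU and free memory.
        var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
        var mem = new PerformanceCounter("Memory", "Available MBytes");

        for (int i = 0; i < 60; i++)        // sample once per second for a minute
        {
            Thread.Sleep(1000);
            Console.WriteLine("CPU: {0:F1}%  Free RAM: {1:F0} MB", cpu.NextValue(), mem.NextValue());
        }
    }
}
```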
A few more details about the problem: we are load testing our rendering engine [NDjango][1] against standard ASP.NET aspx pages. The web app we are using for the load test is very basic - it consists of 2 static pages - no database, no heavy processing, just rendering. What we see is that, in terms of average response time, aspx is, as expected, considerably faster; but to my surprise the number of requests per second, as well as the total number of requests for the duration of the test, is much lower.
Leaving aside what we are testing against what, I agree with Jimmy that a higher request rate can clog the server in many ways. But it is my understanding that this would cause the response time to go up - right?
If the numbers we are getting really reflect what's happening on the server, I do not see how this rule can be broken. So for now the only explanation I have is that the numbers are skewed - something is wrong with the way we are configuring the tool.
[1]: http://www.ndjango.org NDjango
This is a normal result: as the number of users increases, you load the server with a higher number of requests per second. Any server will take longer to deal with more requests per second, meaning the average page response time increases.
Requests per second is a measure of the load being applied to the application, and average page response time is a measure of the application's performance, where a high number means a slow response.
You would be better off using a stepped number of users or a warm-up period where the load is applied to the server gradually.
Also, with 1000 virtual users on a single test machine, the CPU of the test machine will be absolutely maxed out. That is most likely the thing skewing the results of your testing. Playing with the number of virtual users, you will find a point where the requests per second max out. Adding or taking away virtual users from that point will result in fewer requests per second from the app.