I have a website that scores 90-97 for PageSpeed performance. The only metric that is not doing well is Speed Index, which ranges from 1.4 to 1.9 s. I can't seem to improve the initial server response time, which varies between 0.77 s and 2.5 s.
The WordPress website is hosted on GCP using a Bitnami WordPress stack, and it sits behind a Google Cloud Load Balancer with CDN enabled. The machine type is an e2-standard-2, which I believe is more than sufficient for a mostly static website.
Is there a specific set of server configurations I could look into to further improve the initial server response time? I posted this question on Server Fault a week ago, but there has been no response.
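One quick way to narrow this down is to measure time to first byte against both the load-balancer front end and the origin VM directly. A minimal sketch in Python; both URLs are placeholders:

    import time
    import urllib.request

    def ttfb(url: str) -> float:
        """Seconds until the response status line and headers have arrived."""
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=30) as resp:
            resp.read(1)  # first body byte; headers arrived when urlopen returned
        return time.perf_counter() - start

    # Hypothetical URLs: compare the CDN/load-balancer path with the origin VM.
    for url in ("https://example.com/", "https://origin.example.com/"):
        samples = [ttfb(url) for _ in range(5)]
        print(f"{url}: min {min(samples):.3f}s, avg {sum(samples)/len(samples):.3f}s")

If the origin alone is slow, a WordPress page cache or object cache is usually the next thing to try before resizing the machine; if only the front-end path is slow, it is worth checking whether Cloud CDN is actually caching the HTML (Cache-Control headers on the origin responses).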
I have a Visual Studio load test that runs through the pages on a website, but I have seen big differences in performance when using a load balancer. If I run the tests straight against Web Server 1, bypassing the load balancer, I get an average page load time of under 1 second for 100 users, for example. If I direct the same test at the load balancer with 2 web servers behind it, I get an average page load time of about 30 seconds; it starts quick but then deteriorates. This is strange, because with 2 load-balanced web servers instead of 1 direct server I would expect to handle more load, not less.

I am testing this now with Azure Application Gateway and Azure VMs. I experienced the same problem previously with an nginx setup and assumed it was specific to that configuration, but now I see the same behaviour on Azure. Any thoughts would be great.
I had to completely disable the firewall to get consistent performance. I also ran into other issues with the firewall: a security module gave us max-entity-size errors, and after discussing with Azure Support it turned out this entity size cannot be configured, so keeping the firewall would mean some large pages would no longer function and would keep returning this error. This happened even with all rules disabled; I spent a lot of time experimenting with different rules on and off, and the SQL injection rules in particular didn't seem to like our ASP.NET Web Forms site. I have now simulated 1,000 concurrent users split between two test agents, and performance was good for our site, with average page load times well under a second.
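For anyone reproducing this, a rough harness along these lines can stand in for the Visual Studio load test; this is a minimal Python sketch, with the URL and user count as placeholders:

    import time
    import statistics
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "https://example.com/"   # placeholder: the page behind the load balancer
    USERS = 100                    # simulated concurrent users

    def fetch(_):
        """Fetch the page once and return the elapsed wall-clock time."""
        start = time.perf_counter()
        with urllib.request.urlopen(URL, timeout=60) as resp:
            resp.read()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=USERS) as pool:
        times = list(pool.map(fetch, range(USERS * 5)))  # 5 requests per user

    print(f"avg {statistics.mean(times):.2f}s  p95 {statistics.quantiles(times, n=20)[18]:.2f}s")

Pointing the same script first at a single web server and then at the load balancer, as described above, makes the degradation easy to see in the averages.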
Here is a list of things that helped me improve the same situation:
Add a non-SSL listener and use that (e.g. HTTP instead of HTTPS). Obviously this is not the advised solution, but it may give you a hint (offload SSL to the backend pool servers? Add more gateway instances?).
Disable WAF rules (slight improvement).
Disable WAF + add more gateway instances (increased from 2 to 4 in my case) - SOLVED THE PROBLEM!
SQL Azure is responding at 90 ms on average.
IIS hosted in an Azure VM is responding at 2,000 ms on average!
What can be done to improve the network speed of the Azure VM?
I have a Web API app hosted in an Azure virtual machine. This app connects to 20 SQL Azure databases using the Elastic Scale client API.
Response times from SQL Azure are good: 90 ms on average with 900 simultaneous users running queries against the Web API. I know SQL Azure is responding quickly because I am logging response times at the controller level in my Web API, including JSON deserialization of the response object.
However, in my load test I'm consistently getting response times of right around 2,000 ms!
So there's a 20x discrepancy between what SQL Azure is doing and what IIS in the virtual machine is returning. The virtual machine's network speed is now the bottleneck, and I can't figure out how to solve it.
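One way to see where the extra ~1,900 ms goes is to have the API stamp its own processing time into a response header and compare that with the client-observed wall time. A sketch in Python for brevity (the real app is .NET); the endpoint and the X-Server-Ms header are hypothetical:

    import time
    import urllib.request

    URL = "https://example.com/api/books"  # placeholder endpoint

    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=30) as resp:
        body = resp.read()
        # Hypothetical header the server would set around its controller logic:
        # milliseconds spent inside the controller, matching the existing logs.
        server_ms = float(resp.headers.get("X-Server-Ms", "0"))
    client_ms = (time.perf_counter() - start) * 1000

    # Whatever is left over is network transfer, TLS, queuing in IIS, and
    # serialization work outside the controller timer.
    print(f"client {client_ms:.0f} ms, server {server_ms:.0f} ms, "
          f"overhead {client_ms - server_ms:.0f} ms")

If the overhead stays near 1,900 ms regardless of payload size, the suspect is per-request queuing (IIS thread pool, TLS handshakes) rather than raw network throughput.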
I have looked through several posts and fixed the following:
Ensured power management is set to High Performance instead of Balanced.
However, the slow performance is still there.
I have the following setup:
Azure Virtual Machine D3 (4 cores, 14 GB memory). This is a USD 500 per month machine... it should be pretty fast, right?
20 SQL Azure databases at the Standard S3 level. I'm happy with these delivering 90 ms for 900 users; in fact, I think they can support far more than 900 users at that 90 ms average.
Web API on .NET 4.5.
I'm very happy that sharding has improved the performance of my database searches, but now the network speed of the virtual machine is degrading my overall app performance to the point of being a showstopper.
Any pointers on what I might be missing would be extremely appreciated.
Cheers,
Hector
We have a fairly large member site set up on AWS using a High-CPU Medium server. Most of the time it runs at very low utilization (~3%), but once a week we send out a newsletter to our members with opportunities. In the minutes after the newsletter goes out, the server load shoots up (sometimes to over 100%) as members try to access the site.
In the long term, we will be restructuring the system, but for now, I'd like to add an overflow server that will serve a 'try back in a few minutes' page to users while this is occurring.
I haven't been able to find any good how-tos on setting up routing for this type of thing. Any ideas?
Thanks!
Why not use Elastic Load Balancing along with Auto Scaling instead?
That would allow you to match the number of servers to your actual usage. For most of the week you would not be paying for 97% unused capacity, and during the newsletter peaks you would have enough capacity for everyone to log on and buy something from you.
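Since the newsletter goes out on a known weekly schedule, scheduled scaling actions are a natural fit. A minimal boto3 sketch; the group name and cron times are placeholders:

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    # Hypothetical Auto Scaling group: scale out shortly before the weekly
    # newsletter goes out, then scale back in two hours later (cron is UTC).
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="member-site-asg",
        ScheduledActionName="newsletter-scale-out",
        Recurrence="50 13 * * 1",  # Mondays at 13:50 UTC
        MinSize=4, MaxSize=8, DesiredCapacity=4,
    )
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="member-site-asg",
        ScheduledActionName="newsletter-scale-in",
        Recurrence="0 16 * * 1",
        MinSize=1, MaxSize=8, DesiredCapacity=1,
    )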
There is a post on the Amazon Web Services blog that explains how to do this. It puts the failover web page on S3, which is easy to maintain and cheap:
Create a Backup Website Using Route 53 DNS Failover and S3 Website Hosting
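The idea from the linked post, compressed into a boto3 sketch (hosted zone, domain, and IPs are placeholders): a health check watches the primary, and DNS fails over to the backup record when it goes unhealthy.

    import boto3

    route53 = boto3.client("route53")
    ZONE_ID = "Z123EXAMPLE"  # placeholder hosted zone

    # Health check that Route 53 runs against the primary server (placeholder IP).
    health_check_id = route53.create_health_check(
        CallerReference="primary-check-001",
        HealthCheckConfig={
            "IPAddress": "203.0.113.10",
            "Port": 80,
            "Type": "HTTP",
            "ResourcePath": "/health",
            "RequestInterval": 30,
            "FailureThreshold": 3,
        },
    )["HealthCheck"]["Id"]

    # Two records with the same name and type: PRIMARY answers while healthy,
    # SECONDARY takes over when the health check fails. The blog post points the
    # secondary at an S3 website endpoint via an alias record; a second server
    # IP is used here to keep the sketch short.
    route53.change_resource_record_sets(
        HostedZoneId=ZONE_ID,
        ChangeBatch={"Changes": [
            {"Action": "UPSERT", "ResourceRecordSet": {
                "Name": "www.example.com.", "Type": "A",
                "SetIdentifier": "primary", "Failover": "PRIMARY",
                "TTL": 60, "ResourceRecords": [{"Value": "203.0.113.10"}],
                "HealthCheckId": health_check_id,
            }},
            {"Action": "UPSERT", "ResourceRecordSet": {
                "Name": "www.example.com.", "Type": "A",
                "SetIdentifier": "secondary", "Failover": "SECONDARY",
                "TTL": 60, "ResourceRecords": [{"Value": "198.51.100.20"}],
            }},
        ]},
    )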
I have a Java-based web application hosted on Google App Engine. There is a simple web service call to the Amazon Product Advertising API to look up books when the user inputs a title. Everything runs fine in my local development environment. However, the web service call is annoyingly slow in production.
E.g., when I invoke the web service call in my dev environment, it takes about 3-4 seconds to get the response back. In production, the same call to the same API takes 15-16 seconds. There is no datastore activity involved at this point, just a web service call and displaying the results.
I am pretty sure this is not the initial-load issue others have reported with GAE in production; it has been consistently slow no matter how warmed up the instance is. I have searched everywhere, but nobody seems to be complaining about the same issue. Does anyone have any clue what this might be? Is there a good tool to tackle this kind of performance issue? Thank you!
Here is my update as of 01/23/2012:
I have identified the bottleneck: it takes about 10 seconds just to obtain the service port from the Amazon SOAP endpoint (I was using a SOAP-based web service client). My solution was to switch to a RESTful client, and the performance improved greatly. Now it takes only about 1 second to get the information back from Amazon.
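For illustration, the REST approach is a single HTTP round trip with no WSDL download or port-resolution step. A Python sketch with a placeholder endpoint (real Product Advertising API requests additionally need an access key and an HMAC signature):

    import time
    import urllib.parse
    import urllib.request

    # Placeholder REST endpoint; the real Product Advertising API also requires
    # signed requests, omitted here for brevity.
    params = urllib.parse.urlencode({"Operation": "ItemSearch", "Title": "Dune"})
    url = f"https://example.com/onca/xml?{params}"

    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=20) as resp:
        xml = resp.read()
    print(f"REST round trip: {time.perf_counter() - start:.2f}s, {len(xml)} bytes")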
The response speed of the Amazon APIs has nothing to do with the performance of GAE.
It's more likely that Amazon throttles access to their APIs per IP. Since GAE is a shared service with a common set of outbound IPs, other apps on GAE calling Amazon may be contributing to the delay. If this continues to be a problem, you might want to set up a proxy server somewhere (Amazon EC2?).
May I use a CDN for the whole website (PHP, Apache, MySQL), or just for images, CSS, and JS files?
What's the best choice: cloud hosting or dedicated hosting? Does a CDN support that?
Which hosting would you suggest as the best: the fastest, the most stable (100% uptime), with CDN, and not expensive at all?
CDN hosting is for static content only; it is never advised to host a dynamic application on a CDN.
A CDN is a content delivery network: your hosting company has edge servers at various locations across the globe. The job of these edge servers is to cache your content and deliver it to your clients. If an edge server does not have your content cached, it pulls the content from the origin server and delivers it to your visitor; if it has a cached copy, it delivers that immediately. This cache is typically refreshed every 12 hours, though it varies from host to host.
Since edge servers deliver cached copies, it is never advised to host dynamic websites on CDN hosting.
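One nuance worth adding: with most CDNs the refresh interval is not fixed at 12 hours; the origin controls it through the Cache-Control header. A minimal Python origin that asks edge servers to cache a page for 12 hours:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"<html><body>Hello from the origin</body></html>"
            self.send_response(200)
            # Tell edge servers they may cache this response for 12 hours
            # (43200 seconds); shorter values make the CDN re-fetch sooner.
            self.send_header("Cache-Control", "public, max-age=43200")
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8080), Handler).serve_forever()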
Question: What's the best choice, cloud hosting or dedicated hosting? Does a CDN support that?
Answer: Cloud hosting is superior in infrastructure: it has redundant arrays of disk drives and processors, and you will enjoy almost 100% uptime.
Question: Which hosting would you suggest as the best - the fastest, the most stable (100% uptime), with CDN, and not expensive?
Answer: From my professional experience, CDN hosting is the fastest, cloud hosting is stable with near-100% uptime, and VPS hosting is inexpensive. If you have to pick one of the three, cloud hosting is stable and cost-effective.
From the way the question was phrased, I think managed hosting would be most appropriate for your application.
It is fairly unlikely that you will run into any performance issues that are not self-inflicted (say, by writing suboptimal database queries or performing database processing in the frontend), unless you have a significant advertising budget and a mass-market application, in which case you should also have a mid-sized IT department that can roll a custom solution.
Weighing cost and reliability against each other can be left to the accountants, for the most part.