Very long time to load the site - performance

Help me work out why there is a delay of about two minutes before any page starts downloading, after which the site loads all at once. I haven't made any changes to the site in the past year. Everything was fine before; this started about two weeks ago. Site: www.proudandcurvy.co.uk

It's probably some kind of DNS timeout - is the web server configured to do DNS lookups? You really want to turn that off.
If you use Apache HTTPD, the configuration option you should be looking for is HostnameLookups, and it should be set to Off.
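A minimal sketch of the relevant directive, assuming a stock Apache configuration (the file may be httpd.conf or apache2.conf depending on the distribution):

    # httpd.conf / apache2.conf
    # Don't do a reverse DNS lookup for every client request just to write a
    # hostname into the access log; log analysers can resolve IPs afterwards.
    HostnameLookups Off

Also check that no Allow/Deny or Require rules use hostnames instead of IP addresses, since those force lookups as well.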

Related

Laravel slow on 1st time load

I have a Laravel app that loads fine on localhost, but when deployed on a shared 1&1 host server (for demo purposes) the first page load is very slow (up to 12s!). It only happens on the first page load; after that it works perfectly fine, as if the site had gone into "sleep" mode when not used for a while.
It does sound like a cache issue, though I have activated all the Laravel caches (views, config, routes...).
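For reference, these are the cache commands I ran (a sketch assuming Laravel 5.6+, executed from the project root on the host):

    php artisan config:cache   # merge all config files into one cached file
    php artisan route:cache    # serialize the route registrations
    php artisan view:cache     # precompile every Blade template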
Someone mentioned a similar problem on a GoDaddy shared host; their solution was to have a cron job pinging the site every minute to keep it alive. That probably works, but it's not a very satisfactory solution.
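A sketch of that keep-alive workaround as a crontab entry, with a placeholder URL and assuming curl exists on the host:

    # crontab -e : hit the site once a minute so the app never goes cold
    * * * * * curl -s -o /dev/null https://demo.example.com/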
The debugger/console are not showing much:
On first page load:
Queries 343ms
Route request 12.46s
The console is showing a 12.69s TTFB waiting time
After reload:
Queries 39.47ms
Route request 238ms
Console 334ms
Has anyone come across a similar issue before?
The issue seems to be related to the host. The same install on a different host works perfectly, so no real answer here.

Facing an issue with Magento 2.2.9

My site goes down every 2-3 days. It doesn't show any error up front; the browser keeps loading for a very long time, but no data appears. When I check the Apache error logs I find "MaxRequestWorkers limit exhausted". For the last 10 days I have been increasing that limit, which stretched the interval to about 5 days, but the site still goes down. The site was launched 45 days ago and ran perfectly for 30 days, and we have not observed any spike in traffic. The site is hosted on AWS; the plan is t2.2xlarge.
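For context, the limit from the error message lives in the Apache MPM configuration; a sketch assuming the event MPM, with illustrative values that must be sized against the memory actually free on the instance:

    # mpm_event.conf (use the mpm_prefork block instead if PHP runs via mod_php)
    <IfModule mpm_event_module>
        ServerLimit             16
        ThreadsPerChild         25
        MaxRequestWorkers       400     # must be <= ServerLimit * ThreadsPerChild
        MaxConnectionsPerChild  10000   # recycle workers periodically
    </IfModule>

Raising the limit only buys time if something (bots, slow backends) is holding workers open; the answer below is about finding that something.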
Do you use many filters for layered navigation? When bots hit it and you are using SQL search, it will exceed max connections and lock things up, over and over. That is one possible area to look at. I had this issue and had to block all bad bots in robots.txt. Check mostly for Chinese bots: block them by IP in .htaccess or the firewall, and tune robots.txt to instruct a crawl delay of 10 for bots. Connect your site to Cloudflare and tune it to disallow huge hits. In general, the Chinese bots are the ones that don't respect the rules and robots.txt, so I personally blocked all of China.
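A sketch of both measures, with placeholder IP ranges (note that many aggressive crawlers ignore Crawl-delay entirely, which is why the IP blocks and Cloudflare rules matter more):

    # robots.txt
    User-agent: *
    Crawl-delay: 10

    # .htaccess (Apache 2.4 syntax) - drop requests from abusive ranges
    <RequireAll>
        Require all granted
        Require not ip 203.0.113.0/24
        Require not ip 198.51.100.0/24
    </RequireAll>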

Why are my localhost HTTP response times so slow?

Using localhost and Tomcat 7, I'm seeing between 600-800ms per request in Chrome Developer tools for a specific webapp. Requests are JS files, CSS files, images or the initial server response. Some responses are less than 1KB, others are over 100KB.
As a result, it's taking around 10 seconds to load one page of the webapp. When I load the same webapp on our production server, it's taking less than 1 second to load an entire page.
I'm not sure where to continue debugging the issue...
I've ruled out it being a browser issue by testing in Safari too.
I've turned it off and on again (this reduced responses to 500-600ms overall)
I've cleared out my log files
I've ruled out the webapp's frontend entirely by hitting a resource directly, ex: http://ts.xyz.com:9091/1.0/toolsList/javascript/toolsList.js or http://ts.xyz.com:9091/awake
I've tested another webapp and that performs lightning-quick
So it has to be this particular app, and it has to be something local.
I've seen this kind of behaviour a long time ago, when the webserver (Apache httpd back then) was configured to do DNS lookups for logging - these took an awfully long time, especially when an IP could not be resolved. As it doesn't make sense for a localhost app to be orders of magnitude slower (especially when you're talking about serving static resources), I'd check for any network-related issues: database connections, logging configuration, DNS lookups, TLS server trust issues (with backends, database, LDAP or others).
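Since this question is about Tomcat 7 rather than httpd, the equivalent things to check live in conf/server.xml; a sketch with the lookup-related attributes spelled out (port, log pattern and so on will differ in your setup):

    <!-- conf/server.xml -->
    <Connector port="9091" protocol="HTTP/1.1"
               enableLookups="false" />  <!-- don't reverse-resolve client IPs -->

    <Valve className="org.apache.catalina.valves.AccessLogValve"
           directory="logs" prefix="localhost_access_log." suffix=".txt"
           pattern="%h %l %u %t &quot;%r&quot; %s %b"
           resolveHosts="false" />       <!-- Tomcat 7 access-log equivalent -->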
I can't decide whether to add this as "if everything else fails" or as "but first try this"... you decide:
Compare the setup of your production server with your development server (localhost) and make extra extra extra sure that there's no meaningful difference.

MVC3 speed on production IIS7

I have an MVC3 application which runs fast in my dev environment (even when pointed at the production database). However, when I publish the application and move it onto the production IIS7 environment it runs at a snail's pace. I understand that the initial load can take a few seconds as the application pool starts up, but this is taking 20+ seconds. Then it will be fast for a few clicks, and the next click will again take 20+ seconds.
I've put in MvcMiniProfiler and it doesn't look like the database is causing problems, but I also can't see what is. I can hit the same page multiple times and it comes back in a second or two, and then suddenly that same page will take 20+ seconds to respond.
Has anyone seen this sort of behaviour before? Any help would be greatly appreciated and I'm not sure what to try next.
It's possible that the other web apps running on your production server are locking required resources. Is there a common file or folder that multiple sites utilize? Are you sharing the app pool between any of the sites?
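A sketch of how to check and fix that with appcmd (pool, site and app names here are placeholders):

    REM See which pools exist and which applications share them
    %windir%\system32\inetsrv\appcmd list apppool
    %windir%\system32\inetsrv\appcmd list app

    REM Give the MVC3 app its own pool and stop IIS shutting it down when idle
    %windir%\system32\inetsrv\appcmd add apppool /name:"MyMvc3Pool" /managedRuntimeVersion:v4.0
    %windir%\system32\inetsrv\appcmd set app "Default Web Site/myapp" /applicationPool:"MyMvc3Pool"
    %windir%\system32\inetsrv\appcmd set apppool "MyMvc3Pool" /processModel.idleTimeout:00:00:00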

AppFabric (Velocity) as a state server

Is anybody using Windows AppFabric Server for out of process state management?
Any feedback, advice would be appreciated.
We are using AppFabric Caching. We tried this and it appeared to work, and it was easy to set up, etc. There are some very strange settings about persistence when setting up the cache which need to be read carefully.
Our issue: on two servers we installed IIS and AppFabric Caching and told the app to try the local one first. When we went into production it just started to fail. It appears that with only two servers there is a lead server, and if it goes down things stop working; we read that we needed to scale to 3 or more servers to get the behaviour we wanted. That was not an option having just gone live with a broken setup, so we switched to SQL Server for now while we look at NCache, ScaleOut and Memcached.
The other issue is that caching and session state are not the same animal: if you lose your cache it should not be the end of the world, you just put it back together, whereas we need to keep session state for the allotted time period at all costs.
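For completeness, the SQL Server fallback mentioned above is just the built-in ASP.NET session state provider; a sketch with a placeholder server name (the ASPState database is created once with aspnet_regsql.exe):

    <!-- web.config -->
    <system.web>
      <sessionState mode="SQLServer"
                    sqlConnectionString="Data Source=SQLBOX;Integrated Security=True"
                    timeout="20" />
    </system.web>

    REM one-off setup of the session database, run from the .NET framework directory
    aspnet_regsql.exe -S SQLBOX -E -ssadd -sstype p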
