Azure webapp files loading slow - performance

I have a problem with an Azure Web App that is loading very slowly (load times of over 3 minutes with a cleared cache).
The web app is a single page application, so the first load is the biggest.
Always On is enabled.
The tier is currently at B2.
An example of the slow loading:
The metrics of the App Service plan are the following:
CPU is at 6.3% average (25% peak).
Memory is at 33.7% average (40% peak).
Other applications on the same plan also seem to load slowly. It really seems to be the transfer of individual files that is slow.
When using the Firefox developer tools I can see the network communication and timings. For example, for the file vendor.bundle.js (6.77 MB), the browser waits 4,709 ms before the first byte (which seems too long?) and then takes 112,374 ms to receive it.
My location is Belgium and the Web App runs in the West Europe region. I found this very useful Azure speed test to determine whether that could be the problem; with a latency of 78 ms, it doesn't seem to be.
My own internet speed could have been the problem, as suggested, so I ran a speed test: 90.47 Mbps download and 18.00 Mbps upload. That seems OK to me.
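As a sanity check on those DevTools numbers, it's worth computing the effective throughput of that transfer. The figures below are the ones quoted above, and the arithmetic shows the download ran far below the measured line speed:

```python
# Effective throughput of the vendor.bundle.js transfer, using the
# DevTools figures quoted above (6.77 MB received in 112,374 ms).
size_megabits = 6.77 * 8            # MB -> megabits
download_seconds = 112_374 / 1000   # ms -> s
effective_mbps = size_megabits / download_seconds
print(f"{effective_mbps:.2f} Mbps")  # ~0.48 Mbps, vs. the 90 Mbps line speed
```

An effective rate around half a megabit per second on a 90 Mbps connection points at the server side (App Service outbound throughput) rather than the client link.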
Any thoughts?

Related

Sitecore page load slowness

I'm using Sitecore instance 9.1, Solr 7.2.1, and SXA 1.8.
I have deployed the environment on Azure and while monitoring incoming requests (to CD instance), I've noticed slowness in loading some pages at specific times.
I've explored App Insights and found unexplainable behavior: the request takes 28.7 seconds, while its breakdown shows executions of milliseconds. How is that possible? And how can I explain what happens during the extra 28 seconds on the App Service?
I've also checked the profiler, and it shows that the thread takes only 1042.48 ms. How is that possible?
This is an intermittent issue that happens during the day; regular requests are served within 3 to 4 seconds.
I noticed that Azure often shows a profile trace for a "similar", but completely different request when clicking from the End-to-end transaction view. You can check this by comparing the timestamp and URL of the profile trace and the transaction you clicked from.
For example, I see a transaction logged at 8:58:39 PM, 2021-09-25 with 9.1 s response time:
However, when I click the profile trace icon, Azure takes me to a trace that was captured 10 minutes earlier, at 08:49:20 PM, 2021-09-25 and took only 121.64 ms:
So, if the issue you experience is intermittent and you cannot replicate it easily, try looking at the profile traces with the Slowest wall clock time by going to Application Insights → Performance → Drill into profile traces:
This will show you the worst-performing requests captured by the profiler at the top of the list:
In order to figure out why it is slow, you'll need to understand what happens internally, e.g.:
How is the wall clock time spent while processing your request?
Are there any locks internally?
The source of that data is dynamic profiling, which Azure can do on demand.
The IIS stats report will show you the slowest requests, so you can look at the Thread Time distribution to see where those 28 seconds are spent:
In Sitecore, the initial prefetch configuration allows prefetch caches to be pre-populated when the application starts. Pre-heated prefetch caches help reduce the processing time of incoming requests, but loading them adds time during startup.
A Sitecore XP instance can also take too long to load because of a performance issue in the CatalogRepository.GetCatalogItems method. It will be fixed in upcoming updates; see the Sitecore Knowledge Base.
In Sitecore XP 9.0 the initial prefetch configuration was revised. The prefetch cache for the core database was configured to include items that are used to render the Sitecore Client interface.
The Sitecore Client interface is not used on Content Delivery instances. Disabling initial prefetch configuration for the Core database helps in avoiding excessive resource consumption on the SQL Server hosting the Core database.
Change the configuration of the Core database in the \App_Config\Sitecore.config file:
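A sketch of what that change can look like is below. This is illustrative only: the exact element names and attributes vary by Sitecore version, so verify against the Sitecore Knowledge Base article before applying it.

```xml
<!-- Illustrative sketch of \App_Config\Sitecore.config on a CD instance.
     Element names follow the usual Sitecore.config layout; check the
     Sitecore KB article for the exact change for your version. -->
<databases>
  <database id="core" singleInstance="true"
            type="Sitecore.Data.DefaultDatabase, Sitecore.Kernel">
    <!-- On Content Delivery instances the Sitecore Client is not used,
         so the Core database prefetch entries can be removed/emptied to
         avoid pre-populating those caches at startup. -->
    <prefetch hint="raw:AddPrefetch">
      <!-- prefetch entries removed on CD -->
    </prefetch>
  </database>
</databases>
```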
Refer to the Sitecore Knowledge Base.

Elasticsearch speed vs. Cloud (localhost to production)

I have a single ELK stack with a single node running in a Vagrant virtual box on my machine. It has 3 indexes, which are 90 MB, 3.6 GB, and 38 GB.
At the same time, I also have a JavaScript application running on the host machine, consuming data from Elasticsearch. Locally it runs with no problems; the speed and everything is perfect.
The issue comes when I put my Javascript application in production, as the Elasticsearch endpoint in the application has to go from localhost:9200 to MyDomainName.com:9200. The speed of the application runs fine within the company, but when I access it from home, the speed drastically decreases and often crashes. However, when I go to Kibana from home, running query there is fine.
The company uses BT broadband with a download speed of 60 Mb and 20 Mb upload. It doesn't use a fixed IP, so I have to update the A record manually whenever the IP changes, but I don't think that is relevant to the problem.
Is internet speed the main issue affecting the loading speed outside of the company? How do I improve this? Is the cloud (a CDN?) the only option that would make things run faster? If so, how much would it cost to host it in the cloud, assuming I index a lot of documents the first time but a daily maximum of 10 MB of indexing after that?
UPDATE1: Metrics from sending a request from Home using Chrome > Network
Queued at 32.77s
Started at 32.77s
Resource Scheduling
- Queueing 0.37 ms
Connection Start
- Stalled 38.32s
- DNS Lookup 0.22ms
- Initial Connection
Request/Response
- Request sent 48 μs
- Waiting (TTFB) 436.61 ms
- Content Download 0.58 ms
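To separate a connection-setup stall from server wait time outside the browser, a request can be broken into the same phases Chrome reports above. This is only a sketch: it runs against a throwaway local HTTP server standing in for the real endpoint (you would point `timed_request` at `MyDomainName.com:9200` to test the actual Elasticsearch host):

```python
import http.client
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

# A tiny local server stands in for the Elasticsearch endpoint so the
# timing logic can be demonstrated without a real cluster.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'{"status":"ok"}'
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address

def timed_request(host, port, path="/"):
    """Break one HTTP request into connect / TTFB / download phases,
    mirroring the Chrome Network panel columns quoted above."""
    t0 = time.perf_counter()
    conn = http.client.HTTPConnection(host, port, timeout=10)
    conn.connect()                 # "Initial Connection" (plus any stall)
    t_connect = time.perf_counter()
    conn.request("GET", path)
    resp = conn.getresponse()      # first byte arrives here
    t_ttfb = time.perf_counter()
    resp.read()                    # "Content Download"
    t_done = time.perf_counter()
    conn.close()
    return {
        "connect_ms": (t_connect - t0) * 1000,
        "ttfb_ms": (t_ttfb - t_connect) * 1000,
        "download_ms": (t_done - t_ttfb) * 1000,
    }

print(timed_request(host, port))
```

If `connect_ms` against the real host is large from home but small over a VPN, that points at something on the network path (routing, the broadband provider, or a middlebox) rather than at Elasticsearch itself.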
UPDATE2:
The stalling period seems to be much shorter when I use a VPN?

What can cause this slow load pattern - Firebug

We are eventually going to use CDN but currently all files are loaded from the same domain as the main site.
I am perplexed to see close to 15 secs of blocked time. The site loads reasonably fast at my end but at the client's environment it takes 30 sec for the first time load.
All optimization for loading has been implemented (gzip, expiry...etc) and the site ranks 95 in gtmetrix.
Any suggestions as to why this might be happening at the clients computer ?
First of all, note that Firebug's 'Blocking' time is actually the time required to establish a connection with the server. In the Firefox DevTools this is more obvious:
The connecting time may be influenced by several factors. In your case it looks like the images are blocked by the styles.css file, which takes extremely long to load relative to its size. I assume this is caused by a proxy server, a web application firewall, or something else that slows down the receiving of that file.

Why is the latency of my GAE app serving static files so high?

I was checking the performance of my Go application on GAE, and I thought that the response time for a static file was quite high (183ms). Is it? Why is it? What can I do about it?
64.103.25.105 - - [07/Feb/2013:04:10:03 -0800] "GET /css/bootstrap-responsive.css
HTTP/1.1" 200 21752 - "Go http package" "example.com" ms=183 cpu_ms=0
"Regular" 200 ms seems on the high side of things for static files. I serve a static version of the same "bootstrap-responsive.css" from my application and I can see two types of answer times:
50-100ms (most of the time)
150-500ms (sometimes)
Since I have a ping roundtrip of more or less 50ms to google app engine, it seems the file is usually served within 50ms or so.
I would guess the 150-300 ms response times are related to the Google App Engine frontend server being "cold cached". I presume that retrieving the file from persistent storage involves higher latency than serving it from the frontend server's cache.
I also assume that you can hit various frontend servers and get sporadic higher latencies.
Lastly, the overall perceived latency from a browser should be closely approximated by:
(tc)ping round trip + tcp/http queuing/buffering at the frontend server + file serving application time (as seen in your google app logs) + time to transfer the file.
If the frontend server is not overloaded and the file is small, the latency should be close to ping + serving time.
In my case, 50ms (ping) + 35ms (serving) = 85ms, is quite close to what I see in my browser 95ms.
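That back-of-the-envelope model can be written down explicitly. The function below simply restates the formula above, with queueing and transfer time assumed negligible for a small cached file:

```python
def perceived_latency_ms(ping_rtt_ms, queueing_ms, serving_ms, transfer_ms):
    """Rough model of browser-perceived latency for a static file,
    per the breakdown above: ping round trip + frontend queueing/buffering
    + file-serving time (from the app logs) + file transfer time."""
    return ping_rtt_ms + queueing_ms + serving_ms + transfer_ms

# The numbers from this answer: 50 ms ping + 35 ms serving time,
# queueing and transfer assumed negligible for a small file.
print(perceived_latency_ms(50, 0, 35, 0))  # 85, close to the observed ~95 ms
```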
Finally, if your app is serving a lot of requests, they may get queued, introducing a delay that is not "visible" in the application logs.
For a comparison I tested a site using tools.pingdom.com
Pingdom reported a Load time of 218ms
Here was the result from the logs:
2013-02-11 22:28:26.773 /stylesheets/bootstrap.min.css 200 35ms 45kb
Another test resulting in 238ms from Pingdom and 2ms in the logs.
Therefore, I would say that your 183 ms seems relatively good. There are many factors at play:
Your location relative to the server
Whether the server serving the resource is overloaded
You could try serving the files from a Go handler instead of App Engine's static file server. I tested this some time ago; the results were occasionally faster, but the speeds were less consistent, and response time increased under load because an App Engine instance is limited to 10 concurrent requests. Not to mention you will be billed for the instance time.
Edit:
For a comparison to other Cloud / CDN providers see Cedexis's - Free Country Reports
You should try setting caching on static files.
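On App Engine, cache lifetimes for static files are configured in app.yaml. The sketch below shows the general shape; the paths and durations are illustrative, not taken from the question:

```yaml
# app.yaml sketch: cache static files at App Engine's edge.
# "default_expiration" applies to every static file handler unless a
# handler sets its own "expiration". Paths and durations are examples.
default_expiration: "1d"

handlers:
- url: /css
  static_dir: css
  expiration: "7d"
```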

ASP.NET MVC lost in finding bottleneck

I have an ASP.NET MVC app that accepts file uploads and does result polling using SignalR. The app is hosted on a Prod server with IIS 7, 4 GB RAM and a two-core CPU.
The app works perfectly on the Dev server, but when I host it on the Prod server, with about 50,000 users per day, it becomes unresponsive after five minutes of running. The page request time increases dramatically, and it takes about 30 seconds to load one page. I tried recording all MvcApplication.Application_BeginRequest event calls and got 9,000 hits in 5 minutes. I'm not sure whether that is an acceptable number of hits for an app like this.
I have used ANTS Performance Profiler (not useful for profiling the Prod app: it's slow and eats all the memory) to profile the code, but the profiler does not show any time-delay issues in my code or MSSQL queries.
I have also monitored for CPU and RAM spikes but didn't find any. CPU percentage sometimes goes to 15% but never higher, and memory usage is normal.
I suspect that there is something wrong with request or threads limits in ASP.NET/IIS7 but don't know how to profile it.
Could someone suggest any profiling solutions that could help in this situation? I've been trying to hunt down the problem for two weeks already, without any result :(
You may try using MiniProfiler, and more specifically the MiniProfiler.MVC3 NuGet package, which is created specifically for ASP.NET MVC applications. It will show you all kinds of useful information, such as the time spent in different methods during the execution of the request.

Resources