angular service calls take time only sometimes - performance

I have a web app with an Angular front end and a .NET Web API backend.
There are times when the application becomes slow, and during those periods I have seen in the Chrome DevTools network capture that some of the service calls take longer. If I call the same services directly using Postman or JMeter they are very fast, but through the UI they take time.
[Update] The slow services are random. On the next run, other services take time, not necessarily the ones that were found slower earlier.
This happens sometimes, not always. When the app performs fine, all services respond in milliseconds, and they continue to do so when called directly as well.
Any pointers?
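One way to narrow this down is to log per-request timings from an Angular HTTP interceptor and compare them with the server's own logs: if the browser-side number is large while the server logs a fast response, the time is being lost on the client rather than in the API. A minimal sketch, assuming a standard Angular setup (the class name is mine):

```typescript
// Diagnostic sketch: log how long each HTTP call takes as seen from the browser.
import { Injectable } from '@angular/core';
import {
  HttpEvent, HttpHandler, HttpInterceptor, HttpRequest, HttpResponse
} from '@angular/common/http';
import { Observable } from 'rxjs';
import { tap } from 'rxjs/operators';

@Injectable()
export class TimingInterceptor implements HttpInterceptor {
  intercept(req: HttpRequest<unknown>, next: HttpHandler): Observable<HttpEvent<unknown>> {
    const started = performance.now();
    return next.handle(req).pipe(
      tap(event => {
        if (event instanceof HttpResponse) {
          const elapsed = Math.round(performance.now() - started);
          console.log(`${req.method} ${req.urlWithParams} took ${elapsed} ms`);
        }
      })
    );
  }
}
```

Register it with `{ provide: HTTP_INTERCEPTORS, useClass: TimingInterceptor, multi: true }`. Chrome's waterfall also shows a Stalled/Queueing phase for exactly this case: browsers allow only about six concurrent HTTP/1.1 connections per host, so a burst of simultaneous Angular calls can queue behind each other while a single Postman or JMeter request never does.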

Related

understanding the network tab in the chrome console

I'm in the process of writing an app that builds a table of Trello card data based on multiple API calls, and while the app works, I'm finding that performance degrades considerably the longer it runs. The initial calls take a couple of seconds, while later calls (after 100 runs or so) take upwards of a minute.
Looking at the XHR Network tab in my Chrome console, I can see that the bulk of each call is taken up by the 'Content Download' phase of the Ajax call. I'm curious whether this means the issue is with my application, or whether the problem resides with the API I'm trying to call. I'm a bit of a novice, so my terminology is probably not appropriate here.
The Content Download time is the time spent downloading the response body from the server.
A very long time here can be due to a slow connection on either the client or the server side.
In your capture, the TTFB (time to first byte) is about 200 ms, so your server starts sending data after 200 ms. Your server-side processing seems to be OK.
You can click on the Explanation link in the timing tab for further information.
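If you want the same breakdown programmatically instead of reading the waterfall, the browser's Resource Timing API exposes comparable phases; a rough sketch:

```typescript
// Rough sketch: responseStart - requestStart approximates TTFB, and
// responseEnd - responseStart approximates DevTools' "Content Download".
const entries = performance.getEntriesByType('resource') as PerformanceResourceTiming[];

for (const e of entries) {
  const ttfb = e.responseStart - e.requestStart;
  const download = e.responseEnd - e.responseStart;
  console.log(`${e.name}: TTFB ${ttfb.toFixed(0)} ms, download ${download.toFixed(0)} ms`);
}
```

Note that for cross-origin calls (such as a third-party API) these fields read as 0 unless the server sends a Timing-Allow-Origin header.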

WCF - WebHttpBinding - RESTful - Performance Issue

First-time poster, so go easy on me.
I am currently trying to address a performance issue when hitting my web service after a one-minute period of inactivity. Literally, after one minute of that user not hitting the web service, the next call takes 15 seconds before actually reaching the service operation. If you keep making random service operation calls (not the same operation each time, just so you don't think the call is being cached), the service returns immediately (in less than a second).
Here are some timings I took so you can see how I came to the one minute of inactivity:
2:04PM
2:16PM --15 seconds
2:21PM --15 seconds
2:24PM --15 seconds
2:25PM --15 seconds
Again, if you hit the web service continuously, without a one-minute period of inactivity, then ALL methods return in less than a second.
Here are some details regarding my web service:
WCF, WebHttpBinding, RESTful, using HTTPS.
Basic Authentication plus custom authentication using IDispatchMessageInspector. Authentication happens with EVERY call (except to the Initializer.aspx page).
A custom Initialization.aspx page has been created and is called every night after the Application Pool is recycled. This page caches a bunch of global data used by all users and also kicks off compilation.
The Application Pool only recycles each night at 2 AM. Worker processes are never killed off because the idle timeout is disabled.
I heard about ReliableSession, but as the setting implies, that sounds like it would only work for PerSession, not PerCall.
Is there any way to resolve this, or am I stuck resorting to "pinging" the server every 45 seconds with a dummy service operation?
Found out the issue. We have multiple domain controllers. When a user was being authenticated, the lookup would start at the forest level and work its way down to the actual domain controller for that server's domain. The firewalls that were put in place blocked the web server from all domain controllers except the one it resided on.
So basically, authentication would fail to communicate with each of the other domain controllers in turn until it finally reached the only one it could.
You can fix this a number of ways, but we just created firewall rules to allow the web server to communicate with the domain controller the users needed to be authenticated against.
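For anyone who cannot change the firewall rules, the "dummy ping" workaround from the question is at least easy to script; a hypothetical sketch (the ping URL is assumed), runnable in Node 18+ or a browser:

```typescript
// Hypothetical keep-alive: hit a cheap service operation every 45 s so the
// slow first-call path (here, the domain controller lookup) stays warm.
const PING_URL = 'https://example.com/service/ping'; // assumed dummy operation

setInterval(async () => {
  try {
    await fetch(PING_URL);
  } catch (err) {
    console.warn('keep-alive ping failed', err);
  }
}, 45_000);
```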

load duration of web page differs

For testing purposes I measure the time it takes for parsing, DB access, posting, and rendering of one of my PHP web pages in the browser (using Firebug's network tool). When I press F5 after clearing the cache with "Delete recent data" it takes about 5 seconds; when I hit Ctrl-F5 it takes about 20 seconds.
Isn't that the same? What's the difference between them? And what is the recommended way to test the performance of PHP code and DB access?
Thank you very much in advance ...
There could be any number of reasons, most of which come down to how the browser handles the two reloads: a plain F5 may reuse or revalidate cached resources, while Ctrl-F5 forces a full reload that bypasses the cache entirely, so the two are not measuring the same thing.
You cannot reliably test performance on the client side, since clients differ a lot and network latency is even harder to rule out.
You should do this all on the server side: start a timer when the request reaches your web server and stop it when the response leaves. If that is a bit difficult, then in the PHP script itself you can run a wrapper script that starts a timer, has a require statement for the script you want to measure, and then stops the timer.
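The answer describes that wrapper in PHP terms (start a timer, require the script, stop the timer). Purely to illustrate the pattern, here is the same idea as a generic server-side helper in TypeScript, with hypothetical names:

```typescript
// Sketch: time a unit of server-side work, independent of browser and network.
async function timed<T>(label: string, work: () => Promise<T>): Promise<T> {
  const start = performance.now();
  try {
    return await work();
  } finally {
    console.log(`${label} took ${(performance.now() - start).toFixed(1)} ms`);
  }
}

// Usage: wrap the code path under test (parsing, DB access, rendering).
async function handleRequest(): Promise<void> {
  await timed('render page', async () => {
    // ... query the database and render the page here ...
  });
}
```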

ASP.NET Web API initial requests are taking too long

Initial requests take 3-5 seconds; subsequent calls take < 500 milliseconds. The service makes a lightweight stored procedure call, and we find no latency when we profile it.
This service is not hit very frequently, and the Idle Time-out of the process model is 20 minutes (the default). We tried hitting it every 19 minutes, but saw no change.
The service runs under IIS 7.5 with .NET Framework 4.5.
The first request to an ASP.NET website always takes some time because the code needs to be compiled. You can install a module for IIS that automatically makes that initial request, so you don't take that slowdown yourself.
http://www.iis.net/downloads/microsoft/application-initialization
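Once the module is installed, the warm-up request is configured in web.config; a minimal sketch (the initializationPage value is mine - point it at any cheap endpoint):

```xml
<!-- web.config: IIS replays this request after each application pool start -->
<system.webServer>
  <applicationInitialization doAppInitAfterRestart="true">
    <add initializationPage="/api/warmup" /> <!-- hypothetical endpoint -->
  </applicationInitialization>
</system.webServer>
```

Setting startMode="AlwaysRunning" on the application pool as well keeps the worker process alive, so the idle timeout never triggers a cold start in the first place.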

performance of accessing a mono server application via remoting

This is my setup: I have written a .NET application for local client machines which implements a feature that could also be used on a webpage. To keep this example simple, assume that the client installs software into which they can enter some data and get some data back.
The idea is to create a webpage that holds a form into which the user enters the same data and gets the same results back as above. Given the company's available web servers, the first idea was to create a Mono webservice, but this was dismissed for reasons unknown. The "service" is not to be run as a webservice, but should be called by a PHP script. This is currently realized by calling the Mono application via shell_exec from PHP.
So now I am stuck with a Mono port of my application, which works fine but takes far too long to execute. I have already stripped out all unnecessary DLLs, methods, etc., but calling the application via the command line - submitting the desired data via command-line parameters - takes approximately 700 ms. We expect about 10 hits per second, so this could only work by setting up a lot of servers for this task.
I assume the 700 ms is mostly the cost of starting the application each time, because it makes little difference in total time whether I handle the request once or five hundred times (I take the original input, vary it slightly, and do 500 iterations with "new" data every time; from the second iteration onwards, the processing time drops to approximately 1 ms per iteration).
My next idea was to set up the Mono application as a remoting server, so that it only has to be started once and can then handle incoming requests. I therefore wrote another Mono application that serves as the client. Calling the client, letting it pass the data to the server, and retrieving the result now takes 344 ms. This is better, but still much slower than I would expect and want it to be.
I then implemented a new project from scratch based on this blog post and ran into the same performance issues.
The question is: am I missing something related to the Mono projects that could improve the speed of the client/server setup? Although the idea of creating a webservice for this task was dismissed, would a webservice perform better under these circumstances (as I would not need the client application to call the service), even though remoting is said to be faster than webservices?
I could have made that clearer, but implementing a webservice is currently not an option (and please don't ask why, I didn't write the requirements ;))
Meanwhile I have checked that it is indeed the startup of the client that takes most of the time in the remoting scenario.
I could imagine accessing the server via pipes from the command line, which would be perfectly suitable in my scenario. I guess this would be done using sockets?
You can try using AOT to reduce the startup time. On .NET you would use ngen for that purpose; on Mono, just run mono --aot on all assemblies used by your application.
AOT-compiled code is slower than JIT-compiled code, but has the advantage of a reduced startup time.
You can even try to AOT framework assemblies such as mscorlib and System.
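Concretely, that looks something like this (assembly names and paths are hypothetical):

```sh
# Mono: AOT-compile the application, and optionally framework assemblies
mono --aot MyApp.exe
mono --aot /usr/lib/mono/4.5/mscorlib.dll

# Windows/.NET equivalent: precompile with ngen
ngen install MyApp.exe
```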
I believe that remoting is not an ideal fit for this scenario. However, your idea of keeping Mono resident on the server instead of starting it every time is indeed solid.
Did you consider using SOAP webservices over HTTP? That would also help you with your 'web page' scenario.
Even if it is a little too slow for you, in my experience a custom RESTful service implementation would be easier to work with than remoting.
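To make the suggested architecture concrete: the point is to keep one warm process serving requests over HTTP, so the ~700 ms startup cost is paid once rather than per call, and PHP replaces shell_exec with an ordinary HTTP request. The real implementation here would be the Mono application; this TypeScript/Node sketch only illustrates the shape of the pattern:

```typescript
// Illustration of the resident-process pattern: start once, stay warm,
// answer each request with the ~1 ms computation instead of a ~700 ms spawn.
import { createServer } from 'node:http';

createServer((req, res) => {
  let body = '';
  req.on('data', chunk => (body += chunk));
  req.on('end', () => {
    // ... the actual computation would run here on the already-warm process ...
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end(`result for: ${body}`);
  });
}).listen(8080);
```

On the PHP side, a curl call (or file_get_contents) against http://localhost:8080 would then replace the shell_exec invocation.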
