We've recently ported an API of ours from IIS hosting into a console app (to be hosted with Owin + TopShelf as a Windows service) and have been performance profiling the two hosting options using JMeter.
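For reference, the console host is wired up roughly like this (a simplified sketch rather than our exact code; the class name, URL and service name are illustrative):

    // Self-host sketch using Microsoft.Owin.Hosting (Katana) and Topshelf.
    using System;
    using System.Web.Http;
    using Microsoft.Owin.Hosting;
    using Owin;
    using Topshelf;

    public class ApiService
    {
        private IDisposable _webApp;

        public void Start()
        {
            // Start the Web API pipeline on the HttpListener-based Owin server.
            _webApp = WebApp.Start("http://localhost:8080", app =>
            {
                var config = new HttpConfiguration();
                config.MapHttpAttributeRoutes();
                app.UseWebApi(config);
            });
        }

        public void Stop()
        {
            _webApp.Dispose();
        }
    }

    internal static class Program
    {
        private static void Main()
        {
            HostFactory.Run(x =>
            {
                x.Service<ApiService>(s =>
                {
                    s.ConstructUsing(name => new ApiService());
                    s.WhenStarted(svc => svc.Start());
                    s.WhenStopped(svc => svc.Stop());
                });
                x.RunAsLocalSystem();
                x.SetServiceName("OurApiService");
            });
        }
    }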
We throw 18 threads at the APIs and get differing results from the IIS-hosted vs console-hosted versions, specifically as follows:
Response times through IIS are slower. This isn't surprising as the pipeline in IIS is more involved.
Throughput through IIS is consistent, i.e. we don't see significant increases or decreases (we achieve about 5,500 requests/responses per minute).
Throughput when hosted in the console app starts off very high (20,000 per minute) but degrades quickly to approximately 4,500 per minute over a 10-minute period.
We're trying to determine the cause of this throughput drop when hosting as a console app. Why do we start at 20,000 requests per minute (presumably extrapolated from initial response times before it has run for a full minute) but degrade to 4,500?
Other things of note: CPU isn't a concern. It fluctuates to start with but settles below 30%, and memory averages 1.34 GB on a 4 GB RAM machine.
Why might the throughput under IIS be stable, and why does it degrade when hosted via Microsoft Owin self-hosting in a console app (given stable CPU and memory)?
Incidentally, we're trying to isolate the pieces of code that could be causing the degradation.
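One thing we're planning to try, so we can tell whether the extra time accrues in our handlers or before they are even reached, is a simple timing middleware at the front of the Owin pipeline (an illustrative sketch, not our production code; the 200 ms threshold is arbitrary):

    // Logs any request that takes longer than a threshold; register it first in Startup.
    using System;
    using System.Diagnostics;
    using System.Threading.Tasks;
    using Microsoft.Owin;
    using Owin;

    public static class RequestTimingExtensions
    {
        public static IAppBuilder UseRequestTiming(this IAppBuilder app)
        {
            return app.Use(async (IOwinContext context, Func<Task> next) =>
            {
                var sw = Stopwatch.StartNew();
                await next();
                sw.Stop();

                if (sw.ElapsedMilliseconds > 200)
                {
                    Console.WriteLine("{0} {1} took {2} ms",
                        context.Request.Method, context.Request.Path, sw.ElapsedMilliseconds);
                }
            });
        }
    }

If the per-request times logged here stay flat while JMeter's reported throughput falls, that would point at the host/listener (connection handling, queuing) rather than at our own code.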
Any thoughts on this would be appreciated.
Related
Getting a 503 error while running JMeter with 400 thread users. Is it because of server issues? When I run the thread group with 100 users and a ramp-up period of 25 seconds it works fine, but with 400 users it gives a 503 error.
Given that you don't experience any issues with 100 users but do with 400, it's most probably a server-side issue connected with overload, so congratulations on finding the bottleneck.
You can either report it as is or dig a little deeper to find the cause. Suggested steps:
Instead of kicking off 400 users at once, try increasing the load gradually while looking at the Response Times vs Threads and Transaction Throughput vs Threads charts. Ideally, response time should remain the same and throughput should grow as the number of threads increases. When response time starts increasing and throughput starts decreasing, you have hit the saturation point, and at that stage you can state that this is the maximum number of users your application can support.
Check your application logs and configuration, as the application might not be properly tuned for high loads. You can use 15 Simple ASP.NET Performance Tuning Tips as a reference, or look for a similar guide for your technology stack.
Ensure that your application has enough headroom to operate in terms of CPU, RAM, network, etc., as the problem might simply be a lack of resources. This can be checked using, for example, the JMeter PerfMon Plugin.
Repeat your test with profiler telemetry in place; this way you will be able to localize the problem and see where the problematic piece of code or inefficient algorithm lives (a lightweight in-app alternative is sketched below).
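For that last step, if the backend is ASP.NET (as the tuning guide above assumes), you can also get rough per-request timings without a full profiler, for example:

    // Global.asax.cs sketch: time every request and log the slow ones.
    // The 500 ms threshold is arbitrary; swap Trace for your logging framework.
    using System.Diagnostics;
    using System.Web;

    public class Global : HttpApplication
    {
        protected void Application_BeginRequest()
        {
            HttpContext.Current.Items["RequestStopwatch"] = Stopwatch.StartNew();
        }

        protected void Application_EndRequest()
        {
            var sw = HttpContext.Current.Items["RequestStopwatch"] as Stopwatch;
            if (sw != null && sw.ElapsedMilliseconds > 500)
            {
                Trace.WriteLine(string.Format("Slow request: {0} took {1} ms",
                    HttpContext.Current.Request.RawUrl, sw.ElapsedMilliseconds));
            }
        }
    }

If the slow requests cluster around particular URLs, that narrows down where to point the profiler.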
If the server isn't down or being restarted, then yes, a 503 indicates overload.
Common causes are a server that is down for maintenance or one that is overloaded.
You need to find what stops the server from serving 400 concurrent requests/users.
Note that if you are testing on a test environment that isn't equal or similar to the production environment, it may not reflect the load that the production server can endure.
I am looking to implement some alerting and monitoring in my organisation and have a few questions in regards to IIS and CPU usage.
1) If there is a process (it could be IIS itself) taking up a large amount of CPU on the same machine as IIS, what effect does that have on client response rates? i.e. does it take much longer to reply to a client's request?
2) In IIS land, are there any gold-standard metrics that should be monitored to get an idea of the "health" of a Windows server running a bunch of web services, i.e. CPU %, request/response times, memory, etc.?
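(For context, the sort of thing I'm currently sampling looks like the sketch below; the counter and category names are the standard Windows/ASP.NET ones, and the instance names are illustrative.)

    // Sketch: sample a few commonly watched server/IIS/ASP.NET health counters.
    using System;
    using System.Diagnostics;
    using System.Threading;

    internal static class HealthSampler
    {
        private static void Main()
        {
            var counters = new[]
            {
                new PerformanceCounter("Processor", "% Processor Time", "_Total"),
                new PerformanceCounter("Memory", "Available MBytes"),
                new PerformanceCounter("ASP.NET", "Requests Queued"),
                new PerformanceCounter("ASP.NET Applications", "Requests/Sec", "__Total__"),
                new PerformanceCounter("Web Service", "Current Connections", "_Total")
            };

            while (true)
            {
                // Rate counters need two reads; the first NextValue() returns 0.
                foreach (var c in counters)
                {
                    Console.WriteLine("{0}\\{1}: {2:0.0}", c.CategoryName, c.CounterName, c.NextValue());
                }
                Console.WriteLine();
                Thread.Sleep(5000);
            }
        }
    }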
I have two REST endpoints driving some navigation in a web site. Both create nearly the same response, but one gets its data straight from the db whereas the other has to ask a search engine (solr) first to get some data and then do the db calls.
If I profile both endpoints via JProfiler, I get a higher runtime (approx. 60% more) for the second one (about 31 ms vs. 53 ms). That's as expected.
If I view the same AJAX calls from the client side, I get a very different picture.
The faster of the two calls takes about 146 ms of waiting and network time.
The slower of the two calls takes about 1.4 seconds of waiting and network time.
Frontend timing is measured via Chrome developer tools. The server is Tomcat 7.0.30 running in STS 3.2. Client and server live on the same system (the DB and Solr are external), so there should be no network latency between Tomcat and the browser. As a side note, the faster response has the bigger payload (2.6 vs. 4.5 kB).
I have no idea why the slower of the two calls takes about 60% more server time but, in total, nearly 1000% more "frontend time".
The question is: is there any way I can figure out where these timing differences originate?
By default, the CPU views in JProfiler show times in the "Runnable" thread state. If a thread reads data from a socket connection or waits for some condition, that time is not included in the "Runnable" thread state.
In the upper right corner of the CPU views there is a thread state selector. If you change that to "All states", you will get times that you can compare with the wall clock times from the browser.
I was checking the performance of my Go application on GAE, and I thought that the response time for a static file was quite high (183ms). Is it? Why is it? What can I do about it?
64.103.25.105 - - [07/Feb/2013:04:10:03 -0800] "GET /css/bootstrap-responsive.css
HTTP/1.1" 200 21752 - "Go http package" "example.com" ms=183 cpu_ms=0
"Regular" 200 ms seems on the high side of things for static files. I serve a static version of the same "bootstrap-responsive.css" from my application and I can see two types of answer times:
50-100ms (most of the time)
150-500ms (sometimes)
Since I have a ping roundtrip of more or less 50ms to google app engine, it seems the file is usually served within 50ms or so.
I would guess the 150-300 ms response times are related to the Google App Engine frontend server being "cold cached". I presume that retrieving the file from persistent storage involves higher latency than when it is already in the frontend server's cache.
I also assume that you can hit various frontend servers and get sporadically higher latencies.
Lastly, the overall perceived latency from a browser should be closely approximated by:
(tc)ping round trip + TCP/HTTP queuing/buffering at the frontend server + file-serving application time (as seen in your App Engine logs) + time to transfer the file.
If the frontend server is not overloaded and the file is small, the latency should be close to ping + serving time.
In my case, 50 ms (ping) + 35 ms (serving) = 85 ms, which is quite close to the 95 ms I see in my browser.
Finally, if your app is serving a lot of requests, they may get queued, introducing a delay that is not "visible" in the application logs.
For a comparison I tested a site using tools.pingdom.com
Pingdom reported a Load time of 218ms
Here was the result from the logs:
2013-02-11 22:28:26.773 /stylesheets/bootstrap.min.css 200 35ms 45kb
Another test resulted in 238 ms from Pingdom and 2 ms in the logs.
Therefore, I would say that your 183 ms seems relatively good. There are many factors at play:
Your location relative to the server
Is the server that is serving the resource overloaded?
You could try serving the files using a Go instance instead of App Engine's static file server. I tested this some time ago; the results were occasionally faster, but the speeds were less consistent. Response time also increased under load, due to an App Engine instance being limited to 10 concurrent requests. Not to mention you will be billed for the instance time.
Edit:
For a comparison with other cloud/CDN providers, see Cedexis's Free Country Reports.
You should try setting caching on your static files (for example via the expiration / default_expiration settings in app.yaml).
I have an ASP.NET MVC app which accepts file uploads and does result polling using SignalR. The app is hosted on a Prod server with IIS7, 4 GB RAM and a two-core CPU.
The app works perfectly on the Dev server, but when I host it on the Prod server, with about 50,000 users per day, it becomes unresponsive after five minutes of running. The page request time increases dramatically and it takes about 30 seconds to load one page. I have tried recording every MvcApplication.Application_BeginRequest event call and got 9,000 hits in 5 minutes; I'm not sure whether that is an acceptable number of hits for an app like this.
I have used ANTS Performance Profiler (not useful for profiling the Prod app; it's slow and eats all the memory) to profile the code, but the profiler does not show any time-delay issues in my code or MSSQL queries.
I have also tried monitoring for CPU and RAM spikes but didn't find any. CPU usage sometimes goes up to 15% but never higher, and memory usage is normal.
I suspect that there is something wrong with the request or thread limits in ASP.NET/IIS7, but I don't know how to profile it.
Could someone suggest any profiling solutions which could help in this situation? I've been hunting this problem for two weeks already without any result :(
You may try using MiniProfiler, and more specifically the MiniProfiler.MVC3 NuGet package, which is specifically created for ASP.NET MVC applications. It will show you all kinds of useful information, such as the time spent in different methods during the execution of the request.
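A minimal sketch of the usual wiring in Global.asax (assuming the older MiniProfiler 2.x/3.x API that the MVC3 package targets; adjust to the version you install):

    // Global.asax.cs sketch: profile every request with MiniProfiler.
    using System.Web;
    using StackExchange.Profiling;

    public class MvcApplication : HttpApplication
    {
        protected void Application_BeginRequest()
        {
            // You may want to restrict this to local or admin requests.
            MiniProfiler.Start();
        }

        protected void Application_EndRequest()
        {
            MiniProfiler.Stop();
        }
    }

Inside controller actions you can then wrap suspicious sections in using (MiniProfiler.Current.Step("upload handling")) { ... } blocks, and if you wrap your database connections in ProfiledDbConnection the SQL timings show up as well, so you can see whether the 30-second pages are spent in your code, in the database, or queued before your code even runs.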