Parse.com Throttling and Server Resource Monitoring - parse-platform

Our app is experiencing an extremely high number of timeouts at times, and we are fielding hundreds of inquiries from users each day. The performance of the app is intermittent: one minute it is running AMAZINGLY fast, and for the next 10 minutes it is timing out non-stop. We have optimized our Cloud Code, and when timeouts are happening, they are happening anywhere and everywhere; there is no pattern to the failure, so I am led to believe this is related to server resource issues or throttling. Is there any way to gain insight into throttling or server performance / resource usage?
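The only visibility we have right now is from the outside, so the sketch below is the kind of probe we could run against a trivial Cloud Code function to see whether the slow periods coincide with explicit rate-limit responses (e.g. HTTP 429) rather than plain timeouts. The function name "ping", the keys, and the polling interval are all placeholders, not anything Parse provides out of the box.

```python
# External latency probe (minimal sketch). The "ping" Cloud function, the keys,
# and the 30-second interval are placeholders; adjust for your own app.
import json
import time
import urllib.request
import urllib.error

APP_ID = "YOUR_APP_ID"            # placeholder
REST_KEY = "YOUR_REST_API_KEY"    # placeholder
URL = "https://api.parse.com/1/functions/ping"  # hypothetical no-op Cloud function

def probe(timeout=10):
    req = urllib.request.Request(
        URL,
        data=json.dumps({}).encode("utf-8"),
        headers={
            "X-Parse-Application-Id": APP_ID,
            "X-Parse-REST-API-Key": REST_KEY,
            "Content-Type": "application/json",
        },
        method="POST",
    )
    start = time.time()
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status, time.time() - start
    except urllib.error.HTTPError as e:
        return e.code, time.time() - start    # e.g. a 429 would suggest throttling
    except Exception:
        return None, time.time() - start      # timeout / connection failure

if __name__ == "__main__":
    while True:
        status, elapsed = probe()
        print(f"{time.strftime('%H:%M:%S')} status={status} elapsed={elapsed:.2f}s")
        time.sleep(30)
```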

Related

ASP.NET 5 Web API application intermittently unresponsive

We are working on an ASP.NET 5 Web API project that is in production now but we are experiencing an issue where it becomes unresponsive intermittently throughout the day.
A few notes about the application architecture. It is an ASP.NET Web API project using a MariaDB database on a separate EC2 instance within the same private network. The connection string uses the private IP of the database server to avoid any name resolution issues. The site is hosted via IIS 10.
The application itself has been developed carefully following the best practices provided by Microsoft. Heavy focus on async operations, minimizing query response times and offloading more expensive operations into background services.
The app is extremely responsive. It delivers sub-100ms responses on almost all requests, even the more complicated ones, and right up until it becomes unresponsive this high level of performance remains the same. We tend to see between 10-30 requests per second and 300-500 select queries per second at peak usage, so nothing too extreme. However, randomly (2-3 times over a 24 hour period) it will begin hanging on requests and simply not respond. During this time, the database is still extremely responsive, and we are never over 300 connections out of our 512 connection limit.
The resources on the application server itself are never really taxed much at all. The CPU never gets above ~20% and the memory usage sits around 20-30%.
If I stop the site in IIS and start it again while this is happening, it will quickly come back online. If I don't, it will be down for a few minutes until IIS finally kills it due to a failed health check. There are no real errors generated in response to the issue other than the typical errors caused by the hanging process, such as connection-terminated errors. The only thing I have seen that gave me pause was that there were a few connection timeouts when getting a connection from the pool, but like I said, the connections to the server are never close to the limit.
Also, this app and version has been in production for months and it wasn't until the traffic volume started to grow that we started seeing these issues. At this point, I am at a loss for next steps of troubleshooting and I'm seeking suggestions.
In the IIS App Pool advanced settings, set Start Mode to AlwaysRunning.
I never found a root cause for this issue; however, after updating to newer versions of .NET MVC the issue went away. My best guess is that changes to Kestrel may have resolved it, although I have no idea which specific change that might have been. I have gone through the change logs a few times and didn't see anything that specifically jumped out at me.

Web Server Performance Degradation

The web application is built on Spring Boot and deployed on WebLogic.
We have set the maximum thread count to 400 and the JDBC connection pool to 100 connections.
When we perform load testing on the web application, the performance is optimal while the load is low (the response time is less than 200ms for most of the HTTP requests that we call).
When we increase the load, we can see that the thread count and JDBC connection count increase gradually but stay nowhere near the maximum. However, the response time gets much longer, and it can take more than 5 seconds to respond.
CPU usage, thread count, memory, and JDBC connections all seem normal during this period.
Another observation: while the performance was degrading during testing, we used another machine to make an HTTP call to the server that only retrieves text, without any DB access or logic, and even this simple HTTP call took 10s to respond. (And the server resources were still not maxed out!)
So we are wondering: what keeps them waiting?
Any other possible bottleneck?
If the server doesn't lack resources like CPU/RAM/etc., only a profiler can tell you where your application spends the most time, which might be:
Waiting in a queue for the next thread/DB connection from the pool to become available
A slow database query
Inefficient functions/algorithms which are subject to optimization
WebLogic configuration not suitable for high loads
JVM configuration not suitable for high loads (e.g. the system is doing garbage collection too often or for too long)
So I would recommend re-running your test with profiler telemetry enabled and at the same time monitoring essential JVM metrics using, for example, the JMXMon Sample Collector, which can be used for monitoring your application-specific metrics as well. It's a plugin that can be installed using the JMeter Plugins Manager.
For a detailed approach on how to go about identifying poor thread performance, I suggest you take a look at the TSA Method by Brendan Gregg.
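As a rough illustration of that kind of thread-state analysis (a minimal sketch, not part of the TSA write-up or the JMeter tooling above), you can capture a dump with `jstack <pid>` and count threads per state; a large number of threads in BLOCKED or WAITING while CPU stays low usually points to lock or pool contention rather than a lack of resources. The dump file name here is an assumption.

```python
# Summarize a jstack thread dump by thread state (minimal sketch).
# Assumes a dump captured with:  jstack <pid> > dump.txt
import re
import sys
from collections import Counter

def summarize(path):
    states = Counter()
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            # Lines look like: "   java.lang.Thread.State: WAITING (parking)"
            m = re.search(r"java\.lang\.Thread\.State:\s+(\w+)", line)
            if m:
                states[m.group(1)] += 1
    return states

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "dump.txt"
    for state, count in summarize(path).most_common():
        print(f"{state:>15}: {count}")
```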

503 Error while Running JMeter with 400 Threads: Is it Because of Server Issues?

I am getting a 503 error while running JMeter with 400 thread users. Is it because of server issues? When I run the thread group with 100 users and a ramp-up period of 25 seconds it works fine, but with 400 users it gives a 503 error.
Given that you don't experience any issues with 100 users but do have issues with 400 users, it's most probably a server issue connected with overload, so congratulations on finding the bottleneck.
You can either report it as is or investigate a little more deeply to find the cause. Suggested steps:
Instead of kicking off 400 users at once, try increasing the load gradually while looking at the Response Times vs Threads and Transaction Throughput vs Threads charts (a rough non-JMeter sketch of the same idea follows this list). Ideally, response time should remain the same and throughput should grow as the number of threads increases. When response time starts increasing and throughput starts decreasing, that indicates the saturation point, and at this stage you can state that this is the maximum number of users your application can support.
Check your application logs and configuration, as it might not be properly tuned for high loads; you can use 15 Simple ASP.NET Performance Tuning Tips as a reference or look for a similar guide for your application's technology stack.
Ensure that your application has enough headroom to operate in terms of CPU, RAM, network, etc., as it might simply be a lack of resources; this can be checked using, for example, the JMeter PerfMon Plugin.
Repeat your test with profiler telemetry in place; this way you will be able to localize the problem and state where the problematic piece of code or inefficient algorithm lives.
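As a rough, non-JMeter illustration of the first step above, the sketch below raises the number of concurrent users step by step and prints the average response time and throughput for each step, so the saturation point shows up as the level where latency climbs while throughput stops growing. The target URL, step sizes, and request counts are placeholders.

```python
# Stepwise load sketch: raise concurrency, watch latency vs. throughput.
# TARGET_URL and the step sizes are placeholders; JMeter's charts do the same job.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://your-server.example.com/endpoint"  # placeholder
REQUESTS_PER_USER = 20

def one_request():
    start = time.time()
    try:
        with urllib.request.urlopen(TARGET_URL, timeout=30) as resp:
            resp.read()
            ok = resp.status < 400
    except Exception:
        ok = False
    return time.time() - start, ok

def run_step(users):
    started = time.time()
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(lambda _: one_request(),
                                range(users * REQUESTS_PER_USER)))
    elapsed = time.time() - started
    latencies = [r[0] for r in results]
    errors = sum(1 for r in results if not r[1])
    avg_ms = 1000 * sum(latencies) / len(latencies)
    throughput = len(results) / elapsed
    print(f"users={users:4d}  avg={avg_ms:7.1f} ms  "
          f"throughput={throughput:6.1f} req/s  errors={errors}")

if __name__ == "__main__":
    for users in (50, 100, 200, 300, 400):   # increase the load gradually
        run_step(users)
```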
If the server isn't down or being restarted, then yes, a 503 indicates overload.
Common causes are a server that is down for maintenance or one that is overloaded.
You need to find what stops the server from serving 400 concurrent requests/users.
Note that if you are testing in a test environment that isn't equal or similar to the production environment, it may not reflect the load that the production server can endure.

IIS experiences super slow requests intermittently

I have been trying to pinpoint an error in a legacy application for our customers. They have complained about slow response times, and checking the IIS logs I can see that sometimes requests that shouldn't take over 500 ms take 10-30 seconds.
There seems to be no pattern: these requests happen with requests handled by our application, they happen with small static files (pictures and .js-files), they happen during high and low traffic. There doesn't seem to be a request type happening before or during these requests that would cause this to happen.
I have tried failed request tracing for long requests, but everything happening in the IIS pipeline seems to take 0 ms, or at least close to that. Could this be caused by extremely slow network connections, or by our legacy application blocking threads (or something completely different)?
Resolved my own issue.
The slowness experienced by our customer is likely caused by other factors. I compared request statistics to other services (built with completely different technology) and found that they all have similar slow requests now and then, probably caused by slow connections.
Useful tools:
Failed request logging for long requests
Performance monitor and server performance logging
Logging application pool recycle events in log
Log parser studio for querying and analyzing IIS logs
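If Log Parser Studio isn't at hand, a small script that scans the W3C logs for slow requests does a similar job. This is a minimal sketch assuming the default W3C log format with the time-taken field (in milliseconds) enabled; the log path and threshold are placeholders.

```python
# Scan IIS W3C logs for requests slower than a threshold (minimal sketch).
# Assumes the default W3C format with the time-taken field (milliseconds) enabled.
import glob

LOG_GLOB = r"C:\inetpub\logs\LogFiles\W3SVC1\*.log"   # placeholder path
THRESHOLD_MS = 10_000

def scan(path):
    fields = []
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            if line.startswith("#Fields:"):
                fields = line.split()[1:]    # e.g. date time cs-uri-stem ... time-taken
                continue
            if line.startswith("#") or not fields:
                continue
            parts = line.split()
            if len(parts) != len(fields):
                continue
            row = dict(zip(fields, parts))
            if int(row.get("time-taken", 0)) >= THRESHOLD_MS:
                print(row.get("date"), row.get("time"),
                      row.get("cs-uri-stem"), row.get("time-taken"))

if __name__ == "__main__":
    for log in glob.glob(LOG_GLOB):
        scan(log)
```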

ASP.NET request spending time in ACQUIRE_REQUEST_STATE

In View Currently Executing Requests in IIS 7 I can see some requests in RequestAcquireState with a really high Time Elapsed, e.g. 7 seconds.
If I understand correctly, this means that the 7 seconds were spent somewhere in the states below.
BEGIN_REQUEST
AUTHENTICATE_REQUEST
AUTHORIZE_REQUEST
RESOLVE_REQUEST_CACHE
MAP_REQUEST_HANDLER
ACQUIRE_REQUEST_STATE
We are using the ASP.NET Session State Server, which is accessed over the network by 3 web servers.
I have heard about potential locking issues that may arise (ASP.NET takes an exclusive lock on the session for each read-write request, so concurrent requests for the same session are serialized).
Does anybody have any idea how to diagnose this further?
Thanks,
Piotr
