I have an Azure App Service running an MVC Web API. It connects to a DB and a Redis cache. The calls to the API are taking a massive amount of time or timing out. I have stripped the methods back so they do pretty much nothing:
public HttpResponseMessage GetData()
{
    return Request.CreateResponse(HttpStatusCode.OK, "abc");
}
I still have the same issue. The web server isn't under any pressure, nor is the DB; both are below 30%. I'm at a loss to know where to even start.
This method is called quite a lot, so there may be many concurrent requests. I'm running an S3 App Service plan, which should more than suffice.
Any suggestions on how to troubleshoot would be greatly welcome. I can only think it is down to the number of simultaneous requests.
As far as I know, this issue is often caused by:
requests taking a long time
the application using high memory/CPU
the application crashing due to an exception.
I suggest you first enable diagnostics logging for your web app.
This way, you can get detailed information from Detailed Errors, Failed Request Tracing, and Web Server Logging for your web application.
For more details, you can refer to the link below:
Enable diagnostics logging for web apps in Azure App Service
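Once application logging is turned on as part of those diagnostics, trace statements written from the Web API itself show up alongside the server logs, which helps you see whether the delay happens inside the action or before the request ever reaches it. A minimal sketch against the stripped-down action from the question (the message text is purely illustrative):

public HttpResponseMessage GetData()
{
    // With application logging enabled, this trace appears in the App Service log stream / log files,
    // so its timestamp can be compared with the request timestamps in the web server logs.
    System.Diagnostics.Trace.TraceInformation("GetData entered at {0:O}", DateTime.UtcNow);
    return Request.CreateResponse(HttpStatusCode.OK, "abc");
}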
You could also use the Azure App Service Support Portal.
This lets you troubleshoot issues related to your web app by looking at HTTP logs, event logs, process dumps, and more.
You can access all this information via the support portal at http://<your-app-name>.scm.azurewebsites.net/Support
For more details, you can refer to the link below:
New Updates to Support Site Extension for Azure Websites
You could also refer to this article: Troubleshoot slow web app performance issues in Azure App Service
Related
The classic approach on GCP is to rent a Linux host for a fixed monthly payment. It doesn't matter if your application is not running or users aren't consuming it; you will always pay the fixed monthly fee. I think this is acceptable for production environments, but not for development and testing.
This does not happen on Heroku:
If an app has a free web dyno, and that dyno receives no web traffic in a 30-minute period, it will sleep. In addition to the web dyno sleeping, the worker dyno (if present) will also sleep.
Free web dynos do not consume free dyno hours while sleeping.
Question
How can I stop or delete an app on Google (GAE, Cloud Run, Cloud Build, containers) if it does not receive web traffic?
If it is possible using just Google tools, that would be great:
https://cloud.google.com/products
Idea
Develop a basic router with Node.js which works as a minimal load balancer. If web traffic is not detected for some apps, an instruction to the Google Cloud Platform API could stop the app or container. This would also apply to other clouds.
Any help is appreciated.
Update
I cannot find any solution yet. I will try to add that feature here: https://github.com/jrichardsz/http-operator, or use a basic shell script to detect incoming requests on a specific port, as in How to print incoming http request on specific port.
GCP offers several serverless products (like you mentioned) with pricing where you are only charged for the resources you use (while requests are being processed).
In Cloud Run you are only billed while an instance is handling a request; see the autoscaling documentation to learn more. See their pricing as well for a better overview.
For Google App Engine, the app.yaml configuration file contains several settings you can use to adjust the trade-off between performance and resource load for a specific version of your app. You can also check this link on how to manage the auto-scaling settings.
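For illustration only, a minimal app.yaml sketch of the kind of scaling settings meant here (the runtime and numbers are placeholders, not recommendations; check the App Engine docs for the options your environment supports):

runtime: nodejs20
automatic_scaling:
  min_instances: 0          # allow scaling down to zero when there is no traffic
  max_instances: 3          # cap resource usage (and cost) under load
  target_cpu_utilization: 0.65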
You can also check this Google Cloud blog for other strategies in auto scaling your applications.
To answer the Comment below:
This video can help you better understand their differences to be able to see the appropriate service for your use case.
To clarify, there are two variations of Cloud Run: the first is managed by Google and the other runs on GKE. As long as your classic application (API app) is stateless, you should be able to deploy it as a container and take advantage of being charged only for the resources you use. Snippets would fall under Cloud Functions, which only runs functions in response to triggers.
You can choose to deploy your Cloud Run app on fully managed infrastructure ("serverless", pay per use, auto-scaling up rapidly and down to 0 depending on traffic) or on a Google Kubernetes Engine cluster.
It is also possible to run Docker containers in Serverless using App Engine (Flexible). App Engine is always fully managed, with auto-scaling. App Engine Flex auto-scales gradually and down to 1. App Engine Second Generation auto-scales up rapidly and down to 0.
For your current use case I would recommend using Cloud Run; check its limitations first before getting started. See the official documentation here and the Cloud Run How-To Guides.
Is it possible to add Application Insights for a Web API that's hosted on the on-premises version of Service Fabric?
So far I have tried adding Application Insights to my project and am wondering where to send the data for monitoring. It was easy when the app was also in the cloud.
I believe there is no on-premises Application Insights service, so even if the Web API is hosted on-premises on Service Fabric, one must use the cloud version of the Application Insights service. Is that correct? In that case, can anyone let me know how to set it up?
App Insights is only hosted in Azure. If you're looking for an on-premises solution, you're best off looking at something like the ELK stack (Elasticsearch, Logstash, and Kibana).
Nonetheless, even though your cluster is hosted on-premises, using Azure App Insights is still very much a valid scenario (assuming your IT organisation is fine with it).
Assuming you're fine with Application Insights, I strongly recommend you have a look at App Insights Service Fabric. It works great for:
Sending error and exception info
Populating the application map with all your services and their dependencies (including database)
Reporting on app performance metrics
Tracing service call dependencies end-to-end
Integrating with native as well as non-native SF applications
One thing, however, that the above won't solve is providing overall cluster health information, e.g. when/how often nodes go up or down, and how much CPU/memory and disk IO is consumed on individual nodes. For this you could try MS EventFlow or a custom Windows service.
There is no "on-premises" Application Insights, but as long as your on-premises service is allowed to send outbound data, you can use Application Insights for your site. You won't be able to use some features, like web tests, because Application Insights won't be able to make calls into your site.
Setup is the same as always: create an Application Insights resource in Azure, and either configure it in Visual Studio or manually set the instrumentation key in your ApplicationInsights.config (or via code) in your app.
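As a rough sketch of the "via code" option, using the classic Microsoft.ApplicationInsights package (the instrumentation key below is a placeholder for the one from your Application Insights resource, and the tracked names are just examples):

using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

// Point the SDK at the Application Insights resource created in Azure.
var configuration = TelemetryConfiguration.CreateDefault();
configuration.InstrumentationKey = "<your-instrumentation-key>";   // placeholder

var telemetry = new TelemetryClient(configuration);

// Telemetry is sent outbound over HTTPS, which is why this also works from an on-premises cluster.
telemetry.TrackTrace("Service started");
telemetry.TrackEvent("SomethingInterestingHappened");   // example custom event name
telemetry.Flush();   // make sure buffered items are sent before the process shuts down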
If you need to configure outbound firewall rules or anything to let AI send data, that information is all here: https://learn.microsoft.com/en-us/azure/application-insights/app-insights-ip-addresses
I have a web role with a co-located cache. There are two instances of this role.
Even when there is a cache hit, the turnaround time for our request is a few seconds. Upon analysis we found that the time taken by the cache to return data is 1 second on average. However, IIS logs suggest that overall servicing of the request takes about 4 seconds. There is no intermediate operation before or after the data retrieval from cache.
What could be wrong here? What would be a good way to analyse the problem?
For what it's worth we were having a similar problem with caching in Redis in Azure and a RESTful API.
The problem turned out to be the serialization of data.
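One way to confirm whether serialization is the culprit is to time the raw cache round trip separately from the deserialization step. A rough sketch using StackExchange.Redis and Json.NET (the connection string is a placeholder, and CachedItem and the key are made-up names for the example):

using System.Diagnostics;
using Newtonsoft.Json;
using StackExchange.Redis;

var mux = ConnectionMultiplexer.Connect("<your-redis-connection-string>");   // placeholder
var db = mux.GetDatabase();

var redisTimer = Stopwatch.StartNew();
RedisValue raw = db.StringGet("some-cache-key");                 // made-up key
redisTimer.Stop();

var deserializeTimer = Stopwatch.StartNew();
var item = JsonConvert.DeserializeObject<CachedItem>(raw);       // CachedItem is a hypothetical DTO
deserializeTimer.Stop();

Trace.TraceInformation("Redis GET: {0} ms, deserialize: {1} ms",
    redisTimer.ElapsedMilliseconds, deserializeTimer.ElapsedMilliseconds);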
Some ways to debug the problem:
Download ANTS Profiler (it has a free trial) and profile your worker role locally.
Enable profiling for your worker role, deploy it, run it for a bit, then download the profile file in Visual Studio. (You can use Server Explorer to find your instance and download the log).
Download the Azure tool kit (http://blogs.msdn.com/b/kwill/archive/2013/08/26/azuretools-the-diagnostic-utility-used-by-the-windows-azure-developer-support-team.aspx) on your instance. It has things like Process Explorer that can tell you how much memory your role is taking, how much CPU, what it's doing on the network etc.
You can contact Azure support and have them help you profile your application. We did that and got absolutely amazing support. They talked with us on the phone for hours and helped us profile our code.
You really should increase the log level for the client and server (refer to In-Role Cache Troubleshooting and Diagnostics (Windows Azure Cache)) and take a look at the performance counters. If read operations (GET) are taking a long time, there may be paging on one of the instances, or the server may be overloaded. If you see any performance issue on the cache instances, you should reassess the capacity using Capacity Planning Considerations for In-Role Cache (Windows Azure Cache).
If this doesn't help then please open a support ticket.
I have a rather strange scenario. We have a range of Web APIs hosted in the cloud, and we consume those services in our Windows 8 application. The problem is that when the services are running locally a call takes less than 400 ms, but when hosted on Windows Azure it takes up to 20 seconds for some requests. I have checked the indexes of our database tables and they are fine. I have no clue what to profile or how to improve the performance.
Thanks!
Thanks a lot, everyone!
In the end I found a way to use dotTrace (an excellent profiling tool) on the Azure deployment. Here is the link:
http://blog.maartenballiauw.be/post/2013/03/13/Remote-profiling-Windows-Azure-Cloud-Services-with-dotTrace.aspx
You can also use Windows Azure Diagnostics and the Stopwatch class to log all timings to the WAD tables.
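A quick sketch of that idea, assuming Windows Azure Diagnostics is configured to transfer trace logs to storage (the message text is just an example):

var stopwatch = System.Diagnostics.Stopwatch.StartNew();

// ... the operation you want to time goes here ...

stopwatch.Stop();
// With Windows Azure Diagnostics enabled, this trace ends up in the WADLogsTable in table storage.
System.Diagnostics.Trace.TraceInformation(
    "Operation took {0} ms", stopwatch.ElapsedMilliseconds);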
I also found out, in another thread, that the first request to the Azure service is always slow. I have copied it below:
Serkan, you would first need to make clear in your post whether you have published a Cloud Service or a Website to Windows Azure. The answer to your question will be different depending on whether it is a Cloud Service (a Web Role) or a Website. As you want to learn more, I will explain what goes on behind the scenes.
As you suggested that your first connection is slow, I can see that happening with Windows Azure Websites. Windows Azure Websites run in a shared pool of resources and use the concept of hot (active) and cold (inactive) sites: if a website has no active connection for x amount of time, the site goes into a cold state, meaning the host IIS process exits. When a new connection is made to that website, it takes a few seconds to get the site ready and working. Depending on what your first page's code does, the time to load the site for the first time varies. A similar discussion is logged here: Very slow opening MySQL connection using MySQL Connector for .net
With a Windows Azure Cloud Service the overall application model is different. Your web role has its own IIS server which is fully dedicated to your application, and the Website limitation above does not occur; however, there could be other reasons for slower page loads. If you are using a Web Role, you could run a page load profiler first and RDP into your Azure instance to collect the page load data to see what else you could do to boost performance.
You'll obviously need to profile your app to find the real cause. Check out these two articles which should get you started:
http://msdn.microsoft.com/en-us/library/windowsazure/hh369930.aspx
http://www.windowsazure.com/en-us/develop/net/common-tasks/profiling-in-visual-studio/
So say I've got an MVC app hosted in the cloud somewhere, meaning I don't have access to IIS or any infrastructure.
All I have control over is the App code itself, and what comes down to the client.
My goal
Is to collect data over time of how well the MVC app is performing in terms of response times.
Current Problems
I can get a lot of data from Google Analytics and other client-side tricks, but that won't tell me if, say, the app pool is recycling too often.
Similarly, if I put stopwatches in the actions, that won't tell me about any delays in app startup (if it has to start up again).
Also, if I do put a stopwatch in the action, it doesn't take into account any delays in rendering the view. For example, even though it's bad practice, there might be a DB call being made from the view, and my action metrics won't take that into account.
My Question
So, if I want to get true metrics of how long requests are taking overall, from multiple clients and users, where are the best places to put stopwatches in the app? Or is it impossible to get true metrics from the app itself, so that I have to place counters outside of the app (like in IIS)?
Add New Relic; it's available for free as part of the AppHarbor service: https://appharbor.com/addons/newrelic
Since you mention "in the cloud somewhere", are you using Microsoft Azure for hosting? If so, there are some great diagnostics you can log to your Azure storage with DiagnosticMonitorConfiguration.
Here's a tutorial on how to add diagnostics to your web and worker roles. You can find a full list of performance counters on MSDN.
You can get everything from application requests/second, memory and CPU utilization, network adapter statistics, output cache hits/misses, request execution time, etc.
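As a sketch of how that looks in a web role's OnStart with the classic Azure SDK diagnostics API (the counter names and intervals are just examples; newer SDK versions configure this through diagnostics.wadcfg instead):

using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Start from the default diagnostics configuration and add the counters of interest.
        var config = DiagnosticMonitor.GetDefaultInitialConfiguration();

        config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
        {
            CounterSpecifier = @"\ASP.NET Applications(__Total__)\Requests/Sec",
            SampleRate = TimeSpan.FromSeconds(30)
        });
        config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
        {
            CounterSpecifier = @"\Processor(_Total)\% Processor Time",
            SampleRate = TimeSpan.FromSeconds(30)
        });

        // Transfer the collected counters to Azure storage once a minute.
        config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

        DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);
        return base.OnStart();
    }
}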