I have a function that makes various LogWarning calls using the ILogger abstraction from Microsoft. When deployed to Azure, this writes log data to blobs. It seems to be slowing down my application a lot, which I guess is the nature of having to send traffic over the network to the storage account.
I am trying to work out: is ILogger something I can run asynchronously so it is faster? How do I go about doing this?
This is not a direct answer, but for an Azure web app we usually use Application Insights for logging. It sends telemetry asynchronously in the background, so logging calls don't block your request path.
The benefits:
1. Easy to set up and integrated with ILogger (see the sketch after this list).
2. You can control the log level.
3. You can also export the logs from Application Insights to blob storage.
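For example, wiring ILogger up to Application Insights from code looks roughly like this - a minimal sketch that assumes the Microsoft.Extensions.Logging.ApplicationInsights package, with the instrumentation key as a placeholder for your own:

using Microsoft.Extensions.Logging;

var loggerFactory = LoggerFactory.Create(builder =>
{
    // Telemetry is buffered and sent to Application Insights on a background
    // thread, so LogWarning calls return without waiting on the network.
    builder.AddApplicationInsights("<instrumentation-key>");

    // Control the log level: only Warning and above are forwarded.
    builder.SetMinimumLevel(LogLevel.Warning);
});

ILogger logger = loggerFactory.CreateLogger("MyApp");
logger.LogWarning("Cache miss for key {Key}", "abc"); // does not block on storage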
Related
Is it possible to add Application Insights for a Web API that's hosted on the on-premises version of Service Fabric?
So far I have tried to add Application Insights to my project, but I'm wondering where to send the telemetry for monitoring. It was easy when the app was also in the cloud.
I believe there is no on-premises Application Insights service, so even if the Web API is hosted on-premises on Service Fabric, one must use the cloud version of Application Insights. Is that correct? In that case, can anyone let me know how to set it up?
App Insights is only hosted in Azure. If you're looking for an on-premises solution, you're best off looking at something like the ELK stack (Elasticsearch, Logstash and Kibana).
Nonetheless, even though your cluster is hosted on-premises, using Azure App Insights is still very much a valid scenario (assuming your IT organisation is fine with it).
Assuming you're fine with Application Insights, I strongly recommend you have a look at App Insights Service Fabric. It works great for:
Sending error and exception info
Populating the application map with all your services and their dependencies (including database)
Reporting on app performance metrics
Tracing service call dependencies end-to-end
Integrating with native as well as non-native SF applications
One thing, however, that the above won't solve is overall cluster health information - e.g. when/how often nodes go up/down, or how much CPU, memory and disk IO is consumed on individual nodes. For this you could try MS EventFlow or a custom Windows service.
There is no "on premise" application insights, but as long as your on premise service has access to send outbound data, you can use application insights on your site. You won't be able to use some features, like webtests, because application insights wont be able to make calls into your site.
Setup is the same as always, create an application insights resource in azure, and either configure it in visual studio, or manually set the instrumentation key in your applicationinsights.config (or via code) in your app.
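If you take the "via code" route, a minimal sketch with the classic Application Insights SDK looks like this (the key is a placeholder for your own resource's instrumentation key):

using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

// Point the SDK at your Application Insights resource in the cloud,
// instead of (or in addition to) ApplicationInsights.config.
TelemetryConfiguration.Active.InstrumentationKey = "<instrumentation-key>";

var telemetry = new TelemetryClient();
telemetry.TrackTrace("Service started on-premises"); // queued and sent outbound in the background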
If you need to configure outbound firewall rules or anything to let AI send data, that information is all here: https://learn.microsoft.com/en-us/azure/application-insights/app-insights-ip-addresses
I have an Azure App Service running an MVC Web API. It connects to a DB and Redis cache. The calls to the API are taking a massive amount of time or timing out. I have stripped back the methods to be doing pretty much nothing.
public HttpResponseMessage GetData()
{
    return Request.CreateResponse(HttpStatusCode.OK, "abc");
}
I still have the same issue. The web server isn't under any pressure, nor is the DB; both are below 30%. I'm at a loss to know where to even start.
This method is called quite a lot so there may be a lot of concurrent requests. I'm running an S3 App Service plan which should more than suffice.
Any suggestions on how to troubleshoot would be greatly welcome. I can only think it is down to the number of simultaneous requests.
As far as I know, this issue is often caused by:
requests taking a long time
application using high memory/CPU
application crashing due to an exception.
I suggest you first enable diagnostics logging for your web app.
This way, you can find detailed information from Detailed Error logging, Failed Request Tracing and Web Server Logging in your web application.
For more details, you could refer to the link below:
Enable diagnostics logging for web apps in Azure App Service
You could also use the Azure App Service Support Portal.
It lets you troubleshoot issues related to your web app by looking at HTTP logs, event logs, process dumps, and more.
You can access all this information using our Support portal at http://<your-site-name>.scm.azurewebsites.net/Support
For more details, you could refer to the link below:
New Updates to Support Site Extension for Azure Websites
Besides, you could also refer to this article: Troubleshoot slow web app performance issues in Azure App Service
We have an app and we want to log how users are interacting with it - for example, are they using the pages we expect them to? I don't want to log this via the app itself, as it would be very hard for me to then get this information off the device. Each page interacts with web services, so I was planning to log that interaction.
I have had some thoughts on this:
* As the web service is being called, write to a logging table in the database - the problem here could be the performance impact
* Use log4j's async mode to log these details.
Does anyone have any other suggestions on how to do this? I'm reading The Lean Startup at the moment (very good so far), and this sort of thing seems fundamental to it, so I'm wondering if there are any other tips.
Thanks
Since no one answered this for a couple months, I thought a couple pointers might help you...
Use mobile analytics tools
Fabric.io
Google Analytics for Mobile Apps
Flurry
Amazon Mobile Analytics
Appsee
Have the server record what users access (that's the approach you're considering). To offload the overhead, there are a couple of tactics you could employ (mix 'n match as you will):
Use async mechanisms (async operations in the server, such as Futures; log4j async mode; async databases; etc.) so request threads don't wait on the logging write - see the sketch after this list.
Use a separate database.
Use a NoSQL database only to write accesses. Later on you process that information in a separate analytics application.
Have the client (mobile app) record the actions and send them in bulk to the server once in a while (as frequently as you need / want / can afford).
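To make the first tactic concrete: the web service drops each access record onto an in-memory queue and a background worker drains it to the store, so requests never wait on the logging write. A rough sketch of the shape (shown in C#; AccessLogEntry and WriteToStoreAsync are made-up names - log4j's async appender gives you a similar effect in Java):

using System;
using System.Threading.Channels;
using System.Threading.Tasks;

public sealed record AccessLogEntry(string UserId, string Page, DateTime Timestamp);

public sealed class AccessLogger
{
    // Bounded queue so a slow log store can't exhaust memory.
    private readonly Channel<AccessLogEntry> _queue =
        Channel.CreateBounded<AccessLogEntry>(10_000);

    public AccessLogger()
    {
        // Single background consumer writes entries to the store,
        // so request threads never block on the database.
        _ = Task.Run(DrainAsync);
    }

    // Called from the web service; returns immediately (drops the entry if the queue is full).
    public void Record(AccessLogEntry entry) => _queue.Writer.TryWrite(entry);

    private async Task DrainAsync()
    {
        await foreach (AccessLogEntry entry in _queue.Reader.ReadAllAsync())
        {
            await WriteToStoreAsync(entry); // e.g. a separate relational or NoSQL database
        }
    }

    // Placeholder for the actual persistence call.
    private Task WriteToStoreAsync(AccessLogEntry entry) => Task.CompletedTask;
}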
Cheers
I have a .NET project that I am trying to deploy as a worker role in Azure. I am able to publish it directly from Visual Studio, but when the worker role runs I get an uncaught exception. I am attempting to enable logging to Azure Storage from the worker role so I can get more information on the exception, but I am running into issues getting it configured. Can anyone provide assistance on the best way to enable this logging?
I'm not a massive fan of the recommended Azure Worker Role logging process, namely using the Trace.WriteLine() method, as I don't feel it provides sufficient flexibility for my logging needs, and I think it looks crap when my code is liberally scattered with Trace.WriteLine() statements - code is art and all that. I also don't like that Trace statements aren't always logged and can be 'lost' if the Worker Role hiccups or generally goes astray.
I therefore came up with an approach that writes log files to local storage via NLog, which are then flushed to Azure Storage on a schedule. Works like a charm.
I've got it documented over in the blog post at: https://modhul.wordpress.com/2014/10/28/capturing-custom-logs-from-azure-worker-roles-using-azure-diagnostics/
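In rough outline the setup looks like this (a sketch only - it assumes NLog and the classic cloud service SDK, and "CustomLogs" stands in for whatever local storage resource you declared in ServiceDefinition.csdef):

using System.IO;
using Microsoft.WindowsAzure.ServiceRuntime;
using NLog;
using NLog.Config;
using NLog.Targets;

public static class WorkerLogging
{
    public static Logger Configure()
    {
        // Resolve the local storage resource declared in the service definition.
        LocalResource logStorage = RoleEnvironment.GetLocalResource("CustomLogs");

        var config = new LoggingConfiguration();
        var fileTarget = new FileTarget("file")
        {
            FileName = Path.Combine(logStorage.RootPath, "worker.log"),
            Layout = "${longdate}|${level:uppercase=true}|${message}${exception:format=tostring}"
        };
        config.AddRule(LogLevel.Info, LogLevel.Fatal, fileTarget);
        LogManager.Configuration = config;

        // Azure Diagnostics (configured separately, e.g. in diagnostics.wadcfgx)
        // then copies files from this directory to blob storage on the transfer schedule.
        return LogManager.GetCurrentClassLogger();
    }
}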
If I want to watch my log files in real time (rather than waiting for them to be flushed to Azure Storage), I RDP into the Worker Role instance and fire up a copy of BareTail (http://www.baremetalsoft.com/baretail/), which is a great way of watching log files in real time; it also lets you add colour-coding for errors, info, warnings, etc.
So say I've got an MVC app hosted in the cloud somewhere, meaning I don't have access to IIS or any infrastructure.
All I have control over is the App code itself, and what comes down to the client.
My goal
Is to collect data over time of how well the MVC app is performing in terms of response times.
Current Problems
I can get a lot of data from Google Analytics and other client-side tricks, but that won't tell me if, say, the App Pool is recycling too often.
Similarly, if I put stopwatches in the actions, that won't tell me about any delays in app startup (if it has to start up again).
Also, if I do put a stopwatch in the action, it doesn't take into account any delays in rendering the view. For example, even though it's bad practice, there might be a DB call being made from the view, and my action metrics won't take that into account.
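For reference, the most complete in-app version I can think of is a global action filter rather than a stopwatch inside each action body - something like the sketch below (names are just illustrative) - since stopping the watch after the result executes at least includes view rendering, though it still misses app startup and anything outside the MVC pipeline.

using System.Diagnostics;
using System.Web.Mvc;

public class RequestTimingFilter : ActionFilterAttribute
{
    private const string Key = "__requestStopwatch";

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // Start timing before the action runs.
        filterContext.HttpContext.Items[Key] = Stopwatch.StartNew();
    }

    public override void OnResultExecuted(ResultExecutedContext filterContext)
    {
        // Stop after the result (including the view) has rendered.
        if (filterContext.HttpContext.Items[Key] is Stopwatch sw)
        {
            sw.Stop();
            // Send the elapsed time to whatever sink is available: log, counter, telemetry...
            Trace.WriteLine($"{filterContext.HttpContext.Request.RawUrl}: {sw.ElapsedMilliseconds} ms");
        }
    }
}

// Registered once, e.g. GlobalFilters.Filters.Add(new RequestTimingFilter());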
My Question
So, if I want to get true metrics of how long requests are taking overall, from multiple clients and users, where are the best places to put Stopwatches in the app? Or is it impossible to get true metrics from the app itself, so that I have to place counters outside of the app (like in IIS)?
Add New Relic; it's available for free as part of the AppHarbor service - https://appharbor.com/addons/newrelic
Since you mention "in the cloud somewhere", are you using Microsoft Azure for hosting? If so, there are some great diagnostics you can log to your Azure Storage with DiagnosticMonitorConfiguration.
Here's a tutorial on how to add diagnostics to your web and worker roles. You can find a full list of performance counters on MSDN
You can get everything from application requests/second, memory and CPU utilization, network adapter statistics, output cache hits/misses, request execution time, etc.
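As a rough sketch of the classic setup in a role's OnStart (the counter names and intervals here are just illustrative picks, and the connection-string setting name is the conventional diagnostics plugin one):

using System;
using Microsoft.WindowsAzure.Diagnostics;

public static void ConfigureDiagnostics()
{
    DiagnosticMonitorConfiguration config = DiagnosticMonitor.GetDefaultInitialConfiguration();

    // Sample request execution time and CPU every 30 seconds.
    config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
    {
        CounterSpecifier = @"\ASP.NET\Request Execution Time",
        SampleRate = TimeSpan.FromSeconds(30)
    });
    config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
    {
        CounterSpecifier = @"\Processor(_Total)\% Processor Time",
        SampleRate = TimeSpan.FromSeconds(30)
    });

    // Push the samples to the WADPerformanceCountersTable in storage every minute.
    config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

    DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);
}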