I'm hoping to get some definitive answers on this question. When writing web applications in .NET, should you append what you want to the IIS access log, or should you write to the Windows Event Log?
I'm talking about informational, warning, and error logging. Would it be smart to split it up and write to both the IIS log and the Event Log?
Or, for that matter, for normal .NET Windows applications, should they write to their own log file or use the Event Log?
I would advise you not to put your application-specific log information in the IIS log file.
That log file is specific to the web server, and its format is determined by the logging settings in IIS. Third-party log-analysis tools are available for it, and you may end up in a situation where those tools cannot parse the IIS log files because the information you put there is mis-formatted.
Application-specific log information is better kept separate, in either the Event Log or a dedicated log file. Whether it should be the Event Log or not is really a matter of preference. Event Log entries can easily be parsed and filtered in the Event Viewer, which makes them much easier to deal with. It's a well-known format that you can send to other people for further investigation, and they can easily load it into their own Event Viewer. An excellent choice for support cases.
If you expect a lot of logging, it's preferable to create a dedicated, application-specific event log so you don't clutter the generic Application event log.
I would say that this is true for Windows desktop applications as well.
If you don't already, I would also recommend using one of the well-known logging frameworks, such as log4net or the Enterprise Library Logging Application Block. It will save you a lot of time and pain.
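For illustration, here is a minimal sketch of writing to a dedicated event log from C#, assuming the classic .NET Framework EventLog API (the source and log names are made up):

    // Minimal sketch: write to a dedicated, application-specific event log.
    // Uses System.Diagnostics.EventLog; names are illustrative.
    using System.Diagnostics;

    class EventLogExample
    {
        static void Main()
        {
            const string source = "MyWebApp";       // illustrative event source
            const string logName = "MyWebAppLog";    // dedicated log instead of the generic "Application"

            // Creating a source requires administrative rights and is normally done at install time.
            if (!EventLog.SourceExists(source))
                EventLog.CreateEventSource(source, logName);

            EventLog.WriteEntry(source, "Order import completed", EventLogEntryType.Information);
            EventLog.WriteEntry(source, "Payment gateway timeout", EventLogEntryType.Warning);
        }
    }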
I am confused about why we have to use this package:
Microsoft.Extensions.Logging.Console
I have 3 questions.
1. What is logging, what are the advantages of logging, and what are the pitfalls if I don't use it?
2. What is Logging.Console, and why do we have to use it?
3. What is LoggerFactory?
You should always have a logging facility in your application; without log entries it will be hard to find runtime bugs, as there will be no information about what is happening in your application. It's not strictly required, but in a production environment it is a must-have.
Logging.Console configures the logging facility to print the log entries to the console; there are other provider options, and you can write a custom one as well.
The LoggerFactory is an abstraction that, under the hood, redirects your log messages to all installed providers, so you can have as many log outputs as you want by changing only the application startup.
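As a minimal sketch of how those pieces fit together, assuming the Microsoft.Extensions.Logging.Console package is referenced:

    // Create a LoggerFactory with the console provider and write one entry.
    using Microsoft.Extensions.Logging;

    class Program
    {
        static void Main()
        {
            // The factory fans log messages out to every registered provider; here, just the console.
            using var loggerFactory = LoggerFactory.Create(builder => builder.AddConsole());

            ILogger logger = loggerFactory.CreateLogger<Program>();
            logger.LogInformation("Application started at {Time}", System.DateTime.UtcNow);
        }
    }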
I recommend to read the asp net core logging fundamentals documentation
I have a web API going in a live environment soon. I want to log all exceptions/errors globally.
I found multiple solutions online, some newer than others, such as using ELMAH or doing a custom version with global error handling.
What is a reliable, painless solution to set up for this?
There are many different solutions depending on what you are looking for.
There are a few tools, like Sentry, that focus on exception tracking (basically Crashlytics, but for the web world). They focus much less on logs and mostly on uncaught (or sometimes caught) exceptions. The pro is that you don't have to write many log messages yourself, and their tools can gather other context such as environment variables or SDK versions.
There is also a group of tools that focus on logging. Unlike the first group, there is no SDK or agent; instead, they hook directly into the stdout/syslog outputs. Such companies include Loggly and Logentries. These tools are designed more for searching millions of lines of logs.
There is also us (Moesif); we focus not on exception tracking but on API errors and analytics, by capturing context around API calls and the JSON payloads that go with them. While I work at Moesif, any of these options can be good and they usually integrate pretty quickly. Each has a 30-day trial, so you can try a few and see what fits your workflow best.
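If you want a simple in-code baseline before (or instead of) picking a product, here is a minimal sketch of a global exception-logging middleware, assuming ASP.NET Core (the class name is illustrative; classic ASP.NET Web API offers ExceptionFilterAttribute or IExceptionLogger for the same purpose):

    // Logs every unhandled exception with request context, then returns a 500.
    using System;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Http;
    using Microsoft.Extensions.Logging;

    public class ExceptionLoggingMiddleware
    {
        private readonly RequestDelegate _next;
        private readonly ILogger<ExceptionLoggingMiddleware> _logger;

        public ExceptionLoggingMiddleware(RequestDelegate next, ILogger<ExceptionLoggingMiddleware> logger)
        {
            _next = next;
            _logger = logger;
        }

        public async Task InvokeAsync(HttpContext context)
        {
            try
            {
                await _next(context);
            }
            catch (Exception ex)
            {
                _logger.LogError(ex, "Unhandled exception for {Method} {Path}",
                    context.Request.Method, context.Request.Path);
                context.Response.StatusCode = 500;
            }
        }
    }

    // Registered early in the pipeline, e.g. app.UseMiddleware<ExceptionLoggingMiddleware>();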
Since Umbraco v6 decided to implement logging to a text file by default, I would like to ask you guys what kind of logging you use.
Do you log to a text file on a production website, or do you log to a database table? Or do you implement any other kind of logging?
And what are the performance implications of this?
I do both types of logging, file as well as DB, in the production environment, as I need to audit the logs and therefore need everything current and saved.
I use nLog.
http://nlog-project.org/
It's robust, fast, and good, and I have been using it in a production environment since last year.
It gives you logging at various levels.
I would recommend using NLog.
At one point I investigated which logging framework was best and settled on NLog.
I have used it on different projects and it has always shown good results.
With NLog you can send your logs to different targets:
file, database, event log, console, email, NLog Viewer, and so forth.
You can set up all of the configuration in config files. It's very convenient: you can easily set up how and where you want to write your logs.
Wrapper targets are also at your disposal (see the details in the documentation). In my opinion the most useful one is AsyncWrapper (it provides asynchronous, buffered execution of target writes), which will give you good performance.
There are also a lot of other cool features.
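As a rough illustration, here is a minimal programmatic sketch of such a setup, assuming NLog 4.x (target names, layout, and file path are illustrative; the same thing is normally expressed in NLog.config):

    // Minimal NLog setup with an async-wrapped file target.
    using NLog;
    using NLog.Config;
    using NLog.Targets;
    using NLog.Targets.Wrappers;

    static class LoggingSetup
    {
        public static void Configure()
        {
            var config = new LoggingConfiguration();

            var fileTarget = new FileTarget("file")
            {
                FileName = "${basedir}/logs/app-${shortdate}.log",
                Layout = "${longdate}|${level:uppercase=true}|${logger}|${message}${exception:format=tostring}"
            };

            // AsyncWrapper buffers writes and performs them on a background thread.
            var asyncFile = new AsyncTargetWrapper(fileTarget);

            config.AddRule(LogLevel.Info, LogLevel.Fatal, asyncFile);
            LogManager.Configuration = config;
        }
    }

    // Usage:
    // LoggingSetup.Configure();
    // var logger = LogManager.GetCurrentClassLogger();
    // logger.Info("Application started");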
I have a need for a tool that would monitor and, more importantly, log requests on IIS. This tool would have to report basic info about requests, such as the date/time of the request, the time spent on the request, kilobytes transferred, etc.
What do you people use for such monitoring?
You should extend and add all of the IIS properties you want to log.
To do this, do the following:
1. Go into IIS.
2. Select Properties on your website.
3. Under the Web Site tab, choose Properties in the Logging section.
4. Select the Extended Properties tab.
5. Select extended properties.
6. Select all of the items you want to log.
7. Reset IIS.
You can now use a log parser to look through the log. http://www.smartertools.com/ has a decent one called SmarterStats, which is free for a small site.
You can use IIS's log files and read them using Log Parser (free download from MS).
In response to the comment: the format of the IIS log file is documented in "IIS Log File Format (IIS 6.0)".
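If you only need a couple of fields and don't want to reach for Log Parser, a hand-rolled sketch of reading the W3C extended format directly might look like this (the log path is just the default IIS location on newer versions; the available columns depend on what you enabled in IIS):

    // Pull the URL and time-taken columns straight out of an IIS W3C extended log file.
    using System;
    using System.IO;
    using System.Linq;

    class IisLogReader
    {
        static void Main()
        {
            string[] fields = Array.Empty<string>();

            foreach (var line in File.ReadLines(@"C:\inetpub\logs\LogFiles\W3SVC1\u_ex240101.log"))
            {
                if (line.StartsWith("#Fields:"))
                {
                    // The directive line names the columns that follow.
                    fields = line.Substring("#Fields:".Length).Trim().Split(' ');
                    continue;
                }
                if (line.StartsWith("#") || fields.Length == 0) continue;

                var values = line.Split(' ');
                var row = fields.Zip(values, (f, v) => (f, v)).ToDictionary(t => t.f, t => t.v);

                if (row.TryGetValue("cs-uri-stem", out var uri) && row.TryGetValue("time-taken", out var ms))
                    Console.WriteLine($"{uri} took {ms} ms");
            }
        }
    }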
IIS Log files + log analysers.
Log analysers like webtrends will give you a lot of information.
Have a look at Operations Manager from the System Center family: http://www.microsoft.com/systemcenter/operationsmanager/en/us/default.aspx
Please see the eginnovations IIS web server monitoring tool - http://www.eginnovations.com/web/iismonitor.htm
Background: I've inherited a web application that is intended to create on-the-fly connections between local and remote equipment. There are a tremendous number of moving parts at the moment: the app itself has changed significantly, the development toolchain was just updated, and both the local and remote equipment have been "modified" to support those changes.
The bright side is that it has a reasonable logging system that will write debug messages to a file, and it will log to both the file and a real-time user screen. I have an opportunity to re-work the entire log/debug mechanism.
Examples:
All messages are time-stamped and prefixed with a severity level.
Logs are for the customer. They record the system's responses to his/her requests.
Any log that identifies a problem also suggests a solution.
Debugs are for developers and Tech Support. They reveal the system internals.
Debugs indicate the function and/or line that generated them.
The customer can adjust the debug level on the fly to set the verbosity.
Question: What best practices have you used as a developer, or seen as a consumer, that generate useful logs and debugs?
Edit: Many helpful suggestions so far, thanks! To clarify: I'm more interested in what to log: content, format, etc.--and the reasons for doing so--than specific tools.
What was it about the best logs you've seen that made them most helpful?
Thanks for your help!
Don't confuse Logging, Tracing, and Error Reporting; some people I know do, and it creates one hell of a log file to grep through in order to get the information I want.
If I want to have everything churned out, I separate it into the following:
Tracing -> dumps every action and step, timestamped, with the input and output data of that stage (the ugliest and largest file)
Logging -> logs the business process steps only; the client does an enquiry, so log the enquiry criteria and output data, nothing more
Error Reporting / Debugging -> exceptions logged detailing where they occurred, timestamped, with input/output data if possible, user information, etc.
That way if any errors occurred and the Error/Debug log doesn't contain enough information for my liking I can always do a grep -A 50 -B 50 'timestamp' tracing_file to get more detail.
EDIT:
As has also been said, sticking to standard packages, such as Python's built-in logging module, is always a good idea. Rolling your own is not a great idea unless the language does not have one in its standard library. I do like wrapping the logging in a small function that takes the message and a value determining which logs it goes to, i.e. 1 = tracing, 2 = logging, 4 = debugging, so sending a value of 7 writes to all three, and so on.
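As a language-neutral illustration of that bit-value routing idea, here is a small C# sketch (the file names and the Write method are illustrative, not from any particular library):

    // Route one message to trace/log/debug files based on a bit-flag value.
    using System;
    using System.IO;

    [Flags]
    enum LogChannel
    {
        Tracing = 1,
        Logging = 2,
        Debugging = 4
    }

    static class SimpleLog
    {
        public static void Write(string message, LogChannel channels)
        {
            string line = $"{DateTime.UtcNow:O} {message}{Environment.NewLine}";
            if (channels.HasFlag(LogChannel.Tracing))   File.AppendAllText("trace.log", line);
            if (channels.HasFlag(LogChannel.Logging))   File.AppendAllText("app.log", line);
            if (channels.HasFlag(LogChannel.Debugging)) File.AppendAllText("debug.log", line);
        }
    }

    // A value of 7 (Tracing | Logging | Debugging) writes to all three files:
    // SimpleLog.Write("Order 42 processed", LogChannel.Tracing | LogChannel.Logging | LogChannel.Debugging);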
The absolutely most valuable thing done with any logging framework is a "1-click" tool that gathers all the logs and mails them to me, even when the application is deployed on a machine belonging to a customer.
And make good choices at what to log so you can roughly follow the main paths in your application.
As frameworks I've used the standards (log4net, log4java, log4c++)
do NOT implement your own logging framework, when there already is a good one out-of-the-box. Most people who do just reinvent the wheel.
Some people never use a debugger but log everything. Those are different philosophies; you have to make your own choice. You can find many pieces of advice like these, or this one. Note that this advice is not language-related...
The Coding Horror guy has an interesting post about the logging problem and why excessive logging can be a waste of time in certain conditions.
I simply believe logging is for tracing things that could remain in production. Debug is for development. Maybe that's too simple a way of seeing things, because some people use logs for debugging since they can't stand debuggers. But debugger mode can be a waste of time too: you shouldn't use it as a sort of test case, because it's not written down and will disappear after the debug session.
So I think my opinion about this is:
logging for necessary and useful traces through development and production environments, with development and production levels, using a logging framework (the log4 family of tools)
debugging mode for special, strange cases when things are going out of control
test cases are important and can save time spent in infernal, labyrinthine debugging sessions, used as an anti-regression method. Note that most people don't use test cases.
Coding Horror said to resist the tendency to log everything. That's right, but I've already seen a huge app that does the exact opposite in a tidy way (and through a database)...
I would just set up your logging system to have multiple logging levels. On the services I write, I have logging/auditing for almost every action, and each event is assigned an audit level 1-5; the higher the number, the more audit events you get.
The very basic logging: starting, stopping, and restarting
Basic logging: Processing x number of files etc
Standard logging: Beginning to Processing, Finished processing, etc
Advanced logging: Beginning and ending of every stage in Processing
Everything : every action taken
You set the audit level in a config file so it can be changed on the fly.
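A minimal sketch of that config-driven audit level in C# (the Auditor class, method names, and setting name are illustrative):

    // Emit an audit event only if its level is at or below the configured verbosity (1-5).
    using System;

    static class Auditor
    {
        // 1 = very basic ... 5 = everything, matching the list above.
        public static int AuditLevel { get; set; } = 3;

        public static void Audit(int level, string message)
        {
            if (level <= AuditLevel)
                Console.WriteLine($"{DateTime.UtcNow:O} [audit {level}] {message}");
        }
    }

    // Usage, with the level read from a config file at startup (setting name is hypothetical):
    // Auditor.AuditLevel = int.Parse(ConfigurationManager.AppSettings["AuditLevel"]);
    // Auditor.Audit(1, "Service starting");
    // Auditor.Audit(4, "Beginning stage: validate input");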
Some general rules-of-thumb I have found to be useful in server-side applications:
requestID - assign a request ID to each incoming (HTTP) request and then log it on every log line, so you can easily grep those logs later by that ID and find all relevant lines. If you think it is tedious to add that ID to every log statement, then at least the Java logging frameworks have made it transparent through the Mapped Diagnostic Context (MDC).
objectID - if your application/service manipulates business objects that have a primary key, it is useful to attach that primary key to the diagnostic context as well. Later, if someone comes along with the question "when was this object manipulated?", you can easily grep by the objectID and see all log records related to that object. In this context it is (sometimes) useful to use the Nested Diagnostic Context (NDC) instead of the MDC.
when to log? - at a minimum, you should log whenever you cross an important service/component boundary. That way you can later reconstruct the call flow and drill down to the particular codebase that seems to cause the error.
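For .NET readers, the rough analogue of MDC is the logger scope. Here is a minimal sketch of stamping every log line in a request with a request ID, assuming ASP.NET Core (the middleware name is illustrative):

    // Attach a RequestId to every log entry written while handling the request.
    using System;
    using System.Collections.Generic;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Http;
    using Microsoft.Extensions.Logging;

    public class RequestIdScopeMiddleware
    {
        private readonly RequestDelegate _next;
        private readonly ILogger<RequestIdScopeMiddleware> _logger;

        public RequestIdScopeMiddleware(RequestDelegate next, ILogger<RequestIdScopeMiddleware> logger)
        {
            _next = next;
            _logger = logger;
        }

        public async Task InvokeAsync(HttpContext context)
        {
            var requestId = Guid.NewGuid().ToString("N");

            // Scope-aware providers include RequestId on every entry written inside this block,
            // so the whole request can later be grepped or filtered by that one value.
            using (_logger.BeginScope(new Dictionary<string, object> { ["RequestId"] = requestId }))
            {
                await _next(context);
            }
        }
    }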
As I'm a Java developer, I will also give my experience with Java APIs and frameworks.
API
I'd recommend using the Simple Logging Facade for Java (SLF4J) - in my experience, it is the best logging facade:
full-featured: it has not followed the least-common-denominator approach (like commons-logging); instead, it uses a graceful-degradation approach.
has adapters for practically all popular Java logging frameworks (e.g. log4j)
has solutions available on how to redirect all legacy logging APIs (log4j, commons-logging) to SLF4J
Implementation
The best implementation to use with SLF4J is logback - written by the same guy who also created SLF4J API.
Use an existing logging format, such as that used by Apache, and you can then piggyback on the many tools available for analysing the format.