Extra logging during debugging through multiple log levels

Using NLog, I know I can change the minLevel in NLog.config to exclude certain log messages. I think this is generally great when the software is running in production: if a problem happens I can lower the minLevel and see more detail. This makes sense.
What I have problems with is that during debugging the "Debug" level quite honestly seems a bit inadequate, mostly because "Debug" ends up being the catch-all for everything a developer may care about and no one else.
For backend systems that do a lot, I have seen this fill a 25 MB log file in a few seconds. Sorting through that and trying to tie pieces together is difficult.
Is it possible to have multiple levels of "Debug" so that I can limit the amount of information to actually make using the log file easier?

Not sure if this solves your problem,
but it's common in NLog to use the following pattern:
Use a different logger for each class or process (by using LogManager.GetCurrentClassLogger() or LogManager.GetLogger("loggernameForFlow1")).
Always write all log messages to the logger (e.g. logger.Trace(...), logger.Debug(...), etc.).
Filter the logs in the config by level, but also by logger name. Because LogManager.GetCurrentClassLogger() creates a logger named after the current class, including its namespace, you can easily filter per class, e.g.
Filter on a namespace:
<logger name="myNamespace.*" minLevel=... writeTo=... />
Filter on one class:
<logger name="myNamespace.MyClass" minLevel=... writeTo=... />

Related

The effect of println statements in production?

I was reviewing the Grails application code and I found println statements in a lot of places. These were used for debugging. I am wondering whether leaving these statements in affects the production app's performance?
Yes, it affects the production environment, because println statements are synchronous: execution will not move forward until the println has been processed. If you are printing large objects such as a Map, a List, or file contents, this takes more execution time and increases the log file size as well, so it will definitely affect your production performance.
If you want to keep the logs, the better way is to use an asynchronous library such as Log4j for auditing the important logs in your application.
Log4j reference
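The advice above is for Grails and Log4j; since most of the code on this page is C#, here is a rough sketch of the same idea (asynchronous logging) using NLog's AsyncTargetWrapper instead. The file name is illustrative and AddRule again assumes NLog 4.6 or later.

using NLog;
using NLog.Config;
using NLog.Targets;
using NLog.Targets.Wrappers;

static class AsyncLoggingSetup
{
    public static void Configure()
    {
        var config = new LoggingConfiguration();

        // Wrapping the file target queues log events and writes them on a
        // background thread, so the calling thread is not blocked by disk I/O.
        var file = new FileTarget("file") { FileName = "audit.log" };
        var async = new AsyncTargetWrapper(file);

        config.AddRule(LogLevel.Info, LogLevel.Fatal, async, "*");
        LogManager.Configuration = config;
    }
}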

What does the metric "Configuration" in Oracle Enterprise Manager performance graphs mean?

When looking at the OEM performance graphs what does the "Configuration" Metric indicate?
Since the term configuration is so generic, it is difficult to find meaningful results from a search. I have tried the help pages within OEM as well.
You can check all the wait events on this Oracle page:
http://docs.oracle.com/cd/B19306_01/server.102/b14237/waitevents001.htm
Configuration
Waits caused by inadequate configuration of database or instance
resources (for example, undersized log file sizes, shared pool size)
Occurrences of log file switch, write complete waits, or log buffer space are labeled "Configuration" in EM. Just click on "Configuration" to see a more detailed graph of these events.
[EDIT]: An additional note... I don't remember seeing any meaningful occurrences of "Configuration" events on any of the databases I've managed, which either means I'm pretty good at sizing or that it usually isn't a problem at all. I'm inclined to choose the second one ;)

Effect of logging on Apache's performance

I am developing an Apache module. During development I found it convenient to use logging functions at various points in my code.
For example after opening a file, I would log an error if the operation was not successful so that I may know exactly where the problem occurred in my code.
Now, I am about to deliver my module to my boss (I am on an internship). I wanted to know what the best practices are regarding logging. Is it good for maintenance purposes, or is it bad because it may hamper the response time of the server?
It really depends on how you wrote those logging instructions. If you wrote:
logger.debug(computeSomeCostlyDebugOutput());
you might affect performance badly even when the logger is not set to the DEBUG level (computeSomeCostlyDebugOutput will always take time to execute, and its result will then simply be ignored by the logger if the DEBUG level is not enabled).
If you write it like this instead:
if (logger.isDebugEnabled()) {
    logger.debug(computeSomeCostlyDebugOutput());
}
then the costly operations and logging will occur only if the correct logger level is set (i.e. the logger won't ignore it). It basically acts like another switch for the logger, the first switch being the configured logger level.
As Andrzej Doyle very well pointed out, the logger will check its level internally, but that happens inside the debug method, after time has already been wasted in computeSomeCostlyDebugOutput. If you are going to spend time in computeSomeCostlyDebugOutput, you'd better do it when you know its result won't be in vain.
A lot of software ships with logging instructions which you can activate if you need more details into the inner workings of the thing, execution path and stuff. Just make sure they can be deactivated and only take computing time if the appropriate level is set.
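Here is a small C# sketch of the same guard pattern with NLog; ComputeSomeCostlyDebugOutput is a hypothetical expensive method. NLog also accepts a delegate, so the message is only built when Debug is actually enabled.

using NLog;

class Worker
{
    private static readonly Logger Log = LogManager.GetCurrentClassLogger();

    // Hypothetical expensive call whose result is only useful for debugging.
    private string ComputeSomeCostlyDebugOutput() => "...expensive state dump...";

    public void DoWork()
    {
        // Explicit guard: the expensive call is skipped entirely when Debug is off.
        if (Log.IsDebugEnabled)
        {
            Log.Debug(ComputeSomeCostlyDebugOutput());
        }

        // Equivalent with a deferred message: the lambda only runs if Debug is enabled.
        Log.Debug(() => ComputeSomeCostlyDebugOutput());
    }
}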
One of the design goals of Log4j (for good reason) was performance; Ceki Gülcü wanted the library to be usable in production, enterprise software, and the overhead of Log4j itself is actually pretty minimal (measured on my own webapp project with a profiler).
Two things that are liable to take some time though, are:
Forming the arguments to pass into the logging method, for some calls. As dpb says, this should be avoided by wrapping any computation of complex output in a logging check so you're not deriving complex debug output when the logger's going to throw it away as it's set to only record errors.
The sheer I/O required to record the log data. If you've got your application logging 200 MB of debug logs per second (it might sound infeasible, but it's happened to me before) then that's likely to put a strain on how fast it runs, as IIRC the file writing happens synchronously. In various webapp-type projects I've developed, I can actually notice a slight difference in responsiveness when I set the logs to debug level (and IMO this is a good thing, as you should be generating a lot of output when you ask for debug logs).

What logging implementation do you prefer?

I'm about to implement a logging class in C++ and am trying to decide how to do it. I'm curious to know what kind of different logging implementations there are out there.
For example, I've used logging with "levels" in Python. Where you filter out log events that are lower than a certain threshold. It also includes logging "names" where you can filter out events via a hierarchy, for example "app.apples.*" will not be displayed but "app.bananas.*" will be.
I've had thoughts about using "tags", but I'm unsure of the implementation. I've seen games use "bits" for compactness.
So my questions:
What implementations have you created or used before?
What do you think the advantages and disadvantages of them are?
I'd read this post by Jeff Atwood
It's about the overuse of logging and how to avoid it.
There are lots of links on the Log4j Wikipedia page.
One of our applications uses Registry entries to dynamically control logging/tracing during production execution.
For example:
if (Logger.TraceOptionIsEnabled(TraceOption.PLCF_ShowConfig)) {...whatever
When executed at run time, if the registry value PLCF_ShowConfig is true, the call returns true and whatever is executed.
Quite handy.
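As a guess at what such a switch might look like in C# (TraceOption, the key path, and the value layout here are illustrative, not the poster's actual in-house API):

using Microsoft.Win32;

enum TraceOption { PLCF_ShowConfig, PLCF_ShowTiming }

static class Logger
{
    private const string KeyPath = @"HKEY_LOCAL_MACHINE\SOFTWARE\MyApp\Tracing";

    public static bool TraceOptionIsEnabled(TraceOption option)
    {
        // Each option is a DWORD value under the key; a missing value defaults to 0 (off),
        // so tracing can be switched on in production without redeploying anything.
        var value = Registry.GetValue(KeyPath, option.ToString(), 0);
        return value is int enabled && enabled != 0;
    }
}

Note that Registry.GetValue is Windows-only; on other platforms you would read the flags from a config file or environment variable instead.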
Jeff Atwood had a pretty interesting blog entry about logging. The ultimate message of it was that logging is generally unnecessary (to some extent).
Logging generally doesn't scale well (too much data on high traffic systems).
I think the best point of it is that you generally don't need it. It's easier to trace through your code by hand to understand what values are being assigned to things than it is to sift through lots of log files.
It's just information overload.
Now the same can't be said for single user applications. For things like media encoding or general OS usage, it can be nice to have a log for small apps because debug info is useful (to me) in this situation. If you're burning a DVD and something goes wrong, looking at log info can be very helpful to troubleshoot with if you understand the log output.
I think having a few levels would help for the user, such as:
No logs
Basic logging for general user feedback
Highly technical data for a developer or tech-support person to interpret
Depending on the situation, it may be useful to store ALL log data and only display to the user the basic info, or perhaps giving the option to see all detailed data.
It all depends on the domain.

How to keep your own debug lines without checking them in?

When working on some code, I add extra debug logging of some kind to make it easier for me to trace the state and values that I care about for this particular fix.
But if I were to check this into the source code repository, my colleagues would get angry at me for polluting the log output and polluting the code.
So how do I locally keep these lines of code that are important to me, without checking them in?
Clarification:
Many answers relate to the log output, pointing out that with log levels you can filter it out, and I agree with that.
But I also mentioned the problem of polluting the actual code. If someone puts a log statement between every other line of code to print the value of all variables all the time, it really makes the code hard to read. So I would really like to avoid that as well, basically by not checking in the logging code at all. So the question is: how do you keep your own special-purpose log lines so that you can use them in your debug builds without cluttering up the checked-in code?
If the only objective of the debugging code you are having problems with is to trace the values of some variables, I think that what you really need is a debugger. With a debugger you can watch the state of any variable at any moment.
If you cannot use a debugger, then you can add some code to print the values to some debug output. But this code should be only a few lines whose objective is to make the fix you are working on easier. Once it's committed to trunk the issue is fixed, so you shouldn't need those debug lines any more and you must delete them. Don't delete all the debug code (good debug code is very useful); delete only your "personal" tracing debug code.
If the fix is so long that you want to save your progress by committing to the repository, then what you need is a branch. In that branch you can add as much debugging code as you want, but you should still remove it when merging into trunk.
But if I were to check this into the source code repository, my colleagues would get angry at me for polluting the log output and polluting the code.
I'm hoping that your Log framework has a concept of log levels, so that your debugging could easily be turned off. Personally I can't see why people would get angry at more debug logging - because they can just turn it off!
Why not wrap them in preprocessor directives (assuming the construct exists in the language of your choice)?
#if DEBUG
logger.debug("stuff I care about");
#endif
Also, you can use a log level like trace, or debug, which should not be turned on in production.
if (logger.isTraceEnabled()) {
    logger.trace("My expensive logging operation");
}
This way, if something in that area does crop up one day, you can turn logging at that level back on and actually get some (hopefully) helpful feedback.
Note that both of these solutions would still allow the logging statements to be checked in, but I don't see a good reason not to have them checked in. I am providing solutions to keep them out of production logs.
If this were really an ongoing problem, I think I'd assume that the central repository is the master version, and I'd end up using patch files to contain the differences between the official version (the last one that I worked on) and my version with the debugging code. Then, when I needed to reinstate my debugging, I'd check out the official version, apply my patch (with the patch command), fix the problem, and before checking in, remove the patch with patch -R (a reversed patch).
However, there should be no need for this. You should be able to agree on a methodology that preserves the information in the official code line, with mechanisms to control the amount of debugging that is produced. And it should be possible regardless of whether your language has conditional compilation in the sense that C or C++ does, with the C pre-processor.
I know I'm going to get negative votes for this...
But if I were you, I'd just build my own tool.
It'll take you a weekend, yes, but you'll keep your coding style, and your repository clean, and everyone will be happy.
Not sure what source control you use. With mine, you can easily get a list of the things that are "pending to be checked in". And you can trigger a commit, all through an API.
If I had that same need, I'd make a program to commit, instead of using the built-in command in the source control GUI. Your program would go through the list of pending things, take all the files you added/changed, make a copy of them, remove all log lines, commit, and then replace them with your version again.
Depending on what your log lines look like, you may have to add a special comment at the end of them for your program to recognize them.
Again, shouldn't take too much work, and it's not much of a pain to use later.
I don't expect you'll find something already made that does this for you (and for your source control); it's pretty specific, I think.
Similar to #if DEBUG ... #endif, but that will still mean that anyone running with the 'Debug' configuration will hit those lines.
If you really want them skipped then use a log level that no one else uses, or....
Create a separate build configuration that defines a conditional compilation symbol called MYDEBUGCONFIG,
and then put your debug code between blocks like this:
#if MYDEBUGCONFIG
...your debugging code
#endif
What source control system are you using? Git allows you to keep local branches. If worse comes to worst, you could just create your own 'Andreas' branch in the repository, though branch management could become pretty painful.
If you really are doing something like:
puts a log statement
between every other line of code, to
print the value of all variables all
the time. It really makes the code
hard to read.
that's the problem. Consider using a test framework, instead, and write the debug code there.
On the other hand, if you are writing just a few debug lines, then you can manage to avoid this by hand (e.g. removing the relevant lines in the editor before the commit and undoing the change after it's done), but of course it has to be very infrequent!
IMHO, you should avoid the #if solution. That is the C/C++ way of doing conditional debugging routines. Instead, attribute all of your logging/debugging functions with the ConditionalAttribute. The constructor of the attribute takes a string, and the attributed method will only be called if a preprocessor symbol with the same name as that string is defined. This has exactly the same runtime implications as the #if/#endif solution, but it looks a heck of a lot better in code.
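A minimal sketch of that in C# (DebugLog and Write are hypothetical names):

using System;
using System.Diagnostics;

static class DebugLog
{
    // Call sites (including argument evaluation) are removed by the compiler
    // unless the DEBUG symbol is defined for the build.
    [Conditional("DEBUG")]
    public static void Write(string message)
    {
        Console.WriteLine("[debug] " + message);
    }
}

class Example
{
    void Run()
    {
        // Compiles to nothing in a release build; no #if/#endif noise at the call site.
        DebugLog.Write("state I only care about while debugging");
    }
}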
This next suggestion is madness, do not do it, but you could...
Surround your personal logging code with comments such as
// ##LOG-START##
logger.print("OOh A log statment");
// ##END-LOG##
And before you commit your code run a shell script that strips out your logs.
I really wouldn't recommend this as it's a rubbish idea, but that never stops anyone.
Alternatively, you could add a comment at the end of every log line and have a script remove those lines...
logger.print("My inane log message"); //##LOG
Personally, I think that using a proper logging framework with a debug log level etc. should be good enough, and you should remove any superfluous logs before you submit your code.
Treat it as first-class code: keep it with the code, behind a proper logging API, with a build option to compile it out or disable it completely.

Resources