WAS server logs vs trace logs - websphere

At this link http://en.wikipedia.org/wiki/Tracing_(software) they point out differences between server logs and trace logs. As a developer, server logs have always sufficed for me, and I have never needed trace logs. What situations require looking into trace logs?

As @bkail mentions, WebSphere Application Server's built-in server tracing is typically for IBM support. It is generally too fine-grained and too tightly coupled to IBM's closed source code to be of use to customers.
However, the trace logs can serve application support as well. If your application uses java.util.logging, its log events will be written to WAS's log files (e.g. SystemOut.log, trace.log). The log messages written to SystemOut.log (Level.CONFIG and higher) are typically intended for system administrators. Log messages written to trace.log (Level.FINE and lower), on the other hand, are typically intended for developers or for troubleshooting and debugging purposes; these messages may be tightly coupled with the code or contain extensive diagnostic information useful in troubleshooting situations. Generally, you only want to enable tracing during troubleshooting or development, as this type of extensive logging can be expensive and can impact the performance of your applications.
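For illustration, a minimal java.util.logging sketch (the class and messages here are hypothetical) showing how the two audiences map to levels:

import java.util.logging.Logger;

public class OrderService {
    private static final Logger logger = Logger.getLogger(OrderService.class.getName());

    public void placeOrder(String orderId) {
        // Administrator-facing event: INFO is above Level.CONFIG, so it lands in SystemOut.log
        logger.info("Accepted order " + orderId);

        // Developer-facing diagnostic: FINE only reaches trace.log once tracing
        // has been enabled for this logger in the WAS configuration
        logger.fine("placeOrder entry; orderId=" + orderId);
    }
}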
As a developer, you should make a first-class distinction between logging intended for your system administrators and logging (tracing) intended for developers or troubleshooting. Logging is a great method for communicating with system administrators and can be invaluable for troubleshooting, but each of these use cases should be handled differently. This is one of the primary reasons that logging APIs (including java.util.logging) provide multiple logging levels. The article you referenced seems to do a great job distinguishing between logging and tracing (which translate to SystemOut.log and trace.log in WAS). IBM's documentation also provides a good overview of the differences.

Trace is primarily used by WebSphere Application Server support at IBM. Customers of that product would very rarely enable trace themselves.

Related

Enable Trace/Debug logs for JMS client of Websphere MQ

How do I enable TRACE or DEBUG level logging in the JMS client of WebSphere IBM MQ?
I have no control over the server, and it doesn't even run in our infrastructure.
If you want instructions for tracing the MQ classes for JMS, try using your favourite search engine. You'll probably come across this Technote:
http://www-01.ibm.com/support/docview.wss?uid=swg21174924#Java
that has details for different product versions and environments.
Alternatively, why not look in the "Troubleshooting and support" section of the product documentation in the IBM Knowledge Center:
https://www.ibm.com/support/knowledgecenter/SSFKSJ_9.0.0/com.ibm.mq.tro.doc/q031860_.htm
You can enable the trace log easily using the command-line argument below:
-Dcom.ibm.msg.client.commonservices.trace.status=ON
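For example, a client launched with tracing enabled might look like this (the main class name here is hypothetical):

java -Dcom.ibm.msg.client.commonservices.trace.status=ON com.example.MyJmsClient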
Refer to the below link for more information on using this argument:
https://www.ibm.com/support/knowledgecenter/SSFKSJ_7.5.0/com.ibm.mq.tro.doc/q131840_.htm#q131840_
For other options of enabling the trace:
https://www.ibm.com/support/knowledgecenter/SSFKSJ_7.5.0/com.ibm.mq.tro.doc/q031860_.htm
Also, bear in mind that trace logs can grow very large over time and affect performance. Try to use them in lower environments (not production) first and investigate your issues there. You may also refer to the link below for more options on controlling the generated trace output:
https://www-01.ibm.com/support/docview.wss?uid=swg27018159&aid=1

Microsoft.Extensions.Logging.Console unable to understand

I am confused about why we have to use this package:
Microsoft.Extensions.Logging.Console
I have three questions:
1. What is logging, what are the advantages of logging, and what are the pitfalls if I don't use it?
2. What is Logging.Console, and why do we have to use it?
3. What is LoggerFactory?
You should always have a logging facility in your application; without log entries, it will be hard to find runtime bugs, as there will be no information on what is happening in your application. It's not required, but in a production environment it is a must-have.
Logging.Console configures the logging facility to print the log entries to the console; there are other provider options, and you can write a custom one as well.
The LoggerFactory is an abstraction that, behind the scenes, redirects your log messages to all installed providers, so you can have as many log outputs as you want by changing only the application startup.
I recommend reading the ASP.NET Core logging fundamentals documentation.

Performance monitoring all layers of a system

I use several load-testing tools (LoadRunner, JMeter, NeoLoad) to performance-test different applications. I'm wondering if it is possible to monitor all layers of an application stack. For example, say I have the following data chain:
Loadbalancer <-x-> Application Server <-x-> RMI <-x-> Java Application <-x-> MQ <-x-> Legacy application <-x-> Database
The points where I have marked an x in the chain are what I am interested in monitoring, for example average response times.
Obviously we could simply create a wrapper on all endpoints to gather the statistics for us, and maybe we could import it into LoadRunner or other load-testing tools and sideline them with the tools' built-in performance statistics, but maybe there are tools/applications which already do this?
If not, how should we proceed in order to gather these statistics?
The standard for this was supposed to be Application Response Measurement (ARM). It was a cross-language set of APIs that did just what you are looking for. The issue is that the products that implement this spec all tend to be big, expensive, "enterprise"-level monitoring tools. Think multi-week installs, consultants, more infrastructure, and lots of buzzwords.
Still, if this is a mission critical app with a mission critical budget, this may be what you need. But you may be able to build your own that does just enough without too much effort. A quick search turns up at least one open source ARM implementation if you still want to use that API.
Another option is to simply have transactions you can run against each tier of the system to check general responsiveness. For example, you can have a static web page on the LB, a no-op tx on the app server, a "hello" servlet on the Java app, put a message directly on the queue, etc. During a performance/load test, these could be hit directly by the load-testing tool, or you could write a wrapper servlet/application call that does this as a single HTTP (RMI?) call. Running these a few times a minute won't add much load to the system, but it should help you pinpoint which tier is slower. The nice thing about this approach is that it also works in production; just watch out for security issues.
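As a sketch, the app-tier "hello" probe could be as small as this (the class name is hypothetical; the probes for the other tiers would follow the same idea):

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Trivial "is this tier responsive?" probe; the load tool records its response time.
public class PingServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/plain");
        resp.getWriter().println("OK " + System.currentTimeMillis());
    }
}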
For single user kind of test, where you know you have problem (e.g. this tx is "slow"), I have also had pretty good luck with network tracing. It's very tedious, but when you aren't sure what tier is slow, starting up a network trace on a few machines and running a single tx usually gives a good idea of what the system is doing.
I have handled this decomposition a number of ways in the past. The first is at a very low level, using protocol-analyzer dump data to find the time points where a conversation leaves tier X and enters tier Y. The second method is through the use of log examination for the various tiers. Something that can make your examination quite useful in this case is a common log server for all of your components (syslog, rsyslog, etc.) and a good log-parsing tool, such as the freely available Microsoft Log Parser. The third method is utilization of the audit trail for an application stored in the database. You may find this when working on enterprise-service-bus-style applications, which have a consumer/producer model and a bus to pass information rather than a direct connection. The audit trails I have seen are typically stored in a database and allow the tracking of an individual transaction through the entire application infrastructure. Your load balancer, as a network device, may be out of the hunt on this one.
Note: if you go the protocol-analyzer or log route, be sure to synchronize all of your source information devices to a common time server. Having one of your collectors (analyzer, app log) off on a timestamp basis can be a real hair-pulling experience when you get into the analysis phase.
As to how you move from your collected data into LoadRunner, that part is very mechanical. The Analysis program supports an interface to import external datapoints. The format is very specific and is documented in both help and the online docs. This import process works very well, as I often have to use it for collection of statistics from hosts which I do not have direct monitoring access to, but which need to be included as a part of the monitored test infrastructure.
James Pulley
Moderator (YahooGroups LoadRunner, Advanced-Loadrunner; GoogleGroups lr-LoadRunner; Linkedin LoadRunner, LoadRunnerByTheHour; SQAForums LoadRunner, WinRunner)

WebLogic Diagnostic Framework (WLDF): Alternatives?

WLDF (WebLogic Diagnostic Framework) allows many performance-related analyses, in particular resource-demand tracking and tracing across classes and methods. In that sense, it is similar to a profiler; however, it works on the server side and is bound to the particular product/vendor.
Are there any other products (maybe even open/free) which offer a similar level of detail? I'm not interested in "conventional" monitoring products such as JMX, VisualVM, Hyperic, etc., but in low-level, detailed tracing and request tracking.
The free version of http://appdynamics.com/ is probably your best bet. It does the low level detailed tracing without the traditional overhead of a profiler (so yes, you can run it in prod).

Logging vs. Debugging

Background: I've inherited a web application that is intended to create on-the-fly connections between local and remote equipment. There have been a tremendous number of moving parts recently: the app itself has changed significantly; the development toolchain was just updated; and both the local and remote equipment have been "modified" to support those changes.
The bright side is that it has a reasonable logging system that will write debug messages to a file, and it will log to both the file and a real-time user screen. I have an opportunity to re-work the entire log/debug mechanism.
Examples:
All messages are time-stamped and prefixed with a severity level.
Logs are for the customer. They record the system's responses to his/her requests.
Any log that identifies a problem also suggests a solution.
Debugs are for developers and Tech Support. They reveal the system internals.
Debugs indicate the function and/or line that generated them.
The customer can adjust the debug level on the fly to set the verbosity.
Question: What best practices have you used as a developer, or seen as a consumer, that generate useful logs and debugs?
Edit: Many helpful suggestions so far, thanks! To clarify: I'm more interested in what to log (content, format, etc.) and the reasons for doing so than in specific tools.
What was it about the best logs you've seen that made them most helpful?
Thanks for your help!
Don't confuse logging, tracing, and error reporting; some people I know do, and it creates one hell of a log file to grep through in order to get the information I want.
If I want to have everything churned out, I separate it into the following:
Tracing -> Dumps every action and step, timestamped, with the input and output data of that stage (the ugliest and largest file)
Logging -> Logs the business process steps only; the client does an enquiry, so log the enquiry criteria and output data, nothing more.
Error Reporting / Debugging -> Exceptions logged detailing where they occurred, timestamped, with input/output data if possible, user information, etc.
That way, if any errors occur and the error/debug log doesn't contain enough information for my liking, I can always do a grep -A 50 -B 50 'timestamp' tracing_file to get more detail.
EDIT:
As has also been said, sticking to standard packages, such as the built-in logging module in Python, is always good. Rolling your own is not a great idea unless the language does not have one in its standard library. I do generally like wrapping the logging in a small function that takes the message and a value determining which logs it goes to, i.e. 1 = tracing, 2 = logging, 4 = debugging, so sending a value of 7 writes to all three, etc.
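A minimal sketch of such a wrapper (the names and level routing here are hypothetical), written in Java on top of java.util.logging:

import java.util.logging.Logger;

public final class LogRouter {
    public static final int TRACING = 1;   // bit 0
    public static final int LOGGING = 2;   // bit 1
    public static final int DEBUGGING = 4; // bit 2

    private static final Logger TRACE = Logger.getLogger("app.trace");
    private static final Logger LOG = Logger.getLogger("app.log");
    private static final Logger DEBUG = Logger.getLogger("app.debug");

    // Writes the message to every destination whose bit is set in targets,
    // so write(7, msg) with 7 = 1|2|4 goes to all three.
    public static void write(int targets, String message) {
        if ((targets & TRACING) != 0)   TRACE.finest(message);
        if ((targets & LOGGING) != 0)   LOG.info(message);
        if ((targets & DEBUGGING) != 0) DEBUG.fine(message);
    }
}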
The absolute most valuable thing done with any logging framework is a "1-click" tool that gathers all logs and mails them to me, even when the application is deployed on a machine belonging to a customer.
And make good choices about what to log, so you can roughly follow the main paths in your application.
As frameworks, I've used the standards (log4net, log4j, log4c++).
Do NOT implement your own logging framework when there already is a good one out of the box. Most people who do just reinvent the wheel.
Some people never use a debugger but log everything. Those are different philosophies; you have to make your own choice. You can find plenty of advice along these lines; note that such advice is not language-specific...
The Coding Horror guy has an interesting post about the logging problem and why abusive logging can be a waste of time in certain conditions.
I simply believe logging is for tracing things that could remain in production. Debug is for development. Maybe that's too simple a way of seeing things, because some people use logs for debugging since they can't stand debuggers. But debugger mode can be a waste of time too: you shouldn't use it as a sort of test case, because it's not written down and will disappear after the debug session.
So I think my opinion about this is:
logging for necessary and useful traces through development and production environments, with development and production levels, using a log framework (log4-family tools)
debugging mode for special, strange cases when things are going out of control
test cases are important and can save time spent in infernal, labyrinthine debugging sessions, used as an anti-regression method; note that most people don't use test cases
Coding Horror said to resist the tendency to log everything. That's right, but I've already seen a huge app that does the exact contrary in a pretty way (and through a database)...
I would just set up your logging system to have multiple logging levels. On the services I write, I have logging/auditing for almost every action, and each is assigned an audit level from 1 to 5; the higher the number, the more audit events you get:
1. The very basic logging: starting, stopping, and restarting
2. Basic logging: processing x number of files, etc.
3. Standard logging: beginning processing, finished processing, etc.
4. Advanced logging: the beginning and ending of every stage in processing
5. Everything: every action taken
You set the audit level in a config file so it can be changed on the fly.
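A minimal sketch of that idea (the file path, property key, and default are hypothetical):

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class AuditConfig {
    private volatile int auditLevel = 3; // default: standard logging

    // Re-read the level from a properties file, e.g. on a timer or an admin command
    public void reload(String path) throws IOException {
        Properties p = new Properties();
        try (FileInputStream in = new FileInputStream(path)) {
            p.load(in);
        }
        auditLevel = Integer.parseInt(p.getProperty("audit.level", "3"));
    }

    // Emit the event only if its level is within the configured verbosity
    public void audit(int level, String message) {
        if (level <= auditLevel) {
            System.out.println("[AUDIT " + level + "] " + message);
        }
    }
}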
Some general rules of thumb I have found to be useful in server-side applications:
requestID - assign a request ID to each incoming (HTTP) request and then log it on every log line, so you can easily grep those logs later by that ID and find all relevant lines. If you think it is very tedious to add that ID to every log statement, at least the Java logging frameworks have made it transparent with the use of a Mapped Diagnostic Context (MDC); see the sketch after this list.
objectID - if your application/service deals with manipulating business objects that have a primary key, then it is useful to also attach that primary key to the diagnostic context. Later, if someone comes with the question "when was this object manipulated?", you can easily grep by the objectID and see all log records related to that object. In this context it is (sometimes) useful to use a Nested Diagnostic Context (NDC) instead of MDC.
when to log? - at a minimum, you should log whenever you cross an important service/component boundary. That way you can later reconstruct the call flow and drill down to the particular codebase that seems to cause the error.
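To illustrate the requestID point, a small SLF4J/MDC sketch (the class and key names are hypothetical):

import java.util.UUID;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class RequestHandler {
    private static final Logger log = LoggerFactory.getLogger(RequestHandler.class);

    public void handle() {
        MDC.put("requestId", UUID.randomUUID().toString()); // tags every log line on this thread
        try {
            log.info("request received");
            // ... business logic; nested calls on this thread see the same MDC value
            log.info("request completed");
        } finally {
            MDC.remove("requestId"); // avoid leaking the ID into pooled threads
        }
    }
}

With Logback, adding %X{requestId} to the encoder pattern prints the ID on every line.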
As I'm a Java developer, I will also give my experience with Java APIs and frameworks.
API
I'd recommend using the Simple Logging Facade for Java (SLF4J) - in my experience, it is the best logging facade:
full-featured: it has not followed the least-common-denominator approach (like commons-logging); instead, it uses a graceful-degradation approach.
has adapters for practically all popular Java logging frameworks (e.g. log4j)
has solutions available on how to redirect all legacy logging APIs (log4j, commons-logging) to SLF4J
Implementation
The best implementation to use with SLF4J is Logback, written by the same person who created the SLF4J API.
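For example, typical SLF4J usage looks like this (the class here is hypothetical); the {} placeholders mean the message is only formatted when the level is actually enabled:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class InventoryService {
    private static final Logger log = LoggerFactory.getLogger(InventoryService.class);

    public void reserve(String sku, int quantity) {
        // Parameterized message: no string concatenation cost if DEBUG is off
        log.debug("reserving {} units of {}", quantity, sku);
    }
}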
Use an existing logging format, such as that used by Apache, and you can then piggyback on the many tools available for analysing the format.
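For reference, a line in Apache's Common Log Format looks like this, and analyzers such as AWStats or GoAccess can read it without custom parsing:

127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326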
