OK, I spent quite some time figuring out how to get the Datadog trace ID into our logs, but I couldn't get it working. To be clear, what I'm looking for is to see trace IDs in log messages, the same way that adding spring-cloud-starter-sleuth to the classpath automatically configures SLF4J/Logback to show trace IDs in log messages.
Where I've started:
We've got a simple Spring Boot web application running as a Docker container on AWS Elastic Beanstalk, whose logs go to CloudWatch, and we read them there.
We have DataDog as a Java agent (thus no dependencies in pom.xml)
We have SLF4J/Logback in our dependencies list.
There are no other related dependencies (like dd-trace-ot or any OpenTracing libs).
What I did so far:
I found on SO that adding opentracing-spring-cloud-starter should add log integration automatically, but I couldn't get it working.
The Datadog website says configuring the log pattern is enough to see the IDs, but in our case it didn't work (is it because our logs aren't JSON?). Adding dd-trace-ot didn't help either.
Notes:
We can't switch to JSON logs.
We can't switch to any other library (e.g. Sleuth).
We can't go away from CloudWatch.
Can someone tell me how exactly I need to configure the application to see trace IDs in log messages? Is there any documentation or samples I can look at?
Do you have the ability to add parameters to the logs you send? According to the documentation, you should be able to inject the trace ID into your logs in a way that Datadog will interpret.
You can also look at a parser to extract the trace ID and span ID from the raw log. This documentation should help you with that.
From the documentation, if you don't have JSON logs, you need to include dd.trace_id and dd.span_id in your formatter:
If your logs are raw formatted, update your formatter to include dd.trace_id and dd.span_id in your logger configuration:
<Pattern>"%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L -
%X{dd.trace_id:-0} %X{ dd.span_id:-0} - %m%n"</Pattern> ```
So if you add %X{dd.trace_id:-0} %X{dd.span_id:-0} to your pattern, it should work (note there must be no space inside the %X{...} braces, or the MDC key won't match).
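If the agent isn't populating the MDC automatically (with the Java agent this normally also requires enabling log injection, e.g. via the dd.logs.injection=true system property or the DD_LOGS_INJECTION=true environment variable, as far as I know), a rough sketch of injecting the IDs manually with the dd-trace-api library looks like this; OrderHandler is just a made-up class for illustration:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

import datadog.trace.api.CorrelationIdentifier;

public class OrderHandler {
    private static final Logger log = LoggerFactory.getLogger(OrderHandler.class);

    public void handle() {
        // A span must already be started and active (the agent starts one for
        // each incoming web request) before these identifiers are meaningful.
        MDC.put("dd.trace_id", String.valueOf(CorrelationIdentifier.getTraceId()));
        MDC.put("dd.span_id", String.valueOf(CorrelationIdentifier.getSpanId()));
        try {
            // %X{dd.trace_id:-0} and %X{dd.span_id:-0} in the pattern now resolve
            log.info("Processing order");
        } finally {
            MDC.remove("dd.trace_id");
            MDC.remove("dd.span_id");
        }
    }
}
```

With automatic injection enabled, the agent should do this put/remove around each traced request for you, so the manual version is only a fallback if that doesn't kick in.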
Related
I use @Slf4j to add logs in all layers of my Spring Boot project.
I want to use some unique ID for all logs inside one request.
For example, generate a UUID in the RestController and automatically add it to every log statement in this thread.
For example
[..... Some UUID ....] "The real logger message"
How to do this?
It needs to be possible to filter all logs of a specific request.
If your application is threaded (as opposed to a reactive application, where everything might happen on the main thread), use the mapped diagnostic context (MDC). The MDC is a thread-bound key-value map where you can put and retrieve data. Baeldung has a tutorial on logging with the MDC using multiple logging frameworks, and there are plenty of examples across the web.
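For the servlet (threaded) case, a minimal sketch of such a filter could look like the following; the RequestIdFilter name and the requestId MDC key are just examples I made up, not anything prescribed (this assumes a javax.servlet-based Spring Boot version; on Spring Boot 3 the imports would be jakarta.servlet):

```java
import java.io.IOException;
import java.util.UUID;

import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.slf4j.MDC;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

@Component
public class RequestIdFilter extends OncePerRequestFilter {

    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain chain) throws ServletException, IOException {
        // Bind a per-request UUID to the current thread so every log statement
        // handling this request can reference it via %X{requestId} in the pattern.
        MDC.put("requestId", UUID.randomUUID().toString());
        try {
            chain.doFilter(request, response);
        } finally {
            // Servlet threads are pooled and reused, so always clean up.
            MDC.remove("requestId");
        }
    }
}
```

Then add %X{requestId} to your Logback pattern, and every line logged while the request is being handled will carry that ID, which gives you the `[..... Some UUID ....] "The real logger message"` output you described.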
If your application is reactive, you may want to check out the Project Reactor reference.
I was trying to use Fluentd's multiline parser to parse a Spring Boot application log before submitting it to Elasticsearch. It worked properly for both stack traces and other log lines when I read the log from a file.
But when I used Fluentd to receive the log directly from the Spring Boot container via the logging driver instead, there was an error parsing the stack-trace logs, while the others were okay.
I found on the Fluentd documentation page that input plugins other than in_tail don't work with multiline, so I'm not sure how I can achieve this. I saw someone mention fluent-plugin-detect-exceptions or fluent-plugin-concat, but I cannot find a good example of how to use them or how to parse the log after concatenating the multiline entries.
We are using Raven with Logback in Java and Sentry for error reporting. We can successfully log errors, but how can we add the request info?
I have not found any example on https://docs.getsentry.com/hosted/clients/java/modules/logback/
You can use MDC or markers in SLF4J; here's a link to an example from Sentry: https://docs.sentry.io/clients/java/modules/logback/. Please check the Mapped Tags section.
You can specify MDC keys to send as tags instead of including them in Additional Data. This allows them to be filtered within Sentry.
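In code, that might look roughly like this; the class and key names are arbitrary, and whatever keys you want turned into Sentry tags have to be listed in the appender's extra-tags option (called extraTags in Raven's Logback appender, if I remember correctly):

```java
import javax.servlet.http.HttpServletRequest;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class CheckoutErrorReporter {
    private static final Logger log = LoggerFactory.getLogger(CheckoutErrorReporter.class);

    public void report(HttpServletRequest request, Exception e) {
        // MDC keys listed in the appender's extra-tags option become Sentry tags
        // (filterable); any other MDC entries show up under Additional Data.
        MDC.put("requestUrl", request.getRequestURI());
        MDC.put("httpMethod", request.getMethod());
        try {
            log.error("Checkout failed", e);
        } finally {
            MDC.remove("requestUrl");
            MDC.remove("httpMethod");
        }
    }
}
```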
I cannot control logging levels for my code in Websphere Liberty Profile server.
I have configured the server.xml on the server not to log hibernate and spring, since my logs will get flooded with activity from those two frameworks. I commonly do this using log4j and it works fine in standalone WAS.
<logging consoleLogLevel="INFO" copySystemStreams="true" traceFormat="ENHANCED" traceSpecification="org.springframework.*=off:com.ibm.ws.*=off:org.hibernate*=off"/>
In Liberty this does not work.
I get the following log when liberty updates the configuration (when I save server.xml with the changes):
[INFO ] TRAS0040I: The configured trace state included the following specifications that do not match any loggers currently registered in the server: org.hibernate*=off:org.springframework.*=off
Basically this message applies to any of my code and any third party code (Spring, Hibernate, etc).
However the traceSpecification levels work fine for the IBM classes, and I'm able to specify *=off, which effectively turns off all logging.
Has anyone experienced this?
IBM's documentation for TRAS0040I seems simple enough, but I can't seem to figure out why my loggers are not getting registered with the server.
Liberty doesn't have rich control over logging. You should understand the difference between "logging" and "tracing". Check the description of the console.log, messages.log, and trace.log files at the beginning of: http://www.ibm.com/support/knowledgecenter/SSEQTP_8.5.5/com.ibm.websphere.wlp.doc/ae/rwlp_logging.html
Your traceSpecification configuration actually does nothing here: the Spring and Hibernate output goes to the JVM logs, not to trace, so the trace configuration doesn't affect it.
All you can configure in Liberty for JVM logs is consoleLogLevel (INFO, AUDIT, WARNING, ERROR, and OFF).
If you want to configure log levels for specific components in Liberty, you should use, for example, Log4j with its own configuration, as sketched below.
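As a rough illustration (assuming Log4j 1.x, since that's what you're already using), silencing the noisy framework loggers can be done in log4j.properties or programmatically at startup; the Log4jLevels class here is just a made-up holder for the calls:

```java
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public final class Log4jLevels {

    private Log4jLevels() {
    }

    // Equivalent to log4j.logger.org.hibernate=OFF and
    // log4j.logger.org.springframework=WARN in log4j.properties;
    // call once during application startup.
    public static void silenceFrameworks() {
        Logger.getLogger("org.hibernate").setLevel(Level.OFF);
        Logger.getLogger("org.springframework").setLevel(Level.WARN);
    }
}
```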
I'm confused as to how the errors are logged without me explicitly catching them and logging the error. All I've done is put a log4j.xml file in my project defining appenders, and now the logs capture and record everything from the frameworks.
If, say, I run a query in Hibernate and the query fails, or I try to open a file that doesn't exist, or I get a NullPointerException, and the log4j.xml file defines a log file with the error level set correctly, will the error be captured there?
How does my Spring web app capture errors that I didn't catch and log? Is this a result of Apache Commons Logging?
Or is this some magic that Log4j knows how to do, such as capturing the stream to the console?
Any info appreciated.
From the official Spring documentation:
The nice thing about commons-logging is that you don't need anything else to make your application work. It has a runtime discovery algorithm that looks for other logging frameworks in well known places on the classpath and uses one that it thinks is appropriate (or you can tell it which one if you need to). If nothing else is available you get pretty nice looking logs just from the JDK (java.util.logging or JUL for short). You should find that your Spring application works and logs happily to the console out of the box in most situations, and that's important.
To make Log4j work with the default JCL dependency (commons-logging), all you need to do is put Log4j on the classpath and provide it with a configuration file (log4j.properties or log4j.xml in the root of the classpath).
Take a look at the complete explanation: http://static.springsource.org/spring/docs/3.0.x/reference/overview.html#d0e743
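To make that concrete: the errors you see are logged by catch blocks inside the frameworks themselves, which write through the commons-logging API; once Log4j and a log4j.xml are on the classpath, JCL routes those calls to your appenders. Here is a hedged sketch of the pattern (PaymentService is an invented example, not actual framework code):

```java
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class PaymentService {
    // JCL discovers Log4j at runtime if it is on the classpath;
    // otherwise it falls back to java.util.logging.
    private static final Log log = LogFactory.getLog(PaymentService.class);

    public void charge() {
        try {
            // ... call Hibernate, open files, etc. ...
            throw new IllegalStateException("simulated failure");
        } catch (RuntimeException e) {
            // This is the kind of catch-and-log block Spring and Hibernate
            // contain internally; the output lands in whatever appenders
            // your log4j.xml defines.
            log.error("Payment failed", e);
            throw e;
        }
    }
}
```

So nothing is intercepting the console by magic: the frameworks already log their own exceptions, and your log4j.xml simply tells Log4j where to write them.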