How did Spring Cloud Sleuth add tracing information to logback log lines?

I have web application based on Spring Boot and it uses logback for logging.
I also inherit some logback defaults from spring boot using:
<include resource="org/springframework/boot/logging/logback/base.xml"/>
I want to start logging tracing information, so I added:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
Sleuth adds tracing information to log lines, but I can't find any %X or %mdc in patterns: https://github.com/spring-projects/spring-boot/blob/2.3.x/spring-boot-project/spring-boot/src/main/resources/org/springframework/boot/logging/logback/defaults.xml
How does Sleuth add tracing information into log lines?
I use the spring-cloud-starter-parent Hoxton.SR9 parent, which brings in Spring Boot 2.3.5.RELEASE and spring-cloud-starter-sleuth 2.2.6.RELEASE.

(tl;dr at the bottom)
From the question I suppose you have already figured out that the traceId and spanId are placed into the MDC.
If you take a look at the log integration section of the Sleuth docs, you will see that the tracing info in the example sits between the log level (ERROR) and the PID (97192). If you try to match this with the logback config, you will see that there is nothing between the log level and the PID: ${LOG_LEVEL_PATTERN:-%5p} ${PID:- }, so how the tracing information gets there is a valid question.
If you take another look at the docs, they say this:
This log configuration was automatically setup by Sleuth. You can disable it by disabling Sleuth via spring.sleuth.enabled=false property or putting your own logging.pattern.level property.
This still does not explicitly explain the mechanism, but it gives you a huge hint:
putting your own logging.pattern.level property
Based on this, you could think that there is nothing between the log level and the PID; Sleuth simply overrides the log level pattern and places the tracing information into it. And if you search the code for the property that the docs mention, you will find out that this is exactly what happens:
TL;DR
Sleuth overrides the log level pattern and adds tracing info into it:
map.put("logging.pattern.level", "%5p [${spring.zipkin.service.name:" + "${spring.application.name:}},%X{traceId:-},%X{spanId:-}]");
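The same override can be reproduced (or customized) by hand. A minimal sketch, mirroring the default Sleuth contributes: set the property yourself in application.properties, and your value wins, because Sleuth only adds its entry to the low-precedence defaultProperties source.

```
# Assumption: this mirrors the default Sleuth contributes; adjust to taste.
logging.pattern.level=%5p [${spring.zipkin.service.name:${spring.application.name:}},%X{traceId:-},%X{spanId:-}]
```

This is also exactly what the docs mean by "putting your own logging.pattern.level property" to disable the behavior.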

To bring this back under Spring Boot 3.0, where Sleuth is no longer provided, the TraceEnvironmentPostProcessor has to be copied over, along with an entry in META-INF/spring.factories.
Here's the code, modified slightly from the original to make it pass SonarLint.
import java.util.HashMap;
import java.util.Map;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.env.EnvironmentPostProcessor;
import org.springframework.core.env.ConfigurableEnvironment;
import org.springframework.core.env.MapPropertySource;

class TraceEnvironmentPostProcessor implements EnvironmentPostProcessor {

    private static final String DEFAULT_PROPERTIES_SOURCE_NAME = "defaultProperties";

    @Override
    public void postProcessEnvironment(
        final ConfigurableEnvironment environment, final SpringApplication application) {
        final Map<String, Object> map = new HashMap<>();
        final boolean sleuthEnabled =
            environment.getProperty("spring.sleuth.enabled", Boolean.class, true);
        final boolean sleuthDefaultLoggingPatternEnabled =
            environment.getProperty(
                "spring.sleuth.default-logging-pattern-enabled", Boolean.class, true);
        if (sleuthEnabled && sleuthDefaultLoggingPatternEnabled) {
            map.put(
                "logging.pattern.level",
                "%5p [${spring.zipkin.service.name:${spring.application.name:}},%X{traceId:-},%X{spanId:-}]");
            String neverRefreshables =
                environment.getProperty(
                    "spring.cloud.refresh.never-refreshable",
                    "com.zaxxer.hikari.HikariDataSource");
            map.put(
                "spring.cloud.refresh.never-refreshable",
                neverRefreshables
                    + ",org.springframework.cloud.sleuth.instrument.jdbc.DataSourceWrapper");
        }
        final var propertySources = environment.getPropertySources();
        if (propertySources.contains(DEFAULT_PROPERTIES_SOURCE_NAME)) {
            final var source = propertySources.get(DEFAULT_PROPERTIES_SOURCE_NAME);
            if (source instanceof MapPropertySource target) {
                map.entrySet().stream()
                    .filter(e -> !(target.containsProperty(e.getKey())))
                    .forEach(e -> target.getSource().put(e.getKey(), e.getValue()));
            }
        } else {
            propertySources.addLast(new MapPropertySource(DEFAULT_PROPERTIES_SOURCE_NAME, map));
        }
    }
}
And the corresponding entry in META-INF/spring.factories:
org.springframework.boot.env.EnvironmentPostProcessor=\
net.trajano.swarm.logging.autoconfig.TraceEnvironmentPostProcessor


How to add Log4j2 JDBC Appender programmatically to an existing configuration in Spring Boot?

A short rant at the beginning, just because it has to be said:
I'm getting tired of reading the terrible documentation of Log4j2 for the umpteenth time and still not finding any solutions for my problems. The documentation is completely outdated, sample code is torn uselessly out of a context that is needed but not explained further, and the explanations are consistently insufficient. It shouldn't be that only Log4j2 developers can use Log4j2 in depth. Frameworks should make the work of other developers easier, which is definitely not the case here. Period and thanks.
Now to my actual problem:
I have a Spring Boot application that is primarily configured with yaml files. The DataSource however is set programmatically so that we have a handle to its bean. Log4j2 is initially set up using yaml configuration as well.
log4j2-spring.yaml:
Configuration:
  name: Default
  status: warn
  Appenders:
    Console:
      name: Console
      target: SYSTEM_OUT
      PatternLayout:
        pattern: "%d{yyyy-MM-dd HH:mm:ss} %-5level [%t] %c: %msg%n"
  Loggers:
    Root:
      level: warn
      AppenderRef:
        - ref: Console
    Logger:
      - name: com.example
        level: debug
        additivity: false
        AppenderRef:
          - ref: Console
What I want to do now is to extend this initial configuration programmatically with a JDBC Appender using the already existing connection-pool. According to the documentation, the following should be done:
The recommended approach for customizing a configuration is to extend one of the standard Configuration classes, override the setup method to first do super.setup() and then add the custom Appenders, Filters and LoggerConfigs to the configuration before it is registered for use.
So here is my custom Log4j2Configuration which extends YamlConfiguration:
public class Log4j2Configuration extends YamlConfiguration {

    /* private final Log4j2ConnectionSource connectionSource; */ // <-- needs to get somehow injected

    public Log4j2Configuration(LoggerContext loggerContext, ConfigurationSource configSource) {
        super(loggerContext, configSource);
    }

    @Override
    public void setup() {
        super.setup();
    }

    @Override
    protected void doConfigure() {
        super.doConfigure();
        LoggerContext context = (LoggerContext) LogManager.getContext(false);
        Configuration config = context.getConfiguration();
        ColumnConfig[] columns = new ColumnConfig[]{
            //...
        };
        Appender jdbcAppender = JdbcAppender.newBuilder()
            .setName("DataBase")
            .setTableName("application_log")
            // .setConnectionSource(connectionSource)
            .setColumnConfigs(columns)
            .build();
        jdbcAppender.start();
        config.addAppender(jdbcAppender);
        AppenderRef ref = AppenderRef.createAppenderRef("DataBase", null, null);
        AppenderRef[] refs = new AppenderRef[]{ref};
        /* Deprecated, but still in the Log4j2 documentation */
        LoggerConfig loggerConfig = LoggerConfig.createLogger(
            false,
            Level.TRACE,
            "com.example",
            "true",
            refs,
            null,
            config,
            null);
        loggerConfig.addAppender(jdbcAppender, null, null);
        config.addLogger("com.example", loggerConfig);
        context.updateLoggers();
    }
}
The ConnectionSource exists as an implementation of AbstractConnectionSource in the Spring context and still needs to be injected into the Log4j2Configuration class. Once I know how the configuration process works I can try to find a solution for this.
Log4j2ConnectionSource:
@Configuration
public class Log4j2ConnectionSource extends AbstractConnectionSource {

    private final DataSource dataSource;

    public Log4j2ConnectionSource(@Autowired @NotNull DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public Connection getConnection() throws SQLException {
        return dataSource.getConnection();
    }
}
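On the open injection question: Log4j2 instantiates the configuration outside the Spring context, so constructor injection cannot reach it. One common workaround is a static holder that bridges the two worlds. This is only a sketch; ConnectionSourceHolder is a hypothetical name, not part of Log4j2 or Spring.

```java
import java.util.Objects;
import java.util.concurrent.atomic.AtomicReference;
import javax.sql.DataSource;

// Hypothetical bridge: Spring publishes the DataSource here during startup,
// and Log4j2-side code pulls it out when it needs a JDBC connection.
final class ConnectionSourceHolder {

    private static final AtomicReference<DataSource> HOLDER = new AtomicReference<>();

    private ConnectionSourceHolder() {
    }

    // Called from Spring, e.g. from the configuration class that creates the DataSource.
    static void set(DataSource dataSource) {
        HOLDER.set(Objects.requireNonNull(dataSource, "dataSource"));
    }

    // Called from the Log4j2 ConnectionSource / custom configuration.
    static DataSource get() {
        DataSource ds = HOLDER.get();
        if (ds == null) {
            throw new IllegalStateException("DataSource not published yet");
        }
        return ds;
    }
}
```

The Log4j2 side would then build its ConnectionSource around ConnectionSourceHolder.get().getConnection(). The obvious caveat: any log event that hits the JDBC appender before Spring has called set(...) will fail, so the appender should only be added once the DataSource bean exists.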
And finally the ConfigurationFactory, as described here in the documentation (it is interesting that the method getConfiguration calls a constructor, new MyXMLConfiguration(source, configFile), that doesn't exist. Is witchcraft at play here?).
Log4j2ConfigurationFactory:
@Order(50)
@Plugin(name = "Log4j2ConfigurationFactory", category = ConfigurationFactory.CATEGORY)
public class Log4j2ConfigurationFactory extends YamlConfigurationFactory {

    @Override
    public Configuration getConfiguration(LoggerContext context, ConfigurationSource configSource) {
        return new Log4j2Configuration(context, configSource);
    }

    @Override
    public String[] getSupportedTypes() {
        return new String[]{".yml", "*"};
    }
}
Now that the setup is more or less done, the running Log4j2 configuration somehow needs to be updated, so somebody should call doConfigure() within Log4j2Configuration. Log4j2 doesn't seem to do anything here on its own, and Spring Boot doesn't do anything either. Unfortunately, I have no idea what to do at all.
Therefore my request:
Can anyone please explain to me how to get Log4j2 to update its configuration?
Many thanks for any help.

SpringBoot: How to update Logback file-pattern from a library (custom spring boot starter)

I would like to add TraceId to all log lines. I do that easily by:
Add traceId to MDC
MDC.put("TRACE_ID", sessionAware.getTraceId()+": ");
Update file-pattern in my application.properties (by adding: "%X{TRACE_ID}"):
logging.pattern.file=-%d{${LOG_DATEFORMAT_PATTERN:-yyyy-MM-dd HH:mm:ss.SSS}} ${LOG_LEVEL_PATTERN:-%5p} ${PID:- } --- [%t] %-40.40logger{39} : %X{TRACE_ID}%m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}
But I would like my CustomSpringBootStarter to set the file pattern instead. However, updating the property "logging.pattern.file" from my CustomSpringBootStarter doesn't take effect. Does anyone know the solution to this problem?
I have tried to set the "logging.pattern.file" property in the CustomSpringBootStarter's application.properties but it does not work.
I found a solution to the problem, very much inspired by this question.
I created the below EnvironmentPostProcessor in my CustomSpringBootStarter:
public class LifeLogBackConfiguration implements EnvironmentPostProcessor {

    @Override
    public void postProcessEnvironment(ConfigurableEnvironment environment, SpringApplication application) {
        Map<String, String> properties = Collections.unmodifiableMap(Map.of(
            "logging.pattern.file", LogConstants.FILE_LOG_PATTERN_LIFE,
            "logging.pattern.console", LogConstants.CONSOLE_LOG_PATTERN_LIFE));
        PropertySource propertySource = new OriginTrackedMapPropertySource("LogPatternsLife", properties, true);
        environment.getPropertySources().addLast(propertySource);
    }
}
and registered it in resources/META-INF/spring.factories:
org.springframework.boot.env.EnvironmentPostProcessor=dk.topdanmark.life.lifespringbootstarter.log.LifeLogBackConfiguration
That solved the problem for me.

Spring sleuth Baggage key not getting propagated

I have a filter (OncePerRequestFilter) which intercepts incoming requests and logs the traceId, spanId, etc., which works well.
This filter lives in a common module that is included in the other projects, so that the Spring Sleuth dependency doesn't have to be added to each of my micro-services; I created it as a library because any change to the library is then common to all modules.
Now I have to add a new propagation key which needs to be propagated to all services via HTTP headers, like the traceId and spanId. For that I've extracted the current span from HttpTracing and added a baggage key to it (as shown below):
Span span = httpTracing.tracing().tracer().currentSpan();
String corelationId =
    StringUtils.isEmpty(request.getHeader(CORELATION_ID))
        ? "n/a"
        : request.getHeader(CORELATION_ID);
ExtraFieldPropagation.set(CUSTOM_TRACE_ID_MDC_KEY_NAME, corelationId);
span.annotate("baggage_set");
span.tag(CUSTOM_TRACE_ID_MDC_KEY_NAME, corelationId);
I've added propagation-keys and whitelisted-mdc-keys to the application.yml file (within my library) like below:
spring:
  sleuth:
    propagation-keys:
      - x-corelationId
    log:
      slf4j:
        whitelisted-mdc-keys:
          - x-corelationId
After making this change in the filter, the corelationId is not available when I make an HTTP call to another service within the same app; basically the keys are not getting propagated.
In your library you can implement an ApplicationEnvironmentPreparedEvent listener and add the configuration you need there.
Ex:
@Component
public class CustomApplicationListener implements ApplicationListener<ApplicationEvent> {

    private static final Logger log = LoggerFactory.getLogger(CustomApplicationListener.class);

    public void onApplicationEvent(ApplicationEvent event) {
        if (event instanceof ApplicationEnvironmentPreparedEvent) {
            log.debug("Custom ApplicationEnvironmentPreparedEvent Listener");
            ApplicationEnvironmentPreparedEvent envEvent = (ApplicationEnvironmentPreparedEvent) event;
            ConfigurableEnvironment env = envEvent.getEnvironment();
            Properties props = new Properties();
            props.put("spring.sleuth.propagation-keys", "x-corelationId");
            props.put("spring.sleuth.log.slf4j.whitelisted-mdc-keys", "x-corelationId");
            env.getPropertySources().addFirst(new PropertiesPropertySource("custom", props));
        }
    }
}
Then in your microservice you will register this custom listener
public static void main(String[] args) {
    ConfigurableApplicationContext context = new SpringApplicationBuilder(MyApplication.class)
        .listeners(new CustomApplicationListener()).run();
}
I've gone through the documentation and it seems like I need to add spring.sleuth.propagation-keys and whitelist them using spring.sleuth.log.slf4j.whitelisted-mdc-keys.
Yes, you need to do this.
Is there another way to add these properties in the common module so that I do not need to include them in each and every micro-service?
Yes, you can use Spring Cloud Config server and a properties file called application.yml / application.properties that would set those properties for all microservices
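A hedged alternative worth noting (a sketch; the package name is hypothetical): a Spring Boot library can register such a listener itself via its own META-INF/spring.factories, just like the EnvironmentPostProcessor registration shown earlier, so consuming services don't have to touch their main method:

```
org.springframework.context.ApplicationListener=\
com.mycompany.common.CustomApplicationListener
```

Listeners declared this way are picked up by SpringApplication early enough to observe ApplicationEnvironmentPreparedEvent.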
The answer from Mahmoud works great when you want to register the whitelisted-mdc-keys programmatically.
An extra tip for when you also need these properties in a test: you can find the answer in this post: How to register a ApplicationEnvironmentPreparedEvent in Spring Test

Spring Boot auto-configured metrics not arriving to Librato

I am using Spring Boot with auto-configuration enabled (@EnableAutoConfiguration) and trying to send my Spring MVC metrics to Librato. Right now only my own created metrics arrive in Librato; the auto-configured metrics (CPU, file descriptors, etc.) are not sent to my reporter.
If I access a metric endpoint I can see the info generated there, for instance http://localhost:8081/actuator/metrics/system.cpu.count
I based my code on this post for ConsoleReporter, so I have this:
public static MeterRegistry libratoRegistry() {
    MetricRegistry dropwizardRegistry = new MetricRegistry();
    String libratoApiAccount = "xx";
    String libratoApiKey = "yy";
    String libratoPrefix = "zz";
    LibratoReporter reporter = Librato
        .reporter(dropwizardRegistry, libratoApiAccount, libratoApiKey)
        .setPrefix(libratoPrefix)
        .build();
    reporter.start(60, TimeUnit.SECONDS);
    DropwizardConfig dropwizardConfig = new DropwizardConfig() {
        @Override
        public String prefix() {
            return "myprefix";
        }

        @Override
        public String get(String key) {
            return null;
        }
    };
    return new DropwizardMeterRegistry(dropwizardConfig, dropwizardRegistry, HierarchicalNameMapper.DEFAULT, Clock.SYSTEM) {
        @Override
        protected Double nullGaugeValue() {
            return null;
        }
    };
}
and at my main function I added Metrics.addRegistry(SpringReporter.libratoRegistry());
For the Librato library I am using compile("com.librato.metrics:metrics-librato:5.1.2") in my build.gradle. Documentation here. I used this library before without any problem.
If I use the ConsoleReporter as in this post, the same thing happens: only my own created metrics are printed to the console.
Any thoughts on what I am doing wrong, or what I am missing?
Also, I enabled debug mode to see the "CONDITIONS EVALUATION REPORT" printed in the console but not sure what to look for in there.
Try to make your MeterRegistry for the Librato reporter a Spring @Bean and let me know whether it works.
UPDATED:
I tested with the ConsoleReporter you mentioned and confirmed it's working with a sample. Note that the sample is on the console-reporter branch, not the master branch. See the sample for details.

Sending System Metrics to Graphite with Spring-Boot

Spring-Boot actuator exposes many useful metrics at /metrics such as uptime, memory usage, GC count.
Only a subset of these are sent to Graphite when using the Dropwizard Metrics integration; specifically, only the counters and gauges.
Is there any way to get these other metrics to be published to graphite?
The documentation suggests that it should be possible:
Users of the Dropwizard ‘Metrics’ library will find that Spring Boot metrics are automatically published to com.codahale.metrics.MetricRegistry
System metrics created by Spring Boot are not reported automatically, because the MetricRegistry does not know anything about those metrics.
You should register those metrics manually when your application boots up.
@Autowired
private SystemPublicMetrics systemPublicMetrics;

private void registerSystemMetrics(MetricRegistry metricRegistry) {
    systemPublicMetrics.metrics().forEach(m -> {
        Gauge<Long> metricGauge = () -> m.getValue().longValue();
        metricRegistry.register(m.getName(), metricGauge);
    });
}
I have used Gauge here, but not all of the system metrics should be added as gauges; e.g. a Counter should be used to capture count values.
If you don't want to use Spring Boot, you can include metrics-jvm to capture JVM-level metrics out of the box.
Here's a solution that does update the DropWizard metrics when the Spring metrics change. It also does that without turning @EnableScheduling on:
@EnableMetrics
@Configuration
public class ConsoleMetricsConfig extends MetricsConfigurerAdapter {

    // Reporting interval; referenced but not defined in the original snippet.
    private static final int intervalSecs = 60;

    @Autowired
    private SystemPublicMetrics systemPublicMetrics;

    @Override
    public void configureReporters(MetricRegistry metricRegistry) {
        metricRegistry.register("jvm.memory", new MemoryUsageGaugeSet());
        metricRegistry.register("jvm.thread-states", new ThreadStatesGaugeSet());
        metricRegistry.register("jvm.garbage-collector", new GarbageCollectorMetricSet());
        metricRegistry.register("spring.boot", (MetricSet) () -> {
            final Map<String, Metric> gauges = new HashMap<String, Metric>();
            for (final org.springframework.boot.actuate.metrics.Metric<?> springMetric :
                    systemPublicMetrics.metrics()) {
                gauges.put(springMetric.getName(), (Gauge<Object>) () -> {
                    return systemPublicMetrics.metrics().stream()
                        .filter(m -> StringUtils.equals(m.getName(), springMetric.getName()))
                        .map(m -> m.getValue())
                        .findFirst()
                        .orElse(null);
                });
            }
            return Collections.unmodifiableMap(gauges);
        });
        registerReporter(ConsoleReporter
            .forRegistry(metricRegistry)
            .convertRatesTo(TimeUnit.SECONDS)
            .convertDurationsTo(TimeUnit.MILLISECONDS)
            .build())
            .start(intervalSecs, TimeUnit.SECONDS);
    }
}
It uses the com.ryantenney.metrics library for enabling additional Spring annotation support and DropWizard reporters:
<dependency>
    <groupId>com.ryantenney.metrics</groupId>
    <artifactId>metrics-spring</artifactId>
    <version>3.1.3</version>
</dependency>
But it is actually not necessary in this particular case.
