I would like to reduce the size of certain logs using Logback (I attempted to use PatternLayout with a pattern of %.100000msg to cap messages at one hundred thousand characters, but had no luck). The large logs (over a million characters, caused by a few REST GET calls) are causing Elastic to slow down when searching for certain information.
What would be the best practice to overcome this?
<include resource="org/springframework/boot/logging/logback/base.xml"/>
<appender name="Console" class="ch.qos.logback.core.ConsoleAppender">
    <layout class="ch.qos.logback.classic.PatternLayout">
        <Pattern>%.30msg</Pattern>
    </layout>
</appender>
<logger name="" level="INFO"/>
I solved this by referring to this documentation to create a custom converter. The property is an extra: it was added to allow the maximum length to be configured from the XML file.
<property scope="context" name="max-message-length" value="100000"/>
<conversionRule conversionWord="boundedMsg"
                converterClass="za.co.ksdc.qadee.config.TruncationCustomLogConverter" />
The following Java class was implemented to truncate the log message:
import ch.qos.logback.classic.pattern.ClassicConverter;
import ch.qos.logback.classic.spi.ILoggingEvent;

public class TruncationCustomLogConverter extends ClassicConverter {

    @Override
    public String convert(ILoggingEvent event) {
        // Read the limit from the context property declared in the XML file.
        int maxMessageLength = Integer.parseInt(getContext().getProperty("max-message-length"));
        String formattedMessage = event.getFormattedMessage();
        if (formattedMessage == null || formattedMessage.length() < maxMessageLength) {
            return formattedMessage;
        }
        return new StringBuilder(maxMessageLength)
                .append(formattedMessage, 0, maxMessageLength)
                .toString();
    }
}
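To use the converter, reference its conversion word in place of %msg in the appender pattern. A minimal sketch (the surrounding pattern is illustrative, not from the original answer):
<appender name="Console" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
        <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %boundedMsg%n</pattern>
    </encoder>
</appender>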
I have created a test project with Spring Boot to learn about using the logback-spring.xml file. I want to use Spring's default setting for writing to the console, so I am using the following line:
<include resource="org/springframework/boot/logging/logback/base.xml" />
And I also want to log to a file on a rolling basis and keep a maximum number of log files on the system. Writing to the console works as expected; however, no logs are written to the log file. The folder named "logs" gets created, and the file "logfile.log" also gets created, but nothing gets logged to it.
Below is the full logback-spring.xml file:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/base.xml" />
    <property name="LOG_PATH" value="logs" />
    <property name="LOG_ARCHIVE" value="${LOG_PATH}/archive" />
    <appender name="File-Appender" class="ch.qos.logback.core.FileAppender">
        <file>${LOG_PATH}/logfile.log</file>
        <encoder>
            <pattern>%d{dd-MM-yyyy HH:mm:ss.SSS} %magenta([%thread]) %highlight(%-5level) %logger{36}.%M - %msg%n</pattern>
            <outputPatternAsHeader>false</outputPatternAsHeader>
        </encoder>
    </appender>
    <logger name="test" level="DEBUG" />
</configuration>
And below is the TestApplication.java file, which is part of the test package:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class TestApplication {

    private static final Logger logger = LoggerFactory.getLogger(TestApplication.class);

    public static void main(String[] args) {
        SpringApplication.run(TestApplication.class, args);
        logger.trace("Trace Message!");
        logger.debug("Debug Message!");
        logger.info("Info Message!");
        logger.warn("Warn Message!");
        logger.error("Error Message!");
    }
}
Why is nothing being logged to the file?
I think there are a few issues.
First, remove the line <logger name="test" level="DEBUG" />. This sets up a logger for classes under the test package, but it defines no appender, so nothing is logged for it.
Once that's gone, add
<root level="DEBUG">
<appender-ref ref="File-Appender"/>
</root>
This configures the root logger (from which all loggers inherit) at DEBUG level and sends all logs to File-Appender.
Also, I cannot recall whether Logback creates missing directories, so you might need to ensure the logs directory exists before starting the application.
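Putting it together, a minimal corrected logback-spring.xml might look like this (a sketch based on the question's file; the unused LOG_ARCHIVE property is dropped and the pattern is kept as-is):
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/base.xml" />
    <property name="LOG_PATH" value="logs" />
    <appender name="File-Appender" class="ch.qos.logback.core.FileAppender">
        <file>${LOG_PATH}/logfile.log</file>
        <encoder>
            <pattern>%d{dd-MM-yyyy HH:mm:ss.SSS} %magenta([%thread]) %highlight(%-5level) %logger{36}.%M - %msg%n</pattern>
        </encoder>
    </appender>
    <root level="DEBUG">
        <appender-ref ref="File-Appender"/>
    </root>
</configuration>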
I have this configuration in my logback.xml in a Spring Web application (not Spring Boot):
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <appender name="Console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
            <layout class="ch.qos.logback.contrib.json.classic.JsonLayout">
                <timestampFormat>yyyy-MM-dd'T'HH:mm:ss.SSSX</timestampFormat>
                <timestampFormatTimezoneId>Etc/UTC</timestampFormatTimezoneId>
                <jsonFormatter class="ch.qos.logback.contrib.jackson.JacksonJsonFormatter">
                    <prettyPrint>true</prettyPrint>
                </jsonFormatter>
            </layout>
            <customFields>{"appname":"foobar"}</customFields>
        </encoder>
    </appender>

    <!-- LOG everything at INFO level -->
    <root level="INFO">
        <appender-ref ref="Console" />
    </root>
</configuration>
The JSON layout works fine, but custom fields such as "appname": "foobar" are not printed:
{
  "timestamp" : "2020-06-10T14:55:25.534Z",
  "level" : "INFO",
  "thread" : "Catalina-utility-1",
  "logger" : "org.springframework.web.servlet.DispatcherServlet",
  "message" : "FrameworkServlet 'dispatcher': initialization completed in 72 ms",
  "context" : "default"
}
What am I doing wrong?
SOLUTION
I was using the wrong libraries for my needs:
logback-jackson
logback-json-classic
Because I need to process logs through Logstash, I corrected my configuration like this:
pom.xml
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.2.3</version>
</dependency>
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>6.4</version>
</dependency>
logback.xml
<appender name="Console" class="ch.qos.logback.core.ConsoleAppender">
    <encoder class="net.logstash.logback.encoder.LogstashEncoder">
        <customFields>{"customer":"X", "appname":"Y", "environment":"dev"}</customFields>
    </encoder>
</appender>
and now it works fine.
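With the LogstashEncoder, the custom fields end up at the root of each JSON event; the output looks roughly like this (timestamps and values are illustrative):
{
  "@timestamp" : "2020-06-10T14:55:25.534Z",
  "@version" : "1",
  "message" : "FrameworkServlet 'dispatcher': initialization completed in 72 ms",
  "logger_name" : "org.springframework.web.servlet.DispatcherServlet",
  "thread_name" : "Catalina-utility-1",
  "level" : "INFO",
  "level_value" : 20000,
  "customer" : "X",
  "appname" : "Y",
  "environment" : "dev"
}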
I just stumbled upon this question because I had the same problem, and I found a solution that works with logback-jackson and logback-json-classic.
Option 1: Per-Thread via Mapped Diagnostic Context (MDC)
SLF4J's Mapped Diagnostic Context is a per-thread key-value store that we can use to add custom structured data to the log output.
MDC.put("customKey", "customValue");
Logback's JsonLayout will automatically print this value under a special mdc JSON object without any further configuration.
{ [...], "mdc" : {"customKey" : "customValue"} }
Note that the MDC is constructed per thread, and if it is empty, no mdc field is printed to the log output.
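For instance, a minimal sketch of how the MDC might be populated around a unit of work (class and key names are illustrative):
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class MdcExample {

    private static final Logger logger = LoggerFactory.getLogger(MdcExample.class);

    public void handle(String requestId) {
        MDC.put("customKey", requestId);
        try {
            // Anything logged here carries the MDC entry into the JSON "mdc" object.
            logger.info("Processing request");
        } finally {
            // Always clean up: the MDC is per-thread, and threads are usually pooled.
            MDC.remove("customKey");
        }
    }
}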
Option 2: Global (for all threads)
If you want custom fields to appear at the root of the JSON output, you need to create a custom but simple layout class that extends JsonLayout. JsonLayout provides an addCustomDataToJsonMap method we can override.
package com.mypackage;

import java.util.Map;

import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.contrib.json.classic.JsonLayout;

public class CustomJsonLayout extends JsonLayout {

    @Override
    protected void addCustomDataToJsonMap(Map<String, Object> map, ILoggingEvent event) {
        map.put("customKey", "customValue");
    }
}
Now, you just need to tell Logback to use CustomJsonLayout instead of JsonLayout in your logback.xml file and keep the rest the same.
<layout class="com.mypackage.CustomJsonLayout">
...
</layout>
Now, any log message will have the following output:
{ ..., "customKey": "customValue"}
I am using Log4j2 in a Spring Boot application for asynchronous logging.
Here is my config, log4j2-dev.xml:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN" monitorInterval="30">
    <Properties>
        <Property name="LOG_PATTERN">%d{yyyy-MM-dd HH:mm:ss.SSS} %highlight{%5p}--[%T-%-15.15t] [%-20X{serviceMessageId}]%-40.40c{1.} :%m%n%ex</Property>
    </Properties>
    <Appenders>
        <Console name="ConsoleAppender" target="SYSTEM_OUT" follow="true">
            <PatternLayout pattern="${LOG_PATTERN}" />
        </Console>
        <!-- Rolling File Appender -->
        <RollingFile name="FileAppender" fileName="logs/app.log"
                     filePattern="logs/app-%d{yyyy-MM-dd}-%i.log">
            <PatternLayout>
                <Pattern>${LOG_PATTERN}</Pattern>
            </PatternLayout>
            <Policies>
                <SizeBasedTriggeringPolicy size="100MB" />
            </Policies>
            <DefaultRolloverStrategy max="10" />
        </RollingFile>
        <Kafka name="KafkaAppender" topic="ServiceCentrallog">
            <Property name="bootstrap.servers">10.2.16.2:9092,10.2.16.3:9092,10.2.16.4:9092</Property>
            <JSONLayout compact="true" properties="true">
                <KeyValuePair key="application" value="${bundle:application-dev:spring.application.name}" />
            </JSONLayout>
        </Kafka>
    </Appenders>
    <Loggers>
        <AsyncRoot level="info">
            <AppenderRef ref="ConsoleAppender" />
            <AppenderRef ref="FileAppender" />
            <AppenderRef ref="KafkaAppender" />
        </AsyncRoot>
    </Loggers>
</Configuration>
Here is my base class in the project:
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationFeature;
import org.apache.logging.log4j.LogManager;

public abstract class BaseObject {

    protected final org.apache.logging.log4j.Logger logger = LogManager.getLogger(getClass());

    @Override
    public String toString() {
        ObjectMapper mapper = new ObjectMapper();
        mapper.configure(SerializationFeature.FAIL_ON_EMPTY_BEANS, false);
        String jsonString = "";
        try {
            jsonString = mapper.writeValueAsString(this);
        } catch (JsonProcessingException e) {
            logger.error("BaseObject: ", e);
            jsonString = "Can't build json from object";
        }
        return jsonString;
    }
}
Here is how I write logs:
logger.info("Input: " + input.toString());
....
logger.info("output: " + Utils.toJson(restRes));
It works fine in the normal case.
But when I use JMeter to send a lot of requests (TOTAL: 7996, AVG: 98 messages/s), I see that the logging is too slow: for about 1.5 minutes after I stop sending requests, logging still continues and the log files keep growing.
I have searched a lot but still do not know how to speed up logging, or what is unreasonable in my configuration.
But when I use JMeter to send a lot of requests (TOTAL: 7996, AVG: 98 messages/s), I see that the logging is too slow: for about 1.5 minutes after I stop sending requests, logging still continues and the log files keep growing.
You are using Log4j2's asynchronous logging. Its intention is to avoid blocking the executing threads during logging operations. So if your application logs many things (7,996 requests at ~98 messages/s) in a couple of minutes, this behavior makes complete sense: messages keep piling up in the queue, and draining it down to the last one takes time.
I have searched a lot but still do not know how to speed up logging, or what is unreasonable in my configuration.
1) Using synchronous logging will "speed up" your logging in the sense that it uses a blocking approach: the logging invocation returns only once the message has actually been written by the appender(s). But it will also slow down your processing.
2) Don't use three appenders for this scenario (that is, logging requests/responses):
<AsyncRoot level="info">
<AppenderRef ref="ConsoleAppender" />
<AppenderRef ref="FileAppender" />
<AppenderRef ref="KafkaAppender" />
</AsyncRoot>
That writes each message three times, which is a lot.
If you really need to log this information, log it to a single appender. You can achieve that easily with the filter feature (the MarkerFilter should be fine).
For example, add the marker JSON_REQUEST_RESPONSE when you log, and configure exactly one appender to accept it while the others deny it:
<RollingFile name="FileAppender" fileName="logs/app.log"
             filePattern="logs/app-%d{yyyy-MM-dd}-%i.log">
    <!-- ACCEPT the marker -->
    <MarkerFilter marker="JSON_REQUEST_RESPONSE" onMatch="ACCEPT" />
    <...>
</RollingFile>

<Console name="ConsoleAppender" target="SYSTEM_OUT" follow="true">
    <!-- DENY the marker -->
    <MarkerFilter marker="JSON_REQUEST_RESPONSE" onMatch="DENY" />
    <...>
</Console>
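On the code side, the marker might be created and attached like this (a sketch; the class name is illustrative, and the marker name matches the filters above):
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.Marker;
import org.apache.logging.log4j.MarkerManager;

public class ExchangeLogger {

    private static final Logger logger = LogManager.getLogger(ExchangeLogger.class);
    // MarkerManager caches markers by name, so reusing a single instance is cheap.
    private static final Marker JSON_REQUEST_RESPONSE = MarkerManager.getMarker("JSON_REQUEST_RESPONSE");

    public void logExchange(String requestJson, String responseJson) {
        // Only the appender whose MarkerFilter ACCEPTs this marker writes these messages.
        logger.info(JSON_REQUEST_RESPONSE, "Input: {}", requestJson);
        logger.info(JSON_REQUEST_RESPONSE, "output: {}", responseJson);
    }
}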
3) Don't log so much at the info() level:
logger.info("Input: " + input.toString());
....
logger.info("output: " + Utils.toJson(restRes));
As a side note, don't use string concatenation in logging calls: the concatenation cost is paid even when the logger level doesn't match and nothing is logged.
The lazily evaluated methods that take a Supplier are better in this case:
logger.info("Input: {}", () -> input.toString());
....
logger.info("output: {}", () -> Utils.toJson(restRes));
I have a scheduled task that runs at a fixed rate and reads a queue.
Each message that comes from the queue has an ID.
I want to know whether it is possible to split the log by ID, appending to a different file per ID.
I was thinking about using aspects or a custom appender; can one of these do the job for me?
Thanks.
Well, after some searching I remembered MDC (Mapped Diagnostic Context), which can do what I want with almost no workarounds.
I just need to add a SiftingAppender to the logback-spring.xml, like this:
<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="false">
    <include resource="org/springframework/boot/logging/logback/base.xml"/>
    <appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
        <discriminator>
            <key>checkoutId</key>
            <defaultValue>system</defaultValue>
        </discriminator>
        <sift>
            <appender name="${checkoutId}" class="ch.qos.logback.core.FileAppender">
                <file>${checkoutId}.log</file>
                <layout class="ch.qos.logback.classic.PatternLayout">
                    <pattern>%d{HH:mm:ss:SSS} | %-5level | %thread | %logger{20} | %msg%n%rEx</pattern>
                </layout>
            </appender>
        </sift>
    </appender>
    <root level="INFO">
        <appender-ref ref="SIFT" />
    </root>
</configuration>
Then I call it like this:
@Scheduled(initialDelayString = "${consumeStart:10000}", fixedRateString = "${consumeRate:5000}")
private void task() {
    try {
        val message = queue.get(timeout); // Lombok's val
        if (message != null) {
            // The discriminator key set here selects the target log file.
            MDC.put("checkoutId", message.toString());
            . . .
        }
    } finally {
        MDC.remove("checkoutId");
    }
}
I have a scenario where I send some sensitive data in the ResponseBody of a Spring controller. Spring logs the entire response body in DEBUG mode, where I can see the sensitive data as well. If I configure my app to print only INFO logs or higher, it won't be displayed, but that is not a foolproof method, since DEBUG mode could be turned on accidentally.
Is there any way I can prevent certain fields in the response body from being logged by Spring?
Thanks
My suggestion would be to implement a custom converter for Logback (if that is your logging library):
http://logback.qos.ch/manual/layouts.html#customConversionSpecifier
It enables you to transform the logging data and blur out anything that should be removed from the message.
The example from the documentation is pretty straightforward:
import ch.qos.logback.classic.pattern.ClassicConverter;
import ch.qos.logback.classic.spi.ILoggingEvent;

public class MySampleConverter extends ClassicConverter {

    long start = System.nanoTime();

    @Override
    public String convert(ILoggingEvent event) {
        long nowInNanos = System.nanoTime();
        return Long.toString(nowInNanos - start);
    }
}
And the config
<configuration>
    <conversionRule conversionWord="nanos"
                    converterClass="chapters.layouts.MySampleConverter" />
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%-6nanos [%thread] - %msg%n</pattern>
        </encoder>
    </appender>
    <root level="DEBUG">
        <appender-ref ref="STDOUT" />
    </root>
</configuration>
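Adapted to the masking use case, a converter along the same lines might look like this (a sketch; the class name, the regex, and the conversion word maskedMsg are illustrative, not from the documentation):
import java.util.regex.Pattern;

import ch.qos.logback.classic.pattern.ClassicConverter;
import ch.qos.logback.classic.spi.ILoggingEvent;

public class MaskingConverter extends ClassicConverter {

    // Illustrative rule: mask anything that looks like a 16-digit card number.
    private static final Pattern CARD_NUMBER = Pattern.compile("\\b\\d{16}\\b");

    @Override
    public String convert(ILoggingEvent event) {
        String message = event.getFormattedMessage();
        if (message == null) {
            return null;
        }
        return CARD_NUMBER.matcher(message).replaceAll("****************");
    }
}
You would then register it with a <conversionRule conversionWord="maskedMsg" converterClass="com.example.MaskingConverter" /> and use %maskedMsg in place of %msg in the pattern.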
If you're using Log4j, you can implement a custom filter which masks the data per your layout:
http://vozis.blogspot.sg/2012/02/log4j-filter-to-mask-payment-card.html
Edit
The final, easier solution turned out to be overriding toString() on the return object, since Spring uses toString() to log the return message.
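For illustration, such an override might mask the sensitive fields explicitly (class and field names are hypothetical):
public class PaymentResponse {

    private String cardNumber; // sensitive: must never reach the logs
    private String status;

    @Override
    public String toString() {
        // Spring logs the return object via toString(), so mask here.
        return "PaymentResponse{cardNumber='****', status='" + status + "'}";
    }
}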