I would like to configure a rate limit for log lines per logger (for example, each logger can send at most 100 log lines per minute).
My first thought was to do it with a new simple filter (I don't think a TurboFilter is appropriate), or else a new appender.
A filter sounds more appropriate, but other filters can override my decision, which is why I am considering implementing it as an appender.
Do you have any ideas?
I wrote my own filter. It looks like this:
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.filter.Filter;
import ch.qos.logback.core.spi.FilterReply;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ThresholdFilter extends Filter<ILoggingEvent> {

    private final Cache<String/*logger name*/, AtomicInteger> loggerRates = CacheBuilder.newBuilder()
            .expireAfterWrite(1, TimeUnit.MINUTES).build();

    @Override
    public FilterReply decide(ILoggingEvent event) {
        return (isStarted() && (getLimit(event.getLoggerName()) > 100)) ? FilterReply.DENY : FilterReply.NEUTRAL;
    }

    private int getLimit(String loggerName) {
        int i = 0;
        try {
            i = loggerRates.get(loggerName, () -> new AtomicInteger(0)).incrementAndGet();
        } catch (ExecutionException ignored) {
        }
        return i;
    }
}
And then added this filter to my appenders in the logback.xml:
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <filter class="org.ahiel.ThresholdFilter"/>
    <encoder>
        <pattern>[%thread] %-5level %logger{35} - %msg %n</pattern>
    </encoder>
</appender>
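If you would rather avoid the Guava dependency, the same per-logger budget can be kept with a fixed-window counter built only on the JDK. This is a sketch, not the filter above (the class and method names are mine); a `decide()` implementation would call `allow(event.getLoggerName(), System.currentTimeMillis())` and map `false` to `FilterReply.DENY`:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;

/** Fixed-window counter: at most `limit` events per logger per window. */
public class PerLoggerRateLimiter {

    private final int limit;
    private final long windowMillis;
    private final Map<String, Window> windows = new ConcurrentHashMap<>();

    private static final class Window {
        final AtomicLong start = new AtomicLong();   // window start timestamp
        final AtomicInteger count = new AtomicInteger();
    }

    public PerLoggerRateLimiter(int limit, long windowMillis) {
        this.limit = limit;
        this.windowMillis = windowMillis;
    }

    /** Returns true while the logger is within its budget for the current window. */
    public boolean allow(String loggerName, long nowMillis) {
        Window w = windows.computeIfAbsent(loggerName, k -> new Window());
        long start = w.start.get();
        // If the window has elapsed, one thread wins the CAS and resets the counter.
        if (nowMillis - start >= windowMillis && w.start.compareAndSet(start, nowMillis)) {
            w.count.set(0);
        }
        return w.count.incrementAndGet() <= limit;
    }
}
```

Unlike the Guava cache, this resets the counter exactly one window after the window opened, rather than one minute after the entry was last written.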
Suppose we use Logback for logging.
It’s required to change the path to the log file every time a certain event (e.g. a function call) occurs.
For example, somewhere we call a function.
startNewLogSegment("A")
After this event, the logger is expected to start writing to the logs/mySegment_A.log file.
Then another call is performed:
startNewLogSegment("B")
After this event, the logger is expected to finish writing to the previous file and start writing to the logs/mySegment_B.log file.
Let's assume that a state changed by startNewLogSegment should be visible in the whole application (all threads).
I tried to apply the approach with MDC:
logback.xml
...
<appender name="SIFTING_BY_ID" class="ch.qos.logback.classic.sift.SiftingAppender">
    <discriminator>
        <key>id</key>
        <defaultValue>initial</defaultValue>
    </discriminator>
    <sift>
        <appender name="FULL-${id}" class="ch.qos.logback.core.FileAppender">
            <file>logs/mySegment_${id}.log</file>
            <append>false</append>
            <encoder>
                <pattern>%d{dd-MM-yyyy HH:mm:ss.SSS} [%thread] [%-5level] %logger{36}.%M - %msg%n</pattern>
            </encoder>
        </appender>
    </sift>
</appender>
...
and calling MDC.put("id", "A") when the custom event appears.
But it works differently from what I need.
It’s known that the MDC manages contextual information on a per-thread basis, so at the very least we would need control over thread creation to accomplish the goal described above.
I wonder if this approach could be used with Spring, and in particular with async operations performed by Spring Reactor. I’ve found no information about using a custom thread pool for internal Spring activities.
Hopefully there’s a simpler way to tune logging this way without abusing Spring internals.
I ended up with a custom discriminator implementation (extending AbstractDiscriminator<ILoggingEvent>) that allows usage of globally visible values.
GVC.java
/**
 * Global values context.
 * Allows sifting log files globally, independent of the thread calling the log operation.
 * <p>
 * The API is analogous to the standard {@link org.slf4j.MDC}.
 */
public final class GVC {

    private static final Map<String, String> STORED = new HashMap<>();

    private GVC() {
    }

    public static synchronized void put(String key, String value) {
        STORED.put(key, value);
    }

    public static synchronized String get(String key) {
        return STORED.get(key);
    }
}
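As a side note, since the context is read on every log event, the synchronized accessors could be swapped for a ConcurrentHashMap to avoid lock contention. A minimal sketch (the class name is mine, not from the original):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Lock-free variant of the global values context, using a concurrent map. */
public final class ConcurrentGVC {

    private static final Map<String, String> STORED = new ConcurrentHashMap<>();

    private ConcurrentGVC() {
    }

    public static void put(String key, String value) {
        STORED.put(key, value);
    }

    public static String get(String key) {
        return STORED.get(key);
    }
}
```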
GVCBasedDiscriminator.java
/**
 * Customized analogue of MDCBasedDiscriminator.
 * <p>
 * GVCBasedDiscriminator essentially returns the value mapped to a GVC key.
 * If the value is null, then a default value is returned.
 * <p>
 * Both Key and DefaultValue are user-specified properties.
 */
public class GVCBasedDiscriminator extends AbstractDiscriminator<ILoggingEvent> {

    private String key;
    private String defaultValue;

    @Override
    public String getDiscriminatingValue(ILoggingEvent event) {
        String value = GVC.get(key);
        if (value == null) {
            return defaultValue;
        } else {
            return value;
        }
    }

    @Override
    public String getKey() {
        return key;
    }

    @Override
    public void start() {
        int errors = 0;
        if (OptionHelper.isEmpty(key)) {
            errors++;
            addError("The \"Key\" property must be set");
        }
        if (OptionHelper.isEmpty(defaultValue)) {
            errors++;
            addError("The \"DefaultValue\" property must be set");
        }
        if (errors == 0) {
            started = true;
        }
    }

    /**
     * Key for this discriminator instance.
     *
     * @param key
     */
    public void setKey(String key) {
        this.key = key;
    }

    /**
     * The default GVC value in case the GVC is not set for
     * {@link #setKey(String) key}.
     * <p>
     * For example, if {@link #setKey(String) Key} is set to the value
     * "someKey", and the GVC is not set for "someKey", then this appender will
     * use the default value, which you can set with the help of this method.
     *
     * @param defaultValue
     */
    public void setDefaultValue(String defaultValue) {
        this.defaultValue = defaultValue;
    }
}
logback.xml
<appender name="TRACES_PER_SESSION_FILE" class="ch.qos.logback.classic.sift.SiftingAppender">
    <!-- here the custom discriminator implementation is applied -->
    <discriminator class="internal.paxport.misc.logging.GVCBasedDiscriminator">
        <key>id</key>
        <defaultValue>initial</defaultValue>
    </discriminator>
    <sift>
        <appender name="FULL-${id}" class="ch.qos.logback.core.FileAppender">
            <file>logs/mySegment_${id}.log</file>
            <append>false</append>
            <encoder>
                <pattern>%d{dd-MM-yyyy HH:mm:ss.SSS} [%thread] [%-5level] %logger{36}.%M - %msg%n</pattern>
            </encoder>
        </appender>
    </sift>
</appender>
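With this in place, the startNewLogSegment function from the question reduces to a single put into the global context. A minimal self-contained sketch (the helper class name is mine, and the trimmed map below stands in for the GVC class above so the snippet compiles on its own):

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical helper wiring the custom event to the GVC-based discriminator. */
public final class LogSegments {

    // In the real application this would be the GVC class shown earlier.
    static final Map<String, String> GVC = new HashMap<>();

    private LogSegments() {
    }

    /** Switches the sifted output of ALL threads to logs/mySegment_<name>.log. */
    public static synchronized void startNewLogSegment(String name) {
        // "id" must match the <key> configured for GVCBasedDiscriminator in logback.xml
        GVC.put("id", name);
    }
}
```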
In this legacy app I'm working on, here's an excerpt from the logback.xml.
Alas, I'm not used to this logging framework and I'm having a hard time understanding its configuration, despite extensively reading the filter-related page here: https://logback.qos.ch/manual/filters.html
<appender name="EMAIL_WHATEVER" class="ch.qos.logback.classic.net.SMTPAppender">
    <filter class="com.whatever.logback.MarkerFilter">
        <marker>NOTIFY_WHATEVER</marker>
        <onMatch>ACCEPT</onMatch>
        <onMismatch>DENY</onMismatch>
    </filter>
    <evaluator class="ch.qos.logback.classic.boolex.OnMarkerEvaluator">
        <marker>NOTIFY_WHATEVER</marker>
    </evaluator>
    <smtpHost>${smtp}</smtpHost>
    <to>${to}</to>
    <from>${from}</from>
    <subject>Whatever...</subject>
    <append>false</append>
    <layout class="com.whatever.logback.NotificationMailLayout">
        <pattern>%msg</pattern>
    </layout>
</appender>
I don't understand why there's both a <filter> AND an <evaluator>, given that they seem (to me) to do the same job.
Also, I want to configure another <appender> (a ch.qos.logback.core.FileAppender one) but with almost identical marker filtering. And I want to understand what I'm doing, not just blindly copy-paste some supposedly-working code/config, with additional personal satisfaction if the solution is neat (read: simple and concise).
Here is some additional Java code for your information, i.e. the MarkerFilter class. The thing here is that I don't get why they chose to reimplement it instead of using ch.qos.logback.classic.turbo.MarkerFilter, as there's a logback-classic-xxx.jar in the build/class path:
public class MarkerFilter extends AbstractMatcherFilter {

    Marker markerToMatch;

    public void start() {
        if (this.markerToMatch != null) {
            super.start();
        } else {
            addError(String.format("The marker property must be set for [%s]", getName()));
        }
    }

    public FilterReply decide(Object event) {
        Marker marker = ((ILoggingEvent) event).getMarker();
        if (!isStarted()) {
            return FilterReply.NEUTRAL;
        }
        if (marker == null) {
            return onMismatch;
        }
        if (markerToMatch.contains(marker)) {
            return onMatch;
        }
        return onMismatch;
    }

    public void setMarker(String markerStr) {
        if (markerStr != null) {
            markerToMatch = MarkerFactory.getMarker(markerStr);
        }
    }
}
...and the Layout, which is just a trivial extension:
public class NotificationReferentielMailLayout extends PatternLayout {

    @Override
    public String getContentType() {
        return "text/html";
    }
}
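For what it's worth, the <filter> is the only part that needs to travel to a FileAppender: the <evaluator> is SMTPAppender-specific (it decides which incoming event triggers sending the buffered batch as a mail), so it has no FileAppender counterpart. A sketch of what the second appender could look like — the appender name, file name, and pattern below are placeholders, not taken from the original config:

<appender name="FILE_WHATEVER" class="ch.qos.logback.core.FileAppender">
    <filter class="com.whatever.logback.MarkerFilter">
        <marker>NOTIFY_WHATEVER</marker>
        <onMatch>ACCEPT</onMatch>
        <onMismatch>DENY</onMismatch>
    </filter>
    <file>whatever.log</file>
    <encoder>
        <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
</appender>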
My team uses Spring Boot Admin for controlling our Spring application.
In Spring Boot Admin we have the option to change the logger level at runtime.
We have a separate logger for each task (thread), and if we want to see the console logs for only one thread, we turn off all the other threads' loggers.
The problem is that each logger sends its output both to STDOUT and to a certain file, and we want to turn off only the stdout output.
log4j2.xml configuration example:
<Loggers>
    <Logger name="task1" level="info">
        <AppenderRef ref="Console"/>
        <AppenderRef ref="File"/>
    </Logger>
    <Logger name="task2" level="info">
        <AppenderRef ref="Console"/>
        <AppenderRef ref="File"/>
    </Logger>
</Loggers>
We tried a lot of solutions, for example using parent loggers combined with additivity and separating each appender into a different logger.
Any ideas about it?
Log4j2 does not manage the System.out and System.err streams by default.
To clarify how the console logger works:
the Console appender simply prints its output to System.out or System.err. According to the documentation, if you do not specify a target, it will print to System.out by default:
https://logging.apache.org/log4j/2.x/manual/appenders.html
target | String | Either "SYSTEM_OUT" or "SYSTEM_ERR". The default is "SYSTEM_OUT".
Here is an example:
log4j2.xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <Properties>
        <Property name="log-pattern">%d{ISO8601} %-5p %m\n</Property>
    </Properties>
    <appenders>
        <Console name="Console" target="SYSTEM_OUT">
            <PatternLayout>
                <pattern>${log-pattern}</pattern>
            </PatternLayout>
        </Console>
    </appenders>
    <Loggers>
        <logger name="testLogger" level="info" additivity="false">
            <AppenderRef ref="Console"/>
        </logger>
    </Loggers>
</configuration>
LogApp.java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
public class LogApp {
    public static void main(String[] args) {
        Logger log = LogManager.getLogger("testLogger");
        log.info("Logger output test!");
        System.out.println("System out test!");
    }
}
Output:
2019-01-08T19:08:57,587 INFO Logger output test!
System out test!
A Workaround To Manage System Streams
Take Dmitry Pavlenko's stream redirection class
https://sysgears.com/articles/how-to-redirect-stdout-and-stderr-writing-to-a-log4j-appender/
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.Logger;
import java.io.IOException;
import java.io.OutputStream;
/**
 * A change was made to the existing code:
 * - in the LoggingOutputStream#flush method, 'count' could contain a
 *   single space character; these kinds of log lines are skipped
 */
public class LoggingOutputStream extends OutputStream {

    private static final int DEFAULT_BUFFER_LENGTH = 2048;

    private boolean hasBeenClosed = false;
    private byte[] buf;
    private int count;
    private int curBufLength;
    private Logger log;
    private Level level;

    public LoggingOutputStream(final Logger log, final Level level)
            throws IllegalArgumentException {
        if (log == null || level == null) {
            throw new IllegalArgumentException("Logger or log level must be not null");
        }
        this.log = log;
        this.level = level;
        curBufLength = DEFAULT_BUFFER_LENGTH;
        buf = new byte[curBufLength];
        count = 0;
    }

    public void write(final int b) throws IOException {
        if (hasBeenClosed) {
            throw new IOException("The stream has been closed.");
        }
        // don't log nulls
        if (b == 0) {
            return;
        }
        // would this be writing past the buffer?
        if (count == curBufLength) {
            // grow the buffer
            final int newBufLength = curBufLength + DEFAULT_BUFFER_LENGTH;
            final byte[] newBuf = new byte[newBufLength];
            System.arraycopy(buf, 0, newBuf, 0, curBufLength);
            buf = newBuf;
            curBufLength = newBufLength;
        }
        buf[count] = (byte) b;
        count++;
    }

    public void flush() {
        if (count <= 1) {
            count = 0;
            return;
        }
        final byte[] bytes = new byte[count];
        System.arraycopy(buf, 0, bytes, 0, count);
        String str = new String(bytes);
        log.log(level, str);
        count = 0;
    }

    public void close() {
        flush();
        hasBeenClosed = true;
    }
}
Then create a custom logger for the system output stream and register it.
Here is the complete code of the logger usage:
log4j2.xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <Properties>
        <Property name="log-pattern">%d{ISO8601} %-5p %m\n</Property>
    </Properties>
    <appenders>
        <Console name="Console" target="SYSTEM_OUT">
            <PatternLayout>
                <pattern>${log-pattern}</pattern>
            </PatternLayout>
        </Console>
    </appenders>
    <Loggers>
        <logger name="testLogger" level="info" additivity="false">
            <AppenderRef ref="Console"/>
        </logger>
        <logger name="systemOut" level="info" additivity="true"/>
    </Loggers>
</configuration>
SystemLogging.java
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.LogManager;
import java.io.PrintStream;
public class SystemLogging {

    public void enableOutStreamLogging() {
        System.setOut(createPrintStream("systemOut", Level.INFO));
    }

    private PrintStream createPrintStream(String name, Level level) {
        return new PrintStream(new LoggingOutputStream(LogManager.getLogger(name), level), true);
    }
}
LogApp.java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
public class LogApp {
    public static void main(String[] args) {
        new SystemLogging().enableOutStreamLogging();
        Logger log = LogManager.getLogger("testLogger");
        log.info("Logger output test!");
        System.out.println("System out test!");
    }
}
Final output
2019-01-08T19:30:43,456 INFO Logger output test!
19:30:43.457 [main] INFO systemOut - System out test!
Now, customize system out with the new logger configuration as you wish.
Plus, if you don't want to override System.out and just want to duplicate it: there is TeeOutputStream in the commons-io library. You can replace the original System.out with a combination of the original System.out and a LoggingOutputStream that writes simultaneously to both streams. This won't change the original output, but it allows you to capture System.out with a logging appender.
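The tee idea can also be sketched without commons-io; a minimal JDK-only tee stream that forwards every byte to both branches (the class name is mine, and for real use you would also forward the bulk write(byte[], int, int) overload):

```java
import java.io.IOException;
import java.io.OutputStream;

/** JDK-only equivalent of commons-io's TeeOutputStream. */
public class TeeStream extends OutputStream {

    private final OutputStream first;
    private final OutputStream second;

    public TeeStream(OutputStream first, OutputStream second) {
        this.first = first;
        this.second = second;
    }

    @Override
    public void write(int b) throws IOException {
        // every byte goes to both branches
        first.write(b);
        second.write(b);
    }

    @Override
    public void flush() throws IOException {
        first.flush();
        second.flush();
    }
}
```

Wrapping it as `System.setOut(new PrintStream(new TeeStream(System.out, loggingStream), true))` would keep the original console output while also feeding the logging appender.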
My spring boot log currently looks like the following.
{"#timestamp":"2018-08-07T14:49:21.244+01:00","#version":"1","message":"Starting Application on ipkiss bla bla)","logger_name":"logger name....","thread_name":"main","level":"INFO","level_value":20000}
with the logback-spring.xml setup like below
<appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <encoder class="com.ipkiss.correlate.logback.CorrelationPatternLayoutEncoder">
        <pattern>%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(%5p) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} id = %id %m%n%wEx</pattern>
    </encoder>
</appender>
and my class for the layout encoder looks like this:
public class CorrelationPatternLayoutEncoder extends PatternLayoutEncoder {

    public CorrelationPatternLayoutEncoder() {
    }

    @Override
    public void start() {
        PatternLayout patternLayout = new PatternLayout();
        patternLayout.getDefaultConverterMap().put("id", CorrelationConverter.class.getName());
        patternLayout.setContext(context);
        patternLayout.setPattern(getPattern());
        patternLayout.setOutputPatternAsHeader(outputPatternAsHeader);
        patternLayout.start();
        this.layout = patternLayout;
        this.started = true;
    }
}
What I was trying to achieve is to add the id to the log. I can't make logstash append my id; I tried a custom field according to the docs, but I couldn't make it work.
Any ideas how I can achieve this?
This is what I want to end up with:
{"id":"3a7ccd34-d66a-4fcc-a12e-763a395a496c","#timestamp":"2018-08-07T14:49:21.244+01:00","#version":"1","message":"Starting Application on ipkiss bla bla)","logger_name":"logger name....","thread_name":"main","level":"INFO","level_value":20000}
or the id being appended at the end of the log.
From the logstash-logback-encoder GitHub page:
By default, each entry in the Mapped Diagnostic Context (MDC) (org.slf4j.MDC) will appear as a field in the LoggingEvent.
So in short, if you add your id entry into the MDC, it will automatically be included in all of your logs.
To add your id to the MDC, do the following:
MDC.put("id", uuid);
As the MDC is a thread-local variable, you will have to clear it after your request has finished, using:
MDC.remove("id");
In a web application, adding and clearing the values in the MDC is usually done in a servlet filter, e.g.:
public class IdFilter implements Filter {

    @Override
    public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse, FilterChain filterChain) throws IOException, ServletException {
        MDC.put("id", UUID.randomUUID().toString());
        try {
            filterChain.doFilter(servletRequest, servletResponse);
        } finally {
            MDC.remove("id");
        }
    }
}
You can add a custom log field by creating a custom conversion specifier.
First of all, create a custom converter class which extends the ClassicConverter class.
import ch.qos.logback.classic.pattern.ClassicConverter;
import ch.qos.logback.classic.spi.ILoggingEvent;
import java.util.UUID;
public class LogUniqueIdConverter extends ClassicConverter {

    @Override
    public String convert(ILoggingEvent event) {
        return String.valueOf(UUID.randomUUID());
    }
}
ClassicConverter objects are responsible for extracting information out of ILoggingEvent instances and producing a String.
Then declare the new conversion word in your logback configuration file so that logback knows about the new converter, as shown below:
<configuration>
    <conversionRule conversionWord="customId"
                    converterClass="com.naib.util.LogUniqueIdConverter" />
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>{"id": "%customId", "message": "%msg"}%n%throwable</pattern>
        </encoder>
    </appender>
    <root level="DEBUG">
        <appender-ref ref="STDOUT" />
    </root>
</configuration>
I'm trying to log org.springframework.ws.client.MessageTracing logs into a db.
I added the MessageTracing logger to my logback.xml:
<logger name="org.springframework.ws.client.MessageTracing">
    <level value="TRACE"/>
    <appender-ref ref="STDOUT" />
    <appender-ref ref="WebServiceDBAppender" />
</logger>
and for inserting them into the db I added another appender-ref, WebServiceDBAppender. It is my DBAppender, which extends AppenderBase.
I overrode the append method and declared the appender in logback.xml.
I can reach the request & response separately: first I get org.springframework.ws.server.MessageTracing.received, and then org.springframework.ws.server.MessageTracing.sent.
Is there any way to reach both of them at the same time, so the request & response can be inserted into the db in the same row?
You can achieve it using a single method.
I suggest you implement a ClientInterceptor and inject it into the WebServiceTemplate.
There is a similar interceptor on the matter on the server side: PayloadLoggingInterceptor.
As there is no such solution for the client side, I think it won't be so difficult to wrap that interceptor with a ClientInterceptor implementation.
And here is an extension of PayloadLoggingInterceptor showing how to log the request and response at once:
public class PayloadLoggingInterceptor extends org.springframework.ws.server.endpoint.interceptor.PayloadLoggingInterceptor {

    private static Logger log = LoggerFactory.getLogger(PayloadLoggingInterceptor.class);

    protected boolean isLogEnabled() {
        return log.isDebugEnabled();
    }

    protected void logMessage(String message) {
        log.debug(message);
    }

    @Override
    public boolean handleResponse(MessageContext messageContext, Object endpoint) throws Exception {
        if (isLogEnabled()) {
            logMessage("------------START-----------");
            logMessageSource("Request: ", getSource(messageContext.getRequest()));
            logMessageSource("Response: ", getSource(messageContext.getResponse()));
            logMessage("------------END-----------");
        }
        return true;
    }
}