Printing query parameters in access log for light-4j application? - light-4j

My light-4j application is using AuditHandler to print access logs. The default format printed is:
{"timestamp":1580470146236,"endpoint":"/mmt/register#post","X-Correlation-Id":"123456","statusCode":200,"responseTime":70}
But, the client is hitting the API with query parameters: /mmt/register?id=2
How do I customize the access log so that it prints the query parameter also in the access log? {"timestamp":1580470146236,"endpoint":"/mmt/register#post?id=2","X-Correlation-Id":"123456","statusCode":200,"responseTime":70}
My current logback setting is:
<appender name="access-log" class="ch.qos.logback.core.rolling.RollingFileAppender">
<File>/opt/logs/Register/access.json</File>
<append>true</append>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<fileNamePattern>/opt/logs/Register/access.%d{yyyy-MM-dd}.%i.json
</fileNamePattern>
<timeBasedFileNamingAndTriggeringPolicy
class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
<!-- or whenever the file size reaches 1GB -->
<maxFileSize>1GB</maxFileSize>
</timeBasedFileNamingAndTriggeringPolicy>
<MaxHistory>50</MaxHistory>
</rollingPolicy>
<encoder>
<Pattern>%m%n</Pattern>
</encoder>
</appender>

The default AuditHandler only logs the endpoint, which is the path and method combination. The query parameters are not part of it. To log the query parameters, there are two options.
Customize the AuditHandler from the light-4j repo and replace it with the customized one in the handler.yml file if you want to log the query parameters in production.
If you just need to log the query parameters in the dev environment, you can wire in the DumpHandler. It basically dumps everything on the request and response, including headers, query parameters, path parameters, cookies, and the request body. However, it slows the system down dramatically, so enabling it in production is not recommended. To enable it, open the handler.yml file and uncomment the handler and its alias in the default chain.
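For the first option, the essential change is to append the raw query string to the endpoint value before it is written to the audit log. A minimal sketch of that string construction (EndpointFormatter and endpointWithQuery are hypothetical names for illustration, not part of the light-4j API):

```java
// Sketch: building the audit "endpoint" value with the query string appended.
// The class and method names are hypothetical, not part of light-4j.
public class EndpointFormatter {

    static String endpointWithQuery(String path, String method, String queryString) {
        // The default format is path#method, e.g. /mmt/register#post
        String endpoint = path + "#" + method;
        // Append the raw query string only when one is present
        if (queryString != null && !queryString.isEmpty()) {
            endpoint += "?" + queryString;
        }
        return endpoint;
    }

    public static void main(String[] args) {
        // prints /mmt/register#post?id=2
        System.out.println(endpointWithQuery("/mmt/register", "post", "id=2"));
        // prints /mmt/register#post
        System.out.println(endpointWithQuery("/mmt/register", "post", null));
    }
}
```

A customized AuditHandler would compute this value from the exchange and put it into the audit map in place of the default endpoint string.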

Related

Missing logs from GKE on Cloud logging

I'm having an issue with some missing logs from a GKE container in Cloud Logging.
I have a Spring Boot application deployed on GKE with Log4j2. All the logs generated by the application are always written to Cloud Logging, so if I execute 100 transactions in parallel using JMeter I can find all the logs in Cloud Logging without problems (logs at the beginning, middle, and end of the REST controller).
Now I am migrating from Log4j2 to Logback to have a full integration with Cloud Logging, I'm following this guide: https://docs.spring.io/spring-cloud-gcp/docs/current/reference/html/logging.html
After the migration, updating only the logging dependency from Log4j2 to Logback, I can still see my logs in Cloud Logging, but I'm having a weird issue with some of them missing.
For example, if I send 30 parallel transactions using JMeter I can see all the logs generated by the service; basically I'm searching for each message like this:
"This is a message "
"This is the mid of controller"
"End of trx, cleaning MDC context : "
The loggers look like this:
log.info("Starting transaction: {}", transactionId);
log.info("This is the mid of controller");
log.info("End of trx, cleaning MDC context : {}", transactionId);
MDC.clear();
return response;
I'm searching for messages generated at the start of the REST controller, some logs in the middle of the controller, and logs generated at the end of the controller, just before the "return response;".
So if I send 30 trx in parallel using JMeter I can find all the log messages in Cloud Logging, but if I repeat the same 30 trx one minute later I can find logs, though not all of them. For example I can find:
30 of "Starting transaction: "
22 of "This is the mid of controller"
2 of "End of trx, cleaning MDC context : "
Then if I repeat:
20 of "Starting transaction: "
0 of "This is the mid of controller"
0 of "End of trx, cleaning MDC context : "
If I wait 5 minutes and repeat:
30 of "Starting transaction: "
30 of "This is the mid of controller"
30 of "End of trx, cleaning MDC context : "
In some cases I literally find 0 logs for a specific transaction.
In all cases the response of the service is always good; even when I can't see all the logs I know the service is working fine, because I receive a 200 success and the expected response in the body. There are also no inconsistencies in the response; everything just works fine.
Sorry for the long intro but now the questions.
1 - Is Cloud Logging skipping similar logs? I'm always sending the same transaction in JMeter in all cases, so the only difference between transactions is the transactionId (generated at the beginning of the REST controller).
2 - If I send a request manually using Postman, I can find all the logs. Could Cloud Logging be skipping similar logs generated almost at the same time by parallel transactions?
I have tested the same cases locally and everything works fine; even if I send 100 transactions in parallel each second in a long loop I can find all the logs generated by the service (I'm writing the logs to a file), so I'm only having this issue in GKE.
Also, I understand that @RestController is thread-safe, so I'm not seeing inconsistencies in the logs or responses.
I'm using MDC with the includeMDC option in the Logback configuration; basically I'm adding the transactionId to the MDC context with MDC.put("transactionId", transactionId). If I'm not wrong, MDC is also thread-safe, so it should not be the problem.
My logback file looks like this.
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<include resource="org/springframework/cloud/gcp/autoconfigure/logging/logback-appender.xml"/>
<include resource="org/springframework/boot/logging/logback/defaults.xml"/>
<include resource="org/springframework/boot/logging/logback/console-appender.xml"/>
<appender name="CONSOLE_JSON_APP" class="ch.qos.logback.core.ConsoleAppender">
<encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
<layout class="org.springframework.cloud.gcp.logging.StackdriverJsonLayout">
<includeTraceId>true</includeTraceId>
<includeSpanId>true</includeSpanId>
<includeLevel>true</includeLevel>
<includeThreadName>true</includeThreadName>
<includeMDC>true</includeMDC>
<includeLoggerName>true</includeLoggerName>
<includeContextName>true</includeContextName>
<includeMessage>true</includeMessage>
<includeFormattedMessage>true</includeFormattedMessage>
<includeExceptionInMessage>true</includeExceptionInMessage>
<includeException>true</includeException>
<serviceContext>
<service>APP-LOG</service>
</serviceContext>
</layout>
</encoder>
</appender>
<appender name="CONSOLE_JSON_EXT" class="ch.qos.logback.core.ConsoleAppender">
<encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
<layout class="org.springframework.cloud.gcp.logging.StackdriverJsonLayout">
<projectId>${projectId}</projectId>
<includeTraceId>true</includeTraceId>
<includeSpanId>true</includeSpanId>
<includeLevel>true</includeLevel>
<includeThreadName>true</includeThreadName>
<includeMDC>true</includeMDC>
<includeLoggerName>true</includeLoggerName>
<includeContextName>true</includeContextName>
<includeMessage>true</includeMessage>
<includeFormattedMessage>true</includeFormattedMessage>
<includeExceptionInMessage>true</includeExceptionInMessage>
<includeException>true</includeException>
<serviceContext>
<service>EXT-LOG</service>
</serviceContext>
</layout>
</encoder>
</appender>
<!-- Loggers-->
<root level="INFO">
<appender-ref ref="CONSOLE_JSON_EXT"/>
</root>
<logger name="com.example.test.service" level="INFO" additivity="false">
<appender-ref ref="CONSOLE_JSON_APP" />
</logger>
</configuration>
The REST controller looks like this.
@RestController
public class TestServiceController {
@PostMapping("/evaluate")
public Response evaluate(@RequestBody Request request) {
UUID transactionId = UUID.randomUUID();
log.info("Starting transaction: {}", transactionId);
MDC.put("transactionId", transactionId.toString());
// Some java code here (only simple things)
log.info("This is the mid of controller");
// Some java code here (only simple things)
log.info("End of trx, cleaning MDC context : {}", transactionId);
MDC.clear();
return transaction.getResponse();
}
}
At this moment my only guess is that Cloud Logging is skipping similar logs generated in a short period of time (basically parallel executions).
Try adjusting the flushing settings. For example, set flushLevel to DEBUG. Docs about flushLevel: https://docs.spring.io/spring-cloud-gcp/docs/current/reference/html/logging.html#_log_via_api
I've seen the issue you described when applications aren't configured to flush logs directly to stdout/stderr.
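For reference, a minimal sketch of the API-based LoggingAppender with flushLevel lowered to DEBUG, assuming the Spring Cloud GCP logging dependency is on the classpath (the appender name and log name here are illustrative):

```xml
<!-- Sketch: log via the Cloud Logging API instead of stdout parsing.
     flushLevel DEBUG flushes every entry at DEBUG or above instead of
     buffering until an ERROR occurs (the default). -->
<appender name="CLOUD" class="com.google.cloud.logging.logback.LoggingAppender">
  <flushLevel>DEBUG</flushLevel>
  <log>application.log</log>
</appender>
<root level="INFO">
  <appender-ref ref="CLOUD"/>
</root>
```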

Structured Stackdriver logs - adding MDC to logs

I added MDC to my logs to be able to track specific error logs in the Stackdriver dashboard and Logging console. The current implementation works fine on my local machine, but in the cloud it doesn't: my MDC just isn't included in the log entry. I cannot figure out what the problem might be.
Local log output (contains "contextKey": "someValue"):
{"traceId":"615b35dc7f639027","spanId":"615b35dc7f639027","spanExportable":"false","contextKey":"someValue","timestampSeconds":1552311117,"timestampNanos":665000000,"severity":"ERROR","thread":"reactor-http-nio-3","logger":"com.example.someservice.controller.MyController", ...}
Kubernetes container log of the same service (no "contextKey": "someValue" in this log entry):
{"traceId":"8d7287fa0ebdacfce9b88097e290ecbf","spanId":"96967afbe05dbf0e","spanExportable":"false","X-B3-ParentSpanId":"224dcb9869488858","parentId":"224dcb9869488858","timestampSeconds":1552312549,"timestampNanos":752000000,"severity":"ERROR","thread":"reactor-http-epoll-2","logger":"com.example.someservice.controller.MyController","message":"Something went wrong","context":"default","logging.googleapis.com/trace":"projects/my-project/traces/8d7287fa0ebdacfce9b88097e290ecbf","logging.googleapis.com/spanId":"96967afbe05dbf0e"}
My logback.xml:
<configuration>
<appender name="CONSOLE_JSON" class="ch.qos.logback.core.ConsoleAppender">
<encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
<layout class="org.springframework.cloud.gcp.autoconfigure.logging.StackdriverJsonLayout">
<projectId>${projectId}</projectId>
</layout>
</encoder>
</appender>
<springProfile name="local,dev,test">
<root level="INFO">
<appender-ref ref="CONSOLE_JSON"/>
</root>
</springProfile>
</configuration>
The controller which triggers log creation with the defined MDC:
@RestController
@RequestMapping("v1/my-way")
@Slf4j
public class MyController {
@GetMapping
public void read() {
MDC.put("contextKey", "someValue");
log.error("Something went wrong");
MDC.remove("contextKey");
}
}
Your MDC fields should be available under the "labels" node of the log entry. Did you check there?
Also, in the Google Console Logs Viewer it's nice to set MDC fields to be shown in log records. For that, set "labels.YourCustomMDCField" at View Options -> Modify custom fields (at the top right of the screen).
Update: however, you're using a different approach than me for logging to Stackdriver. I'm using
<appender name="STACKDRIVER" class="com.google.cloud.logging.logback.LoggingAppender">
<log>
${log_file_name}
</log>
</appender>
and it does the trick (I'm also using Spring).

How can I configure my logback system so that it could create a log file as soon as a time based rolling cycle complete?

My application is based on JDK 8 and Groovy 2.4 as the language, on top of the Spring Framework. As a logger I'm using Logback (group: 'ch.qos.logback', name: 'logback-classic', version: '1.1.8'). Basically, RollingFileAppender is working fine for me, but currently I have an additional requirement.
For example:
Suppose the specific logger function is invoked at 2018-05-16 11:08:50: a record is entered into error.log and no rolled file is created. When the next execution occurs roughly 6 minutes later, at 2018-05-16 11:15:05, a fresh file error.2018-05-16-11-08.log is created and error.log is refreshed with only the new message. This behavior matches the documentation. But I need the rolled log file to be created as soon as the rolling period completes. In that case, the new file error.2018-05-16-11-08.log, which is actually created at 2018-05-16 11:15:05, should instead be created at 2018-05-16 11:09:00 (that is, as soon as the minute-based rolling period ends).
<appender name="ERROR_LOG"
class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>logs/error.log</file>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<fileNamePattern>logs/error_log/error.%d{yyyy-MM-dd-HH-mm}.log
</fileNamePattern>
<maxHistory>10</maxHistory>
</rollingPolicy>
<encoder>
<pattern>%d{yyyy-MM-dd HH:mm:ss.SSS}\t%msg%n</pattern>
</encoder>
</appender>
I know there is one way: omitting the <file> property. If I omit that property, the problem is partially resolved for me. But in that case error.log will not be created, and it has to be retained.
Please feel free to ask me anything regarding this issue.
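For reference: in Logback, when the <file> property is omitted, a RollingFileAppender with a TimeBasedRollingPolicy writes directly to the file named by fileNamePattern, so there is no rename lag at rollover. A sketch of that variant of the appender above (with the trade-off, as noted, that there is no fixed error.log):

```xml
<!-- Sketch: no <file> property, so the active file is the
     currently computed error.yyyy-MM-dd-HH-mm.log itself. -->
<appender name="ERROR_LOG" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
    <fileNamePattern>logs/error_log/error.%d{yyyy-MM-dd-HH-mm}.log</fileNamePattern>
    <maxHistory>10</maxHistory>
  </rollingPolicy>
  <encoder>
    <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS}\t%msg%n</pattern>
  </encoder>
</appender>
```

Note that Logback still rolls lazily: the new file appears on the first logging event after the boundary, not at the boundary itself.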

Define Spring 5 Logging Level

I migrated a web application (deployed on a web server) from Spring 4 to Spring 5. That works fine; no problems in the production environment.
But there is an issue in my development environment:
The development server is running with the JVM option org.slf4j.simpleLogger.defaultLogLevel=debug. The problem is that Spring picks up that option, and as a result Spring controllers talk a lot via SLF4J. The amount of debug output overwhelms other debug logging. I tried to set separate logging levels for Spring with JVM options like logging.level.org.springframework.web={info|error|warn|trace}, without success.
Any ideas or solutions? Thank you in advance for your help.
EDIT (see Belista's comment) / examples:
DEBUG org.springframework.web.servlet.view.JstlView - Added model object 'project' of type [.Project] to request in view with name 'specs-draft'
DEBUG org.springframework.web.servlet.handler.SimpleUrlHandlerMapping - Matching patterns for request [/resources/css/style.css] are [/resources/**]
Since Spring logs at debug level every time an object is added to the model, there is a lot of output.
Amir Pashazadeh provided the hint that solved the problem:
Using Logback instead of Simple Logger.
Defining a logger for the package "org.springframework" in the Logback configuration file.
My solution was to drop Spring debug logging and forward only Spring info logging (and above) to a console output appender.
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>%-5level %logger{16} - %msg%n</pattern>
</encoder>
</appender>
<logger name="org.springframework" level="info" additivity="false">
<appender-ref ref="STDOUT"/>
</logger>
That's it. Thank you all for your help.

JVM Debugging without a Debugger

Newish to JVM debugging.
I've worked on supporting other products based on VxWorks written in C/C++. Within that environment we were able to do symbol lookups on a live system and peek or poke memory to get an idea what the software was doing or to alter it when other "normal" configuration options weren't available.
Now I'm supporting Java applications. For issues that aren't readily reproducible in our labs, we are reduced to recompiling with additional instrumentation and performing binary replacements to gather more information and understand what is happening within the software.
These are always-on applications that folks don't take kindly to restarting.
Is there any similar type of debugging that can be taken for JVM applications? These are running on customer sites where using a conventional debugger is not an option for a number of reasons.
Please, no lectures on how the application is poorly designed for supportability. That's a given; we're just a couple of guys in support who have to figure it out the best we can.
thanks,
Abe
I've been in a similar situation, where stopping the application in a debugger triggered timeouts in lower layers, and adding instrumentation was annoying because of the restarts.
For me the solution was to add more logging statements.
I used the slf4j API with the Logback implementation, enabled JMX on the JVM, and used it to enable/disable logging as needed and/or change which classes were logged.
If you use slf4j/logback the right way, there is very little overhead for a disabled log statement, so I was able to use them liberally.
This way I could turn on "debug instrumentation", without annoying users with restarts.
Now some code:
This is the testbed for the experiment:
package pl.gov.mofnet.giif.logondemand;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class LogOnDemand {
private static final Logger log = LoggerFactory.getLogger(LogOnDemand.class);
public static void main(String []args) throws InterruptedException {
for (int i = 0; i < 1000; i++) {
log.debug("i: {}", i);
Thread.sleep(1000);
}
}
}
The following file, logback.xml, needs to be put in the default package (the root of the classpath).
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<jmxConfigurator />
<appender name="console" class="ch.qos.logback.core.ConsoleAppender">
<layout class="ch.qos.logback.classic.PatternLayout">
<Pattern>%date [%thread] %-5level %logger{25} - %msg%n</Pattern>
</layout>
</appender>
<root level="info">
<appender-ref ref="console" />
</root>
</configuration>
This code has a compile-time dependency on org.slf4j:slf4j-api:1.7.7 and a runtime dependency on ch.qos.logback:logback-classic:1.1.2.
Run this code; on Java 7 you do not have to enable JMX explicitly if you connect to JMX from the same machine. You will see no output on the console save for the initial Logback configuration messages.
Start jconsole and connect to this process; in the MBeans tab you will find the ch.qos.logback.classic node, and deep below it there are operations.
Edit the parameters of setLoggerLevel: set p1 to the package name of your class, in this case pl.gov.mofnet.giif, and p2 to debug. Press the setLoggerLevel button to run the operation; you should see log messages on the console. To disable logging again, reset the logger level to info or higher.