Until recently I was using spring-boot 1.3.5.RELEASE and the following worked.
@SpringBootApplication
public class MyApplication {

    static {
        MDC.put("service_name", "myapp");
    }

    public static void main(String[] args) {
        SpringApplication.run(new Object[]{MyConfiguration.class}, args);
    }
}
Note the MDC put. The service_name was then logged in each log line in the entire application using the logback logger.
This was true even in other threads, e.g. MVC controller threads.
We are now on Spring Boot 1.4.1.RELEASE, and the MDC logging of service_name now only works in the main thread, not in MVC controller threads.
"myapp" is still logged in the main thread:
2016-11-30 14:22:08,147 [main] INFO co.uk.me.MyApplication - myapp [,,] - Started MyApplication in 14.276 seconds (JVM running for 308.404)
But in a controller log line "myapp" is now missing.
2016-11-30 15:17:50,329 [http-nio-9007-exec-2] INFO co.uk.me.controller.MyController - [,,] - Received get <snip>
Before the change it looked like:
2016-11-30 15:17:50,329 [http-nio-9007-exec-2] INFO co.uk.me.controller.MyController - myapp [,,] - Received get <snip>
I can see in the debugger that the MDC context is empty at the start of the controller method.
Does anyone know what change has affected this behaviour? Maybe a change to spring MVC thread creation? Or a logback change?
Is there a way to set and keep an application-wide MDC property still in spring-boot?
Thanks
The MDC values are kept in thread-locals, so only the main thread that starts your Spring Boot app has the value. MDC is usually used for dynamic content, not for static values (the application name is not going to change). You can add a filter that populates the MDC value on every incoming request.
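A minimal sketch of such a filter, assuming a standard servlet-based Spring Boot 1.4 setup (the class name is illustrative; "myapp" matches the value from the question):

import java.io.IOException;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.slf4j.MDC;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

// Registered via component scanning; re-populates the MDC on every
// request thread handled by the servlet container.
@Component
public class ServiceNameMdcFilter extends OncePerRequestFilter {

    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain filterChain) throws ServletException, IOException {
        MDC.put("service_name", "myapp");
        try {
            filterChain.doFilter(request, response);
        } finally {
            // Clean up so pooled request threads don't leak the value.
            MDC.remove("service_name");
        }
    }
}

The cleanup in the finally block matters because the container reuses threads from a pool, so a value left behind would bleed into unrelated requests.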
I recommend using something like this in your logback-spring.xml file:
<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
<Pattern>%date [myapp] [%thread] %-5level %logger{36} - %msg%xEx%n</Pattern>
</encoder>
I'm having an issue with some missing logs from a GKE container in Cloud Logging.
I have a Spring Boot application deployed on GKE with Log4j2. All the logs generated by the application are always written to Cloud Logging, so if I execute 100 transactions in parallel using JMeter I can find all the logs in Cloud Logging without problems (logs at the beginning, middle and end of the REST controller).
Now I am migrating from Log4j2 to Logback to have a full integration with Cloud Logging, I'm following this guide: https://docs.spring.io/spring-cloud-gcp/docs/current/reference/html/logging.html
After the migration, updating only the log dependency from Log4j to Logback, I can still see my logs in Cloud Logging, but I'm having a weird issue with some missing logs.
For example if I send 30 parallel transactions using Jmeter I can see all the logs generated by the service, basically I'm searching for each message like this:
"This is a message "
"This is the mid of controller"
"End of trx, cleaning MDC context : "
The loggers look like this:
Logger.info("Starting transaction: ", transactionId).
Logger.info("This is the mid of controller").
Logger.info("End of trx, cleaning MDC context : ", transactionId).
MDC.clear();
return response.
I'm searching for messages generated at the start of the REST controller, some logs at the middle of the controller and logs generated at the end of the controller, just before the "return response;".
So if I send 30 trx in parallel using JMeter I can find all the log lines in Cloud Logging, but if I repeat the same 30 trx one minute later I can find logs, but not all of them. For example I can find:
30 of **Starting transaction:**,
22 of "This is the mid of controller"
2 of "End of trx, cleaning MDC context : "
Then if I repeat
20 of **Starting transaction:**,
0 of "This is the mid of controller"
0 of "End of trx, cleaning MDC context : "
If I wait 5 minutes and repeat
30 of **Starting transaction:**,
30 of "This is the mid of controller"
30 of "End of trx, cleaning MDC context : "
In some cases I literally can't find any logs at all for a specific transaction.
In all cases the response of the service is always good; even when I can't see all the logs I know the service is working fine because I receive a 200 success and the expected response in the body. There are also no inconsistencies in the response, everything is just working fine.
Sorry for the long intro but now the questions.
1 - Is Cloud Logging skipping similar logs? I'm always sending the same transaction in JMeter for all the cases, so the only difference between transactions is the transactionId (generated at the beginning of the REST controller).
2 - If I send a request manually using postman, I can find all the logs. Could Cloud Logging be skipping similar logs generated almost at the same time with parallel transactions?
I have tested the same cases locally and everything is working fine; even if I send 100 transactions in parallel each second in a long loop I can find all the logs generated by the service (I'm writing the logs to a file), so I'm only having this issue in GKE.
Also, I understand that @RestController is thread safe, and I'm not seeing inconsistencies in the logs or responses.
I'm using MDC with the includeMDC option in the Logback configuration; basically I'm adding the transactionId to the MDC context with MDC.put("transactionId", transactionId). If I'm not wrong, MDC is also thread safe, so it should not be the problem.
My logback file looks like this.
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/cloud/gcp/autoconfigure/logging/logback-appender.xml"/>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
    <include resource="org/springframework/boot/logging/logback/console-appender.xml"/>

    <appender name="CONSOLE_JSON_APP" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
            <layout class="org.springframework.cloud.gcp.logging.StackdriverJsonLayout">
                <includeTraceId>true</includeTraceId>
                <includeSpanId>true</includeSpanId>
                <includeLevel>true</includeLevel>
                <includeThreadName>true</includeThreadName>
                <includeMDC>true</includeMDC>
                <includeLoggerName>true</includeLoggerName>
                <includeContextName>true</includeContextName>
                <includeMessage>true</includeMessage>
                <includeFormattedMessage>true</includeFormattedMessage>
                <includeExceptionInMessage>true</includeExceptionInMessage>
                <includeException>true</includeException>
                <serviceContext>
                    <service>APP-LOG</service>
                </serviceContext>
            </layout>
        </encoder>
    </appender>

    <appender name="CONSOLE_JSON_EXT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
            <layout class="org.springframework.cloud.gcp.logging.StackdriverJsonLayout">
                <projectId>${projectId}</projectId>
                <includeTraceId>true</includeTraceId>
                <includeSpanId>true</includeSpanId>
                <includeLevel>true</includeLevel>
                <includeThreadName>true</includeThreadName>
                <includeMDC>true</includeMDC>
                <includeLoggerName>true</includeLoggerName>
                <includeContextName>true</includeContextName>
                <includeMessage>true</includeMessage>
                <includeFormattedMessage>true</includeFormattedMessage>
                <includeExceptionInMessage>true</includeExceptionInMessage>
                <includeException>true</includeException>
                <serviceContext>
                    <service>EXT-LOG</service>
                </serviceContext>
            </layout>
        </encoder>
    </appender>

    <!-- Loggers -->
    <root level="INFO" name="info-log">
        <appender-ref ref="LOCAL_EXTERNAL_DEP"/>
    </root>
    <logger name="com.example.test.service" level="INFO" additivity="false">
        <appender-ref ref="LOCAL_APP" />
    </logger>
</configuration>
The REST controller looks like this:
@RestController
public class TestServiceController {

    @PostMapping("/evaluate")
    public Response evaluate(@RequestBody Request request) {
        UUID transactionId = UUID.randomUUID();
        Logger.info("Starting transaction: ", transactionId);
        MDC.put("transactionId", transactionId.toString());
        //Some java code here (Only simple things)
        Logger.info("This is the mid of controller");
        //Some java code here (Only simple things)
        Logger.info("End of trx, cleaning MDC context : ", transactionId);
        MDC.clear();
        return transaction.getResponse();
    }
}
At this moment my only guess is that Cloud Logging is skipping similar logs generated in a short period of time (basically parallel executions).
Try adjusting the flushing settings. For example, set flushLevel to DEBUG. Docs about flushLevel: https://docs.spring.io/spring-cloud-gcp/docs/current/reference/html/logging.html#_log_via_api
I've seen the issue you described when applications aren't configured to flush logs directly to stdout/stderr.
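If you do switch from the JSON console appender to the API-based appender, a rough sketch of that flush setting in logback-spring.xml could look like this (appender class and element names follow the google-cloud-logging-logback documentation; verify them against the versions you actually use):

<!-- Hypothetical switch to the API-based appender -->
<appender name="STACKDRIVER" class="com.google.cloud.logging.logback.LoggingAppender">
    <!-- Flush each event at DEBUG or above instead of buffering batches -->
    <flushLevel>DEBUG</flushLevel>
    <log>application.log</log>
</appender>

<root level="INFO">
    <appender-ref ref="STACKDRIVER"/>
</root>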
I am working on a web application based on Spring Boot and want to use Log4j2 as the logging implementation.
Everything works fine with the logging configuration defined in a log4j2-spring.xml file.
What is not working: I want to use property placeholders in the log4j2-spring.xml file that should be resolved from properties defined in the application.yml file used for configuring spring boot.
Is this possible? If yes, how?
Direct substitution of properties in log4j2-spring.xml via property placeholders is not possible, as log4j2-spring.xml is outside the ambit of Spring and is used purely for logging configuration.
However, you can leverage Log4j2's out-of-the-box property substitution feature, as outlined here.
Step 1 - Specify the property name and its variable in log4j2-spring.xml as below:
<Configuration status="warn">
    <Properties>
        <Property name="someProp">${bundle:test:someKey}</Property>
    </Properties>
    <!-- other configs -->
</Configuration>
Step 2 - Use the above-defined property in the log configuration, e.g. as a suffix to the log file name:
<Appenders>
    <File name="file" fileName="/path/to/logs/app-${someProp}.log">
        <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} %-5p %-40c{1.} - %m%n"/>
    </File>
</Appenders>
Step 3 - Create a bundle (i.e. a properties file) to hold the property values, e.g. test.properties:
# properties for log4j2
someKey=someValue
someKey1=someValue1
In your case this file will contain the values from the YAML which you want to use in the Log4j2 configuration. If those properties are used in the application as well, they will be duplicated in the YAML and the bundle (i.e. the properties file), which should be an acceptable compromise given that Spring cannot inject them into the Log4j2 configuration.
Let me know in the comments if any more information is required.
I've faced a similar problem with injecting Spring Boot YAML properties into a Log4j2 XML configuration, and I found a solution for Spring Boot 1.5.x (and probably 2.0, I didn't test it) which is a little bit hacky and relies on a system properties lookup, but it certainly works.
Let's say you have a profile "dev" in your application and some property to inject; then your application-dev.yml looks like this:
property:
  toInject: someValue
In your xml configuration log4j2-spring-dev.xml you put something like this:
<Properties>
    <Property name="someProp">${sys:property.toInject}</Property>
</Properties>
Now you have to somehow transfer this Spring property to a system property. You have to do that after the application environment has been prepared and before the logging system is initialized. In Spring Boot there is a listener, LoggingApplicationListener, which initializes the whole logging system and is triggered by the ApplicationEnvironmentPreparedEvent, so let's create a listener with higher precedence than LoggingApplicationListener:
public class LoggingListener implements ApplicationListener<ApplicationEvent>, Ordered {

    @Override
    public int getOrder() {
        return LoggingApplicationListener.DEFAULT_ORDER - 1;
    }

    @Override
    public void onApplicationEvent(ApplicationEvent event) {
        if (event instanceof ApplicationEnvironmentPreparedEvent) {
            ConfigurableEnvironment environment = ((ApplicationEnvironmentPreparedEvent) event).getEnvironment();
            List<String> activeProfiles = Arrays.asList(environment.getActiveProfiles());
            if (!activeProfiles.contains("dev")) {
                return;
            }
            String someProp = environment.getProperty("property.toInject");
            validateProperty(someProp);
            System.setProperty("property.toInject", someProp);
        }
    }
}
Now register this listener in your application:
public static void main(String[] args) {
SpringApplication application = new SpringApplication(MyApplication.class);
application.addListeners(new LoggingListener());
application.run(args);
}
And that's it. Your Spring Boot properties should be "injected" into your Log4j2 configuration file. This solution works with classpath properties and --spring.config.location properties. Note, it would not work with some external configuration system like Spring Cloud Config.
Hope it helps
If you use Maven, you could use the Maven Resources Plugin. This will let you achieve your goal at build time.
Link: https://maven.apache.org/plugins/maven-resources-plugin/examples/filter.html
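A rough sketch of that build-time approach, assuming a standard Maven layout (paths are illustrative): enable filtering for the resource in pom.xml, and Maven substitutes the placeholder in log4j2-spring.xml when it copies the file.

<build>
    <resources>
        <resource>
            <directory>src/main/resources</directory>
            <!-- Maven replaces placeholders in these files at build time -->
            <filtering>true</filtering>
            <includes>
                <include>log4j2-spring.xml</include>
            </includes>
        </resource>
    </resources>
</build>

Note that with the spring-boot-starter-parent, resource filtering is already configured and the placeholder delimiter is @...@ (e.g. @some.property@) rather than ${...}, to avoid clashing with Spring's own placeholders.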
I need to log to Splunk from AWS Lambda using the Java 8 runtime. It uses the Spring framework, and I added the Logback Splunk appender to the project. There are no errors, but the logs don't show up in Splunk. The Splunk admin mentioned that no requests are received on the Splunk server. When I invoke the REST API manually, the log shows up in Splunk, so the connectivity from AWS Lambda to the Splunk server is good. The Splunk appender seems to invoke the API asynchronously, so I added a 50-second sleep at the end of the AWS Lambda code to see whether it is an issue with the VM exiting before the async step completes. No luck yet. How do I debug further?
Code snippet:-
public class LambdaApp implements RequestHandler<String, Object>
{
    private static final Logger LOGGER = LoggerFactory.getLogger(LambdaApp.class);
    private static final Logger SPLUNK_LOGGER = LoggerFactory.getLogger("splunk.logger");

    @Override
    public Object handleRequest(String event, Context context)
    {
        SPLUNK_LOGGER.info("AWS Lambda start");
        try {
            Thread.sleep(50000);
        } catch (InterruptedException ex) {
            Thread.currentThread().interrupt();
        }
        return "handled";
    }
}
Maven dependency:-
<dependency>
<groupId>com.splunk.logging</groupId>
<artifactId>splunk-library-javalogging</artifactId>
<version>1.5.2</version>
</dependency>
Logback configuration:-
<appender name="http" class="com.splunk.logging.HttpEventCollectorLogbackAppender">
<url>https://a.b.c.d:8088</url>
<token>valid-token</token>
<disableCertificateValidation>true</disableCertificateValidation>
<layout class="ch.qos.logback.classic.PatternLayout">
<pattern>{%msg}</pattern>
</layout>
</appender>
<logger name ="splunk.logger" level="DEBUG">
<appender-ref ref="http" />
</logger>
The first step is to add batch_size_count to rule out any issues with the HttpEventCollectorLogbackAppender not flushing to Splunk.
<appender name="http" class="com.splunk.logging.HttpEventCollectorLogbackAppender">
<url>https://a.b.c.d:8088</url>
<token>valid-token</token>
<batch_size_count>1</batch_size_count>
<disableCertificateValidation>true</disableCertificateValidation>
<layout class="ch.qos.logback.classic.PatternLayout">
<pattern>{%msg}</pattern>
</layout>
</appender>
You should also verify that you are using Splunk 6.3+ on the receiving end since HTTP Event Collector requires a minimum of v6.3
You have to specify a valid / existing Splunk index or the log entry will be dropped silently on the floor.
Setting batch_size_count to 1 will make sure that every log entry gets flushed. Unfortunately everything's written asynchronously no matter how you configure it (even though "sequential" is the default sendMode), so flushing isn't sufficient to make sure the data actually gets sent. For that you need to set terminationTimeout to a value (in milliseconds) that's "long enough", but not so long that it significantly impacts your lambda.
Note that terminationTimeout is applied on each flush and not just during whatever they think is "termination". They also use busy polling to implement it. It was apparently implemented by someone that doesn't really understand how to write multi-threaded code or the implications of a thread spinning every few milliseconds to poll the state of another thread.
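Putting those hints together, a hedged version of the appender from the question might look like this (the index name and timeout value are placeholders; verify the exact setting names against the splunk-library-javalogging version you're on):

<appender name="http" class="com.splunk.logging.HttpEventCollectorLogbackAppender">
    <url>https://a.b.c.d:8088</url>
    <token>valid-token</token>
    <!-- Must be an index that actually exists, or events are dropped silently -->
    <index>my_lambda_index</index>
    <!-- Flush every event rather than batching -->
    <batch_size_count>1</batch_size_count>
    <!-- Milliseconds to wait for the async sender on each flush; keep it short for Lambda -->
    <terminationTimeout>2000</terminationTimeout>
    <disableCertificateValidation>true</disableCertificateValidation>
    <layout class="ch.qos.logback.classic.PatternLayout">
        <pattern>{%msg}</pattern>
    </layout>
</appender>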
Is there a way to tell if Spring has loaded my #Controller?
I'm requesting a URL but I'm not hitting my controller, and I can't figure out why.
I'm loading controllers by doing a component scan:
<context:component-scan base-package="com.example.app.web"/>
Other controllers in the same package as my failing controller are working fine.
My controller code is:
@Controller
@RequestMapping(value = "/app/administration/ecosystem")
public class AppEcosystemController {

    @Autowired
    EcosystemManagerService ecosystemManagerService;

    @RequestMapping(value = "/Foo", method = RequestMethod.GET)
    public String getEcosystem() {
        /* Implementation */
    }
}
The first thing I'd like to do is to be sure that this controller is getting picked up by the component scan.
Any suggestions?
Just enable logging for your application; you can find this information at INFO level.
For example in my application I have a controller named UserController.
The following log4j.properties does the trick.
log4j.rootLogger=INFO, FILE
log4j.appender.FILE=org.apache.log4j.FileAppender
log4j.appender.FILE.File=../logs/rest-json.log
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n
I can see in the log that RequestMappingHandlerMapping mapped my controller (scroll all the way to the right).
07:28:36,255 INFO RequestMappingHandlerMapping:182 - Mapped "{[/rest/**/users/{id}],methods=[GET],params=[],headers=[],consumes=[],produces=[text/xml || application/json],custom=[]}" onto public org..domain.User org.ramanh.controller.UserController.getUser(java.lang.String)
07:28:36,255 INFO RequestMappingHandlerMapping:182 - Mapped "{[/rest/**/users],methods=[POST],params=[],headers=[],consumes=[],produces=[text/xml || application/json],custom=[]}" onto public void org..controller.UserController.addUser(org...domain.User)
If you are still unsure, I would suggest adding a method annotated with @PostConstruct.
You could easily look for the message in the log or place a breakpoint in this method.
@PostConstruct
protected void iamAlive() {
    log.info("Hello AppEcosystemController");
}
If you find that your controller is initialized correctly but the URL is still not accessible, I would test the following:
- You are getting a 404 error - maybe you are not pointing to the correct URL (do not forget to add the application name as a prefix to the URL)
- You are getting a 404 error - the DispatcherServlet mapping in web.xml doesn't match the URL above
- You are getting a 403/401 - maybe you are using Spring Security and it's blocking the URL
- You are getting a 406 - your content type definition is conflicting with your request
- You are getting a 50x - something is buggy in your code
I made an ApplicationContextDumper. Add it to your application context, and it will dump all beans and their dependencies in the current context and parent contexts (if any) into the log file when application context initialization finishes. It also lists the beans which aren't referenced.
It was inspired by this answer.
You could start out by enabling debug logging for Spring as outlined here.
I'd also recommend leveraging the MVC testing support, which you'll find in the spring-test jar. Details on how to use it can be found here and here.
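As a sketch of what that MVC-test check could look like for the controller above (the XML context location is a placeholder and Spring Test 3.2+ is assumed):

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.web.WebAppConfiguration;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;
import org.springframework.web.context.WebApplicationContext;

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

@RunWith(SpringJUnit4ClassRunner.class)
@WebAppConfiguration
@ContextConfiguration("classpath:your-servlet-context.xml") // adjust to your own config
public class AppEcosystemControllerMappingTest {

    @Autowired
    private WebApplicationContext wac;

    @Test
    public void ecosystemEndpointIsMapped() throws Exception {
        MockMvc mockMvc = MockMvcBuilders.webAppContextSetup(wac).build();
        // If the mapping is missing, this typically comes back as a 404.
        mockMvc.perform(get("/app/administration/ecosystem/Foo"))
               .andExpect(status().isOk());
    }
}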
I have a Spring application (actually a Grails app) that runs an Apache ActiveMQ server as a Spring bean, plus a couple of Apache Camel routes. The application uses Hibernate to work with the database. The problem is simple: ActiveMQ + Camel start up BEFORE Grails injects its special methods into the Hibernate domain objects (the save/update methods etc.). So, if ActiveMQ already has some data on startup, Camel starts processing messages without the Grails DAO methods having been injected, which fails with grails.lang.MissingMethodException. I need to delay the ActiveMQ/Camel startup until Grails has injected these methods into the domain objects.
If all of these are defined as Spring beans, you can use
<bean id="activeMqBean" depends-on="anotherBean" />
This will make sure anotherBean is initialized before activeMqBean.
Can you move the MQ management into a plugin? It would increase modularity, and if you declare in the plugin descriptor
def loadAfter = ['hibernate']
you should have the desired behavior. This works for the JBPM plugin.
I am not sure it applies in your case, but lazy loading may also help, e.g.
<bean id="lazybean" class="com.xxx.YourBean" lazy-init="true" />
A lazily-initialized bean tells the IoC container to create the bean instance when it is first requested. This can help you delay the loading of the beans you want.
I know this question is pretty old, but I am now facing the same problem in the year 2015 - and this thread does not offer a solution for me.
I came up with a custom processor bean holding a CountDownLatch, which I count down after bootstrapping the application. So the messages will be held until the app has started fully, and it's working for me.
/**
 * bootstrap latch processor
 */
@Log4j
class BootstrapLatchProcessor implements Processor {

    private final CountDownLatch latch = new CountDownLatch(1)

    @Override
    void process(Exchange exchange) throws Exception {
        if (latch.count > 0) {
            log.info "waiting for bootstrapped @ ${exchange.fromEndpoint}"
            latch.await()
        }
        exchange.out = exchange.in
    }

    /**
     * mark the application as bootstrapped
     */
    public void setBootstrapped() {
        latch.countDown()
    }
}
Then use it as a bean in your application and call the method setBootstrapped in your Bootstrap.groovy
Then, in your RouteBuilder, put the processor between your endpoint and the destination for all routes where you expect messages to come in before the app has started:
from("activemq:a.in ").processRef('bootstrapProcessor').to("bean:handlerService?method=handle")