How can I make the MDC available during logging in a spring controller advice? - spring-boot

I have a spring-boot web application that uses Logback's MDC to enrich the log with custom data. I have the following implementation in place, which makes some custom data available under "customKey", and it gets logged properly after adding %X{customKey} to the logging pattern in the Logback configuration:
import java.io.IOException;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;
import org.springframework.core.Ordered;
import org.springframework.web.filter.OncePerRequestFilter;

public class MDCFilter extends OncePerRequestFilter implements Ordered {

    private static final Logger LOG = LoggerFactory.getLogger(MDCFilter.class);

    @Override
    protected void doFilterInternal(HttpServletRequest httpServletRequest,
                                    HttpServletResponse httpServletResponse,
                                    FilterChain filterChain) throws ServletException, IOException {
        try {
            MDC.put("customKey", "someValue");
            filterChain.doFilter(httpServletRequest, httpServletResponse);
        } catch (Throwable t) {
            LOG.error("Uncaught exception occurred", t);
            throw t;
        } finally {
            MDC.remove("customKey");
        }
    }

    @Override
    public int getOrder() {
        return Ordered.HIGHEST_PRECEDENCE - 4;
    }
}
This works fine as long as no uncaught exceptions are thrown. To handle those I have a controller advice in place. Sadly, the MDC is no longer available when logging in the controller advice, since it has already been cleaned up. If I understand correctly, Spring determines the responsible ExceptionHandler via the HandlerExceptionResolverComposite implementation, which registers itself with the lowest precedence; hence it runs last, after the MDC has already been cleaned up.
My question now is: how should I register my filter so that the MDC is still available when logging in the controller advice?
I think one option would be to remove the MDC.remove(...) call from the finally block of the filter and instead implement a ServletRequestListener that cleans up the MDC in its requestDestroyed method (a sketch of that listener is shown below). But since the filter is used in multiple web modules, I would need to make sure that the ServletRequestListener is also declared in every existing and prospective module along with the MDCFilter, which seems error-prone to me. Moreover, I would prefer it if the filter responsible for adding data to the MDC also took care of its removal.
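For reference, a minimal sketch of that listener-based cleanup; the class name is made up, and clearing the whole MDC rather than individual keys is an assumption:

import javax.servlet.ServletRequestEvent;
import javax.servlet.ServletRequestListener;
import javax.servlet.annotation.WebListener;
import org.slf4j.MDC;

@WebListener
public class MDCCleanupListener implements ServletRequestListener {

    @Override
    public void requestDestroyed(ServletRequestEvent sre) {
        // called after the request (including error dispatches) has completed,
        // so the controller advice has already logged with the MDC in place
        MDC.clear();
    }

    @Override
    public void requestInitialized(ServletRequestEvent sre) {
        // nothing to do; MDCFilter populates the MDC
    }
}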

Related

Does CompletableFuture have a corresponding Local context?

In the olden days, we had ThreadLocal for programs to carry data along with the request path since all request processing was done on that thread and stuff like Logback used this with MDC.put("requestId", getNewRequestId());
Then Scala and functional programming came along, and with Futures came Local.scala (at least I know the Twitter Futures have this class). Future.scala knows about Local.scala and transfers the context through all the map/flatMap etc. functionality, such that I can still do Local.set("requestId", getNewRequestId()); and then downstream, after it has travelled over many threads, I can still access it with Local.get(...).
So, my question is: in Java, can I do the same thing with the new CompletableFuture via some LocalContext or similar object (not sure of the name)? That way, I could modify the Logback MDC to store its data in that context instead of a ThreadLocal, so that I don't lose the request id and all my logs across thenApply, thenAccept, etc. still work fine with the %X{requestId} pattern in the Logback configuration.
EDIT:
As an example: if a request comes in and you are using Log4j or Logback, in a filter you will call MDC.put("requestId", requestId), and then in your app you will log many statements like this:
log.info("request came in for url="+url);
log.info("request is complete");
Now, in the log output it will show this:
INFO {time}: requestId425 request came in for url=/mypath
INFO {time}: requestId425 request is complete
This uses a ThreadLocal trick to achieve it. At Twitter, we use Scala and Twitter Futures along with the Local.scala class. Local.scala and Future.scala are tied together so that we can still achieve the above scenario, which is very nice: all our log statements can log the request id, the developer never has to remember to log it, and you can trace a single customer's request/response cycle with that id.
I don't see this in Java :( which is very unfortunate, as there are many use cases for it. Perhaps there is something I am not seeing though?
If you come across this, please poke the thread here
http://mail.openjdk.java.net/pipermail/core-libs-dev/2017-May/047867.html
asking to implement something like Twitter Futures, which transfer Locals (much like ThreadLocal, but the state is transferred along).
See the def respond() method in here and how it calls Locals.save() and Locals.restore():
https://github.com/simonratner/twitter-util/blob/master/util-core/src/main/scala/com/twitter/util/Future.scala
If the Java authors fixed this, then the MDC in Logback would work across all 3rd-party libraries. Until then, IT WILL NOT WORK unless you can change the 3rd-party library (doubtful you can do that).
My solution theme (it works with JDK 9+, because a couple of overridable methods are exposed since that version) is to
make the complete ecosystem aware of the MDC.
For that, we need to address the following scenarios:
Wherever we get new instances of CompletableFuture from within this class, we need to return an MDC-aware version instead.
Wherever we get new instances of CompletableFuture from outside this class, we need to return an MDC-aware version instead.
Whichever executor the CompletableFuture class uses, we need to make sure it is MDC-aware in all circumstances.
For that, let's create an MDC-aware version of CompletableFuture by extending it. My version looks like this:
import org.slf4j.MDC;

import java.util.Map;
import java.util.concurrent.*;
import java.util.function.Function;
import java.util.function.Supplier;

public class MDCAwareCompletableFuture<T> extends CompletableFuture<T> {

    public static final ExecutorService MDC_AWARE_ASYNC_POOL = new MDCAwareForkJoinPool();

    @Override
    public <U> CompletableFuture<U> newIncompleteFuture() {
        return new MDCAwareCompletableFuture<>();
    }

    @Override
    public Executor defaultExecutor() {
        return MDC_AWARE_ASYNC_POOL;
    }

    public static <T> CompletionStage<T> getMDCAwareCompletionStage(CompletableFuture<T> future) {
        return new MDCAwareCompletableFuture<>()
                .completeAsync(() -> null)
                .thenCombineAsync(future, (aVoid, value) -> value);
    }

    public static <T> CompletionStage<T> getMDCHandledCompletionStage(CompletableFuture<T> future,
                                                                      Function<Throwable, T> throwableFunction) {
        Map<String, String> contextMap = MDC.getCopyOfContextMap();
        return getMDCAwareCompletionStage(future)
                .handle((value, throwable) -> {
                    // restore the caller's MDC before handling the result (MDCUtility is shown below)
                    MDCUtility.setMDCContext(contextMap);
                    if (throwable != null) {
                        return throwableFunction.apply(throwable);
                    }
                    return value;
                });
    }
}
The MDCAwareForkJoinPool class would look like this (I have skipped the overloads with ForkJoinTask parameters for simplicity):
import java.util.concurrent.Callable;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.ForkJoinTask;

public class MDCAwareForkJoinPool extends ForkJoinPool {

    // override the constructors you need

    @Override
    public <T> ForkJoinTask<T> submit(Callable<T> task) {
        return super.submit(MDCUtility.wrapWithMdcContext(task));
    }

    @Override
    public <T> ForkJoinTask<T> submit(Runnable task, T result) {
        return super.submit(MDCUtility.wrapWithMdcContext(task), result);
    }

    @Override
    public ForkJoinTask<?> submit(Runnable task) {
        return super.submit(MDCUtility.wrapWithMdcContext(task));
    }

    @Override
    public void execute(Runnable task) {
        super.execute(MDCUtility.wrapWithMdcContext(task));
    }
}
The utility methods that do the wrapping live in MDCUtility:
import org.slf4j.MDC;

import java.util.Map;
import java.util.concurrent.Callable;

public class MDCUtility {

    public static <T> Callable<T> wrapWithMdcContext(Callable<T> task) {
        // save the current MDC context of the submitting thread
        Map<String, String> contextMap = MDC.getCopyOfContextMap();
        return () -> {
            setMDCContext(contextMap);
            try {
                return task.call();
            } finally {
                // once the task is complete, clear the MDC
                MDC.clear();
            }
        };
    }

    public static Runnable wrapWithMdcContext(Runnable task) {
        // save the current MDC context of the submitting thread
        Map<String, String> contextMap = MDC.getCopyOfContextMap();
        return () -> {
            setMDCContext(contextMap);
            try {
                task.run();
            } finally {
                // once the task is complete, clear the MDC
                MDC.clear();
            }
        };
    }

    public static void setMDCContext(Map<String, String> contextMap) {
        MDC.clear();
        if (contextMap != null) {
            MDC.setContextMap(contextMap);
        }
    }
}
Below are some guidelines for usage:
Use the class MDCAwareCompletableFuture rather than the class CompletableFuture.
A couple of methods in the class CompletableFuture instantiate the plain version internally, as in new CompletableFuture.... For such methods (most of the public static methods), use an alternative way to obtain an MDCAwareCompletableFuture. For example, rather than CompletableFuture.supplyAsync(...), you can use new MDCAwareCompletableFuture<>().completeAsync(...) (see the usage sketch after this list).
Convert an instance of CompletableFuture to MDCAwareCompletableFuture with the method getMDCAwareCompletionStage when you are stuck with one, say because some external library returns a plain CompletableFuture. Obviously you can't retain the context within that library, but this method still retains the context once control returns to your application code.
When supplying an executor as a parameter, make sure it is MDC-aware, such as MDCAwareForkJoinPool. You could likewise create an MDCAwareThreadPoolExecutor by overriding its execute method to serve your use case. You get the idea!
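For illustration, a minimal usage sketch built on the classes above; the requestId key and the printed output are placeholders that keep the sketch self-contained:

import org.slf4j.MDC;

public class MDCAwareUsageExample {
    public static void main(String[] args) {
        // put something into the MDC on the calling thread
        MDC.put("requestId", "425");

        // instead of CompletableFuture.supplyAsync(...), start from an MDC-aware future
        new MDCAwareCompletableFuture<Void>()
                .completeAsync(() -> null)                 // runs on the MDC-aware default pool
                .thenApplyAsync(aVoid -> "some result")    // the MDC travels with the task
                .thenAccept(result ->
                        // %X{requestId} in the Logback pattern would show 425 here as well
                        System.out.println(MDC.get("requestId") + " - " + result))
                .join();
    }
}

The point is that both the value computation and the downstream callbacks see the same MDC entries as the thread that started the chain.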
You can find a detailed explanation of all of the above here in a post about the same.

How to list all component log detail level via jython script in Websphere (8.x)?

Is it possible to list all components that are currently available (at runtime) and for which you can change the log level?
That way you wouldn't have to know the exact logger name beforehand for some deployed application.
E.g. a command listing all available loggers for server1 in a WebSphere cluster.
Thank you,
ralf
If you just want to export all loggers, you could write a very simple servlet/JSP to print all registered loggers, like this (I know it's not Jython, but maybe it will still be useful for you):
#WebServlet("/LoggerTest")
public class LoggerTest extends HttpServlet {
protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
LogManager logManager = LogManager.getLogManager();
Enumeration<String> loggerNames = logManager.getLoggerNames();
while (loggerNames.hasMoreElements()) {
String loggerName = (String) loggerNames.nextElement();
System.out.println(loggerName);
}
}
}

Dropwizard intercept bad json and return custom error message

I want to intercept bad JSON input and return custom error messages in a Dropwizard application. I followed the approach of defining a custom exception mapper as described here: http://gary-rowe.com/agilestack/2012/10/23/how-to-implement-a-runtimeexceptionmapper-for-dropwizard/ but it did not work for me. The same question has been asked here https://groups.google.com/forum/#!topic/dropwizard-user/r76Ny-pCveA but is unanswered.
Any help would be highly appreciated.
My code is below, and I am registering it in Dropwizard as environment.jersey().register(RuntimeExceptionMapper.class);
@Provider
public class RuntimeExceptionMapper implements ExceptionMapper<RuntimeException> {

    private static Logger logger = LoggerFactory.getLogger(RuntimeExceptionMapper.class);

    @Override
    public Response toResponse(RuntimeException runtime) {
        logger.error("API invocation failed. Runtime : {}, Message : {}", runtime, runtime.getMessage());
        return Response.serverError().type(MediaType.APPLICATION_JSON).entity(new Error()).build();
    }
}
Problem 1:
The exception thrown by Jackson doesn't extend RuntimeException, but it does extend Exception. This doesn't matter though (see Problem 2).
Problem 2:
DropwizardResourceConfig registers its own JsonProcessingExceptionMapper, so you should already see results similar to
{
"message":"Unrecognized field \"field\" (class d.s.h.c.MyClass),..."
}
Now if you want to override this, you should create a more specific exception mapper; when working with exception mappers, the most specific one is chosen. JsonProcessingException is subclassed by JsonMappingException and JsonParseException, so you will want to create an exception mapper for each of these and register them. I am not sure how to unregister the Dropwizard JsonProcessingExceptionMapper, otherwise we could just create a mapper for JsonProcessingException, which would save us the hassle of creating both.
Update
So you can remove the Dropwizard mapper, if you want, with the following
Set<Object> providers = environment.jersey().getResourceConfig().getSingletons();
Iterator<Object> it = providers.iterator();
while (it.hasNext()) {
    Object val = it.next();
    if (val instanceof JsonProcessingExceptionMapper) {
        it.remove();
        break;
    }
}
Then you are free to register your own mapper for JsonProcessingException (see the sketch below).
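For illustration, a minimal sketch of such a mapper; the class name and the error payload are made up, so adapt them to whatever error representation your API uses:

import com.fasterxml.jackson.core.JsonProcessingException;
import java.util.Collections;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.ExceptionMapper;
import javax.ws.rs.ext.Provider;

@Provider
public class CustomJsonExceptionMapper implements ExceptionMapper<JsonProcessingException> {

    @Override
    public Response toResponse(JsonProcessingException exception) {
        // answer bad JSON with a 400 and a custom body instead of the default message
        return Response.status(Response.Status.BAD_REQUEST)
                .type(MediaType.APPLICATION_JSON)
                .entity(Collections.singletonMap("error", "Malformed JSON: " + exception.getOriginalMessage()))
                .build();
    }
}

Register it the same way as the mapper above, e.g. environment.jersey().register(new CustomJsonExceptionMapper());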

Handling exceptions in Spring MVC along with Rest API

I am using the @ControllerAdvice annotation for defining exception handling at the application level. The problem is that I have two @ControllerAdvice classes, one for REST and one for the normal web app. When I define an @ExceptionHandler for Exception.class in both, only the first one is considered. How do I separate the two? Or how can I catch an Exception and determine where it came from? Is there a way, or do I need to use controller-specific exception handlers?
I resolved this issue by creating custom exceptions for my application and giving each of them its own @ExceptionHandler method.
I also used an aspect to make sure that every exception is converted to one of the custom exceptions.
@Aspect
@Component
public class ExceptionInterceptor {

    @AfterThrowing(pointcut = "within(x.y.package..*)", throwing = "t")
    public void toRuntimeException(Throwable t)
            throws ApplicationException1, ApplicationException2, ApplicationException3 {
        if (t instanceof ApplicationException1) {
            throw (ApplicationException1) t;
        } else if (t instanceof ApplicationException2) {
            throw (ApplicationException2) t;
        } else {
            // assumes everything else has already been converted to ApplicationException3
            throw (ApplicationException3) t;
        }
    }
}
These will transfer control to the @ControllerAdvice (a sketch of such an advice follows below).
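For illustration, a minimal sketch of an advice with one handler method per custom exception, following the approach above; the handler names and HTTP status codes are placeholders:

import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;

@ControllerAdvice
public class ApplicationExceptionAdvice {

    @ExceptionHandler(ApplicationException1.class)
    public ResponseEntity<String> handleException1(ApplicationException1 ex) {
        // one dedicated handler per custom exception
        return ResponseEntity.status(HttpStatus.BAD_REQUEST).body(ex.getMessage());
    }

    @ExceptionHandler(ApplicationException2.class)
    public ResponseEntity<String> handleException2(ApplicationException2 ex) {
        return ResponseEntity.status(HttpStatus.CONFLICT).body(ex.getMessage());
    }

    @ExceptionHandler(ApplicationException3.class)
    public ResponseEntity<String> handleException3(ApplicationException3 ex) {
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(ex.getMessage());
    }
}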
I noticed this has been left for a month or so, so it might be old, but this article may help: http://www.baeldung.com/2013/01/31/exception-handling-for-rest-with-spring-3-2/
Section 3.5 is probably what you are looking for: a custom exception resolver.

Server-side schema validation with JAX-WS

I have a container-less JAX-WS service (published via Endpoint.publish() right from the main() method). I want my service to validate input messages. I tried the following annotation: @SchemaValidation(handler=MyErrorHandler.class) and implemented an appropriate handler class. When I start the service, I get the following:
Exception in thread "main" javax.xml.ws.WebServiceException:
Annotation @com.sun.xml.internal.ws.developer.SchemaValidation(outbound=true,
inbound=true, handler=class mypackage.MyErrorHandler) is not recognizable,
atleast one constructor of class
com.sun.xml.internal.ws.developer.SchemaValidationFeature
should be marked with @FeatureConstructor
I have found a few solutions on the internet, but all of them imply the use of a WebLogic container. I can't use a container in my case; I need an embedded service. Can I still use schema validation?
The @SchemaValidation annotation is not defined in the JAX-WS spec; validation is left open to the implementation. This means you need something more than just the classes in the JDK.
As long as you are able to add some jars to your classpath, you can set this up pretty easily using Metro (which is also included in WebLogic; this is why you find solutions that use WebLogic as the container). To be more precise, you need to add two jars to your classpath. I'd suggest to:
Download the most recent Metro release.
Unzip it somewhere.
Add the jaxb-api.jar and jaxws-api.jar to your classpath. You can do this, for example, by putting them into JAVA_HOME/lib/endorsed or by manually adding them to your project; this largely depends on the IDE or build setup you are using.
Once you have done this, your MyErrorHandler should work even if it is deployed via Endpoint.publish(). At least I have this setup locally and it compiles and works.
If you are not able to modify your classpath and still need validation, you will have to validate the request manually using JAXB (a sketch of that follows below).
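For illustration, a minimal sketch of such manual validation; request.xsd and the Request class are placeholders for your own schema and generated JAXB type:

import javax.xml.XMLConstants;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Unmarshaller;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import java.io.File;
import java.io.StringReader;

public class ManualValidation {

    public static Request unmarshalAndValidate(String xml) throws Exception {
        // compile the schema (placeholder file name)
        Schema schema = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI)
                .newSchema(new File("request.xsd"));

        // attach the schema to the unmarshaller so invalid payloads fail fast
        Unmarshaller unmarshaller = JAXBContext.newInstance(Request.class).createUnmarshaller();
        unmarshaller.setSchema(schema);

        // throws UnmarshalException (wrapping a SAXParseException) on validation errors
        return (Request) unmarshaller.unmarshal(new StreamSource(new StringReader(xml)));
    }
}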
Old question, but I solved the problem using the correct package and minimal configuration, as well as using only the services provided by WebLogic. I was hitting the same problem as you.
Just make sure you use the correct Java type, as I described here.
As I am planning to expand this into a tracking mechanism, I also implemented the custom error handler.
Web service with custom validation handler:
import com.sun.xml.ws.developer.SchemaValidation;

@Stateless
@WebService(portName = "ValidatedService")
@SchemaValidation(handler = MyValidator.class)
public class ValidatedService {

    public ValidatedResponse operation(@WebParam(name = "ValidatedRequest") ValidatedRequest request) {
        ValidatedResponse response = new ValidatedResponse();
        /* do business logic */
        return response;
    }
}
Custom handler to log and store the error in a database:
import java.util.logging.Level;

import org.xml.sax.SAXException;
import org.xml.sax.SAXParseException;

import com.sun.xml.ws.developer.ValidationErrorHandler;

public class MyValidator extends ValidationErrorHandler {

    private static java.util.logging.Logger log = LoggingHelper.getServerLogger();

    @Override
    public void warning(SAXParseException exception) throws SAXException {
        handleException(exception);
    }

    @Override
    public void error(SAXParseException exception) throws SAXException {
        handleException(exception);
    }

    @Override
    public void fatalError(SAXParseException exception) throws SAXException {
        handleException(exception);
    }

    private void handleException(SAXParseException e) throws SAXException {
        log.log(Level.SEVERE, "Validation error", e);
        // record in database for tracking etc.
        throw e;
    }
}
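Tying this back to the original embedded setup: with Metro on the classpath (as in the first answer), publishing a @SchemaValidation-annotated implementation from main() could look roughly like the sketch below; the URL and class names are placeholders:

import javax.jws.WebService;
import javax.xml.ws.Endpoint;
import com.sun.xml.ws.developer.SchemaValidation;

@WebService
@SchemaValidation(handler = MyValidator.class)
public class ValidatedServiceImpl {
    // operations as in ValidatedService above
}

class ValidatedServer {
    public static void main(String[] args) {
        // the endpoint keeps serving (and schema-validating) requests until the JVM is stopped
        Endpoint.publish("http://localhost:8080/validated", new ValidatedServiceImpl());
    }
}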
