How to use a gRPC interceptor to attach/update logging MDC in a Spring-Boot app

Problem
I have a Spring-Boot application in which I am also starting a gRPC server/service. Both the servlet and gRPC code send requests to a common object to process the request. When the request comes in I want to update the logging to display a unique 'ID' so I can track the request through the system.
On the Spring side I have set up a 'Filter' which updates the logging MDC to add some data to the log request (see this example). This works fine.
On the gRPC side I have created a 'ServerInterceptor' and added it to the service. While the interceptor gets called, the code to update the MDC does not stick, so when a request comes through the gRPC service I do not get the ID printed in the log. I realize this has to do with the fact that I'm intercepting the call in one thread and it's being dispatched by gRPC in another. What I can't seem to figure out is how to either intercept the call in the thread doing the work, or add the MDC information so it is properly propagated to the thread doing the work.
What I've tried
I have done a lot of searches and was quite surprised to not find this asked/answered, I can only assume my query skills are lacking :(
I'm fairly new to gRPC and this is the first Interceptor I'm writing. I've tried adding the interceptor several different ways (via ServerInterceptors.intercept, BindableService instance.intercept).
I've looked at LogNet's Spring Boot gRPC Starter, but I'm not sure this would solve the issue.
Here is the code I have added in my interceptor class
@Override
public <ReqT, RespT> ServerCall.Listener<ReqT> interceptCall(final ServerCall<ReqT, RespT> call, final Metadata headers, final ServerCallHandler<ReqT, RespT> next) {
    try {
        final String mdcData = String.format("[requestID=%s]",
                UUID.randomUUID().toString());
        MDC.put(MDC_DATA_KEY, mdcData);
        return next.startCall(call, headers);
    } finally {
        MDC.clear();
    }
}
Expected Result
When a request comes in via the RESTful API I see log output like this
2019-04-09 10:19:16.331 [requestID=380e28db-c8da-4e35-a097-4b8c90c006f4] INFO 87100 --- [nio-8080-exec-1] c.c.es.xxx: processing request step 1
2019-04-09 10:19:16.800 [requestID=380e28db-c8da-4e35-a097-4b8c90c006f4] INFO 87100 --- [nio-8080-exec-1] c.c.es.xxx: processing request step 2
2019-04-09 10:19:16.803 [requestID=380e28db-c8da-4e35-a097-4b8c90c006f4] INFO 87100 --- [nio-8080-exec-1] c.c.es.xxx: Processing request step 3
...
I'm hoping to get similar output when the request comes through the gRPC service.
Thanks

Since no one replied, I kept trying and came up with the following solution for my interceptCall function. I'm not 100% sure why this works, but it works for my use case.
private class LogInterceptor implements ServerInterceptor {
    @Override
    public <ReqT, RespT> ServerCall.Listener<ReqT> interceptCall(final ServerCall<ReqT, RespT> call,
                                                                 final Metadata headers,
                                                                 final ServerCallHandler<ReqT, RespT> next) {
        Context context = Context.current();
        final String requestId = UUID.randomUUID().toString();
        return Contexts.interceptCall(context, call, headers, new ServerCallHandler<ReqT, RespT>() {
            @Override
            public ServerCall.Listener<ReqT> startCall(ServerCall<ReqT, RespT> call, Metadata headers) {
                return new ForwardingServerCallListener.SimpleForwardingServerCallListener<ReqT>(next.startCall(call, headers)) {
                    /**
                     * The actual service call happens during onHalfClose().
                     */
                    @Override
                    public void onHalfClose() {
                        try (final CloseableThreadContext.Instance ctc = CloseableThreadContext.put("requestID",
                                requestId)) {
                            super.onHalfClose();
                        }
                    }
                };
            }
        });
    }
}
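For completeness, the interceptor still has to be registered on the server. Below is a minimal sketch of doing that with a manually built server; MyServiceImpl, the port, and making LogInterceptor accessible (it is declared private above) are assumptions for illustration, not part of the original code.
import io.grpc.Server;
import io.grpc.ServerBuilder;
import io.grpc.ServerInterceptors;

public class GrpcServerBootstrap {

    // Builds and starts a server whose service is wrapped with LogInterceptor.
    // MyServiceImpl and port 9090 are placeholders for your own service/config.
    public Server start() throws java.io.IOException {
        return ServerBuilder.forPort(9090)
                .addService(ServerInterceptors.intercept(new MyServiceImpl(), new LogInterceptor()))
                .build()
                .start();
    }
}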
In my application.properties I added the following (which I already had)
logging.pattern.level=[%X] %-5level
The '%X' tells the logging system to print all of the key/value pairs set via CloseableThreadContext.
Hopefully this may help someone else.

MDC stores data in a ThreadLocal variable, and you are right about "I realize this has to do with the fact that I'm intercepting the call in one thread and it's being dispatched by gRPC in another". Check @Eric Anderson's answer about the right way to use ThreadLocal in this post:
https://stackoverflow.com/a/56842315/2478531
Here is a working example -
@Slf4j // from Lombok, provides the 'log' logger used below
public class GrpcMDCInterceptor implements ServerInterceptor {

    private static final String MDC_DATA_KEY = "Key";

    @Override
    public <R, S> ServerCall.Listener<R> interceptCall(
            ServerCall<R, S> serverCall,
            Metadata metadata,
            ServerCallHandler<R, S> next
    ) {
        log.info("Setting user context, metadata {}", metadata);
        final String mdcData = String.format("[requestID=%s]", UUID.randomUUID().toString());
        MDC.put(MDC_DATA_KEY, mdcData);
        try {
            return new WrappingListener<>(next.startCall(serverCall, metadata), mdcData);
        } finally {
            MDC.clear();
        }
    }

    private static class WrappingListener<R>
            extends ForwardingServerCallListener.SimpleForwardingServerCallListener<R> {

        private final String mdcData;

        public WrappingListener(ServerCall.Listener<R> delegate, String mdcData) {
            super(delegate);
            this.mdcData = mdcData;
        }

        @Override
        public void onMessage(R message) {
            MDC.put(MDC_DATA_KEY, mdcData);
            try {
                super.onMessage(message);
            } finally {
                MDC.clear();
            }
        }

        @Override
        public void onHalfClose() {
            MDC.put(MDC_DATA_KEY, mdcData);
            try {
                super.onHalfClose();
            } finally {
                MDC.clear();
            }
        }

        @Override
        public void onCancel() {
            MDC.put(MDC_DATA_KEY, mdcData);
            try {
                super.onCancel();
            } finally {
                MDC.clear();
            }
        }

        @Override
        public void onComplete() {
            MDC.put(MDC_DATA_KEY, mdcData);
            try {
                super.onComplete();
            } finally {
                MDC.clear();
            }
        }

        @Override
        public void onReady() {
            MDC.put(MDC_DATA_KEY, mdcData);
            try {
                super.onReady();
            } finally {
                MDC.clear();
            }
        }
    }
}
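Note that the MDC value only shows up if the logging pattern actually prints it. With the default Spring Boot/Logback setup, something like the following in application.properties (assuming the key name "Key" used above) prints either all MDC entries or just that single key:
# print every MDC entry
logging.pattern.level=[%X] %-5level
# or print only the key set by the interceptor
logging.pattern.level=[%X{Key}] %-5level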

Related

Creating OpenTelemetry Context using trace-id and span-id of remote parent

I have a microservice which supports OpenTracing and injects the trace-id and span-id into the headers. Another microservice supports OpenTelemetry. How can I create a parent span using the trace-id and span-id in the second microservice?
Thanks,
You can use the W3C Trace Context specification to achieve this. We need to send traceparent (Ex: 00-8652a752089f33e2659dff28d683a18f-7359b90f4355cfd9-01) from the producer via HTTP headers (or you can create it using the trace-id and span-id in the consumer). Then we can extract the remote context and create the span with the traceparent.
This is the consumer controller. TextMapGetter is used to map the traceparent data to the Context. ExtractModel is just a custom class.
@GetMapping(value = "/second")
public String secondTest(@RequestHeader(value = "traceparent") String traceparent) {
    try {
        Tracer tracer = openTelemetry.getTracer("cloud.events.second");
        TextMapGetter<ExtractModel> getter = new TextMapGetter<>() {
            @Override
            public String get(ExtractModel carrier, String key) {
                if (carrier.getHeaders().containsKey(key)) {
                    return carrier.getHeaders().get(key);
                }
                return null;
            }

            @Override
            public Iterable<String> keys(ExtractModel carrier) {
                return carrier.getHeaders().keySet();
            }
        };
        ExtractModel model = new ExtractModel();
        model.addHeader("traceparent", traceparent);
        Context extractedContext = openTelemetry.getPropagators().getTextMapPropagator()
                .extract(Context.current(), model, getter);
        try (Scope scope = extractedContext.makeCurrent()) {
            // Automatically use the extracted SpanContext as parent.
            Span serverSpan = tracer.spanBuilder("CloudEvents Server")
                    .setSpanKind(SpanKind.SERVER)
                    .startSpan();
            try {
                Thread.sleep(150);
            } finally {
                serverSpan.end();
            }
        }
    } catch (InterruptedException e) {
        throw new RuntimeException(e);
    }
    return "Server Received!";
}
Then, when configuring the OpenTelemetrySdk, we need to set W3CTraceContextPropagator in the context propagators.
// Use W3C Propagator(to extract span from HTTP headers) since we use the W3C specifications
TextMapPropagator textMapPropagator = W3CTraceContextPropagator.getInstance();
OpenTelemetrySdk openTelemetrySdk = OpenTelemetrySdk.builder()
        .setTracerProvider(tracerProvider)
        .setPropagators(ContextPropagators.create(textMapPropagator))
        .buildAndRegisterGlobal();
Here is my custom ExtractModel class:
public class ExtractModel {

    private Map<String, String> headers;

    public void addHeader(String key, String value) {
        if (this.headers == null) {
            headers = new HashMap<>();
        }
        headers.put(key, value);
    }

    public Map<String, String> getHeaders() {
        return headers;
    }

    public void setHeaders(Map<String, String> headers) {
        this.headers = headers;
    }
}
You can find more details in the official documentation for manual instrumentation.
Generally you have to propagate the span-id and trace-id if they are available in the headers. For any request your microservice receives, check whether the headers contain a span-id and trace-id. If yes, extract them and use them in your service.
If they are not present, create a new one, use it in your service, and also add it to requests that go out of your microservice (see the sketch below).
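For the outgoing side, a rough sketch of injecting the current context into a header map with a TextMapSetter follows; the TraceparentInjector class name and the plain Map carrier are illustrative assumptions, not a fixed API.
import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.context.Context;
import io.opentelemetry.context.propagation.TextMapSetter;
import java.util.HashMap;
import java.util.Map;

public class TraceparentInjector {

    private final OpenTelemetry openTelemetry; // the configured SDK instance

    public TraceparentInjector(OpenTelemetry openTelemetry) {
        this.openTelemetry = openTelemetry;
    }

    // Returns the W3C trace headers ('traceparent', possibly 'tracestate')
    // for the current span context, ready to be copied onto an outgoing request.
    public Map<String, String> currentTraceHeaders() {
        Map<String, String> headers = new HashMap<>();
        TextMapSetter<Map<String, String>> setter = (carrier, key, value) -> carrier.put(key, value);
        openTelemetry.getPropagators().getTextMapPropagator()
                .inject(Context.current(), headers, setter);
        return headers;
    }
}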

Is there a way to record response times of feign client

@FeignClient(...)
public interface SomeClient {

    @RequestMapping(value = "/someUrl", method = POST, consumes = "application/json")
    ResponseEntity<String> createItem(...);
}
Is there a way to find the response times for createItem api call?
We are using spring boot, actuator, prometheus.
There is a straightforward as well as a customized way of logging the Feign client's request and response (including the response time). In the default case we just have to inject a feign.Logger.Level bean, that's it.
THE DEFAULT / STRAIGHTFORWARD WAY
@Bean
Logger.Level feignLoggerLevel() {
    return Logger.Level.BASIC;
}
The available logging levels are BASIC, FULL, HEADERS and NONE (the default); see the Feign documentation for more details.
The above bean injection will give you logging of the Feign request and response in the below format:
REQUEST:
refer
log(configKey, "---> %s %s HTTP/1.1", request.httpMethod().name(), request.url());
ex:2019-09-26 12:50:12.163 [DEBUG] [http-nio-4200-exec-5] [com.sample.FeignClient:72] [FeignClient#getUser] ---> END HTTP (0-byte body)
where the configkey means FeignClientClassName#FeignClientCallingMethodName ex: ApiClient#apiMethod.
RESPONSE
refer
log(configKey, "<--- HTTP/1.1 %s%s (%sms)", status, reason, elapsedTime);
ex:2019-09-26 12:50:12.163 [DEBUG] [http-nio-4200-exec-5] [com.sample.FeignClient:72] [FeignClient#getUser] <--- HTTP/1.1 200 OK (341ms)
The elapsedTime is the response time taken by the API call.
NOTE: If you prefer the default way of Feign client logging then you also have to consider the underlying application's logging level, because the feign.Slf4jLogger class logs the Feign request and response details at DEBUG level (refer). If the underlying logging level is above DEBUG, you need to specify an explicit DEBUG logger for the Feign logging package/class, otherwise it will not work.
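For example, assuming Spring Cloud OpenFeign, properties along these lines enable it (the package name is a placeholder):
# Feign logs at DEBUG, so the client's package (or class) must log DEBUG as well
logging.level.com.example.clients=DEBUG
# optionally, the Feign log level can also be set per client (or 'default') via properties
feign.client.config.default.loggerLevel=basic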
THE CUSTOMIZED WAY
If you prefer logging in your own customized format then you can extend the feign.Logger class and customize the logging. As a typical example, if I want to log the header details of the request and response on a single line as a list (by default Logger.Level.HEADERS prints the headers on multiple lines):
package com.test.logging.feign;

import feign.Logger;
import feign.Request;
import feign.Response;
import lombok.extern.slf4j.Slf4j;

import java.io.IOException;

import static feign.Logger.Level.HEADERS;

@Slf4j
public class customFeignLogger extends Logger {

    @Override
    protected void logRequest(String configKey, Level logLevel, Request request) {
        if (logLevel.ordinal() >= HEADERS.ordinal()) {
            super.logRequest(configKey, logLevel, request);
        } else {
            int bodyLength = 0;
            if (request.requestBody().asBytes() != null) {
                bodyLength = request.requestBody().asBytes().length;
            }
            log(configKey, "---> %s %s HTTP/1.1 (%s-byte body) %s", request.httpMethod().name(), request.url(), bodyLength, request.headers());
        }
    }

    @Override
    protected Response logAndRebufferResponse(String configKey, Level logLevel, Response response, long elapsedTime)
            throws IOException {
        if (logLevel.ordinal() >= HEADERS.ordinal()) {
            super.logAndRebufferResponse(configKey, logLevel, response, elapsedTime);
        } else {
            int status = response.status();
            Request request = response.request();
            log(configKey, "<--- %s %s HTTP/1.1 %s (%sms) %s", request.httpMethod().name(), request.url(), status, elapsedTime, response.headers());
        }
        return response;
    }

    @Override
    protected void log(String configKey, String format, Object... args) {
        log.debug(format(configKey, format, args));
    }

    protected String format(String configKey, String format, Object... args) {
        return String.format(methodTag(configKey) + format, args);
    }
}
We also have to inject the customFeignLogger class bean:
@Bean
public customFeignLogger customFeignLogging() {
    return new customFeignLogger();
}
If you are building FeignClient by yourself then you can build it with the customized logger:
Feign.builder().logger(new customFeignLogger()).logLevel(Level.BASIC).target(SomeFeignClient.class,"http://localhost:8080");
Add the following annotation to your project.
package com.example.annotation;

@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface DebugTracking {

    @Aspect
    @Component
    public static class DebugTrackingAspect {

        @Around("@annotation(com.example.annotation.DebugTracking)")
        public Object trackExecutionTime(ProceedingJoinPoint joinPoint) throws Throwable {
            StopWatch stopWatch = new StopWatch();
            stopWatch.start(joinPoint.toShortString());
            Exception exceptionThrown = null;
            try {
                // Execute the join point as usual
                return joinPoint.proceed();
            } catch (Exception ex) {
                exceptionThrown = ex;
                throw ex;
            } finally {
                stopWatch.stop();
                System.out.println(String.format("%s took %dms.", stopWatch.getLastTaskName(), stopWatch.getLastTaskTimeMillis()));
                if (exceptionThrown != null) {
                    System.out.println(String.format("Exception thrown: %s", exceptionThrown.getMessage()));
                    exceptionThrown.printStackTrace();
                }
            }
        }
    }
}
Then annotate the methods you want to track in your @FeignClient with @DebugTracking.
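Usage would look roughly like the client from the question with the annotation added; the name/url values and the request body parameter below are placeholders:
@FeignClient(name = "some-service", url = "${some.service.url}")
public interface SomeClient {

    @DebugTracking // timed by DebugTrackingAspect above
    @RequestMapping(value = "/someUrl", method = POST, consumes = "application/json")
    ResponseEntity<String> createItem(@RequestBody String item);
}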
I'm using the following (with Spring and Lombok):
@Configuration // from Spring
@Slf4j // from Lombok
public class MyFeignConfiguration {

    @Bean // from Spring
    public MyFeignClient myFeignClient() {
        return Feign.builder()
                .logger(new Logger() {
                    @Override
                    protected void log(String configKey, String format, Object... args) {
                        LOG.info(String.format(methodTag(configKey) + format, args)); // LOG is the Lombok Slf4j object
                    }
                })
                .logLevel(Logger.Level.BASIC) // see https://cloud.spring.io/spring-cloud-netflix/multi/multi_spring-cloud-feign.html#_feign_logging
                .target(MyFeignClient.class, "http://localhost:8080");
    }
}
The correct way of doing this is using a custom logger, as pointed out above. Using @Aspect is wrong: with that you create an additional wrapper around the service. Feign already records this metric, so get the metric from Feign (for example via its Micrometer integration, sketched below).
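If you prefer a metric over log parsing, Feign's Micrometer integration is one option; here is a sketch for a hand-built client, assuming a recent Feign version with the feign-micrometer module on the classpath and an available MeterRegistry (SomeClient and the URL are placeholders, and Spring Cloud OpenFeign typically wires this capability automatically when feign-micrometer is present):
import feign.Feign;
import feign.micrometer.MicrometerCapability;
import io.micrometer.core.instrument.MeterRegistry;

public class MeteredClientFactory {

    // Registers Feign's Micrometer capability so request timings are recorded
    // in the MeterRegistry, which actuator/prometheus can then expose.
    public SomeClient create(MeterRegistry meterRegistry) {
        return Feign.builder()
                .addCapability(new MicrometerCapability(meterRegistry))
                .target(SomeClient.class, "http://localhost:8080");
    }
}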

In Spring Boot 2.x (2.1.0) the custom filter processes once but produces duplicated data

I'm new to using Spring Boot 2.1.0 with JSP (for certain reasons) to develop web applications.
I'm using a filter, mapped to one type of URL, to save access info into the database.
But there are some problems:
1. When I click the link on a menu, a new page is opened in the browser, but the log is output twice, which indicates that the doFilterInternal method executed twice. This situation is NOT correct;
2018-12-13 13:43:07.405 WARN 14912 --- [nio-8096-exec-2] c.y.l.c.filters.rpt.AccessMenuFilter : ---------------------------- Access Once ----------------------------------------
2018-12-13 13:43:07.405 WARN 14912 --- [nio-8096-exec-3] c.y.l.c.filters.rpt.AccessMenuFilter : ---------------------------- Access Once ----------------------------------------
2. Then I right-click on the page opened in step one and choose to refresh the iframe; the log is output only once, which indicates that doFilterInternal executed once. This situation is correct; in step one it should execute only once too.
2018-12-13 13:44:02.118 WARN 14912 --- [nio-8096-exec-1] c.y.l.c.filters.rpt.AccessMenuFilter : ---------------------------- Access Once ----------------------------------------
Two records are inserted into the database in step one, one record in step two.
The filter extends OncePerRequestFilter which, as I've seen from other posts, may cause it to be called twice, but then why is the filter called only once in step two?
I post the main codes below:
POM.xml
https://github.com/richard20427176/pom-config/blob/master/pom.xml
Below is the main SpringBootConfig code:
@SpringBootConfiguration
public class SpringBootConfig implements WebMvcConfigurer {

    @Override
    public void configurePathMatch(PathMatchConfigurer configurer) {
        configurer.setUseSuffixPatternMatch(false);
        // configurer.setUseTrailingSlashMatch(false);
        configurer.setUseRegisteredSuffixPatternMatch(true);
    }

    @Override
    public void configureContentNegotiation(ContentNegotiationConfigurer configurer) {
        configurer.favorPathExtension(true)
                .favorParameter(true)
                .parameterName("format")
                .ignoreAcceptHeader(true)
                .defaultContentType(MediaType.TEXT_HTML)
                .mediaType("html", MediaType.TEXT_HTML)
                .mediaType("json", MediaType.APPLICATION_JSON)
                .mediaType("xls", MediaType.valueOf("application/vnd.ms-excel"))
                .mediaType("xlsx", MediaType.valueOf("application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"));
    }

    @Override
    public void configureViewResolvers(ViewResolverRegistry registry) {
        Set<String> modelKeys = new HashSet<>();
        modelKeys.add("list");
        modelKeys.add("table");
        registry.jsp("/views/", ".jsp");
        registry.enableContentNegotiation(new MappingJackson2JsonView());
        XlsView xlsView = new XlsView();
        xlsView.setModelKeys(modelKeys);
        registry.enableContentNegotiation(xlsView);
        XlsxView xlsxView = new XlsxView();
        xlsxView.setModelKeys(modelKeys);
        registry.enableContentNegotiation(xlsxView);
    }
}
And below is the filter config code:
@Configuration
public class FilterConfig implements WebMvcConfigurer {

    @Bean
    public FilterRegistrationBean shiroDelegatingFilterProxy() {
        DelegatingFilterProxy proxy = new DelegatingFilterProxy();
        proxy.setTargetFilterLifecycle(true);
        proxy.setTargetBeanName("shiroFilter");
        FilterRegistrationBean filterRegistrationBean = new FilterRegistrationBean();
        filterRegistrationBean.setFilter(proxy);
        return filterRegistrationBean;
    }
}
Last, below is the implementation of the filter:
@Component
public class AccessMenuFilter extends OncePerRequestFilter {

    private static final Logger LOGGER = LoggerFactory.getLogger(AccessMenuFilter.class);

    @Autowired
    private MonitorService monitorService;

    @Autowired
    private MenuService menuService;

    private Set<MenuIsMonitorVo> monitorMenus = new HashSet<>();
    private Map<String, RequestMatcher> menuRequestMatcherMap = new ConcurrentHashMap<>();

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain) throws ServletException, IOException {
        try {
            String pageNumber = request.getParameter(Page.PAGE_NUMBER_REQUEST_PARAM_NAME);
            // Only record requests without a page number or for the first page
            if (StringUtils.isBlank(pageNumber) || pageNumber.compareTo("1") <= 0) {
                for (Map.Entry<String, RequestMatcher> entry : menuRequestMatcherMap.entrySet()) {
                    if (entry.getValue().matches(request)) {
                        String username = ShiroBaseService.getLoginUser().getUsername();
                        UserAgent userAgent = UserAgent.parseUserAgentString(request.getHeader(HttpHeaders.USER_AGENT));
                        String browser = WebUtil.getBrowserName(userAgent);
                        CreateAccessMenuDto createAccessMenuDto = new CreateAccessMenuDto();
                        createAccessMenuDto.setMenuId(entry.getKey());
                        createAccessMenuDto.setUserName(username);
                        createAccessMenuDto.setOsName(userAgent.getOperatingSystem().getName());
                        createAccessMenuDto.setBrowserName(browser);
                        createAccessMenuDto.setIpAddress(RemoteIpHelper.getRemoteIpFrom(request));
                        createAccessMenuDto.setRequestLocale(request.getLocale().getDisplayName());
                        createAccessMenuDto.setCreateTime(new Date());
                        monitorService.asyncCreateAccessMenu(createAccessMenuDto);
                        LOGGER.warn("---------------------------- Access Once ----------------------------------------");
                        LOGGER.debug("Successfully add user access log:[SessionId:{};Username:{};platform:{};Browser:{};IPAddress:{};MenuId:{}]. The request url is {}",
                                request.getSession(false).getId(),
                                username,
                                userAgent.getOperatingSystem().getName(),
                                browser,
                                RemoteIpHelper.getRemoteIpFrom(request),
                                entry.getKey(),
                                request.getRequestURL());
                        break;
                    }
                }
            }
        } catch (Exception ex) {
            LOGGER.error("User Access fail due to the reason:" + ex.getMessage());
        } finally {
            filterChain.doFilter(request, response);
        }
    }

    @Override
    protected void initFilterBean() throws ServletException {
        if (monitorMenus != null && monitorMenus.size() > 0) {
            RequestMatcher matcher;
            for (MenuIsMonitorVo menu : monitorMenus) {
                if (menu.getIsMonitor().equals("1")) {
                    String pattern = menu.getMenuUrl();
                    if (!pattern.startsWith("/")) {
                        pattern = "/" + pattern;
                    }
                    if (pattern.indexOf("?") != -1) {
                        pattern = pattern.substring(0, pattern.indexOf("?"));
                    }
                    LOGGER.info("Add menu[MenuId:{},pattern:{}] to access log monitor candidate map.", menu.getMenuId(), pattern);
                    matcher = new AntPathRequestMatcher(pattern);
                    menuRequestMatcherMap.put(menu.getMenuId(), matcher);
                }
            }
        } else {
            monitorMenus = menuService.menuIsMonitor().stream().collect(Collectors.toSet());
        }
    }
}
I hope someone can help me, thanks very much.
I'm quite sure that the OPTIONS requests are doing those extra filter invocations for you.
Please check http://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/OPTIONS for more information. It should be visible in the network tab of your browser.
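If the extra invocation does turn out to be an OPTIONS (preflight) request, OncePerRequestFilter lets you skip it by overriding shouldNotFilter; a minimal sketch to add to AccessMenuFilter:
@Override
protected boolean shouldNotFilter(HttpServletRequest request) {
    // Skip CORS preflight requests so only the real request is recorded.
    return "OPTIONS".equalsIgnoreCase(request.getMethod());
}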

Spring WS (DefaultWsdl11Definition) HTTP status code with void

We have a (working) SOAP web service based on Spring WS with DefaultWsdl11Definition.
This is basically what it looks like:
@Endpoint("name")
public class OurEndpoint {

    @PayloadRoot(namespace = "somenamespace", localPart = "localpart")
    public void onMessage(@RequestPayload SomePojo pojo) {
        // do stuff
    }
}
It is wired in Spring and it is correctly processing all of our SOAP requests. The only problem is that the method returns a 202 Accepted. This is not what the caller wants, he'd rather have us return 204 No Content (or if that is not possible an empty 200 OK).
Our other endpoints have a valid response object, and do return 200 OK. It seems void causes 202 when 204 might be more appropriate?
Is it possible to change the response code in Spring WS? We can't seem to find the correct way to do this.
Things we tried and didn't work:
Changing the return type to:
HttpStatus.NO_CONTENT
org.w3c.dom.Element <- not accepted
Adding @ResponseStatus <- this is for MVC, not WS
Any ideas?
Instead of what I wrote in the comments, it is probably easiest to create a delegation kind of solution.
public class DelegatingMessageDispatcher extends MessageDispatcher {

    private final WebServiceMessageReceiver delegate;

    public DelegatingMessageDispatcher(WebServiceMessageReceiver delegate) {
        this.delegate = delegate;
    }

    public void receive(MessageContext messageContext) throws Exception {
        this.delegate.receive(messageContext);
        if (!messageContext.hasResponse()) {
            TransportContext tc = TransportContextHolder.getTransportContext();
            if (tc != null && tc.getConnection() instanceof HttpServletConnection) {
                ((HttpServletConnection) tc.getConnection()).getHttpServletResponse().setStatus(200);
            }
        }
    }
}
Then you need to configure a bean named messageDispatcher which would wrap the default SoapMessageDispatcher.
@Bean
public MessageDispatcher messageDispatcher() {
    return new DelegatingMessageDispatcher(soapMessageDispatcher());
}

@Bean
public MessageDispatcher soapMessageDispatcher() {
    return new SoapMessageDispatcher();
}
Something like that should do the trick. Now when no response is created (in the case of a void return type), the status you want is sent back to the client.
While finding a proper solution we've encountered some ugly problems:
Creating custom adapters/interceptors is problematic because the handleResponse method isn't called by Spring when you don't have a response (void)
Manually setting the status code doesn't work because HttpServletConnection keeps a boolean statusCodeSet which doesn't get updated
But luckily we managed to get it working with the following changes:
/**
 * If a web service has no response, this handler returns: 204 No Content
 */
public class NoContentInterceptor extends EndpointInterceptorAdapter {

    @Override
    public void afterCompletion(MessageContext messageContext, Object o, Exception e) throws Exception {
        if (!messageContext.hasResponse()) {
            TransportContext tc = TransportContextHolder.getTransportContext();
            if (tc != null && tc.getConnection() instanceof HttpServletConnection) {
                HttpServletConnection connection = ((HttpServletConnection) tc.getConnection());
                // First we force the 'statusCodeSet' boolean to true:
                connection.setFaultCode(null);
                // Next we can set our custom status code:
                connection.getHttpServletResponse().setStatus(204);
            }
        }
    }
}
Next we need to register this interceptor, this can be easily done using Spring's XML:
<sws:interceptors>
<bean class="com.something.NoContentInterceptor"/>
</sws:interceptors>
A big thanks to @m-deinum for pointing us in the right direction!
Overriding the afterCompletion method really helped me out in the exact same situation. For those who use code-based Spring configuration, here's how one can add the interceptor for a specific endpoint.
Annotate the custom interceptor with @Component, then register the custom interceptor in a WsConfigurerAdapter like this:
@EnableWs
@Configuration
public class EndpointConfig extends WsConfigurerAdapter {

    /**
     * Add our own interceptor for the specified WS endpoint.
     * @param interceptors
     */
    @Override
    public void addInterceptors(List<EndpointInterceptor> interceptors) {
        interceptors.add(new PayloadRootSmartSoapEndpointInterceptor(
                new NoContentInterceptor(),
                "NAMESPACE",
                "LOCAL_PART"
        ));
    }
}
NAMESPACE and LOCAL_PART should correspond to the endpoint.
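With the endpoint from the question, that would mean using the values of its @PayloadRoot annotation:
interceptors.add(new PayloadRootSmartSoapEndpointInterceptor(
        new NoContentInterceptor(),
        "somenamespace",
        "localpart"
));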
If someone ever wants to set a custom HTTP status when returning a non-void response, here is a solution:
Spring Boot WS-Server - Custom Http Status

Google Web Toolkit (GWT) EventBus event firing/handling

Background Story:
I am developing a GWT application, using the standard MVP design pattern, and also using RPC to get data from my custom data handling servlet (does a lot behind the scenes). Anyway, my goal is to create a very simple custom caching mechanism, that stores the data returned from the RPC callback in a static cache POJO. (The callback also sends a custom event using the SimpleEventBus to all registered handlers.) Then when I request the data again, I'll check the cache before doing the RPC server call again. (And also send a custom event using the EventBus).
The Problem:
When I send the event from the RPC callback, everything works fine. The problem is when I send the event outside the RPC callback when I just send the cached object. For some reason this event doesn't make it to my registered handler. Here is some code:
public void callServer(final Object source)
{
    if (cachedResponse != null)
    {
        System.err.println("Getting Response from Cache for: " + source.getClass().getName());
        // Does this actually fire the event?
        eventBus.fireEventFromSource(new ResponseEvent(cachedResponse), source);
    }
    else
    {
        System.err.println("Getting Response from Server for: " + source.getClass().getName());
        service.callServer(new AsyncCallback<String>() {
            @Override
            public void onFailure(Throwable caught) {
                System.err.println("RPC Call Failed.");
            }

            @Override
            public void onSuccess(String result) {
                cachedResponse = result;
                eventBus.fireEventFromSource(new ResponseEvent(cachedResponse), source);
            }
        });
    }
}
Now I have two Activities, HelloActivity and GoodbyeActivity (taken from: GWT MVP code)
They also print out messages when the handler is called. Anyway, this is the output I get from the logs: (Not correct)
Getting Response from Cache for: com.hellomvp.client.activity.HelloActivity
Response in GoodbyeActivity from: com.hellomvp.client.activity.HelloActivity
Getting Response from Cache for: com.hellomvp.client.activity.GoodbyeActivity
Response in HelloActivity from: com.hellomvp.client.activity.GoodbyeActivity
What I expect to get is this:
Getting Response from Cache for: com.hellomvp.client.activity.HelloActivity
Response in HelloActivity from: com.hellomvp.client.activity.HelloActivity
Getting Response from Cache for: com.hellomvp.client.activity.GoodbyeActivity
Response in GoodbyeActivity from: com.hellomvp.client.activity.GoodbyeActivity
And I will get this expected output if I change the above code to the following: (This is the entire file this time...)
package com.hellomvp.client;

import com.google.gwt.core.client.GWT;
import com.google.gwt.event.shared.EventBus;
import com.google.gwt.user.client.rpc.AsyncCallback;
import com.hellomvp.events.ResponseEvent;

public class RequestManager {

    private EventBus eventBus;
    private String cachedResponse;
    private HelloServiceAsync service = GWT.create(HelloService.class);

    public RequestManager(EventBus eventBus)
    {
        this.eventBus = eventBus;
    }

    public void callServer(final Object source)
    {
        if (cachedResponse != null)
        {
            System.err.println("Getting Response from Cache for: " + source.getClass().getName());
            service.doNothing(new AsyncCallback<Void>() {
                @Override
                public void onFailure(Throwable caught) {
                    System.err.println("RPC Call Failed.");
                }

                @Override
                public void onSuccess(Void result) {
                    eventBus.fireEventFromSource(new ResponseEvent(cachedResponse), source);
                }
            });
        }
        else
        {
            System.err.println("Getting Response from Server for: " + source.getClass().getName());
            service.callServer(new AsyncCallback<String>() {
                @Override
                public void onFailure(Throwable caught) {
                    System.err.println("RPC Call Failed.");
                }

                @Override
                public void onSuccess(String result) {
                    cachedResponse = result;
                    eventBus.fireEventFromSource(new ResponseEvent(cachedResponse), source);
                }
            });
        }
    }
}
So to point it out: the only change is that I created a new RPC call that does nothing and sent the event in its callback with the cached data instead, and this causes the application to work as expected.
So the Question:
What am I doing wrong? I don't understand why 'eventBus.fireEvent(...)' needs to be in an RPC callback to work properly. I'm thinking this is a threading issue, but I have searched Google in vain for anything that would help.
I have an entire Eclipse project that showcases this issue that I'm having, it can be found at: Eclipse Problem Project Example
Edit: Please note that using eventBus.fireEventFromSource(...) is only being used for debugging purposes, since in my actual GWT Application I have more than one registered Handler for the events. So how do you use EventBus properly?
If I understand your problem correctly you are expecting calls to SimpleEventBus#fireEventFromSource to be routed only to the source object. This is not the case - the event bus will always fire events to all registered handlers. In general the goal of using an EventBus is to decouple the sources of events from their handlers - basing functionality on the source of an event runs counter to this goal.
To get the behavior you want pass an AsyncCallback to your caching RPC client instead of trying to use the EventBus concept in a way other than intended. This has the added benefit of alerting the Activity in question when the RPC call fails:
public class RequestManager {

    private String cachedResponse = null;
    private HelloServiceAsync service = GWT.create(HelloService.class);

    public void callServer(final AsyncCallback<String> callback) {
        if (cachedResponse != null) {
            callback.onSuccess(cachedResponse);
        } else {
            service.callServer(new AsyncCallback<String>() {
                @Override
                public void onFailure(Throwable caught) {
                    callback.onFailure(caught);
                }

                @Override
                public void onSuccess(String result) {
                    cachedResponse = result;
                    callback.onSuccess(cachedResponse);
                }
            });
        }
    }
}
And in the Activity:
clientFactory.getRequestManager().callServer(new AsyncCallback<String>() {
    @Override
    public void onFailure(Throwable caught) {
        // Handle failure.
    }

    @Override
    public void onSuccess(String result) {
        helloView.showResponse(result);
    }
});
