Spring Sleuth and Spring Integration generating the same traceId for all outbound API requests

I'm building a Spring Boot app that uses Spring Integration and Spring Sleuth. The app reads from a CSV file, and for each record in the file a call is made to an API using Spring's RestTemplate. Each time a file is read, all of the corresponding calls to the API end up with the same X-B3-TraceId (they do have different spanIds).
I would like each call to the API to have a different X-B3-TraceId. I believe Spring Integration sets a traceId for each file read operation and reuses it for every call to the API.
@Bean
public IntegrationFlow bridgeFlow() {
    return IntegrationFlows.from(ABC_SERVICE_QUEUE_CHANNEL)
            .bridge(e -> e.poller(Pollers.fixedDelay(period).maxMessagesPerPoll(MAX_MSG_PER_POLL)))
            .handle(someService, "someMethod")
            .route(router())
            .get();
}
"someMethod" has the call to the API using resttemplate as,
ResponseEntity<String> response = restTemplate.exchange(someUrl, HttpMethod.POST, requestEntity, String.class);
I tried manually setting the X-B3-TraceId header, but that seems to be getting overridden.

If you'd like to override some header with your own value, you should use something like this in your IntegrationFlow:
.enrichHeaders(s -> s.header("X-B3-TraceId", "some_value", true))
The last argument is important here; it must be set to true.
See its JavaDocs:
@param overwrite true to overwrite an existing header.
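For illustration, in the bridgeFlow() from the question the enricher could sit right before the handler that makes the REST call. This is only a sketch; "some_value" is a placeholder for whatever trace id you want to force:
@Bean
public IntegrationFlow bridgeFlow() {
    return IntegrationFlows.from(ABC_SERVICE_QUEUE_CHANNEL)
            .bridge(e -> e.poller(Pollers.fixedDelay(period).maxMessagesPerPoll(MAX_MSG_PER_POLL)))
            // overwrite = true, otherwise the already-present X-B3-TraceId header wins
            .enrichHeaders(s -> s.header("X-B3-TraceId", "some_value", true))
            .handle(someService, "someMethod")
            .route(router())
            .get();
}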
On the other hand, it is not clear from your code snippet how that X-B3-TraceId header appears in the message at all. Who sets it for you? And where are your file reader and the splitter for the records in it?

Related

Add sleuth trace id to request header

I have a Spring Boot application with the dependencies spring-cloud-starter-sleuth 3.0.3 and spring-cloud-sleuth-zipkin 3.0.3.
I have a requirement that I need to pass the trace-id in the request header when calling an API from WebClient.
Demo WebClient:
@Slf4j
@Component
@RequiredArgsConstructor
public class DemoApiClient {

    private final WebClient demoWebClient;
    private final DemoProperties demoProperties;
    private final Tracer tracer;

    public Mono<DemoDetail> retrieveDemoDetail(String demo) {
        return demoWebClient
                .get()
                .uri(uriBuilder -> uriBuilder
                        .path(demoProperties.getLookupPath())
                        .build(demo))
                .header("trace-id", tracer.currentSpan().context().traceId())
                .accept(APPLICATION_JSON)
                .retrieve()
                .bodyToMono(DemoDetail.class)
                .doOnError(e -> log.error("Could not find demo", e));
    }
}
tracer.currentSpan() comes back as null, hence an NPE is thrown.
The documentation gives an approach for adding the trace-id to the headers of the HTTP server response:
https://docs.spring.io/spring-cloud-sleuth/docs/3.0.3/reference/html/howto.html#how-to-add-headers-to-the-http-server-response.
However, I need the correct approach for adding the trace-id to the request header.
WebClient is instrumented (please see the docs: WebClient integration), so tracing information should be propagated over the wire out of the box.
If you want to do this manually (which I don't recommend), you need to check what you do "above" this method that prevents the tracing information from being propagated, e.g. you are switching threads, coming from an imperative context into Reactor, etc. You can work around this by getting the tracing information before the switch and either propagating it (see Scope) or injecting it into this method.
Also, you are not sending the whole trace context, just the traceId, so please check the docs and let Sleuth propagate the tracing information for you.
If you are creating a WebClient bean, you are not switching threads or going back and forth between imperative and reactive code, and you still don't see the tracing information in your header (propagated by Sleuth), you can try modifying the instrumentation mechanism; I recommend using DECORATE_QUEUES.
Also, Sleuth 3.1.x is out; you can try upgrading to it.
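As a sketch of the "WebClient bean" point: Sleuth applies its instrumentation through the auto-configured WebClient.Builder, so the demoWebClient bean should be built from that injected builder rather than with WebClient.create(). The configuration class name and base URL below are placeholders:
@Configuration
public class WebClientConfig {

    // A client built from the auto-configured builder gets the Sleuth/B3 tracing
    // headers added to outgoing requests automatically.
    @Bean
    public WebClient demoWebClient(WebClient.Builder builder) {
        return builder
                .baseUrl("https://demo.example.com") // placeholder base URL
                .build();
    }
}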

Reactive REST client header injection in Quarkus

I am following the guide for the new quarkus-resteasy-reactive-jackson extension in order to use it in an existing Quarkus application deployed in production.
The Custom headers support section introduces the ClientHeadersFactory interface for injecting headers into a request, but you are forced to return a synchronous response. You cannot use Uni<MultivaluedMap<String, String>>, which is what I need in my case, because I have to add a token to the header, and that token is retrieved by a request to another REST endpoint that returns a Uni<Token>.
How can I achieve this with the new implementation? If it's not possible, is there a workaround?
It's not possible to use Uni<MultivaluedMap<...>> in ClientHeadersFactory in Quarkus 2.2.x (and older versions). We may add such a feature in the near future.
Currently, you can use @HeaderParam directly. Your code could probably look as follows:
Uni<String> token = tokenService.getToken();
token.onItem().transformToUni(tokenValue -> client.doTheCall(tokenValue));
Where the client interface would be something like:
#Path("/")
public interface MyClient {
#GET
Uni<Foo> doTheCall(#HeaderParam("token") String tokenValue);
}
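For completeness, a resource method could chain the two calls along these lines. This is only a sketch: FooResource, TokenService and the path are made up for the example, and the client interface would additionally need @RegisterRestClient to be injectable with @RestClient:
@Path("/foo")
public class FooResource {

    @Inject
    @RestClient
    MyClient client;            // the interface shown above

    @Inject
    TokenService tokenService;  // hypothetical service returning Uni<String>

    @GET
    public Uni<Foo> get() {
        // resolve the token reactively, then call the downstream service with it
        return tokenService.getToken()
                .onItem().transformToUni(tokenValue -> client.doTheCall(tokenValue));
    }
}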

Calling a non-reactive legacy service from a reactive Spring Boot app?

I am working heavily with a WebFlux-based Spring Boot application.
The problem I am facing is that there is one service I have to call which is a traditional Spring Boot app and is not reactive.
Here is an example endpoint which is close to the idea of said legacy system:
@RequestMapping(value = "/people/**", method = RequestMethod.GET)
public ResponseEntity<InputStreamResource> getPerson(HttpServletRequest request) {
    String pattern = (String) request.getAttribute(HandlerMapping.BEST_MATCHING_PATTERN_ATTRIBUTE);
    String key = new AntPathMatcher().extractPathWithinPattern(pattern, request.getRequestURI());
    return personService.getPersonByKey(key);
}
I know I can't achieve true reactive goodness with this; is there a happy medium of non-blocking and blocking I can achieve here?
Thanks
When you use WebClient to call the service from your Spring WebFlux application, it will work in a reactive, non-blocking way, meaning you can achieve true reactive goodness in your application. The calling thread is not blocked while the upstream service produces its response.
Below is example code for calling a service using WebClient:
WebClient webClient = WebClient.create("http://localhost:8080");

Mono<Person> result = webClient.get()
        .uri("/people/{id}")
        .retrieve()
        .bodyToMono(Person.class);
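A minimal sketch of how that could be wired into a WebFlux controller (the class name, path and id variable are illustrative): the Mono is simply returned, so no thread blocks while the legacy service responds.
@RestController
public class PersonController {

    private final WebClient webClient = WebClient.create("http://localhost:8080");

    @GetMapping("/proxy/people/{id}")
    public Mono<Person> person(@PathVariable String id) {
        // non-blocking call to the legacy (blocking) service
        return webClient.get()
                .uri("/people/{id}", id)
                .retrieve()
                .bodyToMono(Person.class);
    }
}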

How to inject a Feign client without using Spring Boot and call a REST endpoint

I have two Java processes which get spawned from the same JAR using different run configurations:
Process A: a client UI component, developed using the Spring bean XML-based approach. There is no Spring Boot.
Process B: a new Spring Boot-based component that hosts REST endpoints.
Now, from Process A, on various button clicks, how can I call the REST endpoints on Process B using a Feign client?
Note: since Process A is Spring XML based, we cannot convert it to Spring Boot right now, so @EnableFeignClients cannot be used to initialise the Feign clients.
So, two questions:
1) If the above is possible, how do I do it?
2) Until Process A is moved to Spring Boot, is Feign still an easier option than Spring's RestTemplate?
Feign is a Java-to-HTTP client binder inspired by Retrofit, JAX-RS 2.0, and WebSockets, and you can easily use Feign without Spring Boot. And yes, Feign is still a good option because it simplifies HTTP API clients in a declarative way, much as Spring's REST support does.
1) Define the HTTP methods and endpoints in an interface.
@Headers({"Content-Type: application/json"})
public interface NotificationClient {

    @RequestLine("POST")
    String notify(URI uri, @HeaderMap Map<String, Object> headers, NotificationBody body);
}
2) Create the Feign client using the Feign.builder() method.
Feign.builder()
        .encoder(new JacksonEncoder())
        .decoder(customDecoder())
        .target(Target.EmptyTarget.create(NotificationClient.class));
There are various decoders available in Feign to simplify your tasks.
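A rough usage sketch, assuming a JacksonDecoder in place of the customDecoder() above; the URI for Process B, the header value and the body construction are all placeholders:
NotificationClient client = Feign.builder()
        .encoder(new JacksonEncoder())
        .decoder(new JacksonDecoder())
        .target(Target.EmptyTarget.create(NotificationClient.class));

// with an EmptyTarget the target URI is supplied per call
Map<String, Object> headers = new HashMap<>();
headers.put("X-Request-Id", UUID.randomUUID().toString());
NotificationBody body = new NotificationBody(); // illustrative payload
String response = client.notify(URI.create("http://localhost:8081/notifications"), headers, body);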
You are able to initialise Feign in any code (without Spring), just like in the readme example:
public static void main(String... args) {
    GitHub github = Feign.builder()
            .decoder(new GsonDecoder())
            .target(GitHub.class, "https://api.github.com");
    ...
}
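The GitHub interface targeted there is declared in the same readme, roughly as follows:
interface GitHub {

    @RequestLine("GET /repos/{owner}/{repo}/contributors")
    List<Contributor> contributors(@Param("owner") String owner, @Param("repo") String repo);
}

class Contributor {
    String login;
    int contributions;
}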
Please take a look at the getting started guide: feign on github

Spring Boot: get InputStream and OutputStream from WebSocket

We want to integrate a third-party library (Eclipse Xtext LSP) into our Spring Boot webapp.
This library works interactively with the user (like a chat). The Xtext API requires an input and an output stream to work. We want to use WebSocket to let users interact with this library smoothly (sending/receiving JSON messages).
We have a problem with Spring Boot because its WebSocket support doesn't expose input/output streams. We wrote a custom TextWebSocketHandler (subclass), but none of its methods provide access to the in/out streams.
We also tried a HandshakeInterceptor (to obtain the in/out streams after the handshake), but with no success.
Can we use the Spring Boot WebSocket API in this scenario, or should we use some lower-level (Servlet?) API?
Regards, Daniel
I am not sure if this will fit your architecture or not, but I have achieved this by using Spring Boot's STOMP support and wiring it into a custom org.eclipse.lsp4j.jsonrpc.RemoteEndpoint, rather than using a lower level API.
The approach was inspired by reading through the code provided in org.eclipse.lsp4j.launch.LSPLauncher.
JSON handler
Marshalling and unmarshalling the JSON needs to be done with the API provided with the Xtext language server, rather than with Jackson (which would otherwise be used by the Spring STOMP integration).
Map<String, JsonRpcMethod> supportedMethods = new LinkedHashMap<String, JsonRpcMethod>();
supportedMethods.putAll(ServiceEndpoints.getSupportedMethods(LanguageClient.class));
supportedMethods.putAll(languageServer.supportedMethods());
jsonHandler = new MessageJsonHandler(supportedMethods);
jsonHandler.setMethodProvider(remoteEndpoint);
Response / notifications
Responses and notifications are sent by a message consumer which is passed to the remoteEndpoint when it is constructed. The message must be marshalled by the jsonHandler to prevent Jackson from doing it.
remoteEndpoint = new RemoteEndpoint(new MessageConsumer() {
    @Override
    public void consume(Message message) {
        simpMessagingTemplate.convertAndSendToUser("user", "/lang/message",
                jsonHandler.serialize(message));
    }
}, ServiceEndpoints.toEndpoint(languageServer));
Requests
Requests can be received by using a @MessageMapping method that takes the whole @Payload as a String, to avoid Jackson unmarshalling it. You can then unmarshal it yourself and pass the message to the remoteEndpoint.
@MessageMapping("/lang/message")
public void incoming(@Payload String message) {
    remoteEndpoint.consume(jsonHandler.parseMessage(message));
}
There may be a better way to do this, and I'll watch this question with interest, but this is an approach that I have found to work.
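For reference, the simpMessagingTemplate and the @MessageMapping endpoint above assume a standard STOMP setup along these lines; the /ws endpoint and the destination prefixes are illustrative and would need to match your client:
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        // simple in-memory broker for the destinations convertAndSendToUser targets
        registry.enableSimpleBroker("/lang");
        registry.setApplicationDestinationPrefixes("/app");
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        // the WebSocket handshake endpoint the client connects to
        registry.addEndpoint("/ws").withSockJS();
    }
}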
