Creating OpenTelemetry Context using trace-id and span-id of remote parent

I have a microservice which supports OpenTracing and injects a trace-id and span-id into the headers. Another microservice supports OpenTelemetry. How can I create a parent span using the trace-id and span-id in the second microservice?
Thanks,

You can use the W3C Trace Context specification to achieve this. The producer sends a traceparent header (e.g. 00-8652a752089f33e2659dff28d683a18f-7359b90f4355cfd9-01) via HTTP headers (or you can build one from the trace-id and span-id in the consumer). Then we can extract the remote context and create the span with the traceparent as its parent.
This is the consumer controller. A TextMapGetter is used to map the traceparent data into the Context. ExtractModel is just a custom carrier class.
@GetMapping(value = "/second")
public String secondTest(@RequestHeader(value = "traceparent") String traceparent) {
    try {
        Tracer tracer = openTelemetry.getTracer("cloud.events.second");
        // Reads propagation headers out of the carrier object
        TextMapGetter<ExtractModel> getter = new TextMapGetter<>() {
            @Override
            public String get(ExtractModel carrier, String key) {
                if (carrier.getHeaders().containsKey(key)) {
                    return carrier.getHeaders().get(key);
                }
                return null;
            }

            @Override
            public Iterable<String> keys(ExtractModel carrier) {
                return carrier.getHeaders().keySet();
            }
        };

        ExtractModel model = new ExtractModel();
        model.addHeader("traceparent", traceparent);

        Context extractedContext = openTelemetry.getPropagators().getTextMapPropagator()
                .extract(Context.current(), model, getter);
        try (Scope scope = extractedContext.makeCurrent()) {
            // Automatically use the extracted SpanContext as parent.
            Span serverSpan = tracer.spanBuilder("CloudEvents Server")
                    .setSpanKind(SpanKind.SERVER)
                    .startSpan();
            try {
                Thread.sleep(150);
            } finally {
                serverSpan.end();
            }
        }
    } catch (InterruptedException e) {
        throw new RuntimeException(e);
    }
    return "Server Received!";
}
Then, when configuring the OpenTelemetrySdk, we need to set the W3CTraceContextPropagator in the ContextPropagators.
// Use the W3C propagator (to extract the span from HTTP headers), since we use the W3C specification
TextMapPropagator textMapPropagator = W3CTraceContextPropagator.getInstance();
OpenTelemetrySdk openTelemetrySdk = OpenTelemetrySdk.builder()
        .setTracerProvider(tracerProvider)
        .setPropagators(ContextPropagators.create(textMapPropagator))
        .buildAndRegisterGlobal();
Here is my custom ExtractModel class:
public class ExtractModel {
    private Map<String, String> headers;

    public void addHeader(String key, String value) {
        if (this.headers == null) {
            headers = new HashMap<>();
        }
        headers.put(key, value);
    }

    public Map<String, String> getHeaders() {
        return headers;
    }

    public void setHeaders(Map<String, String> headers) {
        this.headers = headers;
    }
}
You can find more details in the official documentation for manual instrumentation.
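Alternatively, if you only have the raw trace-id and span-id rather than a full traceparent header, you can build the remote parent context by hand. This is a minimal sketch, assuming traceId and spanId are valid hex strings (32 and 16 characters respectively) taken from the incoming headers, and tracer is obtained as in the controller above:
// Build a remote SpanContext directly from the incoming ids
SpanContext remoteContext = SpanContext.createFromRemoteParent(
        traceId,
        spanId,
        TraceFlags.getSampled(),
        TraceState.getDefault());

// Wrap it in a non-recording Span and put it into a Context
Context parentContext = Context.current().with(Span.wrap(remoteContext));

// Spans started with this parent will join the remote trace
Span childSpan = tracer.spanBuilder("CloudEvents Server")
        .setParent(parentContext)
        .startSpan();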

Generally, you have to propagate the span-id and trace-id if they are available in the headers. For any request your microservice receives, check whether the headers contain a span-id and trace-id. If they do, extract them and use them in your service.
If they are not present, create new ones, use them in your service, and also add them to any requests that go out of your microservice.
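For the outgoing side, the configured propagator can inject the current context into the request headers for you. Here is a sketch using HttpURLConnection as the carrier; the carrier type and the URL are just examples, and openTelemetry is assumed to be configured as above:
// TextMapSetter writes each propagation header onto the outgoing request
TextMapSetter<HttpURLConnection> setter =
        (carrier, key, value) -> carrier.setRequestProperty(key, value);

HttpURLConnection connection =
        (HttpURLConnection) new URL("http://downstream-service/second").openConnection();

// Writes the traceparent (and any other propagation headers) onto the connection
openTelemetry.getPropagators().getTextMapPropagator()
        .inject(Context.current(), connection, setter);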

Related

Micrometer - WebMvcTagsContributor not adding custom tags

I'm trying to add custom tags - the path variables and their values from each request - to each metric Micrometer generates. I'm using Spring Boot with Java 16.
From my research I've found that creating a bean of type WebMvcTagsContributor allows me to do just that.
This is the code:
public class CustomWebMvcTagsContributor implements WebMvcTagsContributor {
    private static int PRINT_ERROR_COUNTER = 0;

    @Override
    public Iterable<Tag> getTags(HttpServletRequest request, HttpServletResponse response,
            Object handler, Throwable exception) {
        return Tags.of(getAllTags(request));
    }

    private static List<Tag> getAllTags(HttpServletRequest request) {
        Object attributesMapObject = request.getAttribute(View.PATH_VARIABLES);
        if (isNull(attributesMapObject)) {
            attributesMapObject = request.getAttribute(HandlerMapping.URI_TEMPLATE_VARIABLES_ATTRIBUTE);
            if (isNull(attributesMapObject)) {
                attributesMapObject = extractPathVariablesFromURI(request);
            }
        }
        if (nonNull(attributesMapObject)) {
            return getPathVariablesTags(attributesMapObject);
        }
        return List.of();
    }

    private static Object extractPathVariablesFromURI(HttpServletRequest request) {
        Long currentUserId = SecurityUtils.getCurrentUserId().orElse(null);
        try {
            URI uri = new URI(request.getRequestURI());
            String path = uri.getPath(); // get the path
            UriTemplate uriTemplate = new UriTemplate((String) request.getAttribute(
                    HandlerMapping.BEST_MATCHING_PATTERN_ATTRIBUTE)); // create template
            return uriTemplate.match(path); // extract values from template
        } catch (Exception e) {
            log.warn("[Error on 3rd attempt]", e);
        }
        return null;
    }

    private static List<Tag> getPathVariablesTags(Object attributesMapObject) {
        try {
            Long currentUserId = SecurityUtils.getCurrentUserId().orElse(null);
            if (nonNull(attributesMapObject)) {
                var attributesMap = (Map<String, Object>) attributesMapObject;
                List<Tag> tags = attributesMap.entrySet().stream()
                        .map(stringObjectEntry -> Tag.of(stringObjectEntry.getKey(),
                                String.valueOf(stringObjectEntry.getValue())))
                        .toList();
                log.warn("[CustomTags] [{}]", CommonUtils.toJson(tags));
                return tags;
            }
        } catch (Exception e) {
            if (PRINT_ERROR_COUNTER < 5) {
                log.error("[Error while getting attributes map object]", e);
                PRINT_ERROR_COUNTER++;
            }
        }
        return List.of();
    }

    @Override
    public Iterable<Tag> getLongRequestTags(HttpServletRequest request, Object handler) {
        return null;
    }
}
@Bean
public WebMvcTagsContributor webMvcTagsContributor() {
    return new CustomWebMvcTagsContributor();
}
In order to test this, I've created a small Spring Boot app and added an endpoint to it. It works just fine.
The problem is when I add this code to the production app.
The metrics generated are the default ones, and I can't figure out why.
What can I check to see why the tags are not added?
local test project:
http_server_requests_seconds_count{exception="None",method="GET",id="123",outcome="Success",status="200",test="test",uri="/test/{id}/compute/{test}",} 1.0
in prod (a different and bigger app):
http_server_requests_seconds_count{exception="None",method="GET",outcome="Success",status="200",uri="/api/{something}/test",} 1.0
What I've tried that didn't work:
Created a bean that implemented WebMvcTagsProvider - this one behaved oddly: it wasn't creating metrics for endpoints that had path variables in the path, though in my local test project it worked as expected.
I added that log statement in order to see what the extra tags are, but execution doesn't seem to reach it, as I don't see anything in the logs. I know you might say that the current-user-id lookup stops it, but it's not that.

I want to connect the Spring application with the external bukkit server through the REST API method

I want to control the bukkit server through the Spring web application.
For example, send a command to the console, receive its response, etc.
I'm trying to figure out a way, but I can't find a good one.
How shall I do it?
Even if third-party plugins are imported through the database, I want to find a way to do basic bukkit control.
First, you need to decide how to send requests to the server. It seems to me that in your case the easiest option is to run the built-in Java web server (HttpServer) to receive commands and then process them.
If you need synchronous actions, you can always use callSyncMethod.
To receive command output, simply create your own implementation of CommandSender with overridden sendMessage methods.
For example, here is how a command-execution endpoint could look:
JavaPlugin plugin = /* get plugin */;
HttpServer server = HttpServer.create(new InetSocketAddress("localhost", 8001), 0);
server.createContext("/executeCommand", exchange -> {
    if (!exchange.getRequestMethod().equals("POST")) {
        byte[] error = "Method not supported".getBytes(StandardCharsets.UTF_8);
        exchange.sendResponseHeaders(405, error.length);
        exchange.getResponseBody().write(error);
        exchange.close();
        return;
    }
    // In this example the request body is the command
    String body = new String(exchange.getRequestBody().readAllBytes(), StandardCharsets.UTF_8);
    StringBuilder builder = new StringBuilder();
    // You also need to override many other methods for the code to compile, but you can leave them empty
    CommandSender sender = new CommandSender() {
        @Override
        public void sendMessage(@NotNull String message) {
            builder.append(message);
        }

        @Override
        public void sendMessage(@NotNull String... messages) {
            for (String message : messages) {
                builder.append(message).append("\n");
            }
        }

        @Override
        public boolean isOp() {
            return true;
        }

        @Override
        public boolean hasPermission(@NotNull String name) {
            return true;
        }

        @Override
        public @NotNull String getName() {
            return "WebServerExecutor";
        }
    };
    try {
        // Wait for the command to finish executing on the main server thread
        Bukkit.getScheduler().callSyncMethod(plugin, () -> Bukkit.dispatchCommand(sender, body)).get();
    } catch (InterruptedException | ExecutionException e) {
        throw new RuntimeException(e);
    }
    byte[] response = builder.toString().getBytes(StandardCharsets.UTF_8);
    exchange.sendResponseHeaders(200, response.length);
    exchange.getResponseBody().write(response);
    exchange.close();
});
server.start();
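On the Spring side you can then call this endpoint like any other HTTP API. A quick sketch, assuming the plugin's server is reachable on localhost:8001 as configured above (the command string is just a placeholder):
// Hypothetical Spring-side client; URL and command are assumptions
RestTemplate restTemplate = new RestTemplate();
String consoleOutput = restTemplate.postForObject(
        "http://localhost:8001/executeCommand", "say hello", String.class);
System.out.println(consoleOutput);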

ListenerExecutionFailedException Nullpointer when trying to index kafka payload through new ElasticSearch Java API Client

I'm migrating from the HLRC to the new client. Things were smooth, but for some reason I cannot index one specific class/document. Here is my client implementation and index request:
@Configuration
public class ClientConfiguration {
    @Autowired
    private InternalProperties conf;

    public ElasticsearchClient sslClient() {
        CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
        credentialsProvider.setCredentials(AuthScope.ANY,
                new UsernamePasswordCredentials(conf.getElasticsearchUser(), conf.getElasticsearchPassword()));
        HttpHost httpHost = new HttpHost(conf.getElasticsearchAddress(), conf.getElasticsearchPort(), "https");
        RestClientBuilder restClientBuilder = RestClient.builder(httpHost);
        try {
            SSLContext sslContext = SSLContexts.custom().loadTrustMaterial(null, (x509Certificates, s) -> true).build();
            restClientBuilder.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
                @Override
                public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) {
                    return httpClientBuilder.setSSLContext(sslContext)
                            .setDefaultCredentialsProvider(credentialsProvider);
                }
            });
        } catch (Exception e) {
            e.printStackTrace();
        }
        RestClient restClient = restClientBuilder.build();
        ElasticsearchTransport transport = new RestClientTransport(
                restClient, new JacksonJsonpMapper());
        ElasticsearchClient client = new ElasticsearchClient(transport);
        return client;
    }
}
@Service
public class ThisDtoIndexClass extends ConfigAndProperties {
    public ThisDtoIndexClass() {
    }

    // client is declared in the class it's extending from
    public ThisDtoIndexClass(@Autowired ClientConfiguration esClient) {
        this.client = esClient.sslClient();
    }

    @KafkaListener(topics = "esTopic")
    public void in(@Payload(required = false) customDto doc)
            throws ThisDtoIndexClassException, ElasticsearchException, IOException {
        if (doc != null && doc.getId() != null) {
            IndexRequest.Builder<customDto> indexReqBuilder = new IndexRequest.Builder<>();
            indexReqBuilder.index("index-for-this-Dto");
            indexReqBuilder.id(doc.getId());
            indexReqBuilder.document(doc);
            IndexResponse response = client.index(indexReqBuilder.build());
        } else {
            throw new ThisDtoIndexClassException("document is null");
        }
    }
}
This is all done in Spring Boot (v2.6.8) with ES 7.17.3. According to the debugger, the payload is NOT null! It even fetches the id correctly while stepping through. For some reason, it throws an org.springframework.kafka.listener.ListenerExecutionFailedException on the last line (during the .build()?). Nothing gets indexed, but the response comes back 200. I'm lost on where I should be looking. I have a different class that also writes to a different index, also getting a payload from Kafka directly (all separate consumers). That one functions just fine.
I suspect it has something to do with the way my client is set up and/or the Kafka setup. Please point me in the right direction.
I solved it by deleting the default constructor. If I put it back, it overrides the extended constructor (or simply doesn't acknowledge it), so my client was always null. The error message it gave me was extremely misleading, since it actually wasn't Kafka's fault!
Removing the default constructor makes Spring initialize the class through the correct constructor, and I was able to index again. I assume this was a Spring Boot wiring related "issue".
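For illustration, a minimal sketch of the fixed service, reusing the names from the question - with the no-arg constructor gone, Spring has exactly one constructor to use for injection:
@Service
public class ThisDtoIndexClass extends ConfigAndProperties {
    // No default constructor: Spring must use this one, so the client is always initialized
    public ThisDtoIndexClass(@Autowired ClientConfiguration esClient) {
        this.client = esClient.sslClient();
    }

    // ... @KafkaListener method unchanged ...
}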

How to use a gRPC interceptor to attach/update logging MDC in a Spring-Boot app

Problem
I have a Spring-Boot application in which I am also starting a gRPC server/service. Both the servlet and gRPC code send requests to a common object to process the request. When the request comes in I want to update the logging to display a unique 'ID' so I can track the request through the system.
On the Spring side I have set up a 'Filter' which updates the logging MDC to add some data to the log request (see this example). This works fine.
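A minimal sketch of such a filter looks like this (class and key names are illustrative, not the exact code):
// Hypothetical filter showing the working REST-side MDC setup
public class RequestIdFilter extends OncePerRequestFilter {
    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
            FilterChain filterChain) throws ServletException, IOException {
        MDC.put("requestID", UUID.randomUUID().toString());
        try {
            filterChain.doFilter(request, response);
        } finally {
            // The same thread handles the whole request, so clearing here is safe
            MDC.clear();
        }
    }
}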
On the gRPC side I have created a 'ServerInterceptor' and added it to the service. The interceptor gets called, but the update to the MDC does not stick, so when a request comes through the gRPC service I do not get the ID printed in the log. I realize this has to do with the fact that I'm intercepting the call in one thread while it's being dispatched by gRPC in another. What I can't seem to figure out is how to either intercept the call in the thread doing the work, or add the MDC information so it is properly propagated to the thread doing the work.
What I've tried
I have done a lot of searches and was quite surprised not to find this asked/answered; I can only assume my query skills are lacking :(
I'm fairly new to gRPC and this is the first Interceptor I'm writing. I've tried adding the interceptor several different ways (via ServerInterceptors.intercept, BindableService instance.intercept).
I've looked at LogNet's Spring Boot gRPC Starter, but I'm not sure this would solve the issue.
Here is the code I have added in my interceptor class
@Override
public <ReqT, RespT> ServerCall.Listener<ReqT> interceptCall(final ServerCall<ReqT, RespT> call, final Metadata headers, final ServerCallHandler<ReqT, RespT> next) {
    try {
        final String mdcData = String.format("[requestID=%s]",
                UUID.randomUUID().toString());
        MDC.put(MDC_DATA_KEY, mdcData);
        return next.startCall(call, headers);
    } finally {
        MDC.clear();
    }
}
Expected Result
When a request comes in via the RESTful API I see log output like this:
2019-04-09 10:19:16.331 [requestID=380e28db-c8da-4e35-a097-4b8c90c006f4] INFO 87100 --- [nio-8080-exec-1] c.c.es.xxx: processing request step 1
2019-04-09 10:19:16.800 [requestID=380e28db-c8da-4e35-a097-4b8c90c006f4] INFO 87100 --- [nio-8080-exec-1] c.c.es.xxx: processing request step 2
2019-04-09 10:19:16.803 [requestID=380e28db-c8da-4e35-a097-4b8c90c006f4] INFO 87100 --- [nio-8080-exec-1] c.c.es.xxx: Processing request step 3
...
I'm hoping to get similar output when the request comes through the gRPC service.
Thanks
Since no one replied, I kept trying and came up with the following solution for my interceptCall function. I'm not 100% sure why this works, but it works for my use case.
private class LogInterceptor implements ServerInterceptor {
    @Override
    public <ReqT, RespT> ServerCall.Listener<ReqT> interceptCall(final ServerCall<ReqT, RespT> call,
            final Metadata headers,
            final ServerCallHandler<ReqT, RespT> next) {
        Context context = Context.current();
        final String requestId = UUID.randomUUID().toString();
        return Contexts.interceptCall(context, call, headers, new ServerCallHandler<ReqT, RespT>() {
            @Override
            public ServerCall.Listener<ReqT> startCall(ServerCall<ReqT, RespT> call, Metadata headers) {
                return new ForwardingServerCallListener.SimpleForwardingServerCallListener<ReqT>(next.startCall(call, headers)) {
                    /**
                     * The actual service call happens during onHalfClose().
                     */
                    @Override
                    public void onHalfClose() {
                        try (final CloseableThreadContext.Instance ctc = CloseableThreadContext.put("requestID",
                                UUID.randomUUID().toString())) {
                            super.onHalfClose();
                        }
                    }
                };
            }
        });
    }
}
In my application.properties I added the following (which I already had):
logging.pattern.level=[%X] %-5level
The '%X' tells the logging system to print all of the CloseableThreadContext key/values.
Hopefully this may help someone else.
MDC stores data in a ThreadLocal variable, and you are right about "I realize this has to do with the fact that I'm intercepting the call in one thread and it's being dispatched by gRPC in another". Check @Eric Anderson's answer about the right way to use ThreadLocal in this post:
https://stackoverflow.com/a/56842315/2478531
Here is a working example:
public class GrpcMDCInterceptor implements ServerInterceptor {
    private static final String MDC_DATA_KEY = "Key";

    @Override
    public <R, S> ServerCall.Listener<R> interceptCall(
            ServerCall<R, S> serverCall,
            Metadata metadata,
            ServerCallHandler<R, S> next
    ) {
        log.info("Setting user context, metadata {}", metadata);
        final String mdcData = String.format("[requestID=%s]", UUID.randomUUID().toString());
        MDC.put(MDC_DATA_KEY, mdcData);
        try {
            return new WrappingListener<>(next.startCall(serverCall, metadata), mdcData);
        } finally {
            MDC.clear();
        }
    }

    private static class WrappingListener<R>
            extends ForwardingServerCallListener.SimpleForwardingServerCallListener<R> {
        private final String mdcData;

        public WrappingListener(ServerCall.Listener<R> delegate, String mdcData) {
            super(delegate);
            this.mdcData = mdcData;
        }

        @Override
        public void onMessage(R message) {
            MDC.put(MDC_DATA_KEY, mdcData);
            try {
                super.onMessage(message);
            } finally {
                MDC.clear();
            }
        }

        @Override
        public void onHalfClose() {
            MDC.put(MDC_DATA_KEY, mdcData);
            try {
                super.onHalfClose();
            } finally {
                MDC.clear();
            }
        }

        @Override
        public void onCancel() {
            MDC.put(MDC_DATA_KEY, mdcData);
            try {
                super.onCancel();
            } finally {
                MDC.clear();
            }
        }

        @Override
        public void onComplete() {
            MDC.put(MDC_DATA_KEY, mdcData);
            try {
                super.onComplete();
            } finally {
                MDC.clear();
            }
        }

        @Override
        public void onReady() {
            MDC.put(MDC_DATA_KEY, mdcData);
            try {
                super.onReady();
            } finally {
                MDC.clear();
            }
        }
    }
}
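To wire the interceptor in, one option is to register it when building the server. This is only a sketch; the service implementation and port are placeholders, not part of the answer above:
// Hypothetical wiring; MyServiceImpl and port 9090 are assumptions
Server server = ServerBuilder.forPort(9090)
        .addService(ServerInterceptors.intercept(new MyServiceImpl(), new GrpcMDCInterceptor()))
        .build()
        .start();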

spring webflux: purely functional way to attach websocket adapter to reactor-netty server

I am not able to figure out a way to attach a WebSocketHandlerAdapter to a reactor netty server.
Requirements:
I want to start a reactor netty server and attach http (REST) endpoints and websocket endpoints to the same server. I have gone through the documentation and some sample demo applications mentioned in the documentation. They show how to attach an HttpHandlerAdapter to the HttpServer using the newHandler() function. But when it comes to websockets, they switch back to Spring Boot and annotation examples. I am not able to find how to attach websockets using functional endpoints.
Please point me in the right direction on how to implement this.
1. How do I attach the websocket adapter to the netty server?
2. Should I use HttpServer or TcpServer?
Note:
1. I am not using spring boot.
2. I am not using annotations.
3. I am trying to achieve this using only functional WebFlux endpoints.
Sample code:
public HandlerMapping webSocketMapping() {
    Map<String, WebSocketHandler> map = new HashMap<>();
    map.put("/echo", new EchoTestingWebSocketHandler());
    SimpleUrlHandlerMapping mapping = new SimpleUrlHandlerMapping();
    mapping.setUrlMap(map);
    mapping.setOrder(-1);
    return mapping;
}

public WebSocketHandlerAdapter wsAdapter() {
    HandshakeWebSocketService wsService = new HandshakeWebSocketService(new ReactorNettyRequestUpgradeStrategy());
    return new WebSocketHandlerAdapter(wsService);
}

protected void startServer(String host, int port) {
    HttpServer server = HttpServer.create(host, port);
    server.newHandler(wsAdapter()).block(); // how do I attach the websocket adapter to the netty server?
}
Unfortunately, there is no easy way to do that without running a whole SpringBootApplication. Otherwise, you would be required to write the whole Spring WebFlux handler hierarchy yourself. Consider composing your functional routing with a SpringBootApplication:
@SpringBootApplication
public class WebSocketApplication {

    public static void main(String[] args) {
        SpringApplication.run(WebSocketApplication.class, args);
    }

    @Bean
    public RouterFunction<ServerResponse> routing() {
        return route(
                POST("/api/orders"),
                r -> ok().build()
        );
    }

    @Bean
    public HandlerMapping wsHandlerMapping() {
        HashMap<String, WebSocketHandler> map = new HashMap<>();
        map.put("/ws", new WebSocketHandler() {
            @Override
            public Mono<Void> handle(WebSocketSession session) {
                return session.send(
                        session.receive()
                                .map(WebSocketMessage::getPayloadAsText)
                                .map(tMessage -> "Response From Server: " + tMessage)
                                .map(session::textMessage)
                );
            }
        });

        SimpleUrlHandlerMapping mapping = new SimpleUrlHandlerMapping();
        mapping.setUrlMap(map);
        mapping.setOrder(-1);
        return mapping;
    }

    @Bean
    HandlerAdapter wsHandlerAdapter() {
        return new WebSocketHandlerAdapter();
    }
}
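To quickly check the /ws endpoint above, you can use Spring's reactive WebSocketClient. A small sketch, assuming the app runs locally on port 8080 (the URI and timeout are assumptions):
// Client-side echo test for the /ws handler above
WebSocketClient client = new ReactorNettyWebSocketClient();
client.execute(URI.create("ws://localhost:8080/ws"), session ->
        session.send(Mono.just(session.textMessage("hello")))
                .thenMany(session.receive()
                        .map(WebSocketMessage::getPayloadAsText)
                        .doOnNext(System.out::println))
                .then())
        .block(Duration.ofSeconds(5));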
In case the Spring Boot infra is not an option, consider direct interaction with Reactor Netty instead. Reactor Netty provides a pretty good abstraction around native Netty, and you may interact with it in the same functional manner:
ReactorHttpHandlerAdapter handler =
        new ReactorHttpHandlerAdapter(yourHttpHandlers);

HttpServer.create()
        .startRouterAndAwait(routes -> {
            routes.ws("/pathToWs", (in, out) -> out.send(in.receive()))
                  .file("/static/**", ...)
                  .get("**", handler)
                  .post("**", handler)
                  .put("**", handler)
                  .delete("**", handler);
        });
I dealt with it this way, using native reactor-netty:
routes.get(rootPath, (request, response) -> {
    // doFilter checks for errors
    return this.doFilter(request, response, new RequestAttribute())
            .flatMap(requestAttribute -> {
                WebSocketServerHandle handleObject = injector.getInstance(GameWsHandle.class);
                return response
                        .header("content-type", "text/plain")
                        .sendWebsocket((in, out) ->
                                this.websocketPublisher3(in, out, handleObject, requestAttribute)
                        );
            });
});

private Publisher<Void> websocketPublisher3(WebsocketInbound in, WebsocketOutbound out, WebSocketServerHandle handleObject, RequestAttribute requestAttribute) {
    return out
            .withConnection(conn -> {
                // on connect
                handleObject.onConnect(conn.channel());
                conn.channel().attr(AttributeKey.valueOf("request-attribute")).set(requestAttribute);
                conn.onDispose().subscribe(null, null, () -> {
                    conn.channel().close();
                    handleObject.disconnect(conn.channel());
                    // System.out.println("context.onClose() completed");
                });
                // receive messages
                in.aggregateFrames()
                        .receiveFrames()
                        .map(frame -> {
                            if (frame instanceof TextWebSocketFrame) {
                                handleObject.onTextMessage((TextWebSocketFrame) frame, conn.channel());
                            } else if (frame instanceof BinaryWebSocketFrame) {
                                handleObject.onBinaryMessage((BinaryWebSocketFrame) frame, conn.channel());
                            } else if (frame instanceof PingWebSocketFrame) {
                                handleObject.onPingMessage((PingWebSocketFrame) frame, conn.channel());
                            } else if (frame instanceof PongWebSocketFrame) {
                                handleObject.onPongMessage((PongWebSocketFrame) frame, conn.channel());
                            } else if (frame instanceof CloseWebSocketFrame) {
                                conn.channel().close();
                                handleObject.disconnect(conn.channel());
                            }
                            return "";
                        })
                        .blockLast();
            });
}
