Propagating traceIDs in gRPC inter-service communication - microservices

I am working on a few gRPC microservices and using context to pass headers and metadata. I am using OpenTracing for tracing, and one of my gRPC services calls another gRPC service, at which point I am having issues propagating the context: it does not retain the metadata or the traceID.
My code is as below:
func A(ctx context.Context) {
    md := extractMetadata(ctx) // map of the incoming metadata
    conn := &grpc.ClientConn{}
    zipkinCtx := opentracing.SpanFromContext(ctx).Context().(gozipkin.SpanContext)
    client := pb.NewDClient(conn)
    // The outgoing context is rebuilt from context.Background(), so the span/trace information is lost.
    reply, err := client.LookupProperty(metadata.NewOutgoingContext(context.Background(), metadata.New(md)))
}
In the above code I am calling service D, for which I had to create a new context with the metadata, which I am OK with, but I am not sure how I can propagate the trace IDs to service D.

Not knowing your frameworks, I think propagating this via gRPC metadata requires your server to explicitly parse the metadata on the receiving call. The gRPC documentation shows an example of this:
func (s *server) LookupProperty(ctx context.Context, in *pb.SomeRequest) (*pb.SomeResponse, error) {
    md, ok := metadata.FromIncomingContext(ctx)
    if ok {
        log.Printf("metadata: %v", md) // do something with the metadata
    }
    return &pb.SomeResponse{}, nil
}
Using this, the server should now have access to the trace ID contained in md.
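On the client side, the trace ID only ends up in that metadata if something puts it there before the call (either by hand or via an interceptor). Below is a minimal, hand-rolled sketch assuming the opentracing-go API, the extractMetadata helper from your question, and a made-up request type pb.LookupPropertyRequest:

import (
    "context"

    "github.com/opentracing/opentracing-go"
    "google.golang.org/grpc/metadata"
)

// callD forwards the incoming metadata plus the serialized span context to service D.
func callD(ctx context.Context, client pb.DClient, req *pb.LookupPropertyRequest) error {
    md := extractMetadata(ctx) // map[string]string, as in the question

    // Serialize the active span's context (trace ID, span ID, sampling flags) into text key/value pairs.
    carrier := opentracing.TextMapCarrier{}
    if span := opentracing.SpanFromContext(ctx); span != nil {
        _ = opentracing.GlobalTracer().Inject(span.Context(), opentracing.TextMap, carrier)
    }
    for k, v := range carrier {
        md[k] = v // with a Zipkin tracer these are the B3 headers, including the trace ID
    }

    outCtx := metadata.NewOutgoingContext(context.Background(), metadata.New(md))
    _, err := client.LookupProperty(outCtx, req)
    return err
}

The receiving service then sees those keys in FromIncomingContext, or its tracer's Extract can rebuild the span context from them.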

I'm not absolutely sure, but it seems like the traceID must be a custom field in the gRPC metadata. Also have a look at the gRPC interceptors that enable OpenTracing support on both the server and client side: https://github.com/grpc-ecosystem/go-grpc-middleware/tree/master/tracing/opentracing. Perhaps you won't have to write your own.
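In case it helps, here is a rough sketch of wiring those interceptors up, assuming the v1 layout of go-grpc-middleware and a tracer already registered via opentracing.SetGlobalTracer:

import (
    grpc_opentracing "github.com/grpc-ecosystem/go-grpc-middleware/tracing/opentracing"
    "google.golang.org/grpc"
)

// dialWithTracing returns a client connection whose unary calls automatically
// inject the active span from the call's context into the outgoing gRPC metadata.
func dialWithTracing(address string) (*grpc.ClientConn, error) {
    return grpc.Dial(address,
        grpc.WithInsecure(),
        grpc.WithUnaryInterceptor(grpc_opentracing.UnaryClientInterceptor()),
    )
}

// newTracedServer returns a server that extracts the span context from the
// incoming metadata and puts a child span on each handler's context.
func newTracedServer() *grpc.Server {
    return grpc.NewServer(
        grpc.UnaryInterceptor(grpc_opentracing.UnaryServerInterceptor()),
    )
}

Note that the client interceptor can only pick up the span if the call is made with a context derived from the incoming one, so in func A the outgoing context should be built from ctx (for example metadata.NewOutgoingContext(ctx, ...)) rather than from context.Background().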

Related

Can Protobuf serialized data be sent from gRPC server to client?

In my gRPC server application, I already have the Protobuf-serialized data that it received from a different server. Is there a way to avoid deserialization in the gRPC server, avoid creating the Protobuf response object, and directly send the Protobuf-serialized data to the client so the client can do the deserialization?
My gRPC server API currently does this to create the Protobuf response:
class SampleServiceImpl final : public SampleService::Service
{
    Status SampleAPI(ServerContext* context, const Request* request, Response* response) override
    {
        // ...
        // already has the Protobuf-serialized data in "serialized_response_msg_buff"
        google::protobuf::Any any;
        any.ParseFromArray(serialized_response_msg_buff, serialized_response_msg_len);
        any.UnpackTo(response); // unpack into the output parameter
        // ...
        return Status::OK;
    }
};
In my case, the Response object is very heavy, and this ends up doing the (de)serialization work twice: once in the above code and once more when gRPC serializes the response it sends to the client.
Is there a way to avoid the deserialization above and directly pass the Protobuf-serialized data to the gRPC client?

How to pass request context to GRPC Go endpoints from Ruby

I'm calling GRPC endpoints from Ruby.
The proto endpoints are, for example, rpc SayHello (HelloRequest) returns (HelloReply), and they are implemented in Go, so they take a context when called from Go. The generated client supports client.SayHello(ctx context.Context, request *HelloRequest).
However, when calling from Ruby it's just client.SayHello(request), where request's type is HelloRequest. How do I pass in the context? I want to do this because the Go endpoint implementations use the context in a variety of ways, such as logging.
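A sketch of how this usually fits together (not specific to your service, and the header key below is just an example): the per-call context on the Go side is created by grpc-go itself, so a Ruby caller never passes a context. As far as I know, the generated Ruby stubs accept a per-call metadata hash as a keyword argument, and whatever you put in it is exactly what the Go implementation can read from its ctx:

import (
    "context"
    "log"

    "google.golang.org/grpc/metadata"
)

// SayHello shows the Go side reading whatever metadata the Ruby client attached.
// "x-request-id" is only an illustrative key, not something from the question.
func (s *server) SayHello(ctx context.Context, in *pb.HelloRequest) (*pb.HelloReply, error) {
    if md, ok := metadata.FromIncomingContext(ctx); ok {
        log.Printf("x-request-id: %v", md.Get("x-request-id"))
    }
    return &pb.HelloReply{Message: "Hello " + in.GetName()}, nil
}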

Transcode HTTP header into Grpc metadata for each request

I'm building an API gateway that proxies HTTP traffic to gRPC services. All incoming HTTP requests can have a JWT in the Authorization header. I need to transcode this JWT into gRPC metadata on each request and send it with the gRPC request. I am using the grpc-kotlin library with the gRPC code generator for Kotlin suspend functions for the client stub.
I have written this WebFilter to put the header into the ReactorContext:
@Component
class UserMetadataWebFilter : WebFilter {
    override fun filter(exchange: ServerWebExchange, chain: WebFilterChain): Mono<Void> {
        exchange.request.headers[HttpHeaders.AUTHORIZATION]?.firstOrNull()?.let { authorizationHeader ->
            return chain.filter(exchange).contextWrite { Context.of("myHeader", authorizationHeader) }
        }
        return chain.filter(exchange)
    }
}
And it can be used in controller methods like this:
identityProviderClient.createUser(protobufRequest,
coroutineContext[ReactorContext]?.context?.get("myHeader") ?: Metadata())
I want to create a gRPC client interceptor, or something similar, to automatically set the gRPC metadata from the coroutine context. I have many gRPC client calls, and I believe writing this code for every call is not good practice.
I know about envoy-proxy, but I need to apply specific logic to my requests, which is why envoy-proxy is not my choice.
How should I transcode HTTP header(s) into gRPC client call metadata? Thanks.
A ClientInterceptor seems appropriate. Intercept the channel; see this utility function:
https://grpc.github.io/grpc-java/javadoc/io/grpc/ClientInterceptors.html#intercept-io.grpc.Channel-io.grpc.ClientInterceptor...-

SpringBoot get InputStream and OutputStream from websocket

We want to integrate a third-party library (Eclipse Xtext LSP) into our Spring Boot webapp.
This library works "interactively" with the user (like a chat). The Xtext API requires an input and an output stream to work. We want to use WebSocket so users can interact with this library smoothly (sending/receiving JSON messages).
We have a problem with Spring Boot because its WebSocket support doesn't expose input/output streams. We wrote a custom TextWebSocketHandler subclass, but none of its methods provide access to the in/out streams.
We also tried a HandshakeInterceptor (to obtain the in/out streams after the handshake), but with no success.
Can we use the Spring Boot WebSocket API in this scenario, or should we use some lower-level (Servlet?) API?
Regards, Daniel
I am not sure if this will fit your architecture or not, but I have achieved this by using Spring Boot's STOMP support and wiring it into a custom org.eclipse.lsp4j.jsonrpc.RemoteEndpoint, rather than using a lower level API.
The approach was inspired by reading through the code provided in org.eclipse.lsp4j.launch.LSPLauncher.
JSON handler
Marshalling and unmarshalling the JSON needs to be done with the API provided with the Xtext language server, rather than Jackson (which would be used by the Spring STOMP integration).
Map<String, JsonRpcMethod> supportedMethods = new LinkedHashMap<String, JsonRpcMethod>();
supportedMethods.putAll(ServiceEndpoints.getSupportedMethods(LanguageClient.class));
supportedMethods.putAll(languageServer.supportedMethods());
jsonHandler = new MessageJsonHandler(supportedMethods);
jsonHandler.setMethodProvider(remoteEndpoint);
Response / notifications
Responses and notifications are sent by a message consumer which is passed to the remoteEndpoint when it is constructed. The message must be marshalled by the jsonHandler to prevent Jackson from doing it.
remoteEndpoint = new RemoteEndpoint(new MessageConsumer() {
    @Override
    public void consume(Message message) {
        simpMessagingTemplate.convertAndSendToUser("user", "/lang/message",
                jsonHandler.serialize(message));
    }
}, ServiceEndpoints.toEndpoint(languageServer));
Requests
Requests can be received by using a @MessageMapping method that takes the whole @Payload as a String to avoid Jackson unmarshalling it. You can then unmarshal it yourself and pass the message to the remoteEndpoint.
@MessageMapping("/lang/message")
public void incoming(@Payload String message) {
    remoteEndpoint.consume(jsonHandler.parseMessage(message));
}
There may be a better way to do this, and I'll watch this question with interest, but this is an approach that I have found to work.

Dynamic provider for a Marshalling web service outbound gateway

Is it possible to set a dynamic provider for a Marshalling web service outbound gateway?
I mean, if I try, for example, http://100.0.0.1 and it does not work, I would like to try http://100.0.0.2 instead.
My current configuration:
MarshallingWebServiceOutboundGateway gw = new MarshallingWebServiceOutboundGateway(provider, jaxb2Marshaller(), jaxb2Marshaller());
Yes, that's possible. Since MarshallingWebServiceOutboundGateway lets you inject a DestinationProvider, you are free to provide any custom implementation.
For your fault-tolerant use case you could do new URLConnection(url).connect() to test the connection to the target server in your DestinationProvider implementation.
UPDATE
But how can I test new URLConnection(url).connect() if I have HTTPS credentials, a certificate, or any other kind of security?
Well, another good solution from Spring Integration is load balancing with several subscribers on the same DirectChannel:
@Bean
public MessageChannel wsChannel() {
    return new DirectChannel(null);
}
The null constructor argument switches off the default RoundRobinLoadBalancingStrategy.
After that you can have several @ServiceActivator(inputChannel = "wsChannel") subscribers. When the first one fails, the message is sent to the second, and so on, until there is a good result or every URL has failed.
