Spring WebFlux: purely functional way to attach a WebSocket adapter to a Reactor Netty server

I am not able to figure out a way to attach a WebSocketHandlerAdapter to a reactor netty server.
Requirements:
I want to start a Reactor Netty server and attach HTTP (REST) endpoints and WebSocket endpoints to the same server. I have gone through the documentation and some sample demo applications mentioned in the documentation. They show how to attach an HttpHandlerAdapter to the HttpServer using the newHandler() function, but when it comes to WebSockets they switch back to Spring Boot and annotation-based examples. I am not able to find how to attach WebSockets using functional endpoints.
Please point me in the right direction on how to implement this.
1. How do I attach the WebSocket adapter to the Netty server?
2. Should I use HttpServer or TcpServer?
Note:
1. I am not using Spring Boot.
2. I am not using annotations.
3. I am trying to achieve this using only functional WebFlux endpoints.
Sample code:
public HandlerMapping webSocketMapping()
{
    Map<String, WebSocketHandler> map = new HashMap<>();
    map.put("/echo", new EchoTestingWebSocketHandler());
    SimpleUrlHandlerMapping mapping = new SimpleUrlHandlerMapping();
    mapping.setUrlMap(map);
    mapping.setOrder(-1);
    return mapping;
}

public WebSocketHandlerAdapter wsAdapter()
{
    HandshakeWebSocketService wsService = new HandshakeWebSocketService(new ReactorNettyRequestUpgradeStrategy());
    return new WebSocketHandlerAdapter(wsService);
}

protected void startServer(String host, int port)
{
    HttpServer server = HttpServer.create(host, port);
    server.newHandler(wsAdapter()).block(); // how do I attach the websocket adapter to the netty server?
}

Unfortunately, there is no easy way to do that without spinning up a whole SpringBootApplication; otherwise you would have to wire up the entire Spring WebFlux handler hierarchy yourself. Consider composing your functional routing with a SpringBootApplication:
@SpringBootApplication
public class WebSocketApplication {

    public static void main(String[] args) {
        SpringApplication.run(WebSocketApplication.class, args);
    }

    @Bean
    public RouterFunction<ServerResponse> routing() {
        return route(
            POST("/api/orders"),
            r -> ok().build()
        );
    }

    @Bean
    public HandlerMapping wsHandlerMapping() {
        HashMap<String, WebSocketHandler> map = new HashMap<>();
        map.put("/ws", new WebSocketHandler() {
            @Override
            public Mono<Void> handle(WebSocketSession session) {
                return session.send(
                    session.receive()
                           .map(WebSocketMessage::getPayloadAsText)
                           .map(tMessage -> "Response From Server: " + tMessage)
                           .map(session::textMessage)
                );
            }
        });

        SimpleUrlHandlerMapping mapping = new SimpleUrlHandlerMapping();
        mapping.setUrlMap(map);
        mapping.setOrder(-1);
        return mapping;
    }

    @Bean
    HandlerAdapter wsHandlerAdapter() {
        return new WebSocketHandlerAdapter();
    }
}
In case the Spring Boot infrastructure is not an option,
consider interacting with Reactor Netty directly instead. Reactor Netty provides a pretty good abstraction over native Netty, and you can interact with it in the same functional manner:
ReactorHttpHandlerAdapter handler =
        new ReactorHttpHandlerAdapter(yourHttpHandlers);

HttpServer.create()
          .startRouterAndAwait(routes -> {
              routes.ws("/pathToWs", (in, out) -> out.send(in.receive()))
                    .file("/static/**", ...)
                    .get("**", handler)
                    .post("**", handler)
                    .put("**", handler)
                    .delete("**", handler);
          });
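For reference, here is a rough sketch of what wiring that WebFlux hierarchy yourself (no Spring Boot) could look like: an @EnableWebFlux configuration registered in a plain AnnotationConfigApplicationContext, with the resulting HttpHandler handed to Reactor Netty. Treat the wiring as an assumption to adapt to your versions: handle(...)/bindNow() assumes a newer Reactor Netty (older versions use newHandler(...) as in the question), and EchoTestingWebSocketHandler is the handler class from the question.
import java.util.HashMap;
import java.util.Map;

import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.server.reactive.HttpHandler;
import org.springframework.http.server.reactive.ReactorHttpHandlerAdapter;
import org.springframework.web.reactive.HandlerMapping;
import org.springframework.web.reactive.config.EnableWebFlux;
import org.springframework.web.reactive.function.server.RouterFunction;
import org.springframework.web.reactive.function.server.ServerResponse;
import org.springframework.web.reactive.handler.SimpleUrlHandlerMapping;
import org.springframework.web.reactive.socket.WebSocketHandler;
import org.springframework.web.reactive.socket.server.support.HandshakeWebSocketService;
import org.springframework.web.reactive.socket.server.support.WebSocketHandlerAdapter;
import org.springframework.web.reactive.socket.server.upgrade.ReactorNettyRequestUpgradeStrategy;
import org.springframework.web.server.adapter.WebHttpHandlerBuilder;
import reactor.netty.http.server.HttpServer;

import static org.springframework.web.reactive.function.server.RequestPredicates.GET;
import static org.springframework.web.reactive.function.server.RouterFunctions.route;
import static org.springframework.web.reactive.function.server.ServerResponse.ok;

public class StandaloneServer {

    @Configuration
    @EnableWebFlux // contributes the DispatcherHandler ("webHandler" bean), RouterFunctionMapping, result handlers, ...
    static class WebConfig {

        @Bean
        public RouterFunction<ServerResponse> routes() {
            // plain functional REST endpoint living next to the websocket endpoint
            return route(GET("/hello"), request -> ok().build());
        }

        @Bean
        public HandlerMapping webSocketMapping() {
            Map<String, WebSocketHandler> map = new HashMap<>();
            map.put("/echo", new EchoTestingWebSocketHandler()); // handler class from the question
            SimpleUrlHandlerMapping mapping = new SimpleUrlHandlerMapping();
            mapping.setUrlMap(map);
            mapping.setOrder(-1); // checked before the RouterFunction mapping
            return mapping;
        }

        @Bean
        public WebSocketHandlerAdapter wsAdapter() {
            return new WebSocketHandlerAdapter(
                    new HandshakeWebSocketService(new ReactorNettyRequestUpgradeStrategy()));
        }
    }

    public static void main(String[] args) {
        AnnotationConfigApplicationContext context =
                new AnnotationConfigApplicationContext(WebConfig.class);

        // Builds the complete WebFlux processing chain (DispatcherHandler, filters, ...) from the context
        HttpHandler httpHandler = WebHttpHandlerBuilder.applicationContext(context).build();

        HttpServer.create()
                  .host("localhost")
                  .port(8080)
                  .handle(new ReactorHttpHandlerAdapter(httpHandler)) // or newHandler(...) on older Reactor Netty
                  .bindNow()
                  .onDispose()
                  .block();
    }
}
This keeps the REST route and the /echo websocket endpoint on the same Reactor Netty server without Spring Boot, at the cost of maintaining the WebFlux wiring yourself.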

I deal with it this way, using native Reactor Netty:
// Fragment of the HttpServer route configuration; doFilter, RequestAttribute,
// injector and GameWsHandle come from the surrounding application code.
routes.get(rootPath, (request, response) -> {
    // doFilter checks for errors before upgrading to a websocket
    return this.doFilter(request, response, new RequestAttribute())
            .flatMap(requestAttribute -> {
                WebSocketServerHandle handleObject = injector.getInstance(GameWsHandle.class);
                return response
                        .header("content-type", "text/plain")
                        .sendWebsocket((in, out) ->
                                this.websocketPublisher3(in, out, handleObject, requestAttribute));
            });
});
private Publisher<Void> websocketPublisher3(WebsocketInbound in, WebsocketOutbound out,
                                            WebSocketServerHandle handleObject, RequestAttribute requestAttribute) {
    return out
            .withConnection(conn -> {
                // on connect
                handleObject.onConnect(conn.channel());
                conn.channel().attr(AttributeKey.valueOf("request-attribute")).set(requestAttribute);
                conn.onDispose().subscribe(null, null, () -> {
                    conn.channel().close();
                    handleObject.disconnect(conn.channel());
                    // System.out.println("context.onClose() completed");
                });

                // get messages
                in.aggregateFrames()
                  .receiveFrames()
                  .map(frame -> {
                      if (frame instanceof TextWebSocketFrame) {
                          handleObject.onTextMessage((TextWebSocketFrame) frame, conn.channel());
                      } else if (frame instanceof BinaryWebSocketFrame) {
                          handleObject.onBinaryMessage((BinaryWebSocketFrame) frame, conn.channel());
                      } else if (frame instanceof PingWebSocketFrame) {
                          handleObject.onPingMessage((PingWebSocketFrame) frame, conn.channel());
                      } else if (frame instanceof PongWebSocketFrame) {
                          handleObject.onPongMessage((PongWebSocketFrame) frame, conn.channel());
                      } else if (frame instanceof CloseWebSocketFrame) {
                          conn.channel().close();
                          handleObject.disconnect(conn.channel());
                      }
                      return "";
                  })
                  .blockLast();
            });
}

Related

Is it possible to get PathPattern as a Bean in the SpringBoot web and reuse it in user code?

Is it possible to get PathPattern in the SpringBoot web as a Bean and reuse it in user code?
For example, if the URL is /user/1990/lily, it should return the URL pattern declared on the controller: /user/{year}/{name}.
The documentation says:
Patterns are parsed on startup and re-used at runtime for efficient
URL matching
Reactor Netty metrics need a uriTagValue to avoid a cardinality explosion:
public class Application {

    public static void main(String[] args) {
        Metrics.globalRegistry
               .config()
               .meterFilter(MeterFilter.maximumAllowableTags("reactor.netty.http.server", "URI", 100, MeterFilter.deny()));

        DisposableServer server =
            HttpServer.create()
                      .metrics(true, s -> { // HERE is the uriTagValue; it's a sample of how to handle the URL mapping.
                          if (s.startsWith("/stream/")) {
                              return "/stream/{n}";
                          }
                          else if (s.startsWith("/bytes/")) {
                              return "/bytes/{n}";
                          }
                          return s;
                      })
                      .route(r ->
                          r.get("/stream/{n}",
                                (req, res) -> res.sendString(Mono.just(req.param("n"))))
                           .get("/bytes/{n}",
                                (req, res) -> res.sendString(Mono.just(req.param("n")))))
                      .bindNow();

        server.onDispose()
              .block();
    }
}
Configuring Netty to enable metrics in a Spring Boot WebFlux app:
@Configuration
public class NettyWebServerConfig {

    @Bean
    public ReactiveWebServerFactory reactiveWebServerFactory() {
        NettyReactiveWebServerFactory factory = new NettyReactiveWebServerFactory();
        factory.addServerCustomizers(httpServer -> httpServer
                .wiretap(true)
                .metrics(true, s -> "") // enable metrics and ignore all URIs; if Spring Boot Web exposed its URI match patterns as a bean, we could use them here
        );
        return factory;
    }
}
What I am wondering is: is it possible to get the PathPattern as a bean in Spring Boot web and reuse it in the reactor-netty metrics code, as simply as bestPattern.matchAndExtract(lookupPath)?
I tested PathContainer.parsePath(s); it doesn't seem to work.
With this setup you are not using Spring WebFlux but actually Reactor Netty directly. PathContainer and PathPattern are therefore irrelevant here.
I don't think reactor-netty stores the matching UriPathTemplate anywhere when evaluating the HttpPredicate.
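That said, if all you need is the matching itself (rather than a Boot-provided bean), PathPattern can be used standalone inside the uriTagValue function: parse the known templates once with a PathPatternParser and match each incoming path against them. A minimal sketch, with the template list hard-coded as an assumption:
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

import org.springframework.http.server.PathContainer;
import org.springframework.web.util.pattern.PathPattern;
import org.springframework.web.util.pattern.PathPatternParser;

public class UriTagMapper {

    private final List<PathPattern> patterns;

    public UriTagMapper(List<String> templates) {
        PathPatternParser parser = new PathPatternParser();
        // Parse the templates once, up front ("Patterns are parsed on startup and re-used at runtime")
        this.patterns = templates.stream().map(parser::parse).collect(Collectors.toList());
    }

    /** Maps a concrete request path to the first matching template, or returns it unchanged. */
    public String toTemplate(String uri) {
        PathContainer path = PathContainer.parsePath(uri);
        return patterns.stream()
                .filter(pattern -> pattern.matches(path))
                .map(PathPattern::getPatternString)
                .findFirst()
                .orElse(uri);
    }

    public static void main(String[] args) {
        UriTagMapper mapper = new UriTagMapper(Arrays.asList("/stream/{n}", "/bytes/{n}"));
        System.out.println(mapper.toTemplate("/stream/16")); // prints /stream/{n}
        // usage on the server: .metrics(true, mapper::toTemplate)
    }
}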

creating Opentelemetry Context using trace-id and span-id of remote parent

I have a microservice that supports OpenTracing and injects the trace-id and span-id into the headers. Another microservice supports OpenTelemetry. How can I create the parent span using the trace-id and span-id in the second microservice?
Thanks.
You can use the W3C Trace Context specification to achieve this. We need to send a traceparent header (e.g. 00-8652a752089f33e2659dff28d683a18f-7359b90f4355cfd9-01) from the producer via HTTP headers (or you can create it from the trace-id and span-id in the consumer). Then we can extract the remote context and create the span with that traceparent.
This is the consumer controller. The TextMapGetter is used to map the traceparent data into the Context. ExtractModel is just a custom carrier class.
@GetMapping(value = "/second")
public String secondTest(@RequestHeader(value = "traceparent") String traceparent) {
    try {
        Tracer tracer = openTelemetry.getTracer("cloud.events.second");

        TextMapGetter<ExtractModel> getter = new TextMapGetter<>() {
            @Override
            public String get(ExtractModel carrier, String key) {
                if (carrier.getHeaders().containsKey(key)) {
                    return carrier.getHeaders().get(key);
                }
                return null;
            }

            @Override
            public Iterable<String> keys(ExtractModel carrier) {
                return carrier.getHeaders().keySet();
            }
        };

        ExtractModel model = new ExtractModel();
        model.addHeader("traceparent", traceparent);

        Context extractedContext = openTelemetry.getPropagators().getTextMapPropagator()
                .extract(Context.current(), model, getter);

        try (Scope scope = extractedContext.makeCurrent()) {
            // Automatically use the extracted SpanContext as parent.
            Span serverSpan = tracer.spanBuilder("CloudEvents Server")
                    .setSpanKind(SpanKind.SERVER)
                    .startSpan();
            try {
                Thread.sleep(150);
            } finally {
                serverSpan.end();
            }
        }
    } catch (InterruptedException e) {
        throw new RuntimeException(e);
    }
    return "Server Received!";
}
Then, when configuring the OpenTelemetrySdk, we need to set the W3CTraceContextPropagator in the context propagators:
// Use W3C Propagator (to extract the span from HTTP headers) since we use the W3C specifications
TextMapPropagator textMapPropagator = W3CTraceContextPropagator.getInstance();

OpenTelemetrySdk openTelemetrySdk = OpenTelemetrySdk.builder()
        .setTracerProvider(tracerProvider)
        .setPropagators(ContextPropagators.create(textMapPropagator))
        .buildAndRegisterGlobal();
Here is my custom ExtractModel class:
public class ExtractModel {

    private Map<String, String> headers;

    public void addHeader(String key, String value) {
        if (this.headers == null) {
            headers = new HashMap<>();
        }
        headers.put(key, value);
    }

    public Map<String, String> getHeaders() {
        return headers;
    }

    public void setHeaders(Map<String, String> headers) {
        this.headers = headers;
    }
}
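If the upstream service sends raw trace-id and span-id headers rather than a single traceparent, you can also assemble the remote parent by hand and set it explicitly on the span builder. A minimal sketch in the style of the controller above; the header names are assumptions:
// Hypothetical header names; adjust them to whatever the OpenTracing service actually sends.
@GetMapping(value = "/second-raw")
public String secondTestRaw(@RequestHeader("trace-id") String traceId,
                            @RequestHeader("span-id") String spanId) {
    SpanContext remoteParent = SpanContext.createFromRemoteParent(
            traceId,                  // 32-character hex trace id
            spanId,                   // 16-character hex span id
            TraceFlags.getSampled(),
            TraceState.getDefault());

    Tracer tracer = openTelemetry.getTracer("cloud.events.second");
    Span serverSpan = tracer.spanBuilder("CloudEvents Server")
            .setParent(Context.root().with(Span.wrap(remoteParent)))
            .setSpanKind(SpanKind.SERVER)
            .startSpan();
    try {
        // ... handle the request ...
    } finally {
        serverSpan.end();
    }
    return "Server Received!";
}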
You can find more details in the official documentation for manual instrumentation.
Generally, you have to propagate the span-id and trace-id if they are available in the headers. For any request your microservice receives, check whether the headers contain a span-id and trace-id; if they do, extract them and use them in your service.
If they are not present, create a new one, use it in your service, and also add it to the requests that go out of your microservice.
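For the outgoing side, the registered propagator can write the current context into the headers of the downstream request. A small sketch, assuming the OpenTelemetry SDK was registered globally (as with buildAndRegisterGlobal() above); the plain Map carrier stands in for whatever HTTP client you use:
import java.util.HashMap;
import java.util.Map;

import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.context.Context;
import io.opentelemetry.context.propagation.TextMapSetter;

public class TracePropagationExample {

    public static Map<String, String> outgoingHeaders() {
        Map<String, String> headers = new HashMap<>();
        TextMapSetter<Map<String, String>> setter = Map::put;

        // Writes "traceparent" (and "tracestate" if present) for the current span
        GlobalOpenTelemetry.getPropagators()
                .getTextMapPropagator()
                .inject(Context.current(), headers, setter);

        return headers; // copy these onto the outbound HTTP request
    }
}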

How to send keyed message to Kafka using Spring Cloud Stream Supplier

I want to use Spring Cloud Stream to produce keyed messages (messages with a specific key) to Kafka.
@SpringBootApplication
public class SpringCloudStreamKafkaApplication {

    public static void main(String[] args) {
        SpringApplication.run(SpringCloudStreamKafkaApplication.class, args);
    }

    @Bean
    Supplier<DataRecord> process() {
        return () -> new DataRecord(42L);
    }
}
What do I need to change in the Supplier code to provide the key?
Is it possible with the new style of API (using lambdas)?
Thank you.
Return a Message<?> and set the KafkaHeaders.MESSAGE_KEY header:
@Bean
Supplier<Message<String>> process() {
    return () -> MessageBuilder.withPayload("foo")
            .setHeader(KafkaHeaders.MESSAGE_KEY, "bar".getBytes())
            .build();
}
(This assumes the default key serializer, byte[].)
EDIT
This will be called endlessly.
If you want to send a finite stream, I believe you have to switch to the reactive model.
@Bean
Supplier<Flux<Message<String>>> processFinite() {
    Message<String> msg1 = MessageBuilder.withPayload("foo")
            .setHeader(KafkaHeaders.MESSAGE_KEY, "bar".getBytes())
            .build();
    Message<String> msg2 = MessageBuilder.withPayload("baz")
            .setHeader(KafkaHeaders.MESSAGE_KEY, "qux".getBytes())
            .build();
    return () -> Flux.just(msg1, msg2);
}
There is also Flux.fromStream(myStream), which will end when the stream ends.
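For example, a finite supplier built from a stream might look like this (the payloads and key are placeholders):
@Bean
Supplier<Flux<Message<String>>> processFromStream() {
    // Flux.fromStream with a Stream supplier, so the java.util.stream.Stream is created per subscription
    return () -> Flux.fromStream(() -> Stream.of("foo", "baz")
            .map(payload -> MessageBuilder.withPayload(payload)
                    .setHeader(KafkaHeaders.MESSAGE_KEY, "bar".getBytes())
                    .build()));
}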
EDIT2
You can also use the StreamBridge.
https://docs.spring.io/spring-cloud-stream/docs/3.1.4/reference/html/spring-cloud-stream.html#_sending_arbitrary_data_to_an_output_e_g_foreign_event_driven_sources
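A small sketch of the StreamBridge approach; the binding name process-out-0 is an assumption and has to match your output binding configuration:
@Component
public class KeyedProducer {

    private final StreamBridge streamBridge;

    public KeyedProducer(StreamBridge streamBridge) {
        this.streamBridge = streamBridge;
    }

    public void send(String payload, String key) {
        Message<String> message = MessageBuilder.withPayload(payload)
                .setHeader(KafkaHeaders.MESSAGE_KEY, key.getBytes())
                .build();
        // Sends on demand instead of being polled like a Supplier bean
        streamBridge.send("process-out-0", message);
    }
}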

Can anyone explain how this code sends a message to a specified user?

@Bean
public WebSocketHandler webSocketHandler() {
    TopicProcessor<String> messageProcessor = this.messageProcessor();
    Flux<String> messages = messageProcessor.replay(0).autoConnect();
    Flux<String> outputMessages = Flux.from(messages);

    return (session) -> {
        System.out.println(session);
        session.receive().map(WebSocketMessage::getPayloadAsText).subscribe(messageProcessor::onNext, (e) -> {
            e.printStackTrace();
        });
        return session.getHandshakeInfo().getPrincipal().flatMap((p) -> {
            session.getAttributes().put("username", p.getName());
            return session.send(outputMessages.filter((payload) -> this.filterUser(session, payload))
                    .map((payload) -> this.generateMessage(session, payload)));
        }).switchIfEmpty(Mono.defer(() -> {
            return Mono.error(new BadCredentialsException("Bad Credentials."));
        })).then();
    };
}
I am trying to build an online chat system with WebFlux and found an example on GitHub. As a beginner in reactive development, I am confused about how this code sends a message to a single user.
This is how I would think of it in Spring MVC:
put all the active WebSocketSessions into a map
for every message, check whether the username field in the message equals the username stored in a session; if it does, use that session to send the message
private static Map clients = new ConcurrentHashMap();

public void sendMessageTo(String message, String toUserName) throws IOException {
    for (WebSocket item : clients.values()) {
        if (item.username.equals(toUserName)) {
            item.session.sendText(message);
            break;
        }
    }
}
Can you explain how the WebFlux code above works?
I know all the messages are stored in outputMessages and subscribed to.
When a new message is emitted, how does it find the correct session?
My guess is that WebSocketHandler is an interface containing only one method, handle (see WebSocketHandler), which in turn I believe makes it a functional interface that can be used as a lambda:
(session) -> { ... }
So when a session is established with a client and the client sends a websocket event, the server will look up the WebSocketHandler and invoke it with the session of the client that sent the event.
If you find this confusing, you can just implement the interface instead:
class ExampleHandler implements WebSocketHandler {

    @Override
    public Mono<Void> handle(WebSocketSession session) {
        Mono<Void> input = session.receive()
                .doOnNext(message -> {
                    // ...
                })
                .concatMap(message -> {
                    // ...
                })
                .then();

        Flux<String> source = ... ;
        Mono<Void> output = session.send(source.map(session::textMessage));

        return Mono.zip(input, output).then();
    }
}

@Bean
public WebSocketHandler webSocketHandler() {
    return new ExampleHandler();
}
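To answer the "how does it find the correct session" part: it does not look a session up at all. Every session subscribes to the same shared outputMessages flux, and each subscription filters out payloads that are not addressed to its own username (the filterUser call) before mapping them to a WebSocketMessage for that session. A minimal sketch of what those two helpers could look like, assuming a targetUsername:text payload format (the format is an assumption, not taken from the linked example):
// Assumes each payload is "targetUsername:text"; adjust to the real message format.
private boolean filterUser(WebSocketSession session, String payload) {
    String username = (String) session.getAttributes().get("username");
    return payload.startsWith(username + ":");
}

private WebSocketMessage generateMessage(WebSocketSession session, String payload) {
    // Strip the routing prefix and wrap the text for this particular session
    return session.textMessage(payload.substring(payload.indexOf(':') + 1));
}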

Spring Integration server with Java DSL

I am looking for an example of a Spring Integration 4.3.14 TCP server that responds to a message using the Java DSL, not XML.
The 4.3.14 requirement is set by corporate policy, which also avoids XML.
The end requirement is to receive a formatted text payload from a PLC and respond in kind. The PLC code is legacy and not at all well defined, and similar payloads can have different formats.
The easy way to deal with the input payload is to treat it as a string and handle it in Java code.
I have a basic receive working but can't work out how to send the reply. I have read a lot of examples, but by now my mind is just confused, so a simple working example would be ideal.
Many thanks.
Here you go...
@SpringBootApplication
public class So50412811Application {

    public static void main(String[] args) {
        SpringApplication.run(So50412811Application.class, args).close();
    }

    @Bean
    public TcpNetServerConnectionFactory cf() {
        return new TcpNetServerConnectionFactory(1234);
    }

    @Bean
    public TcpInboundGateway gateway() {
        TcpInboundGateway gw = new TcpInboundGateway();
        gw.setConnectionFactory(cf());
        return gw;
    }

    @Bean
    public IntegrationFlow flow() {
        return IntegrationFlows.from(gateway())
                .transform(Transformers.objectToString())
                .<String, String>transform(String::toUpperCase)
                .get();
    }

    // client
    @Bean
    public ApplicationRunner runner() {
        return args -> {
            Socket socket = SocketFactory.getDefault().createSocket("localhost", 1234);
            socket.getOutputStream().write("foo\r\n".getBytes()); // default CRLF deserializer
            InputStream is = socket.getInputStream();
            int in = 0;
            while (in != 0x0a) {
                in = is.read();
                System.out.print((char) in);
            }
            socket.close();
        };
    }
}
