reactor.netty.ReactorNetty$InternalNettyException: io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory - spring-webclient

Software versions in use:
spring-webflux-5.3.4,
reactor-core-3.4.4,
spring-data-mongodb-3.1.6
I am building a Spring Boot application that uses Spring WebClient to invoke an image service that serves back a PDF.
The returned PDF is then stored in MongoDB using Spring's ReactiveGridFsTemplate.
For performance testing I have the service return a 120 MB PDF every time.
The first invocation of the service, including storing the returned PDF in MongoDB, works fine and completes in under 10 seconds.
From the second invocation onward, however, I start getting the following error while storing the returned PDF in MongoDB. Can someone advise on what I am doing wrong?
Caused by: io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 1056964615, max: 1073741824)
at io.netty.util.internal.PlatformDependent.incrementMemoryCounter(PlatformDependent.java:776)
at io.netty.util.internal.PlatformDependent.allocateDirectNoCleaner(PlatformDependent.java:731)
at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:645)
at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:621)
at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:204)
at io.netty.buffer.PoolArena.tcacheAllocateNormal(PoolArena.java:188)
at io.netty.buffer.PoolArena.allocate(PoolArena.java:138)
at io.netty.buffer.PoolArena.allocate(PoolArena.java:128)
at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:378)
at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:187)
at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:178)
at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:139)
at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:114)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:150)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:825)
Code to build the WebClient:
WebClient webClient = WebClient.builder()
        .filter(WebClientFilter.logRequest())   // for logging the request
        .filter(WebClientFilter.logResponse())  // for logging the response
        .exchangeStrategies(ExchangeStrategies.builder()
                .codecs(configurer -> configurer.defaultCodecs().maxInMemorySize(5242880))
                .build())
        .build();
Code to invoke the image service using the WebClient:
Flux<DataBuffer> imageFlux = webClient.method(httpmethod)
        .uri(uri)
        .bodyValue((payloadBody == null) ? StringUtils.EMPTY : payloadBody.toPayloadBody())
        .accept(MediaType.ALL)
        .exchangeToFlux(response -> {
            logger.log(Level.DEBUG, "DefaultHttpClient exchangeToFlux got response with status code {}", response.statusCode());
            if (response.statusCode().is4xxClientError() || response.statusCode().is5xxServerError()) {
                logger.log(Level.ERROR,
                        "DefaultHttpClient exchangeToFlux encountered error {} throwing service exception",
                        response.statusCode());
                return Flux.error(new ServiceException(response.bodyToMono(String.class), response.rawStatusCode()));
            }
            return response.bodyToFlux(DataBuffer.class);
        });
Code to store the PDF returned by the image service in MongoDB using Spring's ReactiveGridFsTemplate (imageFlux is what I receive above):
protected Mono<ObjectId> getMono(Flux<DataBuffer> imageFlux, DocumentContext documentContext) {
    return reactiveGridFsTmpl.store(imageFlux, new java.util.Date() + ApplicationConstants.PDF_EXTENSION,
            <org.bson.Document object with attributes from application>);
}
Here is how I am firing the store call, by subscribing to the Mono returned by getMono(...). Within onComplete and onError I have tried to release the data buffer:
Mono<ObjectId> imageObjectId = getMono(imageFlux, documentContext);
imageObjectId.subscribe(new Subscriber<ObjectId>() {
    @Override
    public void onComplete() {
        logger.log(Level.DEBUG, SUBSCRIPTION_ON_COMPLETE);
        DataBufferUtils.release(imageFlux.blockFirst()); // attempt to release the data buffer
        logger.log(Level.DEBUG, SUBSCRIPTION_ON_COMPLETE_RELEASE_DATABUFFER);
    }

    @Override
    public void onError(Throwable t) {
        logger.log(Level.ERROR, SUBSCRIPTION_ON_ERROR + t);
        if (t instanceof ServiceException) {
            logger.log(Level.ERROR, "DocumentDao caught ServiceException.");
            flagErrorRecord((ServiceException) t, documentContext);
        }
        DataBufferUtils.release(imageFlux.blockFirst()); // attempt to release the data buffer
        logger.log(Level.ERROR, SUBSCRIPTION_ON_ERROR_RELEASE_DATABUFFER);
    }

    @Override
    public void onNext(ObjectId t) {
        logger.log(Level.DEBUG, SUBSCRIPTION_ON_NEXT + t.toString());
    }

    @Override
    public void onSubscribe(Subscription s) {
        logger.log(Level.DEBUG, SUBSCRIPTION_ON_SUBSCRIBE);
        s.request(1);
    }
});
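For reference, releasing is normally wired into the stream itself rather than done after subscription. A minimal sketch, assuming reactor-core 3.4's discard hooks (safeFlux is a hypothetical name, not part of the code above):

// Sketch: attach a discard hook so any DataBuffer dropped downstream
// (e.g. on error or cancellation) is released back to Netty's pool,
// instead of calling blockFirst() on the already-consumed flux afterwards.
Flux<DataBuffer> safeFlux = imageFlux
        .doOnDiscard(DataBuffer.class, DataBufferUtils::release);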

Try changing the direct memory limit using the JAVA_OPTS environment variable:
JBP_CONFIG_JAVA_OPTS: '{ java_opts: "-XX:MaxDirectMemorySize=2048m" }'
I see that 1 GB is not sufficient, so try setting it to 2 GB.
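To confirm the flag actually took effect, a quick sanity check can be logged at startup. A minimal sketch, assuming Netty is on the classpath (PlatformDependent.maxDirectMemory() is Netty's internal view of the ceiling):

import io.netty.util.internal.PlatformDependent;

// Sketch: print the direct-memory ceiling Netty computed at startup so the
// -XX:MaxDirectMemorySize override can be verified in the application logs.
public class DirectMemoryCheck {
    public static void main(String[] args) {
        System.out.printf("Netty max direct memory: %d bytes%n",
                PlatformDependent.maxDirectMemory());
    }
}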

Related

How to resolve a memory leak in Spring Cloud Gateway

I am using Spring Cloud Gateway in my service, with the RequestDecorator below as a wrapper in my LoggingFilter.
public class RequestDecorator extends ServerHttpRequestDecorator {

    private final List<DataBuffer> dataBuffers = new ArrayList<>();

    public RequestDecorator(ServerHttpRequest delegate) {
        super(delegate);
        super.getBody()
                .map(dataBuffer -> {
                    dataBuffers.add(dataBuffer);
                    return dataBuffer;
                })
                .subscribe();
    }

    @Override
    public Flux<DataBuffer> getBody() {
        return copy();
    }

    private Flux<DataBuffer> copy() {
        return Flux.fromIterable(dataBuffers)
                .map(dataBuffer -> dataBuffer.factory().wrap(dataBuffer.asByteBuffer()));
    }
}
When the service is exercised by JMeter for a performance test, I get the memory leak errors below in the logs.
i.n.u.ResourceLeakDetector : - LEAK: ByteBuf.release() was not called before it's garbage-collected. See https://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records:
Created at:
io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:403)
io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:188)
io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:179)
io.netty.channel.unix.PreferredDirectByteBufAllocator.ioBuffer(PreferredDirectByteBufAllocator.java:53)
io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:120)
io.netty.channel.epoll.EpollRecvByteAllocatorHandle.allocate(EpollRecvByteAllocatorHandle.java:75)
io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:785)
io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:499)
io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397)
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.base/java.lang.Thread.run(Thread.java:834)
After checking some content online, I found the following comment:
"If you are using DataBuffer you might get the same error. Spring has DataBufferUtils library to release the resource."
DataBufferUtils.release(dataBuffer);
But I would like to know how exactly to use this in my decorator class, since I am using this wrapper in my LoggingFilter.
Can anyone please advise?
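One common pattern, as a minimal sketch under assumptions (cachedBodyParts and bufferFactory are hypothetical fields replacing dataBuffers; the bytes are copied onto the heap so each pooled buffer can be freed immediately):

private final List<byte[]> cachedBodyParts = new ArrayList<>();
private final DataBufferFactory bufferFactory = new DefaultDataBufferFactory();

public RequestDecorator(ServerHttpRequest delegate) {
    super(delegate);
    super.getBody()
            .map(dataBuffer -> {
                // copy the payload onto the heap...
                byte[] bytes = new byte[dataBuffer.readableByteCount()];
                dataBuffer.read(bytes);
                // ...then release the pooled direct buffer back to Netty
                DataBufferUtils.release(dataBuffer);
                cachedBodyParts.add(bytes);
                return bytes;
            })
            .subscribe();
}

@Override
public Flux<DataBuffer> getBody() {
    // wrap() does not allocate new direct memory; each call replays the cached copies
    return Flux.fromIterable(cachedBodyParts).map(bufferFactory::wrap);
}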

AWS X-Ray segment executor not running in background

I have an API like this:
#GetMapping("/async-test")
public String api2() throws InterruptedException {
log.info("Before async");
myService.myAsync();
log.info("after async");
return "nothing important";
}
And the myService.myAsync implementation looks like this:
@Async
public void myAsync() {
    CompletableFuture.supplyAsync(() -> {
        try {
            log.info("before thread ....");
            Thread.sleep(3000);
            log.info("after thread ....");
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
        log.info("before returning result ....");
        return "Result of the asynchronous computation";
    }, SegmentContextExecutors.newSegmentContextExecutor());
}
The issue here is that the API waits for 3 seconds to reply when using SegmentContextExecutors.newSegmentContextExecutor(). If I remove the segment executor, the function runs in the background successfully, but of course I lose the segment trace ID for logging.
So what should be done to fix this?
Tech stack
Spring Boot v2.7.2
Java 11
AWS X-Ray v2.11.2

Is there a way to record response times of a Feign client

@FeignClient(...)
public interface SomeClient {
    @RequestMapping(value = "/someUrl", method = POST, consumes = "application/json")
    ResponseEntity<String> createItem(...);
}
Is there a way to find the response time for the createItem API call?
We are using Spring Boot, Actuator, and Prometheus.
There is a straightforward as well as a customized way of logging the Feign client's request and response (including the response time). All we have to do is inject a feign.Logger.Level bean.
THE DEFAULT / STRAIGHTFORWARD WAY
@Bean
Logger.Level feignLoggerLevel() {
    return Logger.Level.BASIC;
}
There are BASIC, FULL, HEADERS, and NONE (default) logging levels available; see the Feign documentation for more details.
The above bean injection gives you logging of the Feign request and response in the format below:
REQUEST:
refer
log(configKey, "---> %s %s HTTP/1.1", request.httpMethod().name(), request.url());
ex:2019-09-26 12:50:12.163 [DEBUG] [http-nio-4200-exec-5] [com.sample.FeignClient:72] [FeignClient#getUser] ---> END HTTP (0-byte body)
where configKey means FeignClientClassName#FeignClientCallingMethodName, e.g. ApiClient#apiMethod.
RESPONSE
refer
log(configKey, "<--- HTTP/1.1 %s%s (%sms)", status, reason, elapsedTime);
ex:2019-09-26 12:50:12.163 [DEBUG] [http-nio-4200-exec-5] [com.sample.FeignClient:72] [FeignClient#getUser] <--- HTTP/1.1 200 OK (341ms)
elapsedTime is the response time taken for the API call.
NOTE: If you prefer the default way of Feign client logging, then you also have to consider the underlying application logging level, because the feign.Slf4jLogger class logs the Feign request and response details at the DEBUG level (refer). If the underlying logging level is above DEBUG, you need to specify an explicit logger for the Feign logging package/class, otherwise it will not work.
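For example, a hypothetical application.properties snippet (the package name com.sample.clients is an assumption; point it at wherever your Feign interfaces actually live):

# assumption: the Feign client interfaces live under com.sample.clients;
# feign.Slf4jLogger emits at DEBUG, so that package must log at DEBUG
logging.level.com.sample.clients=DEBUG
feign.client.config.default.loggerLevel=BASIC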
THE CUSTOMIZED WAY
If you prefer logging in your own customized format, you can extend the feign.Logger class and customize your logging. As a typical example, if I want to log the header details of the request and response on a single line as a list (by default Logger.Level.HEADERS prints the headers across multiple lines):
package com.test.logging.feign;

import feign.Logger;
import feign.Request;
import feign.Response;
import lombok.extern.slf4j.Slf4j;
import java.io.IOException;
import static feign.Logger.Level.HEADERS;

@Slf4j
public class customFeignLogger extends Logger {

    @Override
    protected void logRequest(String configKey, Level logLevel, Request request) {
        if (logLevel.ordinal() >= HEADERS.ordinal()) {
            super.logRequest(configKey, logLevel, request);
        } else {
            int bodyLength = 0;
            if (request.requestBody().asBytes() != null) {
                bodyLength = request.requestBody().asBytes().length;
            }
            log(configKey, "---> %s %s HTTP/1.1 (%s-byte body) %s", request.httpMethod().name(), request.url(), bodyLength, request.headers());
        }
    }

    @Override
    protected Response logAndRebufferResponse(String configKey, Level logLevel, Response response, long elapsedTime)
            throws IOException {
        if (logLevel.ordinal() >= HEADERS.ordinal()) {
            super.logAndRebufferResponse(configKey, logLevel, response, elapsedTime);
        } else {
            int status = response.status();
            Request request = response.request();
            log(configKey, "<--- %s %s HTTP/1.1 %s (%sms) %s", request.httpMethod().name(), request.url(), status, elapsedTime, response.headers());
        }
        return response;
    }

    @Override
    protected void log(String configKey, String format, Object... args) {
        log.debug(format(configKey, format, args));
    }

    protected String format(String configKey, String format, Object... args) {
        return String.format(methodTag(configKey) + format, args);
    }
}
We also have to inject the customFeignLogger bean:
@Bean
public customFeignLogger customFeignLogging() {
    return new customFeignLogger();
}
If you are building the FeignClient yourself, you can build it with the customized logger:
Feign.builder()
        .logger(new customFeignLogger())
        .logLevel(Level.BASIC)
        .target(SomeFeignClient.class, "http://localhost:8080");
Add the following annotation to your project:
package com.example.annotation;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;
import org.springframework.util.StopWatch;

@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface DebugTracking {

    @Aspect
    @Component
    public static class DebugTrackingAspect {

        @Around("@annotation(com.example.annotation.DebugTracking)")
        public Object trackExecutionTime(ProceedingJoinPoint joinPoint) throws Throwable {
            StopWatch stopWatch = new StopWatch();
            stopWatch.start(joinPoint.toShortString());
            Exception exceptionThrown = null;
            try {
                // Execute the join point as usual
                return joinPoint.proceed();
            } catch (Exception ex) {
                exceptionThrown = ex;
                throw ex;
            } finally {
                stopWatch.stop();
                System.out.println(String.format("%s took %dms.", stopWatch.getLastTaskName(), stopWatch.getLastTaskTimeMillis()));
                if (exceptionThrown != null) {
                    System.out.println(String.format("Exception thrown: %s", exceptionThrown.getMessage()));
                    exceptionThrown.printStackTrace();
                }
            }
        }
    }
}
Then annotate the methods you want to track in your @FeignClient with @DebugTracking.
I'm using the following (with Spring and Lombok):
@Configuration // from Spring
@Slf4j         // from Lombok
public class MyFeignConfiguration {

    @Bean // from Spring
    public MyFeignClient myFeignClient() {
        return Feign.builder()
                .logger(new Logger() {
                    @Override
                    protected void log(String configKey, String format, Object... args) {
                        LOG.info(String.format(methodTag(configKey) + format, args)); // LOG is the Lombok Slf4j object
                    }
                })
                .logLevel(Logger.Level.BASIC) // see https://cloud.spring.io/spring-cloud-netflix/multi/multi_spring-cloud-feign.html#_feign_logging
                .target(MyFeignClient.class, "http://localhost:8080");
    }
}
The correct way of doing this is using a custom logger, as pointed out above. Using @Aspect is wrong: with that you create an additional wrapper around the service. Feign already records this metric; get the metric from Feign.
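For instance, a hedged sketch of pulling the timing straight from Feign's own instrumentation. This assumes the feign-micrometer module is on the classpath; MicrometerCapability publishes Feign call timings into the shared MeterRegistry, which Prometheus can then scrape via Actuator:

import feign.micrometer.MicrometerCapability;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class FeignMetricsConfiguration {

    // Sketch: registering this capability makes every Feign call record
    // timing metrics into the MeterRegistry, no custom logger or aspect needed.
    @Bean
    public MicrometerCapability micrometerCapability(MeterRegistry registry) {
        return new MicrometerCapability(registry);
    }
}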

How to use a gRPC interceptor to attach/update logging MDC in a Spring-Boot app

Problem
I have a Spring-Boot application in which I am also starting a gRPC server/service. Both the servlet and gRPC code send requests to a common object to process the request. When the request comes in I want to update the logging to display a unique 'ID' so I can track the request through the system.
On the Spring side I have set up a Filter which updates the logging MDC to add some data to the log output (see this example). This works fine.
On the gRPC side I have created a ServerInterceptor and added it to the service. While the interceptor gets called, the code to update the MDC does not stick, so when a request comes through the gRPC service I do not get the ID printed in the log. I realize this has to do with the fact that I'm intercepting the call in one thread while it's being dispatched by gRPC in another; what I can't figure out is how to either intercept the call on the thread doing the work or add the MDC information so it is properly propagated to that thread.
What I've tried
I have done a lot of searches and was quite surprised not to find this asked/answered; I can only assume my search skills are lacking :(
I'm fairly new to gRPC and this is the first interceptor I'm writing. I've tried adding the interceptor several different ways (via ServerInterceptors.intercept, BindableService instance.intercept).
I've looked at LogNet's Spring Boot gRPC Starter, but I'm not sure it would solve the issue.
Here is the code I have added in my interceptor class:
@Override
public <ReqT, RespT> ServerCall.Listener<ReqT> interceptCall(final ServerCall<ReqT, RespT> call, final Metadata headers, final ServerCallHandler<ReqT, RespT> next) {
    try {
        final String mdcData = String.format("[requestID=%s]",
                UUID.randomUUID().toString());
        MDC.put(MDC_DATA_KEY, mdcData);
        return next.startCall(call, headers);
    } finally {
        MDC.clear();
    }
}
Expected Result
When a request comes in via the RESTful API I see log output like this
2019-04-09 10:19:16.331 [requestID=380e28db-c8da-4e35-a097-4b8c90c006f4] INFO 87100 --- [nio-8080-exec-1] c.c.es.xxx: processing request step 1
2019-04-09 10:19:16.800 [requestID=380e28db-c8da-4e35-a097-4b8c90c006f4] INFO 87100 --- [nio-8080-exec-1] c.c.es.xxx: processing request step 2
2019-04-09 10:19:16.803 [requestID=380e28db-c8da-4e35-a097-4b8c90c006f4] INFO 87100 --- [nio-8080-exec-1] c.c.es.xxx: Processing request step 3
...
I'm hoping to get similar output when the request comes through the gRPC service.
Thanks
Since no one replied, I kept trying and came up with the following solution for my interceptCall function. I'm not 100% sure why this works, but it works for my use case.
private class LogInterceptor implements ServerInterceptor {

    @Override
    public <ReqT, RespT> ServerCall.Listener<ReqT> interceptCall(final ServerCall<ReqT, RespT> call,
                                                                 final Metadata headers,
                                                                 final ServerCallHandler<ReqT, RespT> next) {
        Context context = Context.current();
        final String requestId = UUID.randomUUID().toString();
        return Contexts.interceptCall(context, call, headers, new ServerCallHandler<ReqT, RespT>() {
            @Override
            public ServerCall.Listener<ReqT> startCall(ServerCall<ReqT, RespT> call, Metadata headers) {
                return new ForwardingServerCallListener.SimpleForwardingServerCallListener<ReqT>(next.startCall(call, headers)) {
                    /**
                     * The actual service call happens during onHalfClose().
                     */
                    @Override
                    public void onHalfClose() {
                        // reuse the requestId generated when the call was intercepted
                        try (final CloseableThreadContext.Instance ctc = CloseableThreadContext.put("requestID", requestId)) {
                            super.onHalfClose();
                        }
                    }
                };
            }
        });
    }
}
In my application.properties I added the following (which I already had)
logging.pattern.level=[%X] %-5level
The '%X' tells the logging system to print all of the CloseableThreadContext key/values.
Hopefully this may help someone else.
MDC stores data in a ThreadLocal variable, and you are right about "I realize this has to do with the fact that I'm intercepting the call in one thread and it's being dispatched by gRPC in another". Check @Eric Anderson's answer about the right way to use ThreadLocal in this post:
https://stackoverflow.com/a/56842315/2478531
Here is a working example:
public class GrpcMDCInterceptor implements ServerInterceptor {

    private static final String MDC_DATA_KEY = "Key";

    @Override
    public <R, S> ServerCall.Listener<R> interceptCall(
            ServerCall<R, S> serverCall,
            Metadata metadata,
            ServerCallHandler<R, S> next
    ) {
        log.info("Setting user context, metadata {}", metadata);
        final String mdcData = String.format("[requestID=%s]", UUID.randomUUID().toString());
        MDC.put(MDC_DATA_KEY, mdcData);
        try {
            return new WrappingListener<>(next.startCall(serverCall, metadata), mdcData);
        } finally {
            MDC.clear();
        }
    }

    private static class WrappingListener<R>
            extends ForwardingServerCallListener.SimpleForwardingServerCallListener<R> {

        private final String mdcData;

        public WrappingListener(ServerCall.Listener<R> delegate, String mdcData) {
            super(delegate);
            this.mdcData = mdcData;
        }

        @Override
        public void onMessage(R message) {
            MDC.put(MDC_DATA_KEY, mdcData);
            try {
                super.onMessage(message);
            } finally {
                MDC.clear();
            }
        }

        @Override
        public void onHalfClose() {
            MDC.put(MDC_DATA_KEY, mdcData);
            try {
                super.onHalfClose();
            } finally {
                MDC.clear();
            }
        }

        @Override
        public void onCancel() {
            MDC.put(MDC_DATA_KEY, mdcData);
            try {
                super.onCancel();
            } finally {
                MDC.clear();
            }
        }

        @Override
        public void onComplete() {
            MDC.put(MDC_DATA_KEY, mdcData);
            try {
                super.onComplete();
            } finally {
                MDC.clear();
            }
        }

        @Override
        public void onReady() {
            MDC.put(MDC_DATA_KEY, mdcData);
            try {
                super.onReady();
            } finally {
                MDC.clear();
            }
        }
    }
}
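To wire it up, a minimal sketch of registering the interceptor when assembling the server (MyServiceImpl and the port are placeholder assumptions):

// Sketch: attach the MDC interceptor to a service when building the gRPC server.
Server server = ServerBuilder.forPort(9090)
        .addService(ServerInterceptors.intercept(new MyServiceImpl(), new GrpcMDCInterceptor()))
        .build()
        .start();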

spring webflux: purely functional way to attach websocket adapter to reactor-netty server

I am not able to figure out a way to attach a WebSocketHandlerAdapter to a reactor-netty server.
Requirements:
I want to start a reactor-netty server and attach HTTP (REST) endpoints and websocket endpoints to the same server. I have gone through the documentation and some sample demo applications mentioned there. They show how to attach an HttpHandlerAdapter to the HttpServer using the newHandler() function, but when it comes to websockets they switch back to Spring Boot and annotation-based examples. I am not able to find how to attach websockets using functional endpoints.
Please point me in the right direction on how to implement this.
1. How do I attach the websocket adapter to the netty server?
2. Should I use HttpServer or TcpServer?
Note:
1. I am not using Spring Boot.
2. I am not using annotations.
3. I am trying to achieve this using only functional WebFlux endpoints.
Sample code:
public HandlerMapping webSocketMapping() {
    Map<String, WebSocketHandler> map = new HashMap<>();
    map.put("/echo", new EchoTestingWebSocketHandler());
    SimpleUrlHandlerMapping mapping = new SimpleUrlHandlerMapping();
    mapping.setUrlMap(map);
    mapping.setOrder(-1);
    return mapping;
}

public WebSocketHandlerAdapter wsAdapter() {
    HandshakeWebSocketService wsService = new HandshakeWebSocketService(new ReactorNettyRequestUpgradeStrategy());
    return new WebSocketHandlerAdapter(wsService);
}

protected void startServer(String host, int port) {
    HttpServer server = HttpServer.create(host, port);
    server.newHandler(wsAdapter()).block(); // how do I attach the websocket adapter to the netty server?
}
Unfortunately, there is no easy way to do that without spinning up a whole SpringBootApplication; otherwise you would be required to write the whole Spring WebFlux handler hierarchy yourself. Consider composing your functional routing with a SpringBootApplication:
@SpringBootApplication
public class WebSocketApplication {

    public static void main(String[] args) {
        SpringApplication.run(WebSocketApplication.class, args);
    }

    @Bean
    public RouterFunction<ServerResponse> routing() {
        return route(
                POST("/api/orders"),
                r -> ok().build()
        );
    }

    @Bean
    public HandlerMapping wsHandlerMapping() {
        HashMap<String, WebSocketHandler> map = new HashMap<>();
        map.put("/ws", new WebSocketHandler() {
            @Override
            public Mono<Void> handle(WebSocketSession session) {
                return session.send(
                        session.receive()
                                .map(WebSocketMessage::getPayloadAsText)
                                .map(tMessage -> "Response From Server: " + tMessage)
                                .map(session::textMessage)
                );
            }
        });
        SimpleUrlHandlerMapping mapping = new SimpleUrlHandlerMapping();
        mapping.setUrlMap(map);
        mapping.setOrder(-1);
        return mapping;
    }

    @Bean
    HandlerAdapter wsHandlerAdapter() {
        return new WebSocketHandlerAdapter();
    }
}
In case the Spring Boot infrastructure is not an option,
consider direct interaction with Reactor Netty instead. Reactor Netty provides a pretty good abstraction around native Netty, and you can interact with it in the same functional manner:
ReactorHttpHandlerAdapter handler =
        new ReactorHttpHandlerAdapter(yourHttpHandlers);

HttpServer.create()
        .startRouterAndAwait(routes -> {
            routes.ws("/pathToWs", (in, out) -> out.send(in.receive()))
                  .file("/static/**", ...)
                  .get("**", handler)
                  .post("**", handler)
                  .put("**", handler)
                  .delete("**", handler);
        });
I deal with it this way, using native reactor-netty:
routes.get(rootPath, (request, response) -> {
    // doFilter checks for errors
    return this.doFilter(request, response, new RequestAttribute())
            .flatMap(requestAttribute -> {
                WebSocketServerHandle handleObject = injector.getInstance(GameWsHandle.class);
                return response
                        .header("content-type", "text/plain")
                        .sendWebsocket((in, out) ->
                                this.websocketPublisher3(in, out, handleObject, requestAttribute));
            });
});
private Publisher<Void> websocketPublisher3(WebsocketInbound in, WebsocketOutbound out, WebSocketServerHandle handleObject, RequestAttribute requestAttribute) {
    return out
            .withConnection(conn -> {
                // on connect
                handleObject.onConnect(conn.channel());
                conn.channel().attr(AttributeKey.valueOf("request-attribute")).set(requestAttribute);
                conn.onDispose().subscribe(null, null, () -> {
                    conn.channel().close();
                    handleObject.disconnect(conn.channel());
                    // System.out.println("context.onClose() completed");
                });

                // get message
                in.aggregateFrames()
                        .receiveFrames()
                        .map(frame -> {
                            if (frame instanceof TextWebSocketFrame) {
                                handleObject.onTextMessage((TextWebSocketFrame) frame, conn.channel());
                            } else if (frame instanceof BinaryWebSocketFrame) {
                                handleObject.onBinaryMessage((BinaryWebSocketFrame) frame, conn.channel());
                            } else if (frame instanceof PingWebSocketFrame) {
                                handleObject.onPingMessage((PingWebSocketFrame) frame, conn.channel());
                            } else if (frame instanceof PongWebSocketFrame) {
                                handleObject.onPongMessage((PongWebSocketFrame) frame, conn.channel());
                            } else if (frame instanceof CloseWebSocketFrame) {
                                conn.channel().close();
                                handleObject.disconnect(conn.channel());
                            }
                            return "";
                        })
                        .blockLast();
            });
}
