RSocket and Spring do not handle multiple requests - spring

I am playing with RSocket together with Spring Boot. I want to build a simple request-response example. As a starting point I took the code from this link:
https://www.baeldung.com/spring-boot-rsocket#request-response
Source code:
https://github.com/eugenp/tutorials/tree/master/spring-5-webflux/src/main/java/com/baeldung/spring/rsocket
When I run the example code unchanged, the request fails with an exception. This error is not the point of this question, but I want to show my changes compared to the original Baeldung source.
[reactor-tcp-nio-1] org.springframework.core.log.CompositeLog: [5927a44d-9] 500 Server Error for HTTP GET "/current/pko"
io.rsocket.exceptions.ApplicationErrorException: No handler for destination ''
    at io.rsocket.exceptions.Exceptions.from(Exceptions.java:76)
    Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Error has been observed at the following site(s):
    |_ checkpoint ⇢ Handler com.baeldung.spring.rsocket.client.MarketDataRestController#current(String) [DispatcherHandler]
    |_ checkpoint ⇢ HTTP GET "/current/pko" [ExceptionHandlingWebHandler]
Stack trace:
    at io.rsocket.exceptions.Exceptions.from(Exceptions.java:76)
    at io.rsocket.core.RSocketRequester.handleFrame(RSocketRequester.java:706)
    at io.rsocket.core.RSocketRequester.handleIncomingFrames(RSocketRequester.java:640)
    at reactor.core.publisher.LambdaSubscriber.onNext(LambdaSubscriber.java:160)
    at reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onNext(MonoFlatMapMany.java:242)
    at reactor.core.publisher.FluxGroupBy$UnicastGroupedFlux.drainRegular(FluxGroupBy.java:554)
    at reactor.core.publisher.FluxGroupBy$UnicastGroupedFlux.drain(FluxGroupBy.java:630)
    at reactor.core.publisher.FluxGroupBy$UnicastGroupedFlux.onNext(FluxGroupBy.java:670)
    at reactor.core.publisher.FluxGroupBy$GroupByMain.onNext(FluxGroupBy.java:205)
    at reactor.core.publisher.FluxHandle$HandleSubscriber.onNext(FluxHandle.java:112)
    at reactor.core.publisher.FluxMap$MapConditionalSubscriber.onNext(FluxMap.java:213)
    at reactor.core.publisher.FluxMap$MapConditionalSubscriber.onNext(FluxMap.java:213)
    at reactor.netty.channel.FluxReceive.drainReceiver(FluxReceive.java:260)
    at reactor.netty.channel.FluxReceive.onInboundNext(FluxReceive.java:366)
    at reactor.netty.channel.ChannelOperations.onInboundNext(ChannelOperations.java:358)
    at reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:96)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.base/java.lang.Thread.run(Thread.java:834)
So I changed the client code from
@Configuration
public class ClientConfiguration {

    @Bean
    public RSocket rSocket() {
        return RSocketFactory.connect()
                .mimeType(MimeTypeUtils.APPLICATION_JSON_VALUE, MimeTypeUtils.APPLICATION_JSON_VALUE)
                .frameDecoder(PayloadDecoder.ZERO_COPY)
                .transport(TcpClientTransport.create(7000))
                .start()
                .block();
    }

    @Bean
    RSocketRequester rSocketRequester(RSocketStrategies rSocketStrategies) {
        return RSocketRequester.wrap(rSocket(), MimeTypeUtils.APPLICATION_JSON, MimeTypeUtils.APPLICATION_JSON, rSocketStrategies);
    }
}
to
@Configuration
public class ClientConfiguration {

    @Bean
    RSocketRequester rSocketRequester(RSocketStrategies rSocketStrategies) {
        return RSocketRequester.builder()
                .rsocketStrategies(rSocketStrategies)
                .connectTcp("localhost", 7000)
                .block();
    }
}
This small change helps; the exception no longer occurs. The other issue, which is the actual point of this question, is that requests from the client (requester) are processed one by one by the server (responder). I created a SoapUI REST project and ran the GET request from 2 threads. It looks like the server uses a single thread, which is not what I expect to achieve.
To keep things simple I will show the whole solution.
Server :
Simple controller
@Controller
public class MarketDataRSocketController {

    Logger logger = LoggerFactory.getLogger(MarketDataRSocketController.class);

    private final MarketDataRepository marketDataRepository;

    public MarketDataRSocketController(MarketDataRepository marketDataRepository) {
        this.marketDataRepository = marketDataRepository;
    }

    @MessageMapping("currentMarketData")
    public Mono<MarketData> currentMarketData(MarketDataRequest marketDataRequest) {
        logger.info("Getting data for: " + marketDataRequest);
        Mono<MarketData> result = marketDataRepository.getOne(marketDataRequest.getStock());
        logger.info("Controller thread move forward: " + marketDataRequest);
        return result;
    }

    @MessageExceptionHandler
    public Mono<MarketData> handleException(Exception e) {
        return Mono.just(MarketData.fromException(e));
    }
}
In the repository I added Thread.sleep(10000); just to simulate a long-running operation.
@Component
public class MarketDataRepository {

    Logger logger = LoggerFactory.getLogger(MarketDataRSocketController.class);

    private static final int BOUND = 100;
    private Random random = new Random();

    public Mono<MarketData> getOne(String stock) {
        // return Mono.just(getMarketDataResponse(stock)); // original code from Baeldung
        return Mono.just(stock).map(s -> getMarketDataResponse(s));
    }

    private MarketData getMarketDataResponse(String stock) {
        logger.info("Repository thread go speel ZzzZZ");
        try {
            Thread.sleep(10000);
        } catch (InterruptedException ex) {
            ex.printStackTrace();
        }
        logger.info("Repository thread move forward");
        return new MarketData(stock, random.nextInt(BOUND));
    }
}
Client
Simple client configuration:
@Configuration
public class ClientConfiguration {

    @Bean
    RSocketRequester rSocketRequester(RSocketStrategies rSocketStrategies) {
        return RSocketRequester.builder()
                .rsocketStrategies(rSocketStrategies)
                .connectTcp("localhost", 7000)
                .block();
    }
}
And a simple REST controller that I call from SoapUI:
@RestController
public class MarketDataRestController {

    Logger logger = LoggerFactory.getLogger(MarketDataRestController.class);

    private final Random random = new Random();
    private final RSocketRequester rSocketRequester;

    public MarketDataRestController(RSocketRequester rSocketRequester) {
        this.rSocketRequester = rSocketRequester;
    }

    @GetMapping(value = "/current/{stock}")
    public Publisher<MarketData> current(@PathVariable("stock") String stock) {
        logger.info("Get REST call for stock : " + stock);
        return rSocketRequester.route("currentMarketData")
                .data(new MarketDataRequest(stock))
                .retrieveMono(MarketData.class);
    }
}
When I run the server and the client I get behavior that is incomprehensible to me. From SoapUI I make a single request in 2 threads.
In the client log I get:
2021-09-01 11:30:14,614 INFO [reactor-http-nio-2] com.baeldung.spring.rsocket.client.MarketDataRestController: Get REST call for stock : pko
2021-09-01 11:30:14,691 INFO [reactor-http-nio-3] com.baeldung.spring.rsocket.client.MarketDataRestController: Get REST call for stock : pko
On the server I get logs like the following.
Log from the first request:
// data received from the client
2021-09-01 11:30:14,843 INFO [reactor-http-nio-3] com.baeldung.spring.rsocket.server.MarketDataRSocketController: Getting data for: MarketDataRequest(stock=pko)
// the controller thread moves forward after calling the repository
2021-09-01 11:30:14,844 INFO [reactor-http-nio-3] com.baeldung.spring.rsocket.server.MarketDataRSocketController: Controller thread move forward: MarketDataRequest(stock=pko)
// the repository puts the thread to sleep
2021-09-01 11:30:14,862 INFO [reactor-http-nio-3] com.baeldung.spring.rsocket.server.MarketDataRepository: Repository thread go speel ZzzZZ
// the repository finishes its work
2021-09-01 11:30:24,863 INFO [reactor-http-nio-3] com.baeldung.spring.rsocket.server.MarketDataRepository: Repository thread move forward
The server processes only a single call and simply waits until the repository finishes its job. Then it processes the next request in a similar way:
2021-09-01 11:30:24,874 INFO [reactor-http-nio-3] com.baeldung.spring.rsocket.server.MarketDataRSocketController: Getting data for: MarketDataRequest(stock=pko)
2021-09-01 11:30:24,874 INFO [reactor-http-nio-3] com.baeldung.spring.rsocket.server.MarketDataRSocketController: Controller thread move forward: MarketDataRequest(stock=pko)
2021-09-01 11:30:24,874 INFO [reactor-http-nio-3] com.baeldung.spring.rsocket.server.MarketDataRepository: Repository thread go speel ZzzZZ
2021-09-01 11:30:34,876 INFO [reactor-http-nio-3] com.baeldung.spring.rsocket.server.MarketDataRepository: Repository thread move forward
I don't understand why the server processes the calls one by one. Maybe there is a problem in the code, or maybe I am not understanding something correctly.
Thank you in advance.

In Reactor, by default, everything runs on the calling thread (here the Netty event loop). Calling Thread.sleep blocks that thread and the application freezes. If you would like to simulate a long-running operation, you can use a delay operator instead (delayElement on a Mono, delayElements on a Flux):
.delayElement(Duration.ofSeconds(10));
Note: Reactor BlockHound detects and reports such blocking calls.
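For illustration, here is a minimal sketch of the repository from the question with the blocking sleep replaced by a non-blocking delay (same class and field names as above; on a Mono the operator is delayElement, and by default it re-emits the value on Reactor's parallel scheduler, so the event loop is never blocked):
import java.time.Duration;
import java.util.Random;
import org.springframework.stereotype.Component;
import reactor.core.publisher.Mono;

@Component
public class MarketDataRepository {

    private static final int BOUND = 100;
    private final Random random = new Random();

    public Mono<MarketData> getOne(String stock) {
        // Simulate a slow lookup without blocking: the value is emitted after
        // 10 seconds on a parallel scheduler instead of sleeping the caller thread.
        return Mono.just(stock)
                .map(s -> new MarketData(s, random.nextInt(BOUND)))
                .delayElement(Duration.ofSeconds(10));
    }
}
With this version, two concurrent requests should overlap instead of being served one after the other, because no request ever parks the reactor-http-nio thread.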

Related

How to resolve a memory leak in Spring Cloud Gateway

I am using Spring Cloud Gateway in my service and using the RequestDecorator below as a wrapper in my LoggingFilter.
public class RequestDecorator extends ServerHttpRequestDecorator {

    private final List<DataBuffer> dataBuffers = new ArrayList<>();

    public RequestDecorator(ServerHttpRequest delegate) {
        super(delegate);
        super.getBody()
                .map(
                        dataBuffer -> {
                            dataBuffers.add(dataBuffer);
                            return dataBuffer;
                        })
                .subscribe();
    }

    @Override
    public Flux<DataBuffer> getBody() {
        return copy();
    }

    private Flux<DataBuffer> copy() {
        return Flux.fromIterable(dataBuffers)
                .map(dataBuffer -> dataBuffer.factory().wrap(dataBuffer.asByteBuffer()));
    }
}
When the service is exercised by JMeter for a performance test, I get the memory leak errors below in the logs.
i.n.u.ResourceLeakDetector : - LEAK: ByteBuf.release() was not called before it's garbage-collected. See https://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records:
Created at:
io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:403)
io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:188)
io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:179)
io.netty.channel.unix.PreferredDirectByteBufAllocator.ioBuffer(PreferredDirectByteBufAllocator.java:53)
io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:120)
io.netty.channel.epoll.EpollRecvByteAllocatorHandle.allocate(EpollRecvByteAllocatorHandle.java:75)
io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:785)
io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:499)
io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397)
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.base/java.lang.Thread.run(Thread.java:834)
After checking a few resources online, I found the following comment:
"If you are using DataBuffer you might get the same error. Spring has DataBufferUtils library to release the resource."
DataBufferUtils.release(dataBuffer);
But I would like to know how exactly I should use this in my decorator class, since I am using this wrapper in my LoggingFilter.
Can anyone please advise?
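For illustration only, here is a minimal sketch (not verified against the full gateway filter chain) of how DataBufferUtils.release could be applied to the cached buffers; the helper name and the point at which it would be invoked (for example once the LoggingFilter has finished reading the copy) are hypothetical:
import java.util.List;
import org.springframework.core.io.buffer.DataBuffer;
import org.springframework.core.io.buffer.DataBufferUtils;

// Hypothetical helper: release every cached buffer once logging is done,
// so Netty's leak detector stops reporting them.
final class CachedBodyCleanup {

    static void releaseAll(List<DataBuffer> cached) {
        cached.forEach(DataBufferUtils::release); // decrements each buffer's reference count
        cached.clear();
    }

    private CachedBodyCleanup() {
    }
}
Where exactly to trigger such a cleanup (for example when the exchange completes) is the open part of this question.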

How to manage tracing with Spring WebClient in Reactive way?

I have a method in EventService that calls an API and handles errors if there are any.
private Mono<ApiResponse> send(String accountId) {
    accountIdBaggageField.updateValue(accountId);
    logger.info("making the call");
    Mono<ApiResponse> res = apiClient.dispatchEvent(accountId);
    return res.doOnError(e -> {
        logger.error("Could not dispatch batch for events");
    });
}
Here is the SleuthConfiguration class that defines accountIdBaggageField bean:
@Configuration
public class SleuthConfiguration {

    @Bean
    public BaggageField accountIdBaggageField() {
        return BaggageField.create(LoggingContextVariables.MDC_ACCOUNT_ID);
    }

    @Bean
    public BaggagePropagationCustomizer baggagePropagationCustomizer(BaggageField accountIdBaggageField) {
        return factoryBuilder -> {
            factoryBuilder.add(remote(accountIdBaggageField));
        };
    }

    @Bean
    public CorrelationScopeCustomizer correlationScopeCustomizer(BaggageField accountIdBaggageField) {
        return builder -> {
            builder.add(createCorrelationScopeConfig(accountIdBaggageField));
        };
    }

    private CorrelationScopeConfig createCorrelationScopeConfig(BaggageField field) {
        return CorrelationScopeConfig.SingleCorrelationField.newBuilder(field)
                .flushOnUpdate()
                .build();
    }
}
Here is the ApiClient's dispatchEvent method:
public Mono<ApiResponse> dispatchEvent(String accountId) {
    return webClient
            .post()
            .uri(properties.getEndpoints().getDispatchEvent(), Map.of("accountId", accountId))
            .retrieve()
            .onStatus(HttpStatus::isError, this::constructException)
            .bodyToMono(ApiResponse.class)
            .onErrorMap(WebClientRequestException.class, e -> new CGWException("Error during dispatching event to Connector Gateway", e));
}
Here is how I call the send method:
eventService.send("account1");
eventService.send("account2");
The problem here is that accountIdBaggageField is first set to "account1", and then the I/O starts when apiClient.dispatchEvent is called. Before that I/O completes (before a response comes back from the API), the second call takes place and accountIdBaggageField is set to "account2".
Then, when the response to the first request arrives, the error log in doOnError records the accountId as "account2", but it should record it as "account1".
Here are the logs:
2023-01-09 11:50:56.791 INFO [account1] [Thread-1] c.t.e.s.impl.EventServiceImpl making the call
2023-01-09 11:50:56.812 INFO [account2] [Thread-1] c.t.e.s.impl.EventServiceImpl making the call
2023-01-09 11:50:58.241 INFO [account2] [reactor-http-nio-4] c.t.e.s.impl.EventServiceImpl Could not dispatch batch for events
2023-01-09 11:50:58.281 INFO [account2] [reactor-http-nio-6] c.t.e.s.impl.EventServiceImpl Could not dispatch batch for events
As can be seen in the logs, the third log line should have shown account1 instead of account2.
How can I fix this situation?
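This is not an answer to the baggage/MDC question itself, but as an illustrative sketch (same names as in the question, assuming an SLF4J-style logger): using the accountId that is already captured by the lambda, rather than reading it back from the shared baggage field, at least ties the error log to the right call:
private Mono<ApiResponse> send(String accountId) {
    accountIdBaggageField.updateValue(accountId);
    logger.info("making the call");
    return apiClient.dispatchEvent(accountId)
            // accountId is captured by this lambda, so it stays "account1" for the
            // first call even if the shared baggage field has been overwritten since
            .doOnError(e -> logger.error("Could not dispatch batch for events, accountId={}", accountId, e));
}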

reactor.netty.ReactorNetty$InternalNettyException: io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory

Software versions in use:
spring-webflux-5.3.4,
reactor-core-3.4.4,
spring-data-mongodb-3.1.6
I am building a Spring Boot application that uses Spring WebClient to invoke an image service that serves a PDF back.
The returned PDF is then stored in MongoDB using Spring's ReactiveGridFsTemplate.
For performance testing I have the service return a 120 MB PDF every time.
The first invocation of the service and storing of the returned PDF in MongoDB works fine and completes in under 10 seconds.
However, from the second invocation onward, I start getting the following error while storing the returned PDF in MongoDB. Can someone advise on what I am doing wrong?
Caused by: io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 1056964615, max: 1073741824)
at io.netty.util.internal.PlatformDependent.incrementMemoryCounter(PlatformDependent.java:776)
at io.netty.util.internal.PlatformDependent.allocateDirectNoCleaner(PlatformDependent.java:731)
at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:645)
at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:621)
at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:204)
at io.netty.buffer.PoolArena.tcacheAllocateNormal(PoolArena.java:188)
at io.netty.buffer.PoolArena.allocate(PoolArena.java:138)
at io.netty.buffer.PoolArena.allocate(PoolArena.java:128)
at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:378)
at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:187)
at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:178)
at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:139)
at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:114)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:150)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:825)
Code to build webclient:
WebClient webClient = WebClient.builder()
        .filter(WebClientFilter.logRequest())   // for logging request
        .filter(WebClientFilter.logResponse())  // for logging response
        .exchangeStrategies(ExchangeStrategies.builder()
                .codecs(configurer -> configurer.defaultCodecs().maxInMemorySize(5242880)).build())
        .build();
Code to invoke image service using webclient:
Flux<DataBuffer> imageFlux = webClient.method(httpmethod).uri(uri)
        .bodyValue((payloadBody == null) ? StringUtils.EMPTY : payloadBody.toPayloadBody())
        .accept(MediaType.ALL)
        .exchangeToFlux(response -> {
            logger.log(Level.DEBUG, "DefaultHttpClient exchangeToFlux got response with status code {}", response.statusCode());
            if (response.statusCode().is4xxClientError() || response.statusCode().is5xxServerError()) {
                logger.log(Level.ERROR,
                        "DefaultHttpClient exchangeToFlux encountered error {} throwing service exception",
                        response.statusCode());
                return Flux.error(new ServiceException(response.bodyToMono(String.class).flatMap(body -> {
                    return Mono.just(body);
                }), response.rawStatusCode()));
            }
            return response.bodyToFlux(DataBuffer.class);
        });
Code to store the PDF returned by the image service in MongoDB, using Spring's ReactiveGridFsTemplate (imageFlux is what I receive above):
protected Mono<ObjectId> getMono(Flux<DataBuffer> imageFlux, DocumentContext documentContext) {
    return reactiveGridFsTmpl.store(imageFlux, new java.util.Date() + ApplicationConstants.PDF_EXTENSION,
            <org.bson.Document object with attributes from application>);
}
Here's how I am firing the store call, by subscribing to the Mono returned by getMono(....). Within onComplete and onError I have tried to release the data buffer:
Mono<ObjectId> imageObjectId = getMono(imageFlux, documentContext);
imageObjectId.subscribe(new Subscriber<ObjectId>() {

    @Override
    public void onComplete() {
        logger.log(Level.DEBUG, SUBSCRIPTION_ON_COMPLETE);
        DataBufferUtils.release(imageFlux.blockFirst()); // attempt to release data buffer
        logger.log(Level.DEBUG, SUBSCRIPTION_ON_COMPLETE_RELEASE_DATABUFFER);
    }

    @Override
    public void onError(Throwable t) {
        logger.log(Level.ERROR, SUBSCRIPTION_ON_ERROR + t);
        if (t instanceof ServiceException) {
            logger.log(Level.ERROR, "DocumentDao caught ServiceException.");
            flagErrorRecord((ServiceException) t, documentContext);
        }
        DataBufferUtils.release(imageFlux.blockFirst()); // attempt to release data buffer
        logger.log(Level.ERROR, SUBSCRIPTION_ON_ERROR_RELEASE_DATABUFFER);
    }

    @Override
    public void onNext(ObjectId t) {
        logger.log(Level.DEBUG, SUBSCRIPTION_ON_NEXT + t.toString());
    }

    @Override
    public void onSubscribe(Subscription s) {
        logger.log(Level.DEBUG, SUBSCRIPTION_ON_SUBSCRIBE);
        s.request(1);
    }
});
Try changing the direct memory limit using the JAVA_OPTS environment variable:
JBP_CONFIG_JAVA_OPTS: '{ java_opts: "-XX:MaxDirectMemorySize=2048m" }'
I see that 1G is not sufficient, so try setting it to 2G.

Quarkus EventBus requestAndForget - timeout error in logs

When trying to use Quarkus (version 2.9.2.Final) EventBus requestAndForget with a @ConsumeEvent method that returns void, the following exception appears in the logs, even though the processing completes without any problem.
OK
2022-06-07 09:44:04,064 ERROR [io.qua.mut.run.MutinyInfrastructure] (vert.x-eventloop-thread-1) Mutiny had to drop the following exception: (TIMEOUT,-1) Timed out after waiting 30000(ms) for a reply. address: __vertx.reply.3, repliedAddress: receivedSomeEvent
The consumer code:
@ApplicationScoped
public class ConsumerManiac {

    @ConsumeEvent(value = "receivedSomeEvent")
    public void consume(SomeEvent someEvent) {
        System.out.println("OK");
    }
}
The Producer code (a REST endpoint):
public class SomeResource {

    private final EventBus eventBus;

    @Inject
    public SomeResource(EventBus eventBus) {
        this.eventBus = eventBus;
    }

    @POST
    public Response send(@Valid SomeEvent someEvent) {
        eventBus.requestAndForget("receivedSomeEvent", someEvent);
        return Response.accepted().build();
    }
}
If the consumer method is changed to return some value, then the exception no longer appears in the logs.
@ApplicationScoped
public class ConsumerManiac {

    @ConsumeEvent(value = "receivedSomeEvent")
    public String consume(SomeEvent someEvent) {
        System.out.println("OK");
        return "ok";
    }
}
Is there any piece of code I am missing so that the exception does not occur (even though the processing completes without any problem)?
Reference: https://quarkus.io/guides/reactive-event-bus#implementing-fire-and-forget-interactions
Full stacktrace:
2022-06-07 09:44:04,064 ERROR [io.qua.mut.run.MutinyInfrastructure] (vert.x-eventloop-thread-1) Mutiny had to drop the following exception: (TIMEOUT,-1) Timed out after waiting 30000(ms) for a reply. address: __vertx.reply.3, repliedAddress: receivedSomeEvent
    at io.vertx.core.eventbus.impl.ReplyHandler.handle(ReplyHandler.java:76)
    at io.vertx.core.eventbus.impl.ReplyHandler.handle(ReplyHandler.java:24)
    at io.vertx.core.impl.VertxImpl$InternalTimerHandler.handle(VertxImpl.java:893)
    at io.vertx.core.impl.VertxImpl$InternalTimerHandler.handle(VertxImpl.java:860)
    at io.vertx.core.impl.EventLoopContext.emit(EventLoopContext.java:50)
    at io.vertx.core.impl.DuplicatedContext.emit(DuplicatedContext.java:168)
    at io.vertx.core.impl.AbstractContext.emit(AbstractContext.java:53)
    at io.vertx.core.impl.VertxImpl$InternalTimerHandler.run(VertxImpl.java:883)
    at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98)
    at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:170)
    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:469)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:503)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.base/java.lang.Thread.run(Thread.java:833)
I had to return an arbitrary value to avoid this exception.
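As an aside, not from the original post: if no reply is ever needed, the Vert.x event bus also offers send(), which delivers the message to a single consumer without registering a reply handler, so no 30-second reply timeout is armed. A minimal sketch of the producer using it:
@POST
public Response send(@Valid SomeEvent someEvent) {
    // send() is fire-and-forget at the event-bus level: no reply address is created,
    // so a void consumer cannot cause a reply timeout in the logs.
    eventBus.send("receivedSomeEvent", someEvent);
    return Response.accepted().build();
}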

Spring 4 WebSocket + SockJS: Getting "No matching method found" in trace and @MessageMapping handler not invoked

Controller code:
@Controller
public class SockController {

    @MessageMapping(value = "/chat")
    public void chatReveived(Message message, Principal principal) {
        ...
        LOGGER.debug("chatReveived message [{}]", message);
        ...
    }
}
WebSocketConfig:
@Configuration
@EnableWebSocketMessageBroker
@EnableScheduling
public class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {

    @Override
    public void configureMessageBroker(MessageBrokerRegistry config) {
        config.enableSimpleBroker("/queue/", "/topic/");
        config.setApplicationDestinationPrefixes("/app");
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/portfolio").withSockJS();
    }
}
Javascript:
var socket = new SockJS('/portfolio');
var stompClient = Stomp.over(socket);
stompClient.connect({}, function(frame) {
...
});
stompClient.send("/app/chat", {}, JSON.stringify(message))
With this code, the frontend is able to connect to the server over WebSocket and send a message. But the @MessageMapping handler method chatReveived() doesn't get called.
frontend output:
Opening Web Socket...
Web Socket Opened...
>>> CONNECT
accept-version:1.1,1.0
heart-beat:10000,10000
<<< CONNECTED
user-name:1
heart-beat:0,0
version:1.1
>>> SEND
destination:/app/chat
content-length:35
{"from":{"userId":1},"text":"ssss"}
server output:
[21:19:26.551] TRACE org.springframework.web.socket.handler.LoggingWebSocketHandlerDecorator: TextMessage payload= SEND
desti.., byteCount=82, last=true], SockJsSession[id=mdibjok1, state=OPEN, sinceCreated=8504, sinceLastActive=8504]
-
[21:19:26.551] DEBUG org.springframework.messaging.simp.stomp.StompDecoder: Decoded [Payload byte[35]][Headers={stompCommand=SEND, nativeHeaders={content-length=[35], destination=[/app/chat]}, simpMessageType=MESSAGE, simpDestination=/app/chat, id=b7f01f0b-db3e-911d-60dc-c7275f8ef306, timestamp=1407201566551}]
-
[21:19:26.551] TRACE org.springframework.web.socket.messaging.StompSubProtocolHandler: Received message from client session=mdibjok1
-
[21:19:26.551] TRACE org.springframework.messaging.support.ExecutorSubscribableChannel: [clientInboundChannel] sending message id=0f482da7-fee0-d8f1-4b47-bd993eaee80d
-
[21:19:26.551] TRACE org.springframework.messaging.support.ChannelInterceptorChain: postSend (sent=true) message id 0f482da7-fee0-d8f1-4b47-bd993eaee80d
-
[21:19:26.552] DEBUG org.springframework.messaging.simp.annotation.support.SimpAnnotationMethodMessageHandler: Handling message, lookupDestination=/chat
-
[21:19:26.553] DEBUG org.springframework.messaging.simp.annotation.support.SimpAnnotationMethodMessageHandler: No matching method found
-
[21:19:26.553] TRACE org.springframework.messaging.simp.broker.SimpleBrokerMessageHandler: Ignoring message to destination=/app/chat
-
[21:19:26.554] TRACE org.springframework.messaging.simp.user.DefaultUserDestinationResolver: Ignoring message to /app/chat, not a "user" destination
It seems like it is not able to find the handler method. Any idea where I went wrong?
My environment is: Tomcat 8.0.9, Spring 4.0.6.RELEASE, Spring Security 3.2.4.RELEASE, JDK 7
The WebSocketConfig configuration class should be loaded as part of the servlet configuration, not the root configuration.
Make sure WebSocketConfig.class is returned from your implementation of AbstractAnnotationConfigDispatcherServletInitializer.getServletConfigClasses(), not from getRootConfigClasses().
If you put it in the root context, almost everything will still work: you can use SimpMessagingTemplate and a broker relay, but not @MessageMapping in controllers.
You can set a breakpoint in WebSocketAnnotationMethodMessageHandler.initControllerAdviceCache() and check which beans are loaded in the context. If there is no @Controller-annotated bean in that method, @MessageMapping will not work.
Try @ComponentScan in WebSocketConfig so that Spring can find the controller.
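A minimal sketch of the initializer placement described above; AppInitializer, RootConfig and WebMvcConfig are placeholder names for whatever the application already uses:
public class AppInitializer extends AbstractAnnotationConfigDispatcherServletInitializer {

    @Override
    protected Class<?>[] getRootConfigClasses() {
        // root context: services, security, etc. (no @MessageMapping controllers here)
        return new Class<?>[] { RootConfig.class };
    }

    @Override
    protected Class<?>[] getServletConfigClasses() {
        // WebSocketConfig (plus the config that component-scans SockController) must be
        // part of the DispatcherServlet context for the @MessageMapping handler to be found
        return new Class<?>[] { WebMvcConfig.class, WebSocketConfig.class };
    }

    @Override
    protected String[] getServletMappings() {
        return new String[] { "/" };
    }
}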
