Gateway for micro services without ports - spring-boot

I need an API Gateway that acts as the "hub" for all my applications, but none of them will have fixed ports, because they'll never be accessed directly and I can't choose a port since I don't know whether that port will be free on the server. If this is possible, I haven't found a way to do it. Is there a tutorial or document with an example of that?
I don't know if it's a bug or if I just misunderstood how to do it, but I couldn't find much information about it by googling around.
I have an old application on Spring Boot 1.5.2 that uses the Zuul dependencies and can call micro services without fixed ports; I think it uses Eureka's instance ID. Is this possible with Spring Cloud Gateway?
My API Gateway application.properties
server.port = 8888
spring.application.name = api-gateway
ribbon.ServerListRefreshInterval = 1
ribbon.eureka.enabled = true
ribbon.eureka.ReadTimeout = 60000
ribbon.eureka.ConnectTimeout = 300000
## EUREKA-SERVICE
eureka.client.serviceUrl.defaultZone = ${EUREKA_URI:http://localhost:8761/eureka}
eureka.instance.instance.preferIpAddress = true
eureka.instance.instance.instance-id = ${spring.application.name}:${server.port}:${random.int}
#eureka.hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds = 60000
hystrix.command.default.execution.timeout.enabled=false
spring.cloud.gateway.enabled = true
spring.cloud.gateway.x-forwarded.port-enabled = false
## ROUTE 0 -> PERSON-SERVICE
spring.cloud.gateway.routes.0.id = person
spring.cloud.gateway.routes.0.instance = person-service
spring.cloud.gateway.routes.0.uri = http://localhost
spring.cloud.gateway.routes.0.serviceUrl = http://localhost
spring.cloud.gateway.routes.0.predicates = Path=/person/api/**
spring.cloud.gateway.routes.0.ribbon.ReadTimeout = 150000
logging.level.org.springframework.cloud.gateway = DEBUG
logging.level.reactor.netty.http.client = DEBUG
My Person Service application.properties
## SERVIDOR
server.port=0
server.address=localhost
server.servlet.contextPath=/person/api
spring.application.name = person-service
## EUREKA
eureka.client.healthcheck.enabled=true
eureka.instance.preferIpAddress=1
eureka.instance.instance-id=${spring.application.name}:${server.port}:${random.int}
eureka.client.serviceUrl.defaultZone=${EUREKA_URI:http://localhost:8761/eureka}
The error log:
2021-01-28 10:00:25.402 DEBUG 5340 --- [ctor-http-nio-3] o.s.c.g.h.RoutePredicateHandlerMapping : Route matched: person
2021-01-28 10:00:25.403 DEBUG 5340 --- [ctor-http-nio-3] o.s.c.g.h.RoutePredicateHandlerMapping : Mapping [Exchange: GET http://localhost:8888/person/api/users] to Route{id='person', uri=http://localhost:80, order=0, predicate=Paths: [/person/api/**], match trailing slash: true, gatewayFilters=[], metadata={}}
2021-01-28 10:00:25.403 DEBUG 5340 --- [ctor-http-nio-3] o.s.c.g.h.RoutePredicateHandlerMapping : [5074d3a6-1] Mapped to org.springframework.cloud.gateway.handler.FilteringWebHandler#31dd80d9
2021-01-28 10:00:25.403 DEBUG 5340 --- [ctor-http-nio-3] o.s.c.g.handler.FilteringWebHandler : Sorted gatewayFilterFactories: [[GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.RemoveCachedBodyFilter#aa4d8cc}, order = -2147483648], [GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.AdaptCachedBodyGlobalFilter#242a209e}, order = -2147482648], [GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.NettyWriteResponseFilter#66213a0d}, order = -1], [GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.ForwardPathFilter#70c0a3d5}, order = 0], [GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.RouteToRequestUrlFilter#3cb8c8ce}, order = 10000], [GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.ReactiveLoadBalancerClientFilter#1835d3ed}, order = 10150], [GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.WebsocketRoutingFilter#5c8e67b9}, order = 2147483646], [GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.NettyRoutingFilter#474c9131}, order = 2147483647], [GatewayFilterAdapter{delegate=org.springframework.cloud.gateway.filter.ForwardRoutingFilter#1fde0371}, order = 2147483647]]
2021-01-28 10:00:27.574 ERROR 5340 --- [ctor-http-nio-5] a.w.r.e.AbstractErrorWebExceptionHandler : [5074d3a6-1] 500 Server Error for HTTP GET "/person/api/users"
io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: no further information: localhost/127.0.0.1:80
Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Error has been observed at the following site(s):
|_ checkpoint ⇢ org.springframework.cloud.gateway.filter.WeightCalculatorWebFilter [DefaultWebFilterChain]
|_ checkpoint ⇢ HTTP GET "/person/api/users" [ExceptionHandlingWebHandler]
Stack trace:
Caused by: java.net.ConnectException: Connection refused: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_271]
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:715) ~[na:1.8.0_271]
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330) ~[netty-transport-4.1.58.Final.jar:4.1.58.Final]
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334) ~[netty-transport-4.1.58.Final.jar:4.1.58.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:707) ~[netty-transport-4.1.58.Final.jar:4.1.58.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655) ~[netty-transport-4.1.58.Final.jar:4.1.58.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581) ~[netty-transport-4.1.58.Final.jar:4.1.58.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) ~[netty-transport-4.1.58.Final.jar:4.1.58.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[netty-common-4.1.58.Final.jar:4.1.58.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.58.Final.jar:4.1.58.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.58.Final.jar:4.1.58.Final]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_271]
P.S: Sorry if my English is bad, it's still a WIP!

Okay, now it works somehow.
First, my "person" service wasn't on the same version: it was running Spring Boot 2.3.1 instead of 2.4.2.
Also, it looks like when you have a RestTemplate config class annotated with @LoadBalanced, Spring Boot treats your application as another layer of the load balancer, and you can't reach it just by putting spring.cloud.gateway.routes.0.uri = lb://PERSON-SERVICE in the properties. Removing @LoadBalanced from my config class did the trick.
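For reference, the config class now just exposes a plain RestTemplate bean (a minimal sketch; the class name is only an example):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RestTemplateConfig {

    // Plain RestTemplate, no @LoadBalanced, so the application is not treated
    // as another load-balancer layer when the gateway routes to it via lb://.
    @Bean
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}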
So, what you'll need to run this:
1- An application running as your Eureka server
2- The API Gateway with these route properties:
spring.cloud.gateway.routes.0.id = person
spring.cloud.gateway.routes.0.uri = lb://PERSON-SERVICE
spring.cloud.gateway.routes.0.predicates = Path=/person/api/**
3- A micro service registered with that name, using properties like the ones below:
server.port=0
server.servlet.contextPath=/person/api
spring.application.name = person-service
4- The class annotated with @SpringBootApplication also needs @EnableDiscoveryClient. If you're using JUST @EnableEurekaClient it won't work!
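For example (a minimal sketch; the class name is just a placeholder):
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

// Main class of an application that registers with Eureka.
@SpringBootApplication
@EnableDiscoveryClient
public class PersonServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(PersonServiceApplication.class, args);
    }
}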
And that's enough to call a micro service without a fixed port.
Remember to run mvn clean just to be sure!

Related

STOMP over WebSockets: Spring Boot expects JSON; NodeJs STOMP.js client fails to connect

When trying out STOMP over WebSockets, I noticed inconsistencies between different implementations, namely between a Spring Boot Java implementation and a NodeJs client written with STOMP.js.
When debugging into it, I found that the Spring Boot app expects the CONNECT message to be a JSON array. For instance, this message is sent by their test client (written in JavaScript using the SockJS library):
["CONNECT\naccept-version:1.1,1.0\nheart-beat:10000,10000\n\n\u0000"]
In contrast, my NodeJs STOMP.js test client (code is below) sends the following frame:
CONNECT
accept-version:1.0,1.1,1.2
heart-beat:4000,4000
^#
Unfortunately, I am not experienced with STOMP, but after reading through the specification, I did not understand why Spring Boot expects the data to be represented as a JSON array. Is this a known problem?
To demonstrate, let me share two example runs. One successful run to connect to RabbitMQ, followed by a failed attempt to connect against the Java Spring Boot app. (A reproducible setup with the code can be found at the end.)
Connect to a RabbitMQ instance, which is configured to use STOMP over WebSockets (running on ws://localhost:15674/ws):
$ node client.js
Opening Web Socket...
Web Socket Opened...
>>> CONNECT
accept-version:1.0,1.1,1.2
heart-beat:4000,4000
Received data
<<< CONNECTED
server:RabbitMQ/3.8.8
session:session-WkKD6rN5BNc_ObKpziikYA
heart-beat:4000,4000
version:1.2
connected to server RabbitMQ/3.8.8
send PING every 4000ms
check PONG every 4000ms
onConnect called
<<< PONG
Received data
<<<
<<< PONG
>>> PING
Received data
<<<
Now connect (unsuccessfully) against the Spring Boot app (ws://localhost:5555/chat/123/k2qn3dl7/websocket):
$ node client.js
Opening Web Socket...
Web Socket Opened...
>>> CONNECT
accept-version:1.0,1.1,1.2
heart-beat:4000,4000
Received data
<<< o
Received data
<<< c[1007,""]
Connection closed to ws://localhost:5555/chat/123/k2qn3dl7/websocket
STOMP: scheduling reconnection in 5000ms
Opening Web Socket...
Web Socket Opened...
>>> CONNECT
accept-version:1.0,1.1,1.2
heart-beat:4000,4000
Received data
<<< o
^C
The reason why it fails is that Jackson (the JSON parser) failed to parse that payload:
CONNECT
accept-version:1.0,1.1,1.2
heart-beat:4000,4000
^#
As said, in the client that comes with the Spring Boot example, the payload looked like that:
["CONNECT\naccept-version:1.1,1.0\nheart-beat:10000,10000\n\n\u0000"]
Here is the full error in the Spring Boot app:
2021-07-22 13:58:59.546 INFO 74313 --- [nio-5555-exec-1] o.s.web.servlet.DispatcherServlet : Completed initialization in 1 ms
2021-07-22 13:58:59.594 ERROR 74313 --- [nio-5555-exec-1] s.w.s.s.t.s.WebSocketServerSockJsSession : Broken data received. Terminating WebSocket connection abruptly
com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'CONNECT': was expecting (JSON String, Number, Array, Object or token 'null', 'true' or 'false')
at [Source: (String)"CONNECT
accept-version:1.0,1.1,1.2
heart-beat:4000,4000
"; line: 1, column: 8]
at com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:2337) ~[jackson-core-2.12.3.jar:2.12.3]
at com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:720) ~[jackson-core-2.12.3.jar:2.12.3]
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._reportInvalidToken(ReaderBasedJsonParser.java:2903) ~[jackson-core-2.12.3.jar:2.12.3]
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._handleOddValue(ReaderBasedJsonParser.java:1949) ~[jackson-core-2.12.3.jar:2.12.3]
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextToken(ReaderBasedJsonParser.java:781) ~[jackson-core-2.12.3.jar:2.12.3]
at com.fasterxml.jackson.databind.ObjectMapper._initForReading(ObjectMapper.java:4684) ~[jackson-databind-2.12.3.jar:2.12.3]
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4586) ~[jackson-databind-2.12.3.jar:2.12.3]
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3548) ~[jackson-databind-2.12.3.jar:2.12.3]
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3516) ~[jackson-databind-2.12.3.jar:2.12.3]
at org.springframework.web.socket.sockjs.frame.Jackson2SockJsMessageCodec.decode(Jackson2SockJsMessageCodec.java:64) ~[spring-websocket-5.3.8.jar:5.3.8]
at org.springframework.web.socket.sockjs.transport.session.WebSocketServerSockJsSession.handleMessage(WebSocketServerSockJsSession.java:187) ~[spring-websocket-5.3.8.jar:5.3.8]
at org.springframework.web.socket.sockjs.transport.handler.SockJsWebSocketHandler.handleTextMessage(SockJsWebSocketHandler.java:93) ~[spring-websocket-5.3.8.jar:5.3.8]
at org.springframework.web.socket.handler.AbstractWebSocketHandler.handleMessage(AbstractWebSocketHandler.java:43) ~[spring-websocket-5.3.8.jar:5.3.8]
at org.springframework.web.socket.adapter.standard.StandardWebSocketHandlerAdapter.handleTextMessage(StandardWebSocketHandlerAdapter.java:114) ~[spring-websocket-5.3.8.jar:5.3.8]
at org.springframework.web.socket.adapter.standard.StandardWebSocketHandlerAdapter.access$000(StandardWebSocketHandlerAdapter.java:43) ~[spring-websocket-5.3.8.jar:5.3.8]
at org.springframework.web.socket.adapter.standard.StandardWebSocketHandlerAdapter$3.onMessage(StandardWebSocketHandlerAdapter.java:85) ~[spring-websocket-5.3.8.jar:5.3.8]
at org.springframework.web.socket.adapter.standard.StandardWebSocketHandlerAdapter$3.onMessage(StandardWebSocketHandlerAdapter.java:82) ~[spring-websocket-5.3.8.jar:5.3.8]
at org.apache.tomcat.websocket.WsFrameBase.sendMessageText(WsFrameBase.java:415) ~[tomcat-embed-websocket-9.0.46.jar:9.0.46]
at org.apache.tomcat.websocket.server.WsFrameServer.sendMessageText(WsFrameServer.java:129) ~[tomcat-embed-websocket-9.0.46.jar:9.0.46]
at org.apache.tomcat.websocket.WsFrameBase.processDataText(WsFrameBase.java:515) ~[tomcat-embed-websocket-9.0.46.jar:9.0.46]
at org.apache.tomcat.websocket.WsFrameBase.processData(WsFrameBase.java:301) ~[tomcat-embed-websocket-9.0.46.jar:9.0.46]
at org.apache.tomcat.websocket.WsFrameBase.processInputBuffer(WsFrameBase.java:133) ~[tomcat-embed-websocket-9.0.46.jar:9.0.46]
at org.apache.tomcat.websocket.server.WsFrameServer.onDataAvailable(WsFrameServer.java:85) ~[tomcat-embed-websocket-9.0.46.jar:9.0.46]
at org.apache.tomcat.websocket.server.WsFrameServer.doOnDataAvailable(WsFrameServer.java:183) ~[tomcat-embed-websocket-9.0.46.jar:9.0.46]
at org.apache.tomcat.websocket.server.WsFrameServer.notifyDataAvailable(WsFrameServer.java:162) ~[tomcat-embed-websocket-9.0.46.jar:9.0.46]
at org.apache.tomcat.websocket.server.WsHttpUpgradeHandler.upgradeDispatch(WsHttpUpgradeHandler.java:156) ~[tomcat-embed-websocket-9.0.46.jar:9.0.46]
at org.apache.coyote.http11.upgrade.UpgradeProcessorInternal.dispatch(UpgradeProcessorInternal.java:60) ~[tomcat-embed-core-9.0.46.jar:9.0.46]
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:59) ~[tomcat-embed-core-9.0.46.jar:9.0.46]
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:893) ~[tomcat-embed-core-9.0.46.jar:9.0.46]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1707) ~[tomcat-embed-core-9.0.46.jar:9.0.46]
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) ~[tomcat-embed-core-9.0.46.jar:9.0.46]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[na:na]
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) ~[tomcat-embed-core-9.0.46.jar:9.0.46]
at java.base/java.lang.Thread.run(Thread.java:829) ~[na:na]
2021-07-22 13:59:04.610 ERROR 74313 --- [nio-5555-exec-2] s.w.s.s.t.s.WebSocketServerSockJsSession : Broken data received. Terminating WebSocket connection abruptly
To reproduce, you need:
NodeJs client code
Spring Boot test app
RabbitMQ test instance
Client code written in NodeJs:
// Required dependencies:
// "@stomp/stompjs": "6.1.0"
// "websocket": "1.0.34"

// Polyfills. For details see:
// https://stomp-js.github.io/guide/stompjs/rx-stomp/ng2-stompjs/pollyfils-for-stompjs-v5.html
Object.assign(global, { WebSocket: require('websocket').w3cwebsocket });

const StompJs = require('@stomp/stompjs');

const client = new StompJs.Client({
    //brokerURL: 'ws://localhost:15674/ws', // RabbitMQ (should work)
    brokerURL: 'ws://localhost:5555/chat/123/k2qn3dl7/websocket', // Spring app (should fail)
    reconnectDelay: 5000,
    heartbeatIncoming: 4000,
    heartbeatOutgoing: 4000,
    logRawCommunication: true,
    debug: (x) => console.log(x),
});

client.onConnect = function (frame) {
    console.log('onConnect called');
};

client.activate();
The Spring Boot app can be found here. I started it on port 5555:
git clone git@github.com:eugenp/tutorials.git
cd tutorials/spring-websockets
SERVER_PORT=5555 mvn spring-boot:run
Note: if you then go to http://localhost:5555, you will see a chat application served by the Spring Boot app. When you click connect, a STOMP connection will be established.
To start RabbitMQ, you can use the Docker container used for the tests in STOMP.js:
git clone git@github.com:stomp-js/stompjs.git
cd stompjs
sudo docker build -t myrabbitmq rabbitmq/
sudo docker run --rm -p 15674:15674 myrabbitmq
In short: the JSON messages were not "STOMP over native WebSockets" but "STOMP over SockJS". The additional JSON layer was introduced by the SockJS protocol, which is used in the Spring Boot example application.
Here is the longer story. It turned out that my endpoint was wrong. Instead of
'ws://localhost:5555/chat/123/k2qn3dl7/websocket'
it should have been
'ws://localhost:5555/chat'
I had the wrong URI because I copied it from the output I saw in the browser. Instead, I should have looked at the configuration:
@Override
public void registerStompEndpoints(final StompEndpointRegistry registry) {
    registry.addEndpoint("/chat");
    registry.addEndpoint("/chat").withSockJS();
    registry.addEndpoint("/chatwithbots");
    registry.addEndpoint("/chatwithbots").withSockJS();
}
Now the confusing part. As can be seen from the configuration, the Spring Boot application also registers fallbacks with SockJS.
If you remove the fallback, the confusing error message goes away. When the fallback is active, Spring tries to process the request as SockJS. That is why it tries to parse the STOMP frame as JSON, which results in the misleading error message.
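For comparison, here is what the endpoint registration looks like with the SockJS fallbacks stripped out (a sketch; the surrounding class name is mine, the tutorial's config class may be named differently):
import org.springframework.context.annotation.Configuration;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;
import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void registerStompEndpoints(final StompEndpointRegistry registry) {
        // Native WebSocket endpoints only; without the .withSockJS() registrations
        // a plain STOMP.js client can connect to ws://localhost:5555/chat directly.
        registry.addEndpoint("/chat");
        registry.addEndpoint("/chatwithbots");
    }
}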
In addition, I got confused by the JavaScript client used in the Spring Boot example:
function connect() {
    var socket = new SockJS('/chat');
    stompClient = Stomp.over(socket);
    stompClient.connect({}, function(frame) {
        setConnected(true);
        console.log('Connected: ' + frame);
        stompClient.subscribe('/topic/messages', function(messageOutput) {
            showMessageOutput(JSON.parse(messageOutput.body));
        });
    });
}
It does not connect over native WebSocket but over SockJS. That explains why Firefox shows JSON frames rather than the expected STOMP frames.

RSocket channel error : "reactor.core.publisher.Operators.error - Operator called default onErrorDropped" with merged flux

I want to create an RSocket channel where the data sent by the server can be either a response to a client request or a server push. I use a flux merge for that.
It's reference data: the client can ask for a refresh, and the server can also push updates.
So I have this on the server side:
@MessageMapping("update-stream")
Flux<DomainObject> addUpdatesListener(Flux<RefreshRequest> requests) {
    Flux<DomainObject> pushFlux = Flux.from(this.flux)
            .doOnError((e) -> log.error("Error on push flux : {}", e, e));
    return requests
            .map(this::getUpdates)
            .flatMap(Flux::fromIterable)
            .doOnError((e) -> log.error("Error on channel flux : {}", e, e))
            .mergeWith(pushFlux)
            .doOnError((e) -> log.error("Error on merged flux : {}", e, e));
}
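For context, the client opens the channel roughly like this (a sketch using Spring's RSocketRequester; the requester setup is omitted, RefreshRequest/DomainObject are the same types as above, and the no-arg RefreshRequest constructor is just an assumption):
requester.route("update-stream")
        // refresh requests sent by the client
        .data(Flux.just(new RefreshRequest()), RefreshRequest.class)
        // responses to those requests plus server-pushed updates
        .retrieveFlux(DomainObject.class)
        .subscribe(update -> log.info("Update received : {}", update));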
It works, except that when I stop the client I get the following error:
06-07-2020 15:58:53.168 [reactor-http-nio-3] ERROR reactor.core.publisher.Operators.error - Operator called default onErrorDropped
java.util.concurrent.CancellationException: Disposed
at reactor.core.publisher.FluxProcessor.dispose(FluxProcessor.java:80)
at io.rsocket.core.RSocketResponder$3.hookOnCancel(RSocketResponder.java:513)
at reactor.core.publisher.BaseSubscriber.cancel(BaseSubscriber.java:230)
at java.base/java.lang.Iterable.forEach(Iterable.java:75)
at io.rsocket.core.RSocketResponder.cleanUpSendingSubscriptions(RSocketResponder.java:275)
at io.rsocket.core.RSocketResponder.cleanup(RSocketResponder.java:265)
at io.rsocket.core.RSocketResponder.tryTerminate(RSocketResponder.java:167)
at io.rsocket.core.RSocketResponder.tryTerminateOnConnectionClose(RSocketResponder.java:160)
at reactor.core.publisher.LambdaMonoSubscriber.onComplete(LambdaMonoSubscriber.java:132)
at reactor.core.publisher.MonoProcessor$NextInner.onComplete(MonoProcessor.java:518)
at reactor.core.publisher.MonoProcessor.onNext(MonoProcessor.java:308)
at reactor.core.publisher.MonoProcessor.onComplete(MonoProcessor.java:265)
at io.rsocket.internal.BaseDuplexConnection.dispose(BaseDuplexConnection.java:23)
at io.rsocket.transport.netty.TcpDuplexConnection.lambda$new$0(TcpDuplexConnection.java:60)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:570)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:549)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:604)
at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84)
at io.netty.channel.AbstractChannel$CloseFuture.setClosed(AbstractChannel.java:1158)
at io.netty.channel.AbstractChannel$AbstractUnsafe.doClose0(AbstractChannel.java:760)
at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:736)
at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:607)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.closeOnRead(AbstractNioByteChannel.java:105)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:171)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
If I don't do the merge, I have no error.
I have tried many variations, but I can't find a way to have both the push and no error logged when the client quits.
What am I missing?
Thanks a lot.
The problem disappears when upgrading from Spring Boot 2.3.0.RELEASE to 2.3.1.RELEASE.

Error while connecting to Spring Boot RSocket server from RSocket-Java Client

I am having an issue connecting to a Spring Boot RSocket application over TCP. The client works fine when using RSocketRequester, but when I try to connect using the RSocketFactory client I keep getting errors. Code below.
RSocket rSocket = this.client = RSocketFactory
        .connect()
        .mimeType(WellKnownMimeType.MESSAGE_RSOCKET_ROUTING.toString(), MediaType.APPLICATION_JSON_VALUE)
        .frameDecoder(PayloadDecoder.ZERO_COPY)
        .transport(TcpClientTransport.create("localhost", 7000))
        .start()
        .block();

Flux<Payload> s = rSocket.requestStream(DefaultPayload.create("1234", "socket"));
s.subscribe();
This gives the error below:
java.lang.IndexOutOfBoundsException: readerIndex(1) + length(115) exceeds writerIndex(6): AbstractPooledDerivedByteBuf$PooledNonRetainedSlicedByteBuf(ridx: 1, widx: 6, cap: 6/6, unwrapped: PooledUnsafeDirectByteBuf(ridx: 27, widx: 27, cap: 1024))
at io.netty.buffer.AbstractByteBuf.checkReadableBytes0(AbstractByteBuf.java:1477)
at io.netty.buffer.AbstractByteBuf.checkReadableBytes(AbstractByteBuf.java:1463)
at io.netty.buffer.AbstractByteBuf.readSlice(AbstractByteBuf.java:880)
at io.rsocket.metadata.TaggingMetadata$1.next(TaggingMetadata.java:47)
at io.rsocket.metadata.TaggingMetadata$1.next(TaggingMetadata.java:37)
at org.springframework.messaging.rsocket.DefaultMetadataExtractor.extractEntry(DefaultMetadataExtractor.java:136)
at org.springframework.messaging.rsocket.DefaultMetadataExtractor.extract(DefaultMetadataExtractor.java:119)
at org.springframework.messaging.rsocket.annotation.support.MessagingRSocket.createHeaders(MessagingRSocket.java:195)
at org.springframework.messaging.rsocket.annotation.support.MessagingRSocket.handleAndReply(MessagingRSocket.java:167)
at org.springframework.messaging.rsocket.annotation.support.MessagingRSocket.requestStream(MessagingRSocket.java:127)
at io.rsocket.RSocketResponder.requestStream(RSocketResponder.java:207)
at io.rsocket.RSocketResponder.handleFrame(RSocketResponder.java:310)
at reactor.core.publisher.LambdaSubscriber.onNext(LambdaSubscriber.java:160)
at reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onNext(MonoFlatMapMany.java:242)
at reactor.core.publisher.FluxGroupBy$UnicastGroupedFlux.drainRegular(FluxGroupBy.java:554)
at reactor.core.publisher.FluxGroupBy$UnicastGroupedFlux.drain(FluxGroupBy.java:630)
at reactor.core.publisher.FluxGroupBy$UnicastGroupedFlux.subscribe(FluxGroupBy.java:696)
at reactor.core.publisher.Flux.subscribe(Flux.java:8174)
at reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain.onNext(MonoFlatMapMany.java:188)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1637)
at reactor.core.publisher.MonoProcessor.onNext(MonoProcessor.java:317)
at io.rsocket.internal.ClientServerInputMultiplexer.lambda$new$1(ClientServerInputMultiplexer.java:116)
at reactor.core.publisher.LambdaSubscriber.onNext(LambdaSubscriber.java:160)
at reactor.core.publisher.FluxGroupBy$GroupByMain.drainLoop(FluxGroupBy.java:380)
at reactor.core.publisher.FluxGroupBy$GroupByMain.drain(FluxGroupBy.java:316)
at reactor.core.publisher.FluxGroupBy$GroupByMain.onNext(FluxGroupBy.java:201)
at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:114)
at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:114)
at reactor.netty.channel.FluxReceive.drainReceiver(FluxReceive.java:218)
at reactor.netty.channel.FluxReceive.onInboundNext(FluxReceive.java:351)
at reactor.netty.channel.ChannelOperations.onInboundNext(ChannelOperations.java:348)
at reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:90)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:377)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:355)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:321)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:295)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:377)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:355)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:377)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:830)
As I understand it (from other threads on Stack Overflow), this particular error is caused by the way Netty frames the message, but how do I solve it?
The server is a Spring Boot (Spring 5+) RSocket application, but the client only uses RSocket-Java.
The problem is the MIME types.
In your case the server expects CBOR, but you are sending application/json.
The code solution: initialize the RSocketRequester as in the example below and your client will send CBOR, which you can verify by enabling debug logging: logging.level.io.rsocket.FrameLogger: DEBUG. That's all you need for a hello world; no custom strategies or factory implementations are required on the client side.
@Bean
RSocketRequester rSocketRequester(RSocketStrategies strategies) {
    return RSocketRequester
            .builder()
            .rsocketStrategies(strategies)
            .connectTcp("127.0.0.1", 7000)
            .retry(5)
            .block();
}
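With that bean in place, the request-stream call from the question becomes something like this (a sketch; "socket" is the route taken from the original payload, and the String response type is just an assumption):
Flux<String> responses = rSocketRequester
        .route("socket")   // routing metadata instead of DefaultPayload.create("1234", "socket")
        .data("1234")      // request data
        .retrieveFlux(String.class);
responses.subscribe(System.out::println);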
By the way, I didn't manage to get JSON working on both sides, even with a custom Encoder & Decoder on each side. I guess the reason is that there is no CBOR-to-Jackson converter, only the other way around: org.springframework.http.codec.cbor.Jackson2CborEncoder
Building on this, use the following to generate the routing metadata for a raw RSocket-Java client.
// Build composite metadata that carries the route for the request.
CompositeByteBuf metadata = ByteBufAllocator.DEFAULT.compositeBuffer();
RoutingMetadata routingMetadata = TaggingMetadataCodec.createRoutingMetadata(
        ByteBufAllocator.DEFAULT, List.of("/route"));
CompositeMetadataCodec.encodeAndAddMetadata(metadata,
        ByteBufAllocator.DEFAULT,
        WellKnownMimeType.MESSAGE_RSOCKET_ROUTING,
        routingMetadata.getContent());
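That composite metadata can then be attached to the payload, roughly like this (a sketch; it assumes the connection's metadata MIME type is message/x.rsocket.composite-metadata.v0 rather than message/x.rsocket.routing.v0):
// The route travels in the composite metadata; the request data stays in the payload body.
ByteBuf data = ByteBufAllocator.DEFAULT.buffer().writeBytes("1234".getBytes(StandardCharsets.UTF_8));
Flux<Payload> responses = rSocket.requestStream(DefaultPayload.create(data, metadata));
responses.subscribe(payload -> System.out.println(payload.getDataUtf8()));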

About Spring WebClient on external onTerminate event

I'm running Spring Boot v2.0.3 with embedded Tomcat 8.5.31 to serve Spring WebFlux REST services.
One of those REST services calls another, external REST web service.
public Mono<ServerResponse> select(ServerRequest request) {
return request.principal().cast(Authentication.class)
.flatMap(principal ->
client.get().uri(f -> buildUri(request, principal, request.queryParams(), f))
.exchange())
.flatMap((ClientResponse mapper) ->
ServerResponse.status(mapper.statusCode())
.headers(c -> mapper.headers().asHttpHeaders().forEach(c::put))
.body(mapper.bodyToFlux(DataBuffer.class)
.delayElements(Duration.ofSeconds(10))
.doOnCancel(() -> log.error("Cancelled client"))
.doOnTerminate(() -> log.error("Terminated client")), DataBuffer.class))
.doOnTerminate(() -> log.error("Termination called"));
}
If a browser calls my REST-Service, and after a short while cancels the connection, I can see the outer "Termination called" event, and that the client was terminated also. But the client termination seems to trigger an error in tomcat:
2018-07-25 12:50:42.860 DEBUG 12084 --- [ elastic-3] org.example.search.security.UserManager : Authorizing org.springframework.security.web.authentication.preauth.PreAuthenticatedAuthenticationToken#809aec11: Principal: cn=dv dbsearch client, ou=dbsearch, o=example, l=eb, st=unknown, c=de; Credentials: [PROTECTED]; Authenticated: false; Details: null; Not granted any authorities
2018-07-25 12:50:42.864 DEBUG 12084 --- [ elastic-3] org.example.search.security.UserManager : Successfully authorized: org.springframework.security.web.authentication.preauth.PreAuthenticatedAuthenticationToken#c03925ec: Principal: org.springframework.security.core.userdetails.User#809aec0e: Username: cn=dv dbsearch client, ou=dbsearch, o=example, l=eb, st=unknown, c=de; Password: [PROTECTED]; Enabled: true; AccountNonExpired: true; credentialsNonExpired: true; AccountNonLocked: true; Granted Authorities: ROLE_ADMIN; Credentials: [PROTECTED]; Authenticated: true; Details: null; Granted Authorities: ROLE_ADMIN
2018-07-25 12:50:45.470 ERROR 12084 --- [ctor-http-nio-4] c.d.s.s.h.SolrSelectRequestHandler : Termination called
2018-07-25 12:51:15.562 ERROR 12084 --- [ parallel-3] c.d.s.s.h.SolrSelectRequestHandler : Terminated client
2018-07-25 12:51:15.625 ERROR 12084 --- [nio-8443-exec-2] o.s.w.s.adapter.HttpWebHandlerAdapter : Unhandled failure: Eine bestehende Verbindung wurde softwaregesteuert durch den Hostcomputer abgebrochen, response already set (status=200)
2018-07-25 12:51:15.628 WARN 12084 --- [nio-8443-exec-2] o.s.h.s.r.ServletHttpHandlerAdapter : Handling completed with error: Eine bestehende Verbindung wurde softwaregesteuert durch den Hostcomputer abgebrochen
2018-07-25 12:51:15.652 ERROR 12084 --- [nio-8443-exec-2] o.a.catalina.connector.CoyoteAdapter : Exception while processing an asynchronous request
java.lang.IllegalStateException: Calling [asyncError()] is not valid for a request with Async state [DISPATCHING]
at org.apache.coyote.AsyncStateMachine.asyncError(AsyncStateMachine.java:424)
at org.apache.coyote.AbstractProcessor.action(AbstractProcessor.java:470)
at org.apache.coyote.Request.action(Request.java:431)
at org.apache.catalina.core.AsyncContextImpl.setErrorState(AsyncContextImpl.java:388)
at org.apache.catalina.connector.CoyoteAdapter.asyncDispatch(CoyoteAdapter.java:176)
at org.apache.coyote.AbstractProcessor.dispatch(AbstractProcessor.java:232)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:53)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:790)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1468)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)
Sorry for the German error messages; they mean "an existing connection was aborted by the software of the host machine".
I don't really have a problem with this error message per se; it's just that the buffers in Spring's WebClient don't seem to be cleaned up (I did not reproduce this log locally, so it has different timestamps):
2018-07-23 08:44:36.892 ERROR 22707 — [reactor-http-nio-5] io.netty.util.ResourceLeakDetector : LEAK: ByteBuf.release() was not called before it's garbage-collected. See http://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records:
Created at:
io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:331)
io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:185)
So here is the question: how can I cleanly end the WebClient connection when the request to my REST service is cancelled?
I can't really say for sure about that exception message, but I know Tomcat improved this in the 8.5.x generation. Which version are you using? If you can provide a consistent way to reproduce this with a minimal application, you could create a new issue in jira.spring.io on Spring Framework, or Tomcat itself if you managed to reproduce it without Spring (although it should be a hard one to reproduce).
Now about releasing DataBuffer instances - DataBuffer instances can be pooled, depending on the implementation. Here the WebClient is using Netty, which is pooling buffers. So they need to be released when they're no longer used.
Looking at your implementation, I think those unreleased buffers come from this:
the WebClient is fetching data from the remote endpoint and creating DataBuffer instances
various Reactor operators along the way are buffering those using internal queues (depending on the prefetching and the operators used, the amount of queued buffers can vary)
when the subscriber fails or cancels, those buffers sitting in internal queues are not released as they should be.
Until recently, Reactor did not offer a hook point to reach those objects in such error cases; this is a brand new feature that has been added in Reactor Core 3.2.0. It will be leveraged internally by Spring Framework with SPR-17025. Please follow that issue - your use case might be handy when it comes to testing the fix.
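Once on Reactor Core 3.2 (and a Spring version that integrates it), the idea is roughly the following discard hook (a sketch, not the final Spring Framework fix tracked in SPR-17025):
Flux<DataBuffer> safeBody = mapper.bodyToFlux(DataBuffer.class)
        // Release pooled, Netty-backed buffers that are dropped from internal
        // queues when the downstream cancels or errors.
        .doOnDiscard(DataBuffer.class, DataBufferUtils::release);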

Elasticsearch NoNodeAvailableException

I am getting the following error from Elasticsearch.
HTTP Status 500 - Request processing failed; nested exception is org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: []
org.springframework.web.util.NestedServletException: Request processing failed; nested exception is org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: []
org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:943)
org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:822)
javax.servlet.http.HttpServlet.service(HttpServlet.java:624)
org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:807)
javax.servlet.http.HttpServlet.service(HttpServlet.java:731)
org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
myapp.filter.SimpleCORSFilter.doFilter(SimpleCORSFilter.java:22)
root cause: org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: []
org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:305)
org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:200)
org.elasticsearch.client.transport.support.InternalTransportIndicesAdminClient.execute(InternalTransportIndicesAdminClient.java:86)
org.elasticsearch.client.support.AbstractIndicesAdminClient.exists(AbstractIndicesAdminClient.java:178)
org.elasticsearch.action.admin.indices.exists.indices.IndicesExistsRequestBuilder.doExecute(IndicesExistsRequestBuilder.java:53)
org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:91)
org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:65)
myapp.dao.CommonDao.ValidateTenant(CommonDao.java:7)
myapp.dao.CliffDomainDao.viewCliffDomains(CliffDomainDao.java:55)
myapp.service.CliffDomainService.getall(CliffDomainService.java:53)
myapp.controller.CliffDomainController.getall(CliffDomainController.java:79)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:497)
org.springframework.web.method.support.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:214)
org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:132)
org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:104)
org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandleMethod(RequestMappingHandlerAdapter.java:748)
org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:689)
org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:83)
org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:945)
org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:876)
org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:931)
org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:822)
javax.servlet.http.HttpServlet.service(HttpServlet.java:624)
org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:807)
javax.servlet.http.HttpServlet.service(HttpServlet.java:731)
org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
myapp.filter.SimpleCORSFilter.doFilter(SimpleCORSFilter.java:22)
note: The full stack trace of the root cause is available in the Apache Tomcat/7.0.64 logs.
I am running Elasticsearch 1.7.2 on Ubuntu.
Changes that I have made in elasticsearch.yml
################################### Cluster ###################################
# Cluster name identifies your cluster for auto-discovery. If you're running
# multiple clusters on the same network, make sure you're using unique names.
#
cluster.name: cliffservice
# Set the address other nodes will use to communicate with this node. If not
# set, it is automatically derived. It must point to an actual IP address.
#
#network.publish_host: 192.168.0.1
network.publish_host: 10.100.10.231
# Set both 'bind_host' and 'publish_host':
#
#network.host: 192.168.0.1
Connection code
public TransportClient getClient() {
Settings settings = ImmutableSettings.settingsBuilder().put("cluster.name", "cliffservice").build();
TransportClient client = new TransportClient(settings);
client = client.addTransportAddress(new InetSocketTransportAddress(this.host, this.port));
this.esclient = client;
return client;
}
Getting data from the Elasticsearch client:
SearchResponse response = Connection.getEsclient().prepareSearch("testindex").setTypes("testtype").execute()
        .actionGet();
if (response.getHits().getHits().length > 0) {
    for (SearchHit hit : response.getHits().getHits()) {
        CliffDomain cliffDomain = new CliffDomain();
        cliffDomain.setCliffDomainId(hit.getId());
        cliffDomain.setCliffDomainName((String) hit.sourceAsMap().get("clifftDomainName"));
        searchResponse.add(cliffDomain);
    }
}
What am I doing wrong?
Your client and the server are in different clusters: the client is in assetservice and the server is in cliffservice. Also, make sure that the client can reach port 9300 on 10.100.10.231.
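You can also check this from the client side; with the 1.x TransportClient, an empty connected-node list right after addTransportAddress means the client could not join the cluster (wrong cluster.name or wrong transport port). A quick sketch:
TransportClient client = getClient();
// An empty list here corresponds to the NoNodeAvailableException you are seeing.
System.out.println("Connected nodes: " + client.connectedNodes());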
Finally I figured out the problem. I was using the elasticsearch-1.7.1 client in my Java application. Elasticsearch 1.7.1 is installed on my local computer, but on the server the Elasticsearch version was 1.7.2. That was the problem. I downgraded the server's Elasticsearch to 1.7.1 and now it is working fine.
