I am using the Netty server for a Spring Boot application. Is there any way to monitor the Netty server queue size, so that we know when the queue is full and the server is unable to accept any new requests? Also, is there any logging by the Netty server when the queue is full or it is unable to accept a new request?
Netty does not have any logging for that purpose, but I implemented a way to find the pending tasks and added some logging around it, per your question.
You can find all the code here: https://github.com/ozkanpakdil/spring-examples/tree/master/reactive-netty-check-connection-queue
The code is mostly self-explanatory, but NettyConfigure is what actually does the Netty configuration in the Spring Boot environment. At https://github.com/ozkanpakdil/spring-examples/blob/master/reactive-netty-check-connection-queue/src/main/java/com/mascix/reactivenettycheckconnectionqueue/NettyConfigure.java#L46 you can see how many tasks are pending in the queue. DiscardServerHandler may show you how to discard requests once the limit is reached. You can use JMeter for testing; here is the JMeter file: https://github.com/ozkanpakdil/spring-examples/blob/master/reactive-netty-check-connection-queue/PerformanceTestPlanMemoryThread.jmx
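If you want to read those counts yourself, here is a minimal sketch of summing the pending tasks across the server's event loops. The class name is mine, and it assumes the event loops are Netty SingleThreadEventExecutor instances (which is the case for the standard NIO/epoll groups); it is not taken from the linked repo.

import io.netty.channel.EventLoopGroup;
import io.netty.util.concurrent.EventExecutor;
import io.netty.util.concurrent.SingleThreadEventExecutor;

public final class EventLoopQueueMonitor {

    private EventLoopQueueMonitor() {
    }

    // Sums the pending task counts of every event loop in the group.
    public static int pendingTasks(EventLoopGroup group) {
        int pending = 0;
        for (EventExecutor executor : group) {
            if (executor instanceof SingleThreadEventExecutor) {
                pending += ((SingleThreadEventExecutor) executor).pendingTasks();
            }
        }
        return pending;
    }
}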
If you want to handle the Netty limit yourself, you can do it like the code below:
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
    totalConnectionCount.incrementAndGet();
    if (!ctx.channel().isWritable()) { // means we hit Netty's limit
        System.out.println("I suggest we should restart or put a new server to our pool :)");
    }
    super.channelActive(ctx);
}
You should check https://stackoverflow.com/a/49823055/175554 for handling the limits, and here is another explanation about "isWritable": https://stackoverflow.com/a/44564482/175554
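If you want to tune those limits in a Spring Boot WebFlux app, a sketch like the following adjusts the accept backlog and the write-buffer watermarks that drive isWritable(). The values are illustrative only, and it assumes a recent Spring Boot / Reactor Netty where HttpServer exposes option()/childOption() directly.

import io.netty.channel.ChannelOption;
import io.netty.channel.WriteBufferWaterMark;
import org.springframework.boot.web.embedded.netty.NettyReactiveWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class NettyTuningConfig {

    @Bean
    public WebServerFactoryCustomizer<NettyReactiveWebServerFactory> nettyLimitCustomizer() {
        return factory -> factory.addServerCustomizers(httpServer -> httpServer
                // size of the pending-connection (accept) queue
                .option(ChannelOption.SO_BACKLOG, 128)
                // low/high write-buffer watermarks; isWritable() returns false above the high mark
                .childOption(ChannelOption.WRITE_BUFFER_WATER_MARK,
                        new WriteBufferWaterMark(32 * 1024, 64 * 1024)));
    }
}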
One more extra: I put the actuator in place; http://localhost:8080/actuator/metrics/http.server.requests is nice to check too.
I have a Spring Boot app (Jhipster) that uses STOMP over WebSockets to communicate information from the server to users.
I recently added an ActiveMQ server to handle scaling the app horizontally, with an Amazon auto-scaling group / load-balancer.
I make use of the convertAndSendToUser() method, which works on single instances of the app to locate the authenticated user's "individual queue" so only they receive the message.
However, when I launch the app behind the load balancer, I am finding that messages are only sent to the user if the event is generated on the server on which their WebSocket proxy connection (to the broker) is established.
How do I ensure the message goes through ActiveMQ to whichever instance of the app the user is actually "connected to", regardless of which instance receives, say, an HTTP request that triggers the convertAndSendToUser() call?
For reference here is my StompBrokerRelayMessageHandler:
@Bean
public AbstractBrokerMessageHandler stompBrokerRelayMessageHandler() {
    StompBrokerRelayMessageHandler handler = (StompBrokerRelayMessageHandler) super.stompBrokerRelayMessageHandler();
    handler.setTcpClient(new Reactor2TcpClient<>(
            new StompTcpFactory(orgProperties.getAws().getAmazonMq().getStompRelayHost(),
                    orgProperties.getAws().getAmazonMq().getStompRelayPort(),
                    orgProperties.getAws().getAmazonMq().getSsl())
    ));
    return handler;
}
@Override
public void configureMessageBroker(MessageBrokerRegistry config) {
    config.enableStompBrokerRelay("/queue", "/topic")
            .setSystemLogin(orgProperties.getAws().getAmazonMq().getStompRelayHostUser())
            .setSystemPasscode(orgProperties.getAws().getAmazonMq().getStompRelayHostPass())
            .setClientLogin(orgProperties.getAws().getAmazonMq().getStompRelayHostUser())
            .setClientPasscode(orgProperties.getAws().getAmazonMq().getStompRelayHostPass());
    config.setApplicationDestinationPrefixes("/app");
}
I have found the name corresponding to the queue that is generated on ActiveMQ, keyed by simpSessionId, by examining the headers of the SessionSubscribeEvent that is raised in the listener when a user subscribes to a user queue.
@Override
@EventListener({SessionSubscribeEvent.class})
public void onSessionSubscribeEvent(SessionSubscribeEvent event) {
    log.debug("Session Subscribe Event: {}", event.getMessage().getHeaders().toString());
}
The corresponding queues can be found in ActiveMQ, in the format: {simpDestination}-user{simpSessionId}
Could I save the sessionId in a key-value pair and just push messages onto that topic channel?
I also found some possibilities of setting ActiveMQ-specific STOMP properties in the CONNECT/SUBSCRIBE frame to create durable subscribers: client-id and subscriptionName. If I set these properties, will Spring then understand the routing?
Modifying the MessageBrokerRegistry config resolved the issue:
config.enableStompBrokerRelay("/queue", "/topic")
.setUserDestinationBroadcast("/topic/registry.broadcast")
Based on this paragraph in the documentation section 4.4.13:
In a multi-application server scenario a user destination may remain unresolved because the user is connected to a different server. In such cases you can configure a destination to broadcast unresolved messages to so that other servers have a chance to try. This can be done through the userDestinationBroadcast property of the MessageBrokerRegistry in Java config and the user-destination-broadcast attribute of the message-broker element in XML.
I did not see any documentation on "why" /topic/registry.broadcast was the correct "topic" destination, but I am finding various iterations of it:
websocket sessions sample doesn't cluster.. spring-session-1.2.2
What is MultiServerUserRegistry in spring websocket?
Spring websocket - sendToUser from a cluster does not work from backup server
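For completeness, here is roughly how the broadcast setting slots into the configureMessageBroker() from the question. This is a sketch only; the orgProperties accessors are the ones shown above, not anything new.

@Override
public void configureMessageBroker(MessageBrokerRegistry config) {
    config.enableStompBrokerRelay("/queue", "/topic")
            // broadcast destination for user messages that cannot be resolved on this server
            .setUserDestinationBroadcast("/topic/registry.broadcast")
            .setSystemLogin(orgProperties.getAws().getAmazonMq().getStompRelayHostUser())
            .setSystemPasscode(orgProperties.getAws().getAmazonMq().getStompRelayHostPass())
            .setClientLogin(orgProperties.getAws().getAmazonMq().getStompRelayHostUser())
            .setClientPasscode(orgProperties.getAws().getAmazonMq().getStompRelayHostPass());
    config.setApplicationDestinationPrefixes("/app");
}

The same documentation section also describes a companion setUserRegistryBroadcast(...) setting for sharing the user registry across servers, which may be needed as well depending on your setup.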
In my Spring Boot application I have to implement an import service. Users can submit a bunch of JSON files and the application will try to import the data from these files. Depending on the amount of data in the JSON files, a single import process can take 1 or 2 hours.
I do not want to block the users during the import process, so I plan to accept the task for importing and notify the user that this data is scheduled for processing. I'll put the data into the queue and a free queue consumer on the other end will start the import process. Also, I need the ability to monitor the jobs in the queue and terminate them if needed.
Right now I'm thinking of using embedded Apache ActiveMQ in order to introduce the message producer and consumer logic, but before that I'd like to ask, from the architectural point of view: is it a good choice for the described task, or can it be implemented with more appropriate tools, for example plain Spring @Async, and so on?
It is possible to process files concurrently with Camel like this:
from("file://incoming?maxMessagesPerPoll=1&idempotent=true&moveFailed=failed&move=processed&readLock=none").threads(5).process()
Take a look at http://camel.apache.org/file2.html
But I think that for your requirements it is better to use a standalone ActiveMQ, a standalone service to move files to ActiveMQ, and a standalone consumer, so that you can kill or restart each one independently.
It is better to use ActiveMQ, as you said, and you can easily create a service to move messages to a queue with Camel like this:
CamelContext context = new DefaultCamelContext();
ConnectionFactory connectionFactory = new ActiveMQConnectionFactory("vm://localhost?broker.persistent=true");
context.addComponent("test-jms", JmsComponent.jmsComponentAutoAcknowledge(connectionFactory));
context.addRoutes(new RouteBuilder() {
    public void configure() {
        // convertBodyTo(String.class) so the payload is sent as a TextMessage; you could also send them as files to the queue
        from("file://testFolderPath").convertBodyTo(String.class).to("test-jms:queue:test.queue");
    }
});
context.start();
Here are some examples:
http://www.programcreek.com/java-api-examples/index.php?api=org.apache.camel.component.jms.JmsComponent
https://skills421.wordpress.com/2014/02/08/sending-local-files-to-a-jms-queue/
https://github.com/apache/camel/blob/master/examples/camel-example-jms-file/src/main/java/org/apache/camel/example/jmstofile/CamelJmsToFileExample.java
https://github.com/apache/camel/tree/master/examples
To monitor and manage, you can use JMX with VisualVM or Hawtio: http://hawt.io/getstarted/index.html
http://camel.apache.org/camel-jmx.html
To consume, you can use a DefaultMessageListenerContainer with concurrent consumers on the queue. For this you need to change the prefetchPolicy on the ConnectionFactory of the DefaultMessageListenerContainer; see Multithreaded JMS client ActiveMQ.
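A rough sketch of that consumer side might look like this; the broker URL, queue name, and consumer count are placeholders, not values from the question.

import javax.jms.Message;
import javax.jms.MessageListener;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.ActiveMQPrefetchPolicy;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class ImportConsumerConfig {

    public DefaultMessageListenerContainer importListenerContainer() {
        ActiveMQConnectionFactory connectionFactory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");

        // Lower the queue prefetch so long-running imports are spread across the
        // concurrent consumers instead of being buffered by the first one.
        ActiveMQPrefetchPolicy prefetchPolicy = new ActiveMQPrefetchPolicy();
        prefetchPolicy.setQueuePrefetch(1);
        connectionFactory.setPrefetchPolicy(prefetchPolicy);

        MessageListener listener = (Message message) -> {
            // run the long import here
        };

        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setDestinationName("import.queue");
        container.setConcurrentConsumers(5);
        container.setMessageListener(listener);
        return container;
    }
}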
We have a Spring application where a Redis cache has been implemented along with the MySQL database. Here we are using the Redis cache to store temporary values for server-side validations instead of hitting the database every time, since hitting the database on every call reduces system performance.
Now I'll explain my problem, which shows up while hitting the Spring Boot actuator endpoints: if my Redis cache server suddenly stops, we would like to know how to get a notification that the Redis cache server is down. So we need a solution / example Java application that gets such a notification, using a Redis cache listener context or anything like that.
Redis doesn't work that way. In fact, no remote service will notify your application that it's down. Usually, it's the other way round: If the service you're consuming is accessed with a more or less sophisticated client, you might take advantage of the client's features.
Asynchronous clients that run I/O or monitoring threads can help here. More specifically, it depends on the client you're using with Spring Boot and Redis. Jedis is a plain client that reacts on a request basis. Lettuce allows you to register a RedisConnectionStateListener that is called on specific connection events, such as connected/disconnected:
RedisClient redisClient = …;

redisClient.addListener(new RedisConnectionStateListener() {

    @Override
    public void onRedisConnected(RedisChannelHandler<?, ?> redisChannelHandler) {
    }

    @Override
    public void onRedisDisconnected(RedisChannelHandler<?, ?> redisChannelHandler) {
    }

    @Override
    public void onRedisExceptionCaught(RedisChannelHandler<?, ?> redisChannelHandler, Throwable throwable) {
    }
});
When using Spring Data Redis, retrieving the RedisClient from LettuceConnectionFactory might be a bit tricky as it is a private field. Hence it requires reflection.
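If you go down that route, a hedged sketch could look like this. The field name "client" is an assumption about LettuceConnectionFactory internals and may differ between Spring Data Redis versions, and the io.lettuce.core package assumes Lettuce 5.

import java.lang.reflect.Field;

import io.lettuce.core.RedisClient;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;

public final class LettuceClientAccessor {

    private LettuceClientAccessor() {
    }

    // Pulls the underlying Lettuce RedisClient out of the connection factory via reflection.
    public static RedisClient extractClient(LettuceConnectionFactory factory) {
        try {
            Field clientField = LettuceConnectionFactory.class.getDeclaredField("client");
            clientField.setAccessible(true);
            return (RedisClient) clientField.get(factory);
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("Could not access the underlying Lettuce client", e);
        }
    }
}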
I have a (legacy) TCP service that has multiple processes. Each process runs on the same host, but on a different port. The service is single threaded, so the way to increase throughput is to round-robin each request across each of the ports.
I am providing an AMQP exposure to this legacy application. It's very simple: take a string off the AMQP queue, pass it to the application, and return the response string to the AMQP reply queue.
This works great on a single port. However, I'd like to fan the requests out across all the ports.
Spring Integration seems to only provide AbstractClientConnectionFactory implementations that either connect directly to a single host/port (TcpNetClientConnectionFactory) or maintain a pool of connections to a single host/port (CachingClientConnectionFactory). There aren't any that pool connections between a single host and multiple ports.
I have attempted to write my own AbstractClientConnectionFactory that maintains a pool of AbstractClientConnectionFactory objects and round-robins between them. However, I have struck several issues to do with handling the TCP connections when the target service goes away or the network is interrupted, which I have not been able to solve.
There is also the approach taken by this question: Spring Integration 4 - configuring a LoadBalancingStrategy in Java DSL but the solution to that was to hardcode the number of endpoints. In my case, the number of endpoints is only known at runtime and is a user-configurable setting.
So, basically I need to create a TcpOutboundGateway per port dynamically at runtime and somehow register it in my IntegrationFlow. I have attempted the following:
@Bean
public IntegrationFlow xmlQueryWorkerIntegrationFlow() {
    SimpleMessageListenerContainer inboundQueue = getMessageListenerContainer();
    DirectChannel rabbitReplyChannel = MessageChannels.direct().get();

    IntegrationFlowBuilder builder = IntegrationFlows
            .from(Amqp.inboundGateway(inboundQueue)
                    .replyChannel(rabbitReplyChannel))
            /* SOMEHOW DO THE ROUND ROBIN HERE */
            // I have tried:
            .channel(handlerChannel()) // doesn't work, the gateways don't get started and the message doesn't get sent to the gateway
            // and I have also tried:
            .handle(gateway1)
            .handle(gateway2) // doesn't work, it chains the handlers instead of round-robining between them
            //
            .transform(new ObjectToStringTransformer())
            .channel(rabbitReplyChannel);
    return builder.get();
}
@Bean
// my attempt at dynamically adding handlers to the same channel and load balancing between them
public DirectChannel handlerChannel() {
    DirectChannel channel = MessageChannels.direct().loadBalancer(new RoundRobinLoadBalancingStrategy()).get();
    for (AbstractClientConnectionFactory factory : generateConnections()) {
        channel.subscribe(generateTcpOutboundGateway(factory));
    }
    return channel;
}
Does anyone know how I can solve this problem?
See the dynamic FTP sample; in essence, each outbound gateway goes in its own application context and the dynamic router routes to the appropriate channel (for which the outbound adapter is created on demand if necessary).
Although the sample uses XML, you can do the same thing with java configuration, or even with the Java DSL.
See my answer to a similar question for multiple IMAP mail adapters using Java configuration and then a follow-up question.
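As a variation on that idea, if your Spring Integration version provides IntegrationFlowContext (5.0+), you could register one TCP outbound gateway flow per port at runtime. In the sketch below the channel names and flow id are placeholders, not part of the answer above, and all gateways subscribe to the same DirectChannel, whose default dispatcher round-robins between its subscribers.

import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.context.IntegrationFlowContext;
import org.springframework.integration.ip.dsl.Tcp;

public class DynamicTcpGatewayRegistrar {

    private final IntegrationFlowContext flowContext;

    public DynamicTcpGatewayRegistrar(IntegrationFlowContext flowContext) {
        this.flowContext = flowContext;
    }

    // Registers one TCP outbound gateway per port; all of them subscribe to "tcpRequestChannel",
    // so messages sent to that channel are round-robined across the ports.
    public void registerGateways(String host, int... ports) {
        for (int port : ports) {
            IntegrationFlow flow = IntegrationFlows
                    .from("tcpRequestChannel")
                    .handle(Tcp.outboundGateway(Tcp.netClient(host, port)))
                    .channel("tcpReplyChannel")
                    .get();
            flowContext.registration(flow).id("tcpGateway-" + port).register();
        }
    }
}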
I am trying to stream time series data using the Spring Framework SimpMessagingTemplate (default STOMP implementation) to broadcast messages to a topic that the SockJS client has subscribed to. However, the messages are received out of order. The server is single-threaded and messages are sent in ascending order by their timestamps. The client somehow receives the messages out of order.
I am using the latest release version of both stompjs and springframework (4.1.6 release).
Looks like there is a built-in striped executor, so just enable it:
@Override
protected void configureMessageBroker(MessageBrokerRegistry registry) {
    // ...
    registry.setPreservePublishOrder(true);
}
https://docs.spring.io/spring/docs/current/spring-framework-reference/web.html#websocket-stomp-ordered-messages
Found the root cause of this issue. The messages were being sent in the "correct" order from the application implementation's perspective (i.e., convertAndSend() is called from one thread, or at least in a thread-safe fashion). However, Spring Framework's WebSocket support uses a reactor-tcp implementation which processes the messages on the clientOutboundChannel from a thread pool. Thus the messages can be written to the TCP socket in a different order than they arrived. When I configured the WebSocket support to limit the clientOutboundChannel to 1 thread, the order is preserved.
This problem is not in SockJS but is a limitation of the current Spring WebSocket design.
It's a Spring WebSocket design problem. To receive messages in a valid order you have to set the corePoolSize of the WebSocket client channels to 1.
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketMessageBrokerConfiguration extends AbstractWebSocketMessageBrokerConfigurer {

    @Override
    public void configureClientOutboundChannel(ChannelRegistration registration) {
        registration.taskExecutor().corePoolSize(1);
    }

    @Override
    public void configureClientInboundChannel(ChannelRegistration registration) {
        registration.taskExecutor().corePoolSize(1);
    }
}
UPDATE
Please see @Jason's answer. Spring 5.1 has setPreservePublishOrder() to order the messages based on their client ID.
I experienced this issue as well. I don't like limiting my thread pool size to 1, as this will cause overhead in my application. Instead, I used a StripedExecutorService to process messages coming in and out of my application. This type of executor service guarantees ordered processing of messages for tasks that have the same stripe. For me, I use the WebSocket session ID as the stripe. Register this executor via ChannelRegistration.taskExecutor() on your inbound, broker, and outbound channels, and this will guarantee ordered messages. Choose your stripe wisely.