Kafka consumer with TLS. Performance issue

In my application I have Kafka configured to work with TLS, and I have a few consumers, each of which polls the broker for new messages.
The problem is that if I have 5 consumers, each polling every 100 ms, I end up with a ton of SSL handshakes.
I am aware of "session resumption", which I used to use in web services. My question:
Is there any way to tell the Kafka consumer that it doesn't need to perform a handshake each time, and can instead reuse the symmetric key that was created during the first handshake?

A TLS handshake is only performed when the connection is created. See https://en.wikipedia.org/wiki/Transport_Layer_Security#TLS_handshake
Kafka consumers keep the connection alive between polls, so each consumer only performs a TLS handshake once per broker it uses, at startup.
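For illustration, here is a minimal sketch of a TLS-enabled consumer (the broker address, topic name, and truststore path are placeholders). The handshake happens when the KafkaConsumer first connects to each broker; every subsequent poll() reuses that connection.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class TlsConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9093");   // placeholder broker address (SSL listener)
        props.put("group.id", "my-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("security.protocol", "SSL");
        props.put("ssl.truststore.location", "/path/to/truststore.jks");  // placeholder path
        props.put("ssl.truststore.password", "changeit");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));    // placeholder topic
            while (true) {
                // poll() reuses the TCP+TLS connections opened at startup;
                // no new handshake is performed per poll.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                records.forEach(r -> System.out.println(r.value()));
            }
        }
    }
}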

Related

Spring websocket/broker fail over

I have the following design:
Machine1: WebsocketApp + ActiveMQ broker
Machine2: WebsocketApp + ActiveMQ broker
Machine3: WebsocketApp + ActiveMQ broker
Machine4: WebsocketApp + ActiveMQ broker
The clients will use STOMP over WebSockets through an F5 load balancer to connect to the ActiveMQ brokers. They can land on any machine based on the load factor.
For failover scenarios, how do we share the WebSocket sessions between the ActiveMQ brokers? Otherwise, if a broker goes down, all the sessions it is holding go down with it.
STOMP is a very simple protocol. It has no support for fail-over.
If the broker to which a STOMP client is connected goes down in your environment then that client's connection will go down and all the messages on that broker will be unavailable until the broker comes back up. The client will need to reconnect to another broker via the F5 URL.
STOMP connections are not like HTTP. They are stateful. Client "session" data is not shared among brokers. If a client's broker goes down then it cannot simply carry on as if nothing happened, as is often possible in HTTP use-cases.
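As a rough sketch of what that client-side reconnect can look like with Spring's STOMP client (the load-balancer URL and destination are assumptions, and a real client would add a back-off between attempts):

import org.springframework.messaging.simp.stomp.StompHeaders;
import org.springframework.messaging.simp.stomp.StompSession;
import org.springframework.messaging.simp.stomp.StompSessionHandlerAdapter;
import org.springframework.web.socket.client.standard.StandardWebSocketClient;
import org.springframework.web.socket.messaging.WebSocketStompClient;

public class ReconnectingStompClient extends StompSessionHandlerAdapter {

    private static final String F5_URL = "ws://f5-vip.example.com/ws"; // placeholder load-balancer URL

    private final WebSocketStompClient stompClient =
            new WebSocketStompClient(new StandardWebSocketClient());

    public void connect() {
        stompClient.connect(F5_URL, this);
    }

    @Override
    public void afterConnected(StompSession session, StompHeaders connectedHeaders) {
        // Subscriptions do not survive a broker failure, so re-subscribe on every connect.
        session.subscribe("/topic/updates", this); // placeholder destination
    }

    @Override
    public void handleTransportError(StompSession session, Throwable exception) {
        // The broker behind the F5 went away; open a fresh connection, which the
        // load balancer will route to one of the surviving brokers.
        connect();
    }
}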

What is the ideal way to store the consumer offset using spring boot kafka consumer client?

I have a Spring Kafka consumer application. The application acts as a pass-through: it polls messages from the Kafka broker and sends them to IBM MQ. What would be the best/simplest approach to storing the offset in case of failure?
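(For context, such a pass-through typically looks like the sketch below; the topic and queue names are placeholders.)

import org.springframework.jms.core.JmsTemplate;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class PassThroughListener {

    private final JmsTemplate jmsTemplate;

    public PassThroughListener(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    @KafkaListener(topics = "inbound-topic")                 // placeholder topic
    public void forward(String message) {
        // Forward the Kafka record to IBM MQ; whether the offset is committed
        // after a failure here depends on the container's error handling (see below).
        jmsTemplate.convertAndSend("OUTBOUND.QUEUE", message); // placeholder queue
    }
}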
The simplest approach is to use the default mechanism of storing the offsets in Kafka itself.
If you add a SeekToCurrentErrorHandler, the container will keep redelivering records that fail in the listener, up to 10 times by default, but it can be configured for infinite retries.
If you add stateful retry, the listener adapter can add a delay between delivery attempts.
See Stateful Retry in the spring-kafka reference documentation.
ackOnError should be set to false.
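A minimal sketch of that configuration (constructor arguments for the error handler and the ackOnError setter vary across spring-kafka versions, and the consumer factory is assumed to be defined elsewhere):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;

@Configuration
public class KafkaConsumerConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // Re-seek failed records so the container redelivers them; the retry limit
        // depends on the spring-kafka version and any configured back-off.
        factory.setErrorHandler(new SeekToCurrentErrorHandler());
        // Do not commit the offset of a record whose listener threw an exception.
        factory.getContainerProperties().setAckOnError(false);
        return factory;
    }
}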

How do I distribute JMS Listener Connections to ActiveMQ Network of Brokers using Spring Boot JMS?

Our JMS listener application connects to an ActiveMQ network of brokers through a load balancer, which we are told distributes connections amongst the brokers in a round-robin fashion. Our Spring Boot application creates a connection via the load balancer, which in turn feeds the connection to one of the brokers in the network. If a message is published to the brokers, it would be delivered a lot quicker if it landed on the broker that the JMS listener connection lives on. However, the likelihood of that occurring is slim unless we can distribute the connections across the brokers.
I've tried increasing the concurrency in the DefaultJmsListenerContainerFactory (see the sketch below), but that didn't do the trick. I was thinking about extending AbstractJmsListenerContainerFactory and somehow creating a Map of DefaultMessageListenerContainer instances, but it looks like createListenerContainer will only return an instance of whatever AbstractJmsListenerContainerFactory is parameterized with, and we cannot parameterize it with an instance of Map.
We are using Spring Boot 1.5.14.RELEASE.
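For reference, increasing the listener concurrency typically looks like the sketch below (the connection factory is whatever Spring Boot auto-configures; the concurrency range is illustrative); all of those concurrent consumers still belong to a single listener container.

import javax.jms.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.config.DefaultJmsListenerContainerFactory;

@Configuration
public class JmsConfig {

    @Bean
    public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(ConnectionFactory connectionFactory) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        // 3 to 10 concurrent consumers, all created by the same listener container.
        factory.setConcurrency("3-10");
        return factory;
    }
}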
== UPDATE ==
I've been playing around with the classes above, and it seems to be inherent in Spring JMS that a JMS listener is associated with a single message listener container, which in turn is associated with a single (potentially shared) connection.
For any folks whose JMS application listeners connect to a load-balanced network of brokers: are you creating a single connection to a single broker, and if so, do you see significant performance degradation as a result of the network of brokers having to move inbound messages to the broker that has the consumers?

What does Spring JMS ActiveMQ use to determine when a broker should switch Exclusive Consumers?

An exclusive consumer in ActiveMQ is one that is sent every message from a broker until that consumer dies or goes away, at which time the broker switches to another consumer.
What is it that defines when the switchover takes place? How do you configure this in Spring JMS/ActiveMQ?
It's not Spring JMS doing the checking; it's the JMS provider, ActiveMQ.
JMS is an API specification; an empty framework, essentially. ActiveMQ provides the implementation backing for managing connections, message brokering, load-balancing, fail-over, etc.
The ActiveMQ broker handles switching-over consumers based on queue properties (you don't need to do anything special in your code):
Queue queue = new ActiveMQQueue("TEST.QUEUE?consumer.exclusive=true");
The switch-over takes place when either the consumer disconnects gracefully or the broker determines that the consumer has disappeared (via the wireFormat.maxInactivityDuration elapsing without any messages or keep-alives being received). You don't have to configure anything if you're happy with the default value of wireFormat.maxInactivityDuration (30 seconds), but you can tweak that if you want to change how long it takes before the broker gives up on a client.
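If you do want to tune it, the option goes on the connection URI. A sketch with a Spring bean (the broker host and the 10-second value are illustrative; the same option is normally mirrored on the broker's transportConnector URI):

import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ActiveMqConfig {

    @Bean
    public ActiveMQConnectionFactory connectionFactory() {
        // Lower the inactivity timeout so the broker gives up on a vanished exclusive
        // consumer (and switches to the next one) after 10 seconds instead of 30.
        return new ActiveMQConnectionFactory(
                "failover:(tcp://broker-host:61616?wireFormat.maxInactivityDuration=10000)");
    }
}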

A webapp that uses Spring AMQP: is that considered to be 1 client?

Hi there, I am wondering: if I create a webapp that uses Spring AMQP, is that single webapp 1 AMQP client? Or is every user request that results in an AMQP call a client, so potentially x number of clients?
I don't know AMQP well, but I suspect it uses the same terminology as JMS. In that sense, your application is probably pooling connections to the AMQP broker for better performance. Each connection in the pool is treated as a separate client (competing consumer).
Thus each request does not really create a new connection (client), but your application isn't a single client either. In fact, when your application accesses the AMQP broker, it picks any connection from the pool and puts it back once it's done. Another request can reuse the same connection (client) or use a different, idle one.
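For what it's worth, with Spring AMQP (RabbitMQ) this behaviour is governed by the CachingConnectionFactory: by default it caches channels on one shared connection, while CONNECTION cache mode keeps several connections, each of which the broker sees as a separate client. A sketch (host name and cache size are placeholders):

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitConfig {

    @Bean
    public CachingConnectionFactory rabbitConnectionFactory() {
        CachingConnectionFactory cf = new CachingConnectionFactory("rabbit-host"); // placeholder host
        // Cache whole connections instead of channels; each cached connection
        // appears to the broker as its own client.
        cf.setCacheMode(CachingConnectionFactory.CacheMode.CONNECTION);
        cf.setConnectionCacheSize(5);
        return cf;
    }
}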

Resources