JMS vs Kafka in specific conditions

I am reading about both technologies, mainly Kafka, and comparing with JMS to understand it better.
Kafka guarantees ordered delivery and supports multiple subscribers. How does Kafka achieve this?
Kafka has multiple partitions. With one consumer per partition we can guarantee ordering, and with multiple partitions we can achieve load balancing, so both are possible at the same time.
In the case of JMS, if we have multiple queues, isn't that the same as Kafka?
Q1: Which is better in this scenario?
Q2: Am I looking at this too narrowly? Does Kafka do more than this?
Please advise me.
Even if I am wrong about JMS, please let me know.

I was asking myself the same question before :)
As you wrote, Kafka guarantees ordered delivery only within a single partition. Period. If you are using multiple partitions (which is a must to get parallelism), then it is possible that a consumer listening on several partitions gets message A from partition 1 before message B from partition 2, even though message B arrived first.
Now, about the differences between Kafka and JMS. In JMS, you have queues and you have topics. With a queue, once the first consumer consumes a message, the others cannot take it anymore. With a topic, every subscriber receives each message, but it is much harder to scale. Kafka's consumer group is a generalization of these two concepts: it allows scaling between members of the same consumer group, and it also allows broadcasting the same messages to many different consumer groups.
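To make that concrete, here is a minimal sketch with the plain Java client (the topic name "orders", the group names and the broker address are just placeholders): two consumers sharing a group.id split the partitions between them like competing consumers on a JMS queue, while a consumer with a different group.id receives every message again, like a JMS topic subscriber.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class GroupDemo {
    static KafkaConsumer<String, String> newConsumer(String groupId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", groupId);
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(List.of("orders"));
        return consumer;
    }

    public static void main(String[] args) {
        // Two members of "billing" compete for partitions; "audit" also receives everything.
        KafkaConsumer<String, String> billing1 = newConsumer("billing");
        KafkaConsumer<String, String> billing2 = newConsumer("billing");
        KafkaConsumer<String, String> audit = newConsumer("audit");
        // Poll each consumer once just to illustrate; real code would loop, one consumer per thread.
        for (KafkaConsumer<String, String> c : List.of(billing1, billing2, audit)) {
            ConsumerRecords<String, String> records = c.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> r : records) {
                System.out.println(r.topic() + "-" + r.partition() + ": " + r.value());
            }
        }
    }
}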
An even more important difference is the following. Imagine that you have a Kafka topic with 500 partitions and, on the other hand, 500 JMS queues. Let's also imagine that you have a certain number of producers and consumers. In the JMS case, you need to configure each of them so they know which queues belong to them. What if, for example, some consumer crashes, or you detect that you need to increase the number of consumers? You have to reconfigure the whole system manually. This comes for free with Kafka: it provides automatic rebalancing, which is an extremely useful feature.
Finally, Kafka is tremendously faster, mostly because of some clever disk/memory transfer techniques and because consumers keep track of the messages they have consumed themselves, rather than the broker doing it as in JMS. Because of this, a consumer is also able to "rewind", i.e. re-read messages from, say, 2 days ago.
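As an illustration of the "rewind" idea, here is a hedged sketch using the Java consumer API: offsetsForTimes() maps a timestamp (here, two days ago) to offsets and seek() repositions the consumer there. It assumes the consumer already has partitions assigned.

import java.time.Duration;
import java.time.Instant;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.TopicPartition;

public class RewindExample {
    // Seek the consumer's currently assigned partitions back to the offsets
    // that were current roughly two days ago.
    static void rewindTwoDays(KafkaConsumer<String, String> consumer) {
        long twoDaysAgo = Instant.now().minus(Duration.ofDays(2)).toEpochMilli();
        Map<TopicPartition, Long> query = new HashMap<>();
        for (TopicPartition tp : consumer.assignment()) {
            query.put(tp, twoDaysAgo);
        }
        Map<TopicPartition, OffsetAndTimestamp> byTimestamp = consumer.offsetsForTimes(query);
        byTimestamp.forEach((tp, offsetAndTs) -> {
            if (offsetAndTs != null) {
                consumer.seek(tp, offsetAndTs.offset()); // the next poll() re-reads from here
            }
        });
    }
}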
See also:
Apache Kafka order of messages with multiple partitions
Benchmarking Apache Kafka

Here's a fairly good article on the differences:
http://blog.hampisoftware.com/index.php/2016/01/20/apache-kafka-differences-from-jms/
Kafka does not guarantee message ordering across multiple partitions of a topic. Order is maintained only within a partition. In order to achieve strict ordering, you need to use one partition per topic.

Related

How to determine the number of consumers in a consumer group in Spring Boot?

I'm using the annotation @KafkaListener to listen to a specific topic. However, I suddenly noticed a big lag before the consumers receive the messages from the producers. I then increased the number of partitions on the brokers and the issue was solved.
After some research I realized that the number of consumers in a consumer group cannot exceed the number of partitions, otherwise some of the consumers would be inactive.
So in Spring Boot, is each individual @KafkaListener considered a single consumer? If not, how can I find the exact number of consumers in a consumer group so that I can configure the partitions properly?
is each individual @KafkaListener considered a single consumer?
No, it's a consumer group, which can have one (the default) or more consumer threads (containers). You can use the concurrency property to override the container factory's default.
As you figured out, the number of the topic's partitions determines the maximum level of parallelism. If the concurrency is greater than the number of partitions, the concurrency is adjusted down so that each container gets one partition.
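As a rough Spring Kafka sketch (class, topic and group names are made up): a single @KafkaListener backed by a concurrency of 3 starts three consumer threads in the same consumer group, so the topic should have at least 3 partitions for all of them to be active.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.stereotype.Component;

@Configuration
@EnableKafka
public class ListenerConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        factory.setConcurrency(3); // three containers -> three consumers in the group
        return factory;
    }
}

@Component
class MyListener {
    // concurrency can also be set per listener and overrides the factory default
    @KafkaListener(topics = "my-topic", groupId = "my-group", concurrency = "3")
    public void onMessage(String message) {
        // process the message
    }
}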

Kafka work queue with a dynamic number of parallel consumers

I want to use Kafka to "divide the work". I want to publish instances of work to a topic, and run a cloud of identical consumers to process them. As each consumer finishes its work, it will pluck the next work from the topic. Each work should only be processed once by one consumer. Processing work is expensive, so I will need many consumers running on many machines to keep up. I want the number of consumers to grow and shrink as needed (I plan to use Kubernetes for this).
I found a pattern where a unique partition is created for each consumer. This "divides the work", but the number of partitions is set when the topic is created. Furthermore, the topic must be created on the command line e.g.
bin/kafka-topics.sh --zookeeper localhost:2181 --partitions 3 --topic divide-topic --create --replication-factor 1
...
from kafka import KafkaConsumer, TopicPartition  # kafka-python

for n in range(0, 3):
    consumer = KafkaConsumer(bootstrap_servers=['localhost:9092'])
    partition = TopicPartition('divide-topic', n)
    consumer.assign([partition])
...
I could create a unique topic for each consumer, and write my own code to assign work to those topics. That seems gross, and I still have to create the topics via the command line.
A work queue with a dynamic number of parallel consumers is a common architecture. I can't be the first to need this. What is the right way to do it with Kafka?
The pattern you found is accurate. Note that topics can also be created using the Kafka Admin API (see the sketch below), and partitions can also be added once a topic has been created (with some gotchas).
In Kafka, the way to divide work and allow scaling is to use partitions. This is because in a consumer group, each partition is consumed by a single consumer at any time.
For example, you can have a topic with 50 partitions and a consumer group subscribed to this topic:
When the throughput is low, you can have only a few consumers in the group and they should be able to handle the traffic.
When the throughput increases, you can add consumers, up to the number of partitions (50 in this example), to pick up some of the work.
In this scenario, 50 consumers is the limit in terms of scaling. Consumers expose a number of metrics (like consumer lag) that let you decide whether you have enough of them at any time.
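As a rough sketch of that setup (topic name, partition count and broker address are illustrative), the topic can be created programmatically with the Admin API instead of the command line:

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class TopicSetup {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // 50 partitions -> up to 50 consumers in one group can work in parallel
            NewTopic workTopic = new NewTopic("divide-topic", 50, (short) 1);
            admin.createTopics(List.of(workTopic)).all().get();
            // If you later need more parallelism, partitions can be added
            // (one gotcha: existing keys may then map to different partitions), e.g.:
            // admin.createPartitions(Map.of("divide-topic", NewPartitions.increaseTo(100))).all().get();
        }
    }
}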
Thank you Mickael for pointing me in the correct direction.
https://www.safaribooksonline.com/library/view/kafka-the-definitive/9781491936153/ch04.html
Kafka consumers are typically part of a consumer group. When multiple
consumers are subscribed to a topic and belong to the same consumer group,
each consumer in the group will receive messages from a different subset of
the partitions in the topic.
https://dzone.com/articles/dont-use-apache-kafka-consumer-groups-the-wrong-wa,
Having consumers as part of the same consumer group means providing the
“competing consumers” pattern with whom the messages from topic partitions
are spread across the members of the group. Each consumer receives messages
from one or more partitions (“automatically” assigned to it) and the same
messages won’t be received by the other consumers (assigned to different
partitions). In this way, we can scale the number of the consumers up to the
number of the partitions (having one consumer reading only one partition); in
this case, a new consumer joining the group will be in an idle state without
being assigned to any partition.
Example code for dividing the work among 3 consumers, up to a maximum of 100:
bin/kafka-topics.sh --partitions 100 --topic divide-topic --create --replication-factor 1 --zookeeper localhost:2181
...
from kafka import KafkaConsumer  # kafka-python

for n in range(0, 3):
    consumer = KafkaConsumer(group_id='some-constant-group',
                             bootstrap_servers=['localhost:9092'])
...
I think you are on the right path.
Here are the steps involved:
Create the Kafka topic with the required number of partitions. The number of partitions is the unit of parallelism; in other words, you run that many consumers to process the work.
You can increase the number of partitions if the scaling requirements grow, BUT this comes with caveats like repartitioning. Please read the Kafka documentation about adding partitions.
Define a Kafka consumer group for the consumers. Kafka will assign partitions to the available consumers in the consumer group and rebalance automatically. If a consumer is added or removed, Kafka does the rebalancing automatically.
If the consumers are packaged as Docker containers, then Kubernetes helps in managing the containers, especially in a multi-node environment. Other tools include Docker Swarm, OpenShift, Mesos, etc.
Kafka guarantees ordering within a partition.
Check out the delivery guarantees (at-least-once, exactly-once) based on your use case.
Alternatively, you can use the Kafka Streams API. Kafka Streams is a client library for processing and analyzing data stored in Kafka. It builds on important stream-processing concepts such as properly distinguishing between event time and processing time, windowing support, and simple yet efficient management and real-time querying of application state.
Since you have a slow-consumer use case, it's a great fit for Confluent's Parallel Consumer (PC). PC solves exactly this by sub-partitioning the input partitions by key and processing each key in parallel, so processing can take as long as you like. It also tracks per-record acknowledgement. Check out Parallel Consumer on GitHub (it's open source, BTW, and I'm the author).

Redis vs Kafka vs RabbitMQ for 1MB messages

I am currently researching a queueing solution to handle medium-sized messages of around 1MB.
Besides the feature differences between Redis, Kafka and RabbitMQ, I cannot find any good answer about their performance with messages of around 1MB in size.
Does anyone know how many messages of 1MB any of these can handle?
Do you know of any other queueing solutions which can perform better?
When you are evaluating Kafka vs Redis for your case, there are other factors which you have to take into account besides message size. Here are some of them I can think of:
How many producers/consumers? Redis performance can be affected by a greater number of producers/consumers due to the nature of Redis (a push-based queue). This is because Redis delivers the message to all the consumers at once, at the moment the message is put in the queue.
Do you need speed or reliability first? If speed is of utmost importance, use Redis, since it does not persist messages and will deliver them faster. If you need reliability, use Kafka, since it persists messages even after they are delivered.
Do you want your consumers to get messages once they are ready, or do you want messages to be sent to the consumers immediately? In the first case use Kafka, because it is a pull-based mechanism (the consumer has to ask for messages). In the second case use Redis, since it is a push-based mechanism (the message is pushed to the consumer as soon as it is in the queue). RabbitMQ is also push-based (although there is a pull API with bad performance).
What is the number of messages expected? If it's not huge, use Redis, since you are limited by memory; otherwise use Kafka. The best practice for RabbitMQ is to keep queues short, which means you should consume messages at a rate close to the rate at which they appear in the queue. So if you have some long-lasting operation on the consumer side, RabbitMQ is probably not the best choice.
Scaling? Kafka scales horizontally really well (it's built with scalability in mind). RabbitMQ is usually scaled vertically. Redis also scales well horizontally if needed.
Obviously, there is more than one criterion when evaluating a proper queueing solution. There are best practices and recommendations for each of the queueing engines that you are looking at. Think more about your specific use case; it's definitely worth the time, since choosing an inappropriate queueing engine will cost you much more time later on.
I am answering for Kafka.
Kafka itself has very good performance even for big messages.
In our tests with 2 Kafka nodes, we reached point-to-point throughput of 170 MB/s for smaller messages and 150 MB/s for bigger messages.
The only thing you need to remember is to configure the broker to accept bigger messages.
Here is a nice article: Configuring Kafka for Performance and Resource Management - Handling Large Messages
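As a hedged illustration of the knobs involved (the values below are examples, not recommendations): the broker, topic and producer limits all default to roughly 1 MB, so for ~1MB payloads they typically need to be raised together.

// Broker/topic side (server.properties or topic config):
//   message.max.bytes=2097152     (broker-wide limit)
//   max.message.bytes=2097152     (per-topic override)
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class LargeMessageProducer {
    public static KafkaProducer<byte[], byte[]> create() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 2 * 1024 * 1024); // allow ~2 MB requests
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");           // often helps with big payloads
        // Consumers may also need max.partition.fetch.bytes / fetch.max.bytes raised accordingly.
        return new KafkaProducer<>(props);
    }
}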
I know of another point-to-point solution which might be interesting if you have concrete requirements: have a look at YAMI4.
I was using Redis but only for very small messages, so I cannot say anything about 1MB.

Is Kafka able to have a dynamic number of consumers?

We are looking for a new messaging platform, and have narrowed our choices down to RabbitMQ or Kafka.
Right now, I am leaning toward Kafka, but I have some doubts that it is a good choice given one of our requirements.
We need to have a queue that is consumed by an unknown number of consumers. That is, we need to dynamically add and remove consumers as "workers" come online to do the processing. Also, workers may drop off at any time.
So for example, we may start a queue that has no consumers at all, and then the number of consumers may grow to 30. Later it may grow to 5000 or more, and then drop back off to 3.
We do not care about message ordering for this particular use case. Is Kafka a good fit for this?
Also, we were planning on maintaining a pool of consumer threads so that the workers could grab a single message and process it. So there may be 100 consumers in the pool and only 20 workers. Is it possible that we end up with messages in the other 80 consumers which are not utilized in the workers due to message send buffering? In other words, does Kafka pre-deliver messages to consumers before they are requested like some messaging systems do?
Yes, Kafka can definitely match your requirements. You can have many-to-many producers/consumers. If all your consumers are in the same consumer group, the topic's partitions (and thus the messages) are distributed across those consumers, so you can scale the group up to the number of partitions. It is also not a problem if you shut down consumers or add new ones; Kafka manages all of that automatically for you.
To your last question: Kafka consumers are pull-based, so it is the consumer's responsibility to check whether there are messages to process; nothing is pushed to it ahead of time.
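Here is a minimal sketch of such a disposable worker (topic name, group name and broker address are placeholders): every instance joins the same consumer group, gets partitions assigned, and when it exits or crashes its partitions are rebalanced onto the surviving workers.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class Worker {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "workers");               // same group for every worker instance
        props.put("enable.auto.commit", "false");       // commit only after the work is done
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("work-items"));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    process(record.value());            // the expensive work happens here
                }
                consumer.commitSync();                  // records are pulled, never pushed ahead of time
            }
        }
    }

    static void process(String workItem) { /* ... */ }
}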

Are there any tools to optimize the number of consumer and producer threads on a JMS queue?

I'm working on an application that is distributed over two JBoss instances and that produces/consumes JMS messages on several JMS queues.
When we configured the application we had to determine which threading model we would use, in particular the number of producing and consuming threads per queue. We have done this in a rather ad-hoc fashion but after reading the most recent columns by Herb Sutter in Dr Dobbs (in particular this one) I would like to size our threads in a more rigorous manner.
Are there any methods/tools to measure the throughput of JMS queues (in particular JBoss Messaging queues) as a function of the number of producing/consuming threads?
This is not really about a specific tool, but may be helpful.
Consumers:
Not sure what your inner architecture is, but let's assume it's an MDB reading in messages. I assert that your only requirement here for rigorous thread count sizing is to choose a maximum cap. If your MDB uses resources from a finite supplier like a JDBC connection pool, consider the maximum cap as the highest number of concurrent instances from that resource that you can tolerate taking. If the MDB's queue is remote, you probably want to consider remote connections (or technically, JMS sessions) a finite resource. If the MDB has less finite requirements (and the queue is local), your maximum cap becomes the number of threads, memory used and/or flat out CPU consumed by the working threads. The reasoning here is that the JBoss MDB container will simply keep allocating more MDB instances (and therefore threads) until the queue is empty or the maximum cap is reached. The only reason I can think of that you would really agonize over the minimum would be if the container's elapsed time or overhead to create new instances is above your tolerance and those operations are usually pretty small potatoes.
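For illustration only (activation config property names vary by resource adapter; "maxSession" is the JBoss/ActiveMQ-style convention, and the destination name is made up), the cap usually ends up expressed on the MDB like this:

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "queue/orders"),
    // Cap the number of concurrent MDB instances/sessions consuming the queue,
    // e.g. at or below the size of the JDBC pool the MDB draws from.
    @ActivationConfigProperty(propertyName = "maxSession", propertyValue = "15")
})
public class OrderProcessor implements MessageListener {
    @Override
    public void onMessage(Message message) {
        // process the message
    }
}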
Producers:
A general axiom of messaging is that producers nearly always outperform consumers. You would think this is pretty arbitrary, but it is a pattern I see recurring all the time, even in widely different messaging scenarios. Anyway, it's tough to say how the threading should work for the producer without knowing a bit about the application, but are you basically capable of [indefinitely] proportionally increasing the number of producer threads and the number of messages generated, or do you have some sort of cap where additional threads simply do not generate more messages? I would guess it is the latter, since most useful work has some limited data or calculation supplier. As I see it, the two drivers here are ordering and persistence.
First off, if you have strict message ordering where messages must be processed strictly First Produced First Processed (FPFP), then you're in a bit of a bind, because you almost have to drop down to single-threaded throughput unless you can devise some form of logical message demarcation (e.g. a client number where any given client's messages are always sent to the same queue; you may then have multiple queues, each serviced by one thread, so each client is effectively FPFP).
Ordering aside, persistence is the next consideration: if you have reliable and extensive message persistence (or have a very high tolerance for message loss), just let the producer threads go to town. The messages will queue up reliably and the consumers will [hopefully] eventually catch up. However, if your message persistence, message count or simple queue depths can potentially give you the willies when they get too high, here's where a tool might come in useful. If your producer thread count can be modified dynamically (which it can in many Java ThreadPool implementations), then you could sample the queue depths and raise or lower the producer thread count according to queue-depth ranges you define, optionally to the point where, if the consumers basically stall, so will the producers (see the sketch below). I do not know of a specific tool that does this, but between two JBoss servers this is fairly simple to whip up. Picking your mapping from queue depth to producer thread count will be trickier.
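A hedged sketch of that feedback loop, assuming you can read the queue depth from your provider (the currentQueueDepth() lookup below is a placeholder you would implement via JMX or the JBoss management API, and the thresholds are made up); it periodically adjusts a producer thread pool's core size based on depth:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ProducerThrottle {
    private final ThreadPoolExecutor producerPool =
            (ThreadPoolExecutor) Executors.newFixedThreadPool(8);
    private final ScheduledExecutorService sampler = Executors.newSingleThreadScheduledExecutor();

    public void start() {
        sampler.scheduleAtFixedRate(() -> {
            long depth = currentQueueDepth();           // placeholder: query the broker
            if (depth > 100_000) {
                producerPool.setCorePoolSize(1);        // consumers are stalling: back off
            } else if (depth > 10_000) {
                producerPool.setCorePoolSize(4);
            } else {
                producerPool.setCorePoolSize(8);        // queue is short: full speed
            }
        }, 0, 30, TimeUnit.SECONDS);
    }

    long currentQueueDepth() { return 0; /* placeholder: read depth via JMX/management API */ }
}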
Having said all that, I am going to actually read the article you linked to.....
I've got the perfect thing for you: IBM provide a free command line tool called perfharness.
It's aimed at benchmarking JMS providers, i.e. measuring the throughput of queues (single or multiple) given different numbers of producing or consuming threads.
Some features:
Send and consume messages at a fixed rate (msg/s) or at maximum rate possible on the queue
Use a specific number of threads
Use either JMS or native MQ
Can use data either generated randomly or taken from a file
Generates statistics telling you exactly how fast your queue is performing
The only downside is that it's not super intuitive, given the number of operations it supports. And IBM haven't open-sourced it, which is a shame. However, it sounds perfect for your purposes.
