I was trying to create a simple producer/consumer Kafka duo. While the producer works successfully, following the examples on Confluent's GitHub page, I had trouble implementing the consumer. I use a cloud Kafka broker, CloudKarafka. The consumer.go code is below:
func main() {
    config := &kafka.ConfigMap{
        "metadata.broker.list": "XXXXXXX", // 3 hosts Cloudkarafka provides to me
        "security.protocol": "SASL_SSL",
        "sasl.mechanisms": "SCRAM-SHA-256",
        "sasl.username": "XXXXXXXX", // My username provided by Cloudkarafka
        "sasl.password": "XXXXXXXX", // My password provided by Cloudkarafka
        "group.id": "cloudkarafka-example",
        "go.events.channel.enable": true,
        "go.application.rebalance.enable": true,
        "default.topic.config": kafka.ConfigMap{"auto.offset.reset": "earliest"},
        //"debug": "generic,broker,security",
    }
    topic := "XXXXXX" + "A" // username + "A"

    consumer, err := kafka.NewConsumer(config)
    if err != nil {
        panic(fmt.Sprintf("Failed to create consumer: %s", err))
    }

    topics := []string{topic}
    //consumer.SubscribeTopics(topics, nil)
    err = consumer.SubscribeTopics(topics, nil)

    run := true
    for run == true {
        ev := consumer.Poll(0)
        switch e := ev.(type) {
        case *kafka.Message:
            fmt.Printf("%% Message on %s:\n%s\n",
                e.TopicPartition, string(e.Value))
        case kafka.PartitionEOF:
            fmt.Printf("%% Reached %v\n", e)
        case kafka.Error:
            fmt.Fprintf(os.Stderr, "%% Error: %v\n", e)
            run = false
        default:
            fmt.Printf("Ignored %v\n", e)
        }
    }

    consumer.Close()
}
The problem I have is that even though I produce messages to the same topic, the consumer always stays in the default case and constantly prints "Ignored <nil>". Since I'm a beginner with these topics, any help and suggestions would be appreciated.
PS: I use Windows 11. The details say "confluent-kafka-go is not supported on Windows", but the code runs; it just stays in the default case. The producer part works fine.
producer.go:
config := &kafka.ConfigMap{
    "metadata.broker.list": "XXXXXXXXXX",
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "SCRAM-SHA-256",
    "sasl.username": "XXXXXXXXX",
    "sasl.password": "XXXXXXXXX",
    "group.id": "cloudkarafka-example",
    "default.topic.config": kafka.ConfigMap{"auto.offset.reset": "earliest"},
    //"debug": "generic,broker,security",
}
topic := "XXXXX-" + "A"

p, err := kafka.NewProducer(config)
if err != nil {
    fmt.Printf("Failed to create producer: %s\n", err)
    os.Exit(1)
}
fmt.Printf("Created Producer %v\n", p)

deliveryChan := make(chan kafka.Event)
for i := 0; i < 10; i++ {
    value := fmt.Sprintf("[%d] Hello Go!", i+1)
    err = p.Produce(&kafka.Message{TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny}, Value: []byte(value)}, deliveryChan)

    e := <-deliveryChan
    m := e.(*kafka.Message)
    if m.TopicPartition.Error != nil {
        fmt.Printf("Delivery failed: %v\n", m.TopicPartition.Error)
    } else {
        fmt.Printf("Delivered message to topic %s [%d] at offset %v\n",
            *m.TopicPartition.Topic, m.TopicPartition.Partition, m.TopicPartition.Offset)
    }
}
close(deliveryChan)
Poll() will return nil on timeout. Since you are specifying a timeout of 0ms, I suspect that what you are seeing is the behaviour of a consumer with no messages to consume.
That is, you are asking it to wait 0 ms for new messages; there are never any new messages, so the Poll() call immediately returns nil every time. Without a specific nil case, these are handled by your default case.
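For example, inside your for loop, giving Poll() a real timeout and handling the nil case explicitly makes that behaviour visible (a small sketch based on your code, not tested against your broker):

ev := consumer.Poll(100) // wait up to 100 ms for an event
if ev == nil {
    continue // no event within the timeout; poll again
}
switch e := ev.(type) {
case *kafka.Message:
    fmt.Printf("%% Message on %s:\n%s\n", e.TopicPartition, string(e.Value))
case kafka.Error:
    fmt.Fprintf(os.Stderr, "%% Error: %v\n", e)
default:
    fmt.Printf("Ignored %v\n", e)
}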
Are you SURE you are producing messages to the same topic your consumer is subscribed to? As Andrey pointed out in his comment, either your topic ids are different or you have obfuscated them differently in your consumer vs producer example code. It may be more helpful to first reproduce your problem with a configuration that does not require obfuscation, to avoid uncertainty on such points.
Are you getting any error from Subscribe() (why aren't you checking this)?
How long have you waited to see messages consumed? It can take a few seconds for a broker to accept a new consumer into a group; with a 0 ms timeout, you may see lots of "no message" events before you eventually start receiving any waiting messages.
For a minimal, working example, I'd suggest keeping things as simple as possible:
You don't need to configure go.events.channel.enable if you are using Poll() to read messages.
You don't need to configure go.application.rebalance.enable if you aren't interested in, and don't need to modify, initial offsets.
If you aren't interested in events such as PartitionEOF etc. (and you likely aren't), you might want to consider using the higher-level consumer.ReadMessage() function rather than Poll(); ReadMessage returns only messages or errors and ignores all other events. A sketch along these lines follows.
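For illustration, a minimal, hedged sketch along those lines (untested; broker list, credentials and topic are placeholders for your CloudKarafka values):

package main

import (
    "fmt"
    "os"

    "github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
    consumer, err := kafka.NewConsumer(&kafka.ConfigMap{
        "bootstrap.servers": "host1:9094,host2:9094,host3:9094", // placeholder broker list
        "security.protocol": "SASL_SSL",
        "sasl.mechanisms":   "SCRAM-SHA-256",
        "sasl.username":     "USERNAME", // placeholder
        "sasl.password":     "PASSWORD", // placeholder
        "group.id":          "cloudkarafka-example",
        "auto.offset.reset": "earliest",
    })
    if err != nil {
        panic(fmt.Sprintf("Failed to create consumer: %s", err))
    }
    defer consumer.Close()

    // Check the Subscribe error instead of ignoring it.
    if err := consumer.SubscribeTopics([]string{"USERNAME-A"}, nil); err != nil {
        panic(fmt.Sprintf("Failed to subscribe: %s", err))
    }

    for {
        // ReadMessage(-1) blocks until a message or an error arrives,
        // so there is no busy loop over nil events.
        msg, err := consumer.ReadMessage(-1)
        if err != nil {
            fmt.Fprintf(os.Stderr, "Consumer error: %v\n", err)
            continue
        }
        fmt.Printf("Message on %s: %s\n", msg.TopicPartition, string(msg.Value))
    }
}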
Related
I use rabbitmq for delivering messages from various input-sources (websocket, rest, ...) to my workers. Every worker listens to a bunch of different routing-keys on a shared exchange.
Now it is possible that workerA handles "routeA". When my input-source sends something to routeA, workerA picks it up and consumes it.
But what happens if there is no routeA consumer? In that case I want the input source to "know" that there is no one for that request. Since there is no consumer for this message, it is discarded (sorry if the lingo isn't accurate). As far as I understand the return handling, that is where NotifyReturn() (from the Go amqp library) kicks in, so the publisher can know that its message was discarded.
Here is a stripped-down example of my code. This approach works for me in a simple "just publish this message" scenario. It breaks for RPC.
RPC always triggers the returnNotification := <-returnChannel case.
My question:
Am I holding it wrong? / Is this not the way to check the deliverability of a message?
Thanks!
Edit: forgot to mention: the error is raised on the "reply". So the request is sent, but the reply (sent via the "just publish" path) gets a NO_ROUTE returnNotification.
Just publish
// error handling omitted for example code
tC, _ := rabbitConnection.Channel()
defer tC.Close()
tC.Confirm(false)

var ack = make(chan uint64)
var nack = make(chan uint64)
tC.NotifyConfirm(ack, nack)

returnChannel := make(chan amqp.Return)
tC.NotifyReturn(returnChannel)

p := someFunctionGeneratingAPublishing()
tC.Publish(
    exchange,
    e.GetRoutingKey(),
    true,  // mandatory
    false, // immediate
    *p,
)

select {
case returnNotification := <-returnChannel:
    if returnNotification.ReplyCode == amqp.NoRoute {
        return fmt.Errorf("no amqp route for %s", e.GetRoutingKey())
    }
case <-ack:
    return nil
case <-nack:
    return fmt.Errorf("basic nack for %s", e.GetRoutingKey())
}
RPC
publishing := someFunctionGeneratingAPublishing()
publishing.ReplyTo = "amq.rabbitmq.reply-to"

con := GetConnection()
directChannel, _ := con.Channel()
defer directChannel.Close()
directChannel.Confirm(false)

var ack = make(chan uint64)
var nack = make(chan uint64)
directChannel.NotifyConfirm(ack, nack)

returnChannel := make(chan amqp.Return)
directChannel.NotifyReturn(returnChannel)

// consume direct-reply-to pattern queue
deliveryChan, _ := directChannel.Consume(
    "amq.rabbitmq.reply-to",
    "",    // consumer
    true,  // auto-ack
    false, // exclusive
    false, // no-local
    false, // no-wait
    nil,   // args
)

directChannel.Publish(
    exchange,
    e.GetRoutingKey(),
    true,  // mandatory
    false, // immediate
    *publishing,
)

select {
case returnNotification := <-returnChannel:
    if returnNotification.ReplyCode == amqp.NoRoute {
        return fmt.Errorf("no amqp route for %s", e.GetRoutingKey())
    }
case <-ack:
    return nil
case <-nack:
    return fmt.Errorf("basic nack for %s", e.GetRoutingKey())
}
OK, after more wandering in the dark I finally realized my fundamental error: not carefully reading the documentation:
If the RPC server publishes with the mandatory flag set then amq.rabbitmq.reply-to.* is treated as not a queue; i.e. if the server only publishes to this name then the message will be considered "not routed"; a basic.return will be sent if the mandatory flag was set.
RabbitMQ Direct-Reply-To Article
It is by design that the reply messages, in my configuration, are considered "not routed".
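For completeness, here is a minimal sketch of the reply side under that constraint. It assumes the fix is simply to publish the reply to the ReplyTo address without the mandatory flag, so the amq.rabbitmq.reply-to.* pseudo-queue is not reported as unroutable; the handler name and payload are illustrative, and it assumes import "github.com/streadway/amqp":

// handleRPC publishes the reply without the mandatory flag, so the
// direct-reply-to pseudo-queue does not trigger a basic.return.
func handleRPC(ch *amqp.Channel, req amqp.Delivery) error {
    reply := amqp.Publishing{
        CorrelationId: req.CorrelationId,
        ContentType:   "application/json",
        Body:          []byte(`{"ok":true}`), // illustrative payload
    }
    return ch.Publish(
        "",          // default exchange routes directly to the reply queue
        req.ReplyTo, // e.g. amq.rabbitmq.reply-to.g1h2...
        false,       // mandatory: leave unset so no basic.return is generated
        false,       // immediate
        reply,
    )
}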
I have a Go application processing events from a single RabbitMQ queue. I use the github.com/streadway/amqp RabbitMQ Client Library.
The Go application processes every message in ~2-3 seconds. It's possible to process ~1000 or even more messages in parallel, if I feed them from memory.
But, unfortunately, RabbitMQ performance is worse.
So, I want to consume messages from queue faster.
So, the question is: how to consume messages in most effective manner using github.com/streadway/amqp?
As far as I understand, there are two approaches:
set high prefetch
https://godoc.org/github.com/streadway/amqp#Channel.Qos.
Use single consumer goroutine
Example code:
conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
failOnError(err, "Failed to connect to RabbitMQ")
defer conn.Close()

ch, err := conn.Channel()
failOnError(err, "Failed to open a channel")
defer ch.Close()

ch.Qos(
    10000, // prefetch count
    0,     // prefetch size
    false, // global
)

msgs, err := ch.Consume(
    q.Name, // queue
    "",     // consumer
    false,  // NO auto-ack
    false,  // exclusive
    false,  // no-local
    false,  // no-wait
    nil,    // args
)

for d := range msgs {
    log.Printf("Received a message: %s", d.Body)
    err := processMessage(d)
    if err != nil {
        log.Printf("%s : while consuming task", err)
        d.Nack(false, true)
    } else {
        d.Ack(false)
    }
    continue // consume other messages
}
But will processMessage be called in parallel here?
spawn many channels and use multiple consumers
conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
failOnError(err, "Failed to connect to RabbitMQ")
defer conn.Close()

for i := 0; i <= 100; i++ {
    go func() {
        ch, err := conn.Channel()
        failOnError(err, "Failed to open a channel")
        defer ch.Close()

        ch.Qos(
            10,    // prefetch count
            0,     // prefetch size
            false, // global
        )

        msgs, err := ch.Consume(
            q.Name, // queue
            "",     // consumer
            false,  // NO auto-ack
            false,  // exclusive
            false,  // no-local
            false,  // no-wait
            nil,    // args
        )

        for d := range msgs {
            log.Printf("Received a message: %s", d.Body)
            err := processMessage(d)
            if err != nil {
                log.Printf("%s : while consuming task", err)
                d.Nack(false, true)
            } else {
                d.Ack(false)
            }
            continue // consume other messages
        }
    }()
}
But is this a RAM-friendly approach? Isn't spawning a new channel for each worker quite heavy for RabbitMQ?
So, the question is: which variant is better in terms of performance, memory usage, etc.?
What is the optimal usage of RabbitMQ here?
Update: currently I'm hitting a case where my worker consumes all the RAM on the VPS and is OOM-killed. I used the second approach for it. So, "better" in my case means keeping my worker running without it being OOM-killed after a few minutes of work.
Update 2: nacking when the worker fails to process a message, and acking when it succeeds, is very important. All messages have to be processed (it's customer analytics), but sometimes the worker cannot process one, so it has to nack the message to pass it to another worker (currently a 3rd-party API used to process messages sometimes simply returns a 503 status code; in that case the message should be passed to another worker or retried).
So, using auto-ack is unfortunately not an option.
I suppose each processMessage() should run in its own goroutine.
Which variant is better?
I prefer the first one, because opening/closing a channel is a little bit expensive (2 + 2 TCP packets). I think your OOM problem is not related to too many goroutines; goroutines are very light, costing only about 5KB each. So the problem is probably caused by your processMessage().
I think the github.com/streadway/amqp channel consume operation is thread/goroutine-safe, so it is safe to share a channel between goroutines if you are just doing consume operations.
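To make the first variant concrete, here is a hedged sketch of one AMQP channel feeding a bounded pool of worker goroutines; it assumes conn, q.Name and processMessage from your question, plus imports of log, sync and github.com/streadway/amqp:

// Sketch only: several worker goroutines share the delivery channel;
// each delivery goes to exactly one worker.
func consumeWithWorkers(conn *amqp.Connection, queueName string, workers int) error {
    ch, err := conn.Channel()
    if err != nil {
        return err
    }
    defer ch.Close()

    // Prefetch roughly enough messages to keep every worker busy.
    if err := ch.Qos(workers*2, 0, false); err != nil {
        return err
    }

    msgs, err := ch.Consume(queueName, "", false, false, false, false, nil)
    if err != nil {
        return err
    }

    var wg sync.WaitGroup
    for i := 0; i < workers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for d := range msgs {
                if err := processMessage(d); err != nil {
                    log.Printf("%s : while consuming task", err)
                    d.Nack(false, true) // requeue so another worker can retry it
                } else {
                    d.Ack(false)
                }
            }
        }()
    }
    wg.Wait()
    return nil
}

With a fixed pool you keep the cheap single-channel setup while bounding how many messages are processed (and held in memory) at once, which should also help with the OOM issue.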
I've never used kafka before. I have two test Go programs accessing a local kafka instance: a reader and a writer. I'm trying to tweak my producer, consumer, and kafka server settings to get a particular behavior.
My writer:
package main

import (
    "fmt"
    "math/rand"
    "strconv"
    "time"

    "github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
    rand.Seed(time.Now().UnixNano())
    topics := []string{
        "policymanager-100",
        "policymanager-200",
        "policymanager-300",
    }
    progress := make(map[string]int)
    for _, t := range topics {
        progress[t] = 0
    }
    producer, err := kafka.NewProducer(&kafka.ConfigMap{
        "bootstrap.servers": "localhost",
        "group.id": "0",
    })
    if err != nil {
        panic(err)
    }
    defer producer.Close()

    fmt.Println("producing messages...")
    for i := 0; i < 30; i++ {
        index := rand.Intn(len(topics))
        topic := topics[index]
        num := progress[topic]
        num++
        fmt.Printf("%s => %d\n", topic, num)
        msg := &kafka.Message{
            Value: []byte(strconv.Itoa(num)),
            TopicPartition: kafka.TopicPartition{
                Topic: &topic,
            },
        }
        err = producer.Produce(msg, nil)
        if err != nil {
            panic(err)
        }
        progress[topic] = num
        time.Sleep(time.Millisecond * 100)
    }
    fmt.Println("DONE")
}
There are three topics that exist on my local kafka: policymanager-100, policymanager-200, policymanager-300. They each only have 1 partition to ensure all messages are sorted by the time kafka receives them. My writer will randomly pick one of those topics and issue a message consisting of a number that increments solely for that topic. When it's done running, I expect the queues to look something like this (topic names shortened for legibility):
100: 1 2 3 4 5 6 7 8 9 10 11
200: 1 2 3 4 5 6 7
300: 1 2 3 4 5 6 7 8 9 10 11 12
So far so good. I'm trying to configure things so that any number of consumers can be spun up and consume these messages in order. By "in order" I mean that no consumer should get message 2 for topic 100 until message 1 is COMPLETED (not just started). If message 1 for topic 100 is being worked on, consumers are free to consume from other topics that currently don't have a message being processed. If a message from a topic has been sent to a consumer, that entire topic should become "locked" until either a timeout assumes the consumer failed or the consumer commits the message; then the topic is "unlocked" and its next message is made available to be consumed.
My reader:
package main

import (
    "fmt"
    "time"

    "github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
    count := 2
    for i := 0; i < count; i++ {
        go consumer(i + 1)
    }
    fmt.Println("consuming...")
    // hold this thread open indefinitely
    select {}
}

func consumer(id int) {
    c, err := kafka.NewConsumer(&kafka.ConfigMap{
        "bootstrap.servers": "localhost",
        "group.id": "0", // strconv.Itoa(id),
        "enable.auto.commit": "false",
    })
    if err != nil {
        panic(err)
    }
    c.SubscribeTopics([]string{`^policymanager-.+$`}, nil)
    for {
        msg, err := c.ReadMessage(-1)
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d) Message on %s: %s\n", id, msg.TopicPartition, string(msg.Value))
        time.Sleep(time.Second)
        _, err = c.CommitMessage(msg)
        if err != nil {
            fmt.Printf("ERROR committing: %+v\n", err)
        }
    }
}
From my current understanding, the way I'm likely to achieve this is by setting up my consumer properly. I've tried many different variations of this program. I've tried having all my goroutines share the same consumer. I've tried using a different group.id for each goroutine. None of these was the right configuration to get the behavior I'm after.
What the posted code does is empty out one topic at a time. Despite having multiple goroutines, the process will read all of 100 then move to 200 then 300 and only one goroutine will actually do all the reading. When I let each goroutine have a different group.id then messages get read by multiple goroutines which I would like to prevent.
My example consumer simply breaks things up with goroutines, but when I work this project into my use case at work, I'll need it to run across multiple Kubernetes instances that won't be talking to each other, so anything that coordinates between goroutines will stop working as soon as there are 2 instances. That's why I'm hoping to make Kafka do the gatekeeping I want.
Generally speaking, you cannot. Even if you had a single consumer that consumed all the partitions for the topic, the partitions would be consumed in a non-deterministic order and your total ordering across all partitions would not be guaranteed.
Try keyed messages; I think you may find them a good fit for your use case.
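For reference, here is a hedged sketch of keyed messages with confluent-kafka-go, reusing the producer from the writer above; the topic name and key scheme are illustrative, not from your code. Messages that share a key hash to the same partition, so they keep their relative order:

topic := "policymanager" // illustrative: one topic with several partitions
for i := 1; i <= 30; i++ {
    key := fmt.Sprintf("policy-%d", i%3) // e.g. policy-0, policy-1, policy-2
    err := producer.Produce(&kafka.Message{
        TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
        Key:            []byte(key), // same key => same partition => ordered
        Value:          []byte(strconv.Itoa(i)),
    }, nil)
    if err != nil {
        panic(err)
    }
}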
I am trying to implement a tunnel between two Go channels (note: I am very new to Go). My goroutine looks like this:
consumer := ...
producer := ...
go func() {
    for jsn := range producer {
        msg, err := ops.FromJSON(jsn)
        if err != nil {
            log.Print(err)
            continue
        }
        consumer <- msg
    }
}()
This though seems to have some issues. How can I check if the consumer is already closed? How to solve the race between getting the message from the producer and sending it to the consumer? ... and maybe others.
Can someone provide a good example of a tunnel between two channels?
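Not a definitive answer, but a minimal sketch of one common forwarding pattern, assuming producer carries raw JSON ([]byte) and ops.FromJSON returns an ops.Message (both assumptions, since the types aren't shown). The forwarding goroutine is the only sender on consumer, so it is the one that closes it, and a done channel lets the caller stop the forwarder without racing:

// tunnel forwards decoded messages from producer to consumer until producer
// is closed or done is closed. It closes consumer when it returns, because
// it is the only sender on that channel.
func tunnel(producer <-chan []byte, consumer chan<- ops.Message, done <-chan struct{}) {
    go func() {
        defer close(consumer)
        for jsn := range producer {
            msg, err := ops.FromJSON(jsn)
            if err != nil {
                log.Print(err)
                continue
            }
            select {
            case consumer <- msg:
            case <-done:
                return
            }
        }
    }()
}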
So I am trying to use Kafka for my application, which has a producer logging actions into the Kafka MQ and a consumer which reads them off the MQ. Since my application is in Go, I am using Shopify's Sarama library to make this possible.
Right now, I'm able to read off the MQ and print the message contents using a
fmt.Printf
However, I would really like the error handling to be better than console printing, and I am willing to go the extra mile.
Code right now for consumer connection:
mqCfg := sarama.NewConfig()

master, err := sarama.NewConsumer([]string{brokerConnect}, mqCfg)
if err != nil {
    panic(err) // Don't want to panic when error occurs, instead handle it
}
And the processing of messages:
go func() {
    defer wg.Done()
    for message := range consumer.Messages() {
        var msgContent Message
        _ = json.Unmarshal(message.Value, &msgContent)
        fmt.Printf("Reading message of type %s with id : %d\n", msgContent.Type, msgContent.ContentId) // Don't want to print it
    }
}()
My questions (I am new to testing Kafka and new to kafka in general):
Where could errors occur in the above program, so that I can handle them? Any sample code would be great for me to start with. The error conditions I could think of are when the msgContent doesn't really contain any Type or ContentId fields in the JSON.
In Kafka, are there situations when the consumer is trying to read at the current offset, but for some reason is not able to (even when the JSON is well formed)? Is it possible for my consumer to backtrack, say, x steps before the failed offset and re-process the offsets? Or is there a better way to do this? Again, what could these situations be?
I'm open to reading and trying things.
Regarding 1) Check where I log error messages below. This is more or less what I would do.
Regarding 2) I don't know about trying to walk backwards in a topic. It's very much possible by just creating a consumer over and over, with its starting offset minus one each time. But I wouldn't advise it, as most likely you'll end up replaying the same message over and over. I do advise saving your offset often so you can recover if things go south.
Below is code that I believe addresses most of your questions. I haven't tried compiling this. And sarama api has been changing lately, so the api may currently differ a bit.
func StartKafkaReader(wg *sync.WaitGroup, lastgoodoff int64, out chan<- *Message) error {
    wg.Add(1)
    go func() {
        defer wg.Done()
        // to track the last known good offset we processed, which is
        // updated after each successfully processed event.
        saveprogress := func(off int64) {
            // Save the offset somewhere... a file...
            // I've also used kafka to store progress
            // using a special topic as a WAL
        }
        // wrapped in a closure so the latest value of lastgoodoff is saved on exit
        defer func() { saveprogress(lastgoodoff) }()

        client, err := sarama.NewClient("clientId", brokers, sarama.NewClientConfig())
        if err != nil {
            log.Error(err)
            return
        }
        defer client.Close()

        consumerConfig := sarama.NewConsumerConfig()
        consumerConfig.OffsetMethod = sarama.OffsetMethodManual
        consumerConfig.OffsetValue = int64(lastgoodoff)

        consumer, err := sarama.NewConsumer(client, topic, partition, "consumerId", consumerConfig)
        if err != nil {
            log.Error(err)
            return
        }
        defer consumer.Close()

        for {
            select {
            case event := <-consumer.Events():
                if event.Err != nil {
                    log.Error(event.Err)
                    return
                }
                msgContent := &Message{}
                err = json.Unmarshal(event.Value, msgContent)
                if err != nil {
                    log.Error(err)
                    continue // skip this message, or return to stop without updating the offset
                }
                // Send the message on to be processed.
                out <- msgContent
                lastgoodoff = event.Offset
            }
        }
    }()
    return nil
}
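As a footnote to the saveprogress stub above, here is a hedged sketch of persisting the last good offset to a local file; the path and helper names are mine, not part of sarama, and it assumes imports of io/ioutil, strconv and strings:

// saveOffset writes the last successfully processed offset to a file;
// loadOffset reads it back on startup. Both are illustrative helpers.
func saveOffset(path string, off int64) error {
    return ioutil.WriteFile(path, []byte(strconv.FormatInt(off, 10)), 0644)
}

func loadOffset(path string) (int64, error) {
    b, err := ioutil.ReadFile(path)
    if err != nil {
        return 0, err
    }
    return strconv.ParseInt(strings.TrimSpace(string(b)), 10, 64)
}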