RabbitMQ consumer in Go

I am trying to write a RabbitMQ consumer in Go. It is supposed to take 5 objects at a time from the queue and process them. It should acknowledge each message if it is processed successfully, otherwise send it to the dead-letter queue, retry up to 5 times and then discard it. It should run indefinitely and handle the cancellation event of the consumer.
I have a few questions:
Is there any concept of BasicConsumer vs EventingBasicConsumer in RabbitMq-go Reference?
What is Model in RabbitMQ and is it there in RabbitMq-go?
How do I send objects that failed processing to the dead-letter queue, and re-queue them again after the TTL?
What is the significance of the consumerTag argument in the ch.Consume function in the code below?
Should we use the channel.Get() or channel.Consume() for this scenario?
What changes do I need to make in the code below to meet the above requirements? I am asking because I couldn't find decent documentation for RabbitMq-Go.
func main() {
    consumer()
}

func consumer() {
    objConsumerConn := &rabbitMQConn{queueName: "EventCaptureData", conn: nil}
    initializeConn(&objConsumerConn.conn)

    ch, err := objConsumerConn.conn.Channel()
    failOnError(err, "Failed to open a channel")
    defer ch.Close()

    msgs, err := ch.Consume(
        objConsumerConn.queueName, // queue
        "demo1",                   // consumerTag
        false,                     // auto-ack
        false,                     // exclusive
        false,                     // no-local
        false,                     // no-wait
        nil,                       // args
    )
    failOnError(err, "Failed to register a consumer")

    forever := make(chan bool)

    go func() {
        for d := range msgs {
            k := new(EventCaptureData)
            b := bytes.Buffer{}
            b.Write(d.Body)
            dec := gob.NewDecoder(&b)
            err := dec.Decode(&k)
            d.Ack(true)
            if err != nil {
                fmt.Println("failed to fetch the data from consumer", err)
            }
            fmt.Println(k)
        }
    }()

    log.Printf(" Waiting for Messages to process. To exit press CTRL+C ")
    <-forever
}
Edited question:
I have delayed the processing of the messages as suggested in the links link1 link2. But the problem is that messages are going back to their original queue from the dead-letter queue even after the TTL. I am using RabbitMQ 3.0.0. Can anyone point out what the problem is?

Is there any concept of BasicConsumer vs EventingBasicConsumer in
RabbitMq-go Reference?
Not exactly, but the Channel.Get and Channel.Consume calls serve a similar purpose. Channel.Get is a non-blocking call that fetches the first message if one is available, or returns ok=false. With Channel.Consume the queued messages are delivered to a Go channel.
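For illustration, a minimal sketch of the two styles, assuming the streadway/amqp client and an open *amqp.Channel named ch (the queue name and consumer tag are taken from your code):

// Polling style: fetch at most one message; does not block.
if d, ok, err := ch.Get("EventCaptureData", false); err == nil && ok { // autoAck=false
    log.Printf("got one message: %s", d.Body)
    d.Ack(false)
}

// Push style: deliveries arrive on a Go channel as the broker sends them.
msgs, err := ch.Consume("EventCaptureData", "demo1", false, false, false, false, nil)
if err == nil {
    for d := range msgs {
        log.Printf("delivered: %s", d.Body)
        d.Ack(false)
    }
}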
What is Model in RabbitMQ and is it there in RabbitMq-go?
If you're referring to the IModel and Connection.CreateModel in C# RabbitMQ, that's something from the C# lib, not from RabbitMQ itself. It was just an attempt to abstract away from the RabbitMQ "Channel" terminology, but it never caught on.
How do I send objects that failed processing to the dead-letter queue, and re-queue them again after the TTL?
Use the delivery.Nack method with requeue=false.
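For example (a sketch; d is an amqp.Delivery, and a dead-letter exchange is assumed to be configured on the queue, typically via the x-dead-letter-exchange argument):

// multiple=false, requeue=false: the broker dead-letters the message
// instead of putting it back on the original queue.
d.Nack(false, false)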
What is the significance of the consumerTag argument in the ch.Consume function in the code below?
The ConsumerTag is just a consumer identifier. It can be used to cancel the channel with channel.Cancel, and to identify the consumer responsible for the delivery. All messages delivered with the channel.Consume will have the ConsumerTag field set.
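For example (a sketch reusing the "demo1" tag from your code):

// Stop the "demo1" consumer. Deliveries already in flight are still
// handed to the Go channel, which is then closed.
if err := ch.Cancel("demo1", false); err != nil { // noWait=false
    log.Printf("cancel failed: %v", err)
}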
Should we use the channel.Get() or channel.Consume() for this scenario?
I think channel.Get() is almost never preferable over channel.Consume(). With channel.Get you'll be polling the queue and wasting CPU doing nothing, which doesn't make sense in Go.
What changes do I need to make in the code below to meet the above requirements?
Since you're processing 5 messages at a time, you can have a goroutine that receives from the consumer channel, and once it has gathered 5 deliveries, call another function to process them.
To acknowledge or send to the dead-letter queue you'll use the delivery.Ack or delivery.Nack functions. You can use multiple=true and call it once for the batch. Once a message goes to the dead-letter queue, you have to check the delivery.Headers["x-death"] header for how many times it has been dead-lettered, and call delivery.Reject when it has already been retried 5 times.
Use channel.NotifyCancel to handle the cancellation event.
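Putting those pieces together, here is a rough sketch (not a drop-in implementation), assuming the streadway/amqp client and the failOnError helper from your question; processBatch, the batch size, and the retry limit are placeholders:

const (
    batchSize  = 5
    maxRetries = 5
)

// deathCount reads the broker-set "x-death" header to see how many times
// a message has already been dead-lettered (the "count" field is present
// on reasonably recent RabbitMQ versions).
func deathCount(d amqp.Delivery) int64 {
    deaths, ok := d.Headers["x-death"].([]interface{})
    if !ok || len(deaths) == 0 {
        return 0
    }
    if entry, ok := deaths[0].(amqp.Table); ok {
        if c, ok := entry["count"].(int64); ok {
            return c
        }
    }
    return 0
}

func consumeBatches(ch *amqp.Channel, queueName string) {
    msgs, err := ch.Consume(queueName, "demo1", false, false, false, false, nil)
    failOnError(err, "Failed to register a consumer")

    // The broker notifies us here if it cancels the consumer (e.g. the queue is deleted).
    cancels := ch.NotifyCancel(make(chan string, 1))

    batch := make([]amqp.Delivery, 0, batchSize)
    for {
        select {
        case tag := <-cancels:
            log.Printf("consumer %s was cancelled by the broker", tag)
            return
        case d, ok := <-msgs:
            if !ok {
                return // deliveries channel closed
            }
            batch = append(batch, d)
            if len(batch) < batchSize {
                continue
            }
            if processBatch(batch) { // placeholder for your own processing
                // multiple=true acknowledges every delivery up to this one.
                batch[len(batch)-1].Ack(true)
            } else {
                for _, m := range batch {
                    if deathCount(m) >= maxRetries {
                        m.Reject(false) // retried enough: discard for good
                    } else {
                        m.Nack(false, false) // requeue=false: goes to the dead-letter exchange
                    }
                }
            }
            batch = batch[:0]
        }
    }
}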

Related

golang Confusing code RabbitMq forever blocked !?

I found an interesting piece of code when using rabbitMQ
forever := make(chan bool)

go func() {
    for d := range msgs {
        log.Printf("Received a message: %s", d.Body)
    }
}()

log.Printf(" [*] Waiting for messages. To exit press CTRL+C")
<-forever
This code blocks. Normally, this pattern would cause a deadlock error (screenshots of the runtime's deadlock error omitted).
But when I import the rabbitMQ package, this code does not cause an error.
Why is that? I'm confused.
Thanks for any answer! I hope someone can explain.
When you are using rabbitmq, the msgs variable is a receiving channel of type <-chan amqp.Delivery (see the docs), i.e. a channel on which messages are received. The range loop enters its body each time a message appears. This is a useful control sequence: block the main goroutine while the worker goroutine awaits and processes messages. There is no deadlock in this case because the rabbitmq connection will send messages down the msgs channel when appropriate.
In the earlier code, when you connect to the queue and instantiate the msgs channel, the amqp package creates another goroutine in the background to send messages down the msgs channel.
msgs, err := ch.Consume(
    q.Name, // queue
    "",     // consumer
    true,   // auto-ack
    false,  // exclusive
    false,  // no-local
    false,  // no-wait
    nil,    // args
)
This is unlike the deadlock examples provided, where there are no additional goroutines to send messages down the forever channel. The goroutine goes to sleep with no chance of being woken, so it is a deadlock.
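For comparison, a minimal sketch of the failing case: nothing ever sends on the channel, so the Go runtime detects that every goroutine is asleep and aborts.

package main

func main() {
    forever := make(chan bool)
    // No goroutine ever sends on forever, so this receive can never complete.
    // The runtime reports: "fatal error: all goroutines are asleep - deadlock!"
    <-forever
}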

why does my "newPendingTransactions" geth subscription not get any events?

I'm facing the following challenge. I am trying to subscribe to "newPendingTransactions" via websocket. I can successfully connect to the websocket. Once connected, I would expect a stream of new incoming pending transactions, which I can read and propagate to some channel (chan json.RawMessage).
...
c, httpResponse, err := websocket.DefaultDialer.Dial(u.String(), req.Header)
...
for {
    messageType, message, err := c.ReadMessage()
    if err != nil {
        log.Println("ERROR:", err)
        os.Exit(1)
        return
    }
    _, _ = message, messageType

    // s.Out is the outgoing chan json.RawMessage
    s.Out <- message
}
...
Sadly, I don't receive any messages (pending txs), only one when closing the whole construct. When I check my node directly with "txpool.status" in the console, I can see that new pending txs are coming in all the time. They just don't get propagated to my websocket connection. Can anyone help me out here? Maybe I am missing a parameter when starting the geth node itself?
here is how I start my geth node:
geth --http.api eth,web3,debug,txpool,net,shh,db,admin,debug --http --ws --ws.api eth,web3,debug,txpool,net,shh,db,admin,debug --ws.origins localhost --gcmode full --http.port=8545 --maxpeers 120
here is my "admin.nodeInfo":
Geth/v1.10.16-stable-20356e57/linux-amd64/go1.17.5
I found out what I did wrong:
It was not about any forgotten geth parameter; the "newPendingTransactions" were in fact being propagated correctly.
I tested the websocket connection with another tool called "wscat" and sent the necessary RPC via the console (resulting in a stream of tx hashes!):
{"id": 1, "method": "eth_subscribe", "params": ["newPendingTransactions"]}
That showed me that the error had to be in the Go code itself.
The last line inside the for loop
s.Out <- message
was sending messages into an unbuffered channel. For this to work, some other goroutine must consume the channel's messages on the other side.
It also helps to use a buffered channel like this:
s.Out = make(chan Message, 1000)
... so up to 1000 messages can be buffered before a send blocks.
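A sketch of the fix against the reader loop above (c and s.Out are from the snippets in the question; the consumer goroutine and the buffer size of 1000 are illustrative):

// Buffer the outgoing channel so the reader can get ahead of the consumer.
s.Out = make(chan json.RawMessage, 1000)

// Some goroutine has to drain the channel, otherwise sends block once the
// buffer is full (or immediately, with an unbuffered channel).
go func() {
    for msg := range s.Out {
        log.Printf("pending tx notification: %s", msg)
    }
}()

for {
    _, message, err := c.ReadMessage()
    if err != nil {
        log.Println("ERROR:", err)
        return
    }
    s.Out <- message
}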

Can't close confluent golang kafka consumer

I'm having an issue with disposing of the Kafka consumer at the end of program execution. Here is the code responsible for closing the consumer:
func (kc *KafkaConsumer) Dispose() {
    Sugar.Info("Disposing of consumer")
    kc.mu.Lock()
    kc.Consumer.Close()
    Sugar.Info("Disposed of consumer")
    kc.mu.Unlock()
}
As you might have already noticed, I'm making use of sync.Mutex, since the consumer is accessed by multiple goroutines. Below is another snippet responsible for reading messages from Kafka:
func (kc *KafkaConsumer) Consume(signalChan chan os.Signal, ctx context.Context) {
    for {
        select {
        case sig := <-signalChan:
            Sugar.Info("Caught signal %v", sig)
            break
        case <-ctx.Done():
            Sugar.Info("Got context done message. Closing consumer...")
            kc.Dispose()
            break
        default:
            for {
                message, err := kc.Consumer.ReadMessage(-1)
                if err != nil {
                    Log.Error(err.Error())
                    return
                }
                Sugar.Infof("Got a new message %v", message)

                resp := make(chan *KafkaResponseEntity)
                go router.UseMessage(*message, resp, ctx)

                // Potential deadlock
                response := <-resp

                /*
                    Explicit commit of an offset in order to ensure
                    that request has been successfully processed
                */
                kc.Consumer.Commit()
                Sugar.Info("Successfully commited an offset")
                Sugar.Infof("Just got a response %v", response)

                go producer.KP.Produce(response.PaymentId, response.Bytes, "some_random_topic")
            }
        }
    }
}
The problem is that when closing the consumer, program execution simply stalls.
Are there any issues? Should I use a cond along with the mutex? I'd be very glad if you could provide a thorough explanation of what might have gone wrong in my code.
Thanks in advance.
I suspect this is hanging because of:
kc.Consumer.ReadMessage(-1)
The documentation states that this call will block indefinitely, which is why the consumer is never closed. The simplest approach would be to pass a positive time duration instead (e.g. 1 * time.Second), but then you may get timeout errors if messages are not consumed within the timeout. The timeout error is generally innocuous, but it is something to account for; from the linked documentation:
Timeout is returned as (nil, err) where err is
`err.(kafka.Error).Code() == kafka.ErrTimedOut`
I'm not yet sure of a good way to utilize the indefinite blocking and allow it to be interrupted. If anyone does know please post the findings!
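In the meantime, a minimal sketch of the timeout-based read loop, assuming confluent-kafka-go and the kc/ctx/Log names from the question; the 1-second timeout is illustrative:

for {
    select {
    case <-ctx.Done():
        kc.Dispose() // reachable now, because ReadMessage returns regularly
        return
    default:
        message, err := kc.Consumer.ReadMessage(1 * time.Second)
        if err != nil {
            // A timeout just means "no message yet": loop and check ctx again.
            if kafkaErr, ok := err.(kafka.Error); ok && kafkaErr.Code() == kafka.ErrTimedOut {
                continue
            }
            Log.Error(err.Error())
            return
        }
        // process message as before
        _ = message
    }
}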

rabbitmq / amqp direct reply to and return notification

I use rabbitmq for delivering messages from various input-sources (websocket, rest, ...) to my workers. Every worker listens to a bunch of different routing-keys on a shared exchange.
Now it is possible that workerA handles "routeA". When my input-source sends something to routeA, workerA picks it up and consumes it.
But what happens if there is no routeA "consumer"? In that case I want the input source to "know" that there is no one for that request. And since there is no "rabbit consumer" consuming this message, it is discarded. (Sorry if the lingo isn't accurate.) As far as I understand the control message (?) handling, that is where NotifyReturn() (golang library for amqp) kicks in, so that the publisher can know that its message was discarded.
Here is a stripped-down example of my code. This approach works for me in a simple "just publish this message" scenario. It breaks for RPC.
RPC always triggers the case returnNotification := <-returnChannel: branch.
My question:
Am I holding it wrong? / Is this not the way to check the deliverability of a message?
Thanks!
Edit: Forgot to mention: the error is raised on the "reply". So the request is sent, but the reply (sent via the "just publish" path) gets a NO_ROUTE returnNotification.
Just publish
// error handling omitted for example code
tC, _ := rabbitConnection.Channel()
defer tC.Close()
tC.Confirm(false)

var ack = make(chan uint64)
var nack = make(chan uint64)
tC.NotifyConfirm(ack, nack)

returnChannel := make(chan amqp.Return)
tC.NotifyReturn(returnChannel)

p := someFunctionGeneratingAPublishing()
tC.Publish(
    exchange,
    e.GetRoutingKey(),
    true,
    false,
    *p,
)

select {
case returnNotification := <-returnChannel:
    if returnNotification.ReplyCode == amqp.NoRoute {
        return fmt.Errorf("no amqp route for %s", e.GetRoutingKey())
    }
case <-ack:
    return nil
case <-nack:
    return fmt.Errorf("basic nack for %s", e.GetRoutingKey())
}
RPC
publishing := someFunctionGeneratingAPublishing()
publishing.ReplyTo = "amq.rabbitmq.reply-to"

con := GetConnection()
directChannel, _ := con.Channel()
defer directChannel.Close()
directChannel.Confirm(false)

var ack = make(chan uint64)
var nack = make(chan uint64)
directChannel.NotifyConfirm(ack, nack)

returnChannel := make(chan amqp.Return)
directChannel.NotifyReturn(returnChannel)

// consume the direct-reply-to pattern queue
deliveryChan, _ := directChannel.Consume(
    "amq.rabbitmq.reply-to",
    "",
    true,
    false,
    false,
    false,
    nil,
)

directChannel.Publish(
    exchange,
    e.GetRoutingKey(),
    true,
    false,
    *publishing,
)

select {
case returnNotification := <-returnChannel:
    if returnNotification.ReplyCode == amqp.NoRoute {
        return fmt.Errorf("no amqp route for %s", e.GetRoutingKey())
    }
case <-ack:
    return nil
case <-nack:
    return fmt.Errorf("basic nack for %s", e.GetRoutingKey())
}
OK, after more wandering in the dark I finally realized my fundamental error: not carefully reading the documentation:
If the RPC server publishes with the mandatory flag set then amq.rabbitmq.reply-to.* is treated as not a queue; i.e. if the server only publishes to this name then the message will be considered "not routed"; a basic.return will be sent if the mandatory flag was set.
RabbitMQ Direct-Reply-To Article
It is by design that the reply messages, in my configuration, are considered "not routed".
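One way to deal with this is simply not to set the mandatory flag when publishing the reply, so the broker does not send a basic.return for the pseudo-queue. A sketch (tC is the channel from the "just publish" snippet; d is assumed to be the amqp.Delivery that carried the request, and replyBody is a placeholder):

// Publish the reply to the pseudo-queue named in the request's ReplyTo.
// mandatory=false: no basic.return even though amq.rabbitmq.reply-to.* is
// not a real queue.
if err := tC.Publish(
    "",        // default exchange
    d.ReplyTo, // the amq.rabbitmq.reply-to.* name from the request
    false,     // mandatory
    false,     // immediate
    amqp.Publishing{Body: replyBody},
); err != nil {
    return err
}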

How to perform concurrent downloads in Go

We have a process whereby users request files that we need to get from our source. This source isn't the most reliable so we implemented a queue using Amazon SQS. We put the download URL into the queue and then we poll it with a small app that we wrote in Go. This app simply retrieves the messages, downloads the file and then pushes it to S3 where we store it. Once all of this is complete it calls back a service which will email the user to let them know that the file is ready.
Originally I wrote this to create n channels, attach one goroutine to each, and have each goroutine run in an infinite loop. This way I could ensure that I was only ever processing a fixed number of downloads at a time.
I realised that this isn't the way channels are supposed to be used and, if I'm understanding correctly now, there should actually be one channel with n goroutines receiving on it. Each goroutine is in an infinite loop, waiting on a message; when it receives one it processes the data, does everything it's supposed to, and when it's done it waits for the next message. This allows me to ensure that I'm only ever processing n files at a time. I think this is the right way to do it. I believe this is fan-out, right?
What I don't need to do, is to merge these processes back together. Once the download is done it is calling back a remote service so that handles the remainder of the process. There is nothing else that the app needs to do.
OK, so some code:
func main() {
    queue, err := ConnectToQueue() // This works fine...
    if err != nil {
        log.Fatalf("Could not connect to queue: %s\n", err)
    }

    msgChannel := make(chan sqs.Message, 10)

    for i := 0; i < MAX_CONCURRENT_ROUTINES; i++ {
        go processMessage(msgChannel, queue)
    }

    for {
        response, _ := queue.ReceiveMessage(MAX_SQS_MESSAGES)
        for _, m := range response.Messages {
            msgChannel <- m
        }
    }
}

func processMessage(ch <-chan sqs.Message, queue *sqs.Queue) {
    for {
        m := <-ch

        // Do something with message m

        // Delete message from queue when we're done
        queue.DeleteMessage(&m)
    }
}
Am I anywhere close here? I have n running goroutines (where MAX_CONCURRENT_ROUTINES = n) and in the loop we keep passing messages into the single channel. Is this the right way to do it? Do I need to close anything or can I just leave this running indefinitely?
One thing I'm noticing is that SQS is returning messages, but once 10 messages have been passed into processMessage() (10 being the size of the channel buffer), no further messages are actually processed.
Thanks all
That looks fine. A few notes:
You can limit the work parallelism by means other than limiting the number of worker routines you spawn. For example you can create a goroutine for every message received, and then have the spawned goroutine wait for a semaphore that limits the parallelism. Of course there are tradeoffs, but you aren't limited to just the way you've described.
sem := make(chan struct{}, n)

work := func(m sqs.Message) {
    sem <- struct{}{} // When there's room we can proceed
    // do the work
    <-sem // Free room in the channel
}

for {
    response, _ := queue.ReceiveMessage(MAX_SQS_MESSAGES)
    for _, m := range response.Messages {
        go work(m)
    }
}
The limit of only 10 messages being processed is being caused elsewhere in your stack. Possibly you're seeing a race where the first 10 fill the channel, and then the work isn't completing, or perhaps you're accidentally returning from the worker routines. If your workers are persistent per the model you've described, you'll want to be certain that they don't return.
It's not clear whether you want the process to return after you've processed some number of messages. If you do want this process to exit, you'll need to wait for all the workers to finish their current tasks and probably signal them to return afterwards. Take a look at sync.WaitGroup for synchronizing their completion, and either use another channel to signal that there's no more work, or close msgChannel and handle that in your workers. (Take a look at the 2-tuple channel receive expression.)
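A minimal sketch of that shutdown pattern, reusing the question's msgChannel/queue setup; the wiring (closing msgChannel plus a WaitGroup) is illustrative:

var wg sync.WaitGroup

for i := 0; i < MAX_CONCURRENT_ROUTINES; i++ {
    wg.Add(1)
    go func() {
        defer wg.Done()
        // Ranging over the channel exits cleanly once it is closed.
        for m := range msgChannel {
            // process m, then delete it from the queue
            queue.DeleteMessage(&m)
        }
    }()
}

// ... feed msgChannel as before ...

close(msgChannel) // signal the workers that there is no more work
wg.Wait()         // block until every worker has finished its current task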
