AMQP gem specifying a dead letter exchange - ruby

I've specified a queue on the RabbitMQ server called MyQueue. It is durable and has x-dead-letter-exchange set to MyQueue.DLX.
(I also have an exchange called MyExchange bound to that queue, and another exchange called MyQueue.DLX, but I don't believe this is important to the question)
If I use Ruby's amqp gem to subscribe to those messages, I would do it like this:
# Doing this beforehand and in a new thread has to do with how my code is structured;
# shown here in case it has a bearing on the question
Thread.new do
  AMQP.start('amqp://guest:guest@127.0.0.1:5672')
end

EventMachine.next_tick do
  channel = AMQP::Channel.new(AMQP.connection)
  queue = channel.queue("MyQueue", :durable => true, :'x-dead-letter-exchange' => "MyQueue.DLX")

  queue.subscribe(:ack => true) do |metadata, payload|
    p metadata
    p payload
  end
end
If I execute this code with the queues and exchanges already created and bound (as they need to be in my set up) then RabbitMQ throws the following error in its logs:
=ERROR REPORT==== 19-Aug-2013::14:25:53 ===
connection <0.19654.2>, channel 2 - soft error:
{amqp_error,precondition_failed,
"inequivalent arg 'x-dead-letter-exchange'for queue 'MyQueue' in vhost '/': received none but current is the value 'MyQueue.DLX' of type 'longstr'",
'queue.declare'}
This seems to be saying that I haven't specified the same dead letter exchange as the pre-existing queue, but I believe I have, with the queue = ... line.
Any ideas?

The DLX info is passed in the arguments option:
queue = channel.queue("MyQueue", {durable: true, arguments: {"x-dead-letter-exchange" => "MyQueue.DLX"}})

I had the same error, even when using @Karl Wilbur's format for the options.
Looks like your "MyQueue" already exists on the RabbitMQ server (durable: true) and it exists without a dead letter exchange configuration.
queue = channel.queue("MyQueue", :durable => true, :'x-dead-letter-exchange' => "MyQueue.DLX")
This will not create a new queue if one already exists with the name "MyQueue". Instead it will try to connect to the existing one, but the options/arguments etc. have to be the same, or you get an error like the one you got.
All you have to do is delete the old one and run your code again (with Karl's suggestion).
I used the RabbitMQ management GUI to delete mine; see here regarding deleting queues.
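Putting both answers together, a minimal sketch of the corrected subscriber (connection details are taken from the question; the :arguments option follows the amqp gem's documented way of passing queue arguments, and this assumes the stale queue has been deleted first):

Thread.new do
  AMQP.start('amqp://guest:guest@127.0.0.1:5672')
end

EventMachine.next_tick do
  channel = AMQP::Channel.new(AMQP.connection)

  # x-dead-letter-exchange must go inside :arguments, not as a top-level option
  queue = channel.queue("MyQueue",
    :durable   => true,
    :arguments => { "x-dead-letter-exchange" => "MyQueue.DLX" })

  queue.subscribe(:ack => true) do |metadata, payload|
    p metadata
    p payload
  end
end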

Related

Ruby, AWS SQS: how to get all the messages from a queue without removing them from the queue

How can I download all the messages in an SQS queue while keeping them in the queue?
I need this for analysis purposes, not to actually process the messages; they need to remain in the queue after my download.
The AWS API allows you to download messages from the queue in batches of up to 10. The problem is that if you request messages several times, you may receive the same messages again.
The trick is to keep the downloaded messages hidden from subsequent requests, at least until you have downloaded them all. This has consequences: for example, the messages won't be accessible to other consumers in the meantime either.
Example code:
require "aws-sdk" # gem "aws-sdk", "~> 3"
client = Aws::SQS::Client.new(:region => "eu-west-1")
queue_url = "https://sqs.eu-west-1.amazonaws.com/XXXX/your_queueu"
queue =
Aws::SQS::Queue.new({
:url => queue_url,
:client => client
})
loop do
# [Aws::SQS::Queue.receive_messages documentation](http://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/SQS/Queue.html#receive_messages-instance_method)
messages =
queue.receive_messages({
:max_number_of_messages => 10,
:visibility_timeout => 10 # make this as big as necessary to give time to the script to get all the Messages
})
messages.each do |message|
# [Aws::SQS::Message documentation](http://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/SQS/Message.html)
puts message.body # send the output to a file or where do you want
end
break if messages.length.zero?
end
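If the messages should become visible to other consumers again as soon as the export finishes, rather than waiting for the visibility timeout to expire, one option (a sketch, not part of the original answer) is to collect the receipt handles while downloading and then reset their visibility with the SDK's change_message_visibility call:

receipt_handles = []   # append message.receipt_handle for each message inside the loop above

receipt_handles.each do |handle|
  # Make the message visible again immediately instead of after :visibility_timeout
  client.change_message_visibility({
    :queue_url          => queue_url,
    :receipt_handle     => handle,
    :visibility_timeout => 0
  })
end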

direct reply pseudo queue with bunny gem

I am creating a RabbitMQ RPC client in Ruby 2.3 using Bunny 2.7.0.
I've made it work with one reply queue per client, but I expect to have quite a large number of clients and it is not efficient to do it this way. I want to use RabbitMQ's direct reply-to feature:
connection = Bunny.new(rabbitmq_url, :automatically_recover => true)
connection.start
channel = connection.create_channel
reply_queue = channel.queue('amq.rabbitmq.reply-to', no_ack: true)
On the last line of code I receive this error:
Bunny::AccessRefused: ACCESS_REFUSED - queue name 'amq.rabbitmq.reply-to' contains reserved prefix 'amq.*'
In theory that is expected, according to http://rubybunny.info/articles/queues.html,
but on the other hand there is an article, https://www.rabbitmq.com/direct-reply-to.html, that describes the existence and usage of this pseudo-queue.
I want to declare the queue because I need to subscribe to it to receive the response:
consumer = reply_queue.subscribe do |_, properties, payload|
  # action
end
I don't understand what I am doing wrong here.
There are similar topics with examples of this approach in other languages and tools like Node.js, and those seem to work fine. What am I doing wrong with Bunny?
Update
Found the problem: I was using an older version of the RabbitMQ server, one that did not support direct reply-to yet.
I think Bunny is trying to declare the queue, which you're not allowed to do.
https://lists.rabbitmq.com/pipermail/rabbitmq-discuss/2013-September/030095.html
My Ruby is a tad rusty, but give this a try:
channel = connection.create_channel
channel.queue_declare('amq.rabbitmq.reply-to', :passive => true)
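For reference, a minimal sketch of the full direct reply-to flow with Bunny, assuming a server version (3.4.0 or later) that supports it. The request queue name "rpc_requests" is a placeholder, and basic_consume is used because the pseudo-queue has to be consumed in no-ack mode rather than declared:

connection = Bunny.new(rabbitmq_url, :automatically_recover => true)
connection.start
channel = connection.create_channel

# Consume from the pseudo-queue in no-ack mode *before* publishing the request;
# there is nothing to declare, so basic_consume is used directly.
channel.basic_consume("amq.rabbitmq.reply-to", "", true) do |_delivery_info, _properties, payload|
  puts "reply: #{payload}"
end

# Publish the request with reply_to pointing at the pseudo-queue.
channel.default_exchange.publish("ping",
  :routing_key => "rpc_requests",
  :reply_to    => "amq.rabbitmq.reply-to")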

Catching RabbitMQ connection loss mid Request on Passenger

I am using the bunny gem to publish messages to a RabbitMQ.
Following the recommendation given in the official documentation for use with Passenger in Rack apps, I added connection creation to be executed after a new worker process was started.
if defined?(PhusionPassenger)
  PhusionPassenger.on_event(:starting_worker_process) do |forked|
    if forked
      # We're in a smart spawning mode
      # Now is a good time to connect to RabbitMQ
      #
      # Every process will get its own connection!
      $rabbitmq_connection = Bunny.new rabbit[settings.environment.to_s]
      $rabbitmq_connection.start
    end
  end
else # For non-Passenger environments - e.g. specs or rackup
  $rabbitmq_connection = Bunny.new rabbit[settings.environment.to_s]
  $rabbitmq_connection.start
end
This works pretty well; however, when the connection is lost mid-request (before the message could be published), no exception is caught. The process just seems to die and return the generic Apache error page, and logging doesn't work anymore either.
Thus, in my specific case, a database entry was created, but I could not cleanly write a log message indicating that (and which) message could not be published.
One workaround I found is to just create the connection to RabbitMQ on a per-request basis by establishing the connection directly before actually publishing the message.
That doesn't seem to be very efficient though, given that Passenger worker processes handle more than a single request before they are discarded.
It does, however, properly catch the exception, log it, and continue on with the request handling:
def publish_message(exchange, key, message)
  rabbitmq_connection = Bunny.new $rabbit
  rabbitmq_connection.start

  ch = rabbitmq_connection.create_channel
  exchange = ch.exchange(exchange, durable: true, type: :topic)
  exchange.publish(message, routing_key: key, persistent: true)

  ch.close
  rabbitmq_connection.close
rescue Exception => e
  $log_file.error "Sending message #{message} to #{exchange} with key #{key} failed: #{e.message}"
end
Now I am wondering how to catch this when following the recommended approach,
and whether there is any other best practice for Rack apps with Passenger that I just haven't found yet.
I'd appreciate any hints leading me to a better solution than my workaround.
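The question was left open, but one direction worth sketching: keep the process-wide connection from the Passenger hook and wrap just the publish in a rescue for Bunny's error hierarchy, so a dropped connection surfaces as a loggable failure instead of killing the worker. This is only an assumption of how it might look; Bunny::Exception is the gem's base error class, and the helper below is hypothetical, not from the original post:

def publish_message(exchange_name, key, message)
  ch = $rabbitmq_connection.create_channel
  exchange = ch.exchange(exchange_name, durable: true, type: :topic)
  exchange.publish(message, routing_key: key, persistent: true)
  ch.close
rescue Bunny::Exception, IOError => e
  # Connection was lost or never recovered; log instead of letting the request die.
  $log_file.error "Sending message #{message} to #{exchange_name} with key #{key} failed: #{e.message}"
end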

RabbitMQ - Ruby - Bunny - Best practice to (re-)publish messages to next queue

We are evaluating RabbitMQ for a workflow use case.
So far we have created a test environment in Ruby which fits our needs and seems to work fine.
The question I have, since we are RabbitMQ newbies, is about good practice.
Let's define 3 queues (just for the example):
Q_DECISION
Q_RIGHT
Q_LEFT
Every producer will post messages to Q_DECISION.
There is a worker running on that queue which checks some content of the body. Depending on the decision, the message/task has to be moved to Q_LEFT or Q_RIGHT.
We are storing message-specific information in properties.headers, so we re-publish those as well as the body.
So far no problem; the question is now about re-publishing:
q_decision.subscribe(:block => true) do |delivery_info, properties, body|
  # ... more code here
  if (decision_left)
    q_left.publish(body, :headers => properties.headers, :persistent => true)
  end
end
If we re-publish like above, do we lose something from the previous message?
There are a lot of attributes defined/stored in delivery_info and properties as well.
Do we have to re-publish them, or only the headers we created ourselves and the body?
Message body and message headers are two different things. I would assume Bunny or any client library will create a new message with the body you are passing. This means you need to re-set the headers you want to pass to the next queue as well.
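A rough sketch of what the re-publish could look like, reusing the question's q_decision/q_left/q_right handles and decision_left flag; the properties copied here (content_type, correlation_id, message_id) are just examples, and the option names follow Bunny's publish API:

q_decision.subscribe(:block => true) do |delivery_info, properties, body|
  target = decision_left ? q_left : q_right

  # Only what you publish explicitly travels on; copy any properties you still need.
  target.publish(body,
    :headers        => properties.headers,
    :content_type   => properties.content_type,
    :correlation_id => properties.correlation_id,
    :message_id     => properties.message_id,
    :persistent     => true)
end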

Posting large number of messages to AMQP queue

Using v0.7.1 of the Ruby amqp library and Ruby 1.8.7, I am trying to post a large number (millions) of short (~40 bytes) messages to a RabbitMQ server. My program's main loop (well, not really a loop, but still) looks like this:
AMQP.start(:host     => '1.2.3.4',
           :username => 'foo',
           :password => 'bar') do |connection|
  channel  = AMQP::Channel.new(connection)
  exchange = channel.topic("foobar", {:durable => true})

  i = 0
  EM.add_periodic_timer(1) do
    print "\rPublished #{i} commits"
  end

  results = get_results # <- Returns an array

  processor = proc do
    if x = results.shift then
      exchange.publish(x, :persistent => true,
                          :routing_key => "test.#{i}")
      i += 1
      EM.next_tick processor
    end
  end

  EM.next_tick(processor)
  AMQP.stop { EM.stop }
end
The code starts processing the results array just fine, but after a while (usually, after 12k messages or so) it dies with the following error
/Library/Ruby/Gems/1.8/gems/amqp-0.7.1/lib/amqp/channel.rb:807:in `send':
The channel 1 was closed, you can't use it anymore! (AMQP::ChannelClosedError)
No messages are stored on the queue. The error seems to be happening just when network activity from the program to the queue server starts.
What am I doing wrong?
First mistake is that you didn't post the RabbitMQ version that you are using. Lots of people are running old obsolete version 1.7.2 because that is what is in their OS package repositories. Bad move for anyone sending the volume of messages that you are. Get RabbitMQ 2.5.1 from the RabbitMQ site itself and get rid of your default system package.
Second mistake is that you did not tell us what is in the RabbitMQ logs.
Third mistake is that you said nothing about what is consuming the messages. Is there another process running somewhere that has declared a queue and bound it to the exchange? There is NO message queue unless somebody declares it to RabbitMQ and binds it to an exchange. Even then, messages will only flow if the binding key for the queue matches the routing key that you publish with.
Fourth mistake. You have routing keys and binding keys mixed up. The routing key is a string such as topic.test.json.echos and the binding key (used to bind a queue to an exchange) is a pattern like topic.# or topic.*.json.
Updated after your clarifications
Regarding versions, I'm not sure when it was fixed but there was a problem in 1.7.2 with large numbers of persistent messages causing RabbitMQ to crash when it rolled over its persistence log, and after crashing it was unable to restart until someone manually undid the rollover.
When you say that a connection is being opened and closed, I hope that it is not per message. That would be a strange way to use AMQP.
Let me repeat. Producers do NOT write messages to queues. They write messages to exchanges which then route the messages to queues based on the routing key (string) and the queue's binding key (pattern). In your example I misread the use of the # sign, but I see nothing which declares a queue and binds it to the exchange.
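To illustrate the missing piece, here is a minimal consumer-side sketch in the same amqp gem the question uses; the queue name and binding key are placeholders, and the :routing_key bind option follows the gem's documentation (very old 0.7.x releases may spell it :key):

AMQP.start(:host => '1.2.3.4', :username => 'foo', :password => 'bar') do |connection|
  channel  = AMQP::Channel.new(connection)
  exchange = channel.topic("foobar", :durable => true)

  # Without a queue declared and bound to the exchange, published messages are simply dropped.
  queue = channel.queue("test_consumer", :durable => true)
  queue.bind(exchange, :routing_key => "test.#")

  queue.subscribe do |metadata, payload|
    puts payload
  end
end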
