RabbitMQ: Connecting & publishing to an existing queue in Ruby

I have two process types on Heroku: a web dyno in Ruby and a worker in Node.js. I'm using the RabbitMQ addon (currently beta) to pass a message from Ruby to Node. Node connects and consumes correctly, and Ruby connects and publishes correctly as long as it is the first to connect / create the queue.
Apparently, Carrot throws some odd errors when you try to create a queue that already exists. That is how I discovered why my message wasn't getting across (I could have sworn it worked when I tested last night): I had started my Node process before my Ruby one.
Since I'm on Heroku, I'll have more than one Ruby and more than one Node process running concurrently, and each needs to work whether it is the first to create the queue or is connecting to a queue that already exists.
Which brings me to my question:
How do I connect to an existing RabbitMQ queue, using Ruby, for the purpose of publishing messages to consumers which are already connected and waiting to receive messages?

Carrot will silently fail if there is a collision with an existing queue.
To connect to an existing queue without colliding, you must pass the same options that were used when the queue was first declared.
It's unfortunate that Carrot fails silently in this case, but that's how it behaves.
Ruby:
Carrot.server
q = Carrot.queue('onboarding', :durable => true, :auto_delete => false)
q.publish('test')
Node.js:
var amqp = require("amqp");
var connection = amqp.createConnection({ host: 'localhost' });

// Wait for the connection to become ready before declaring the queue
connection.on('ready', function () {
  var q = connection.queue('onboarding', { durable: true, autoDelete: false }, function () {
    q.subscribe({ ack: true }, function (message) {
      console.log(message.data.toString());
      q.shift(); // ack so the next message is delivered
    });
  });
});

Have you tried the other client(s)?
http://rubyamqp.info/

Related

Receiver of ActiveMQ: not able to retrieve whats in the Queue

I have run the hello-world program described at http://www.coderpanda.com/jms-example-using-apache-activemq/ and downloaded the ActiveMQ jar and related files as mentioned there. All the Java files compile and run. The receiver compiles and executes successfully, but no output message appears on the console: the message sent to the queue is never retrieved. I can see the message count increasing on the ActiveMQ web UI (hosted at the localhost URL) on each hit, but the message put on the queue is never printed/retrieved. Can anyone suggest any other implementation for publisher/subscriber? Or your thoughts on JMS Q...
The answer from Vihar is right.
When you see the dequeued message count increasing, it is clear that the message was successfully consumed by some consumer. So when you ran your receiver, why did the consumer count on the queue increase? Are there multiple instances, or did you not close the connection properly?
I did not close the connection, and the consumers consumed the messages in the queue when I ran the receiver multiple times. I had no clue why or how that happened until I ran it one at a time while keeping a tab on the queue.

Commit blocks using spring-amqp and rabbitmq when disk_free_limit threshold is reached

We are using rabbitmq 3.0.1 on CentOS 6, and as a client Spring spring-rabbit version 1.1.2.RELEASE. (I know these aren't the latest versions, see later).
We send messages to rabbitmq via this client, initiated by an external REST call: someone calls our web service, which updates the database and sends the AMQP message. I would like to be informed if rabbitmq blocks the client - for instance when the disk_free_limit threshold is reached.
Importantly, I would like to be informed in the same thread as that processing the web request, so that I can rollback the transaction.
Our web service also updates a database (within a transaction, obviously). Normally this works fine. However, under certain circumstances rabbitmq can block our web server - most obviously when the disk_free_limit is reached. This blocks the web server thread indefinitely. The external caller of the web service will time out after a sensible period, but the thread in our web service doesn't - it stays around, keeping its resources and, importantly, the transaction open.
The web server thread is blocking because it is transactional. It isn't the initial message that blocks; it is the commit. I assume rabbitmq is blocking because it can't persist the message, or something like that. The thread blocks until rabbitmq sends the commit-ok message back. The relevant code is deep within the rabbitmq client implementation, in com.rabbitmq.client.impl.ChannelN:
public Tx.CommitOk txCommit() throws IOException
{
    return (Tx.CommitOk) exnWrappingRpc(new Tx.Commit()).getMethod();
}
and this eventually calls the following method from com.rabbitmq.client.impl.AMQChannel
public T getReply() throws ShutdownSignalException
{
    return _blocker.uninterruptibleGetValue();
}
The preferred solution would be some sort of timeout on the txCommit - then I could throw an exception and fail the web service with a 500 or whatever. I can't find any way of doing this.
What I have found is:
addBlockedListener - this adds a listener for a notification rabbitmq sends when it is blocked. This is good, but the notification is handled by another thread, so I can't fail the web service from it. Using this I can at least log the fact that rabbitmq is blocked, through syslog or whatever. However, it isn't available in the version we run - we would have to upgrade to the latest, which we would prefer not to do because of the testing it would imply.
setConnectionTimeout(int) - this sets the connection timeout for the initial connection to rabbitmq. This doesn't apply in my case, because rabbitmq is up and running and accepts the connection.
AmqpTemplate.setReplyTimeout() - as shown above, this reply timeout does not apply to the commit.
I fully understand that this situation (disk_free_limit threshold is breached) is a situation which should not occur in a production system. However, I would like to be able to cope nicely with this situation so that my application behaves nicely when one of its components (rabbitmq) has a problem.
So, what other options do I have? Is there any way, short of rewriting portions of the Spring AMQP client or removing the transactionality, of doing what I want?
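Since this document's other examples are in Ruby, the pattern being asked for - bounding a blocking commit so the request thread can fail fast and roll back - can be sketched generically. This is a hypothetical illustration, not a Spring AMQP API; note that interrupting a thread blocked in native I/O is not always reliable, which mirrors why the Java client's uninterruptibleGetValue cannot be interrupted.

```ruby
require "timeout"

# Hypothetical sketch: run a blocking commit with an upper bound, so the
# web-request thread can raise and roll back instead of hanging forever.
def commit_with_timeout(seconds)
  Timeout.timeout(seconds) { yield }
rescue Timeout::Error
  raise "broker commit did not complete within #{seconds}s"
end
```

The caller would wrap the synchronous commit call in the block, then translate the raised error into a 500 response and a transaction rollback.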

How to check if the channel is still working in streadway/amqp RabbitMQ client? [duplicate]

This question already has answers here:
How to detect dead RabbitMQ connection?
(4 answers)
Closed 10 months ago.
I'm using github.com/streadway/amqp for my program. How should I make sure that the channel I'm using for consumption and/or production is still working, before re-initializing it?
For example, in ruby, I could simply do:
bunny_client = Bunny.new({....})
bunny_client.start
to start the client, and
if not bunny_client or bunny_client.status != :connected
  # re-initialize the client
end
How to do this with streadway/amqp client?
The QueueDeclare and QueueInspect functions may provide equivalent functionality. According to the docs:
QueueInspect:
Use this method to check how many unacknowledged messages reside in the queue and how many consumers are receiving deliveries and whether a queue by this name already exists.
If a queue by this name exists, use Channel.QueueDeclare to check whether it is declared with specific parameters.
If a queue by this name does not exist, an error will be returned and the channel will be closed.
QueueDeclare:
QueueDeclare declares a queue to hold messages and deliver to consumers. Declaring creates a queue if it doesn't already exist, or ensures that an existing queue matches the same parameters.
It also looks like there's some good info in those docs about durable queues (which survive server restarts).
I have worked around most connectivity issues by trial and error, seeing which patterns work and which don't. I think the best method is to flag a channel on error when using it; if a publish fails, the channel is flagged on the library side as well. Errors received from the server terminate the channel automatically anyway, so the flag simply tells my channel pool to rebuild flagged channels.
You can use my library in golang as an example to create a connection/channelpool and how I flag channels on error.
https://github.com/houseofcat/turbocookedrabbit
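In keeping with this document's Ruby examples, the flag-and-rebuild idea can be shown in a few lines of plain Ruby (stub channels, no broker; the Go library linked above does this with far more care):

```ruby
# Toy sketch of the flag-and-rebuild pattern: channels are handed out from a
# pool, flagged on error, and transparently rebuilt on the next checkout.
# The factory block stands in for real channel creation against a broker.
class ChannelPool
  def initialize(&factory)
    @factory  = factory
    @channels = {}
    @flagged  = {}
  end

  # Return a healthy channel for +key+, rebuilding it if it was flagged.
  def checkout(key)
    if @channels[key].nil? || @flagged[key]
      @channels[key] = @factory.call
      @flagged[key]  = false
    end
    @channels[key]
  end

  # Mark a channel as broken (e.g. after a failed publish or a server error).
  def flag(key)
    @flagged[key] = true
  end
end
```

Callers flag a channel whenever an operation on it raises; the next checkout silently replaces it.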

Way to automatically clear all applications connected to the queue

We have an environment where MQ acts as an interface between websites and Micro Focus. Sometimes a message gets stuck in a queue, thereby blocking all communication over that particular queue. If the queue depth increases greatly, all communication in the queue manager stops.
When we check the status of the queue, we see that the Micro Focus process is present there.
Is there are way to automatically clear all applications connected to the queue?
I don't think it's possible to force-close an application's handle on a given queue, but you could have a script that runs a couple of MQSC commands against the queue manager: first get the connection identifier using the DISPLAY CONN command, then close the connection using the STOP CONN command. You could then set up a trigger on the queue that executes the script once a certain queue depth has been reached.
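For illustration only, the MQSC sequence such a script might run could look roughly like this (the queue name and connection id are placeholders; check the DISPLAY CONN attributes available on your MQ version):

```
DISPLAY CONN(*) TYPE(HANDLE) WHERE(OBJNAME EQ 'MY.QUEUE')
STOP CONN(<connection id from the output above>)
```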

Ruby AMQP persistent message is deleted after restarting RabbitMQ

I have a ruby script that creates a message using AMQP in RabbitMQ.
# above code sets up config for connecting to RabbitMQ via APMQ
AMQP.start(:host => 'localhost') do
  amq = MQ.new
  amq.queue('initiate', :durable => true).publish(message_id, :persistent => true)
  AMQP.stop { EM.stop }
end
If the RabbitMQ server is restarted, the message is no longer in the initiate queue (or any queue, for that matter). What am I doing wrong that the message is not persistent? I've also tried explicitly creating a durable exchange, and binding the queue to that exchange, but the message is still deleted after RabbitMQ restart.
As already mentioned, if you just mark messages as persistent they will not necessarily get persisted straight away, so if the server shuts down unexpectedly they may never end up on disk.
So what do you do if you really need the message to be on disk, even if the server crashes?
There are two things you can do. One is to wrap your publish in a transaction. When you have committed the transaction, the message will be on disk (if it's not already delivered to a consumer of course). However, this adds a synchronous call to the server, so it can slow you down. If you know you're going to publish a lot of messages, you can wrap a bunch of publishes in a transaction, then when you commit you know they're all on disk.
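That transactional wrapping can be sketched as follows, assuming a channel object with Bunny-style tx_select/tx_commit methods (a stub with the same interface is enough for illustration; a real channel requires a live broker):

```ruby
# Wrap a batch of publishes in an AMQP transaction: tx_commit only returns
# once the broker has accepted (and, for persistent messages, written) them.
def publish_transactionally(channel, queue, messages)
  channel.tx_select                            # put the channel in tx mode
  messages.each { |m| queue.publish(m, :persistent => true) }
  channel.tx_commit                            # synchronous round-trip
end
```

Batching many publishes per commit amortizes the synchronous round-trip the answer warns about.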
The other (higher performance) alternative is to use publish confirms. But these are new in the 2.3.1 server and I don't think any Ruby clients support them yet.
Finally, RabbitMQ will in any case periodically flush persistent messages to disk, even in the absence of confirms, transactions and controlled shutdowns. However, there's a bug in 2.2.0 which means this sometimes doesn't happen for a long time, so upgrading to 2.3.1 might be worthwhile.
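For reference, in a modern Bunny (not available when this answer was written), the confirms flow looks roughly like this; a stub responding to the same confirm_select/wait_for_confirms methods is enough to illustrate it without a broker:

```ruby
# Publisher confirms: enable confirm mode once on the channel, publish, then
# block until the broker has confirmed everything published so far.
def publish_with_confirms(channel, queue, message)
  channel.confirm_select                       # enable confirm mode
  queue.publish(message, :persistent => true)
  channel.wait_for_confirms                    # true if all were confirmed
end
```

Unlike transactions, confirms let the broker acknowledge asynchronously, so throughput is much higher when publishing in bulk.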
Funny, I was just Googling for the same problem. RabbitMQ 2.2.0, default options. In my case, Ruby clients using rubygem-amqp-0.6.7-3.el5 from EPEL, with durable queues bound to a durable fanout exchange, publishing messages with :persistent => true. Messages lost on server restart.
-Alan
Yes, Simon is right. Regarding publisher confirms (described at http://www.rabbitmq.com/blog/2011/02/10/introducing-publisher-confirms), I plan to support them in the amqp gem 0.8, which should be released soon.
BTW, in the original example, the first argument for publish is supposed to be the actual data, everything else is specified via options, so it's publish(message, opts) rather than publish(message_id, opts).
