Ruby Bunny - Consuming from Multiple Queues

I’ve just started using Ruby and am writing a piece to consume some messages from a RabbitMQ queue. I’m using Bunny to do so.
So I’ve created my queues and bound them to an exchange.
However, I’m now unsure how to subscribe to both queues while keeping the Ruby app running (I want the messages to keep coming through, i.e. the consumers shouldn't be blocked, or at least not for long) until I actually exit it with Ctrl+C.
I’ve tried using :block => true, but since I’m subscribing to two different queues, that leaves the app consuming from only one of them.
So this is how I’m consuming messages:
def consumer
  begin
    puts ' [*] Waiting for messages. To exit press CTRL+C'
    @oneQueue.subscribe(:manual_ack => true) do |delivery_info, properties, payload|
      puts 'Got One Queue'
      puts "Received #{payload}, message properties are #{properties.inspect}"
    end
    @twoQueue.subscribe(:manual_ack => true) do |delivery_info, properties, payload|
      puts 'Got Two Queue'
      puts "Received #{payload}, message properties are #{properties.inspect}"
    end
  rescue Interrupt => _
    # TODO: close connections here
    exit(0)
  end
end
Any help would be appreciated.
Thanks!

You can't use block: true when you have two subscriptions: only the first one will block, so execution never reaches the second subscription.
One thing you can do is set up both subscriptions without blocking (which will automatically spawn two threads to process messages), and then block your main thread with a wait loop (add just before your rescue):
loop { sleep 5 }
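The shape of this fix can be sketched without a RabbitMQ server at all; below, plain Ruby threads and queues stand in for Bunny's non-blocking subscriptions and its consumer threads (the names `one_queue`/`two_queue` are illustrative, not Bunny API):

```ruby
require 'thread'

# Stand-ins for the two RabbitMQ queues and a sink for what we consume.
one_queue = Queue.new
two_queue = Queue.new
received  = Queue.new

# Like two non-blocking subscribe calls: each gets its own worker thread,
# so neither blocks the other or the main thread.
[['One', one_queue], ['Two', two_queue]].each do |name, q|
  Thread.new do
    loop do
      payload = q.pop   # blocks this worker only, not the main thread
      received << "Got #{name} Queue: #{payload}"
    end
  end
end

one_queue << 'hello'
two_queue << 'world'

messages = [received.pop, received.pop].sort

# In the real app the main thread now parks itself:
#   loop { sleep 5 }
# and a rescue Interrupt around it closes connections and exits.
```

The key point is that the main thread does nothing but sleep; all message handling happens on the worker threads, exactly as with Bunny's non-blocking subscriptions.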

Related

Presence not picking up user leave events?

I need to perform some actions when the user leaves a channel (in most cases where they close the tab voluntarily, but there may also be a connection loss/timeout etc.)
According to posts like https://elixirforum.com/t/phoenix-presence-run-some-code-when-user-leaves-the-channel/17739 and "How to detect if a user left a Phoenix channel due to a network disconnect?", intercepting the "presence_diff" event from Presence seems to be a foolproof way to go, as it should also cover the cases where the connection terminates abnormally.
Strangely, the presence_diff event seems to only be triggered when I track the user via Presence.track, but not when the user leaves.
Meanwhile, adding a terminate(reason, socket) callback in my channel correctly catches the leave event.
I wonder what could be wrong in my configuration. Or did I not understand the use of Presence correctly?
Example code:
def join("participant:" <> participant_id, _payload, socket) do
  if socket.assigns.participant_id == participant_id do
    send(self(), :after_participant_join)
    {:ok, socket}
  else
    {:error, %{reason: "unauthorized"}}
  end
end

def handle_info(:after_participant_join, socket) do
  experiment_id = socket.assigns.experiment_id

  Presence.track(socket, experiment_id, %{
    # keys to track
  })

  # Broadcast something
  # broadcast(socket, ...)
  {:noreply, socket}
end

intercept(["presence_diff"])

def handle_out("presence_diff", payload, socket) do
  # Only gets triggered at Presence.track, but not when the connection is closed.
  IO.puts("presence_diff triggered, payload is #{inspect(payload)}")
  leaves = payload.leaves

  for {experiment_id, meta} <- leaves do
    IO.puts("Leave information: #{inspect(meta)}")
    # Do stuff
  end

  {:noreply, socket}
end

# This works, however.
def terminate(reason, socket) do
  IO.puts("terminated. #{inspect(reason)}")
  # Do stuff.
end
OK, I think I know what happened: each "participant:" <> participant_id topic is, as its name suggests, subscribed to by only one participant. Therefore, when that participant quits, the channel process also dies, and nobody is left to act on the presence_diff message.
A separate process is still needed. That process can call MyApp.Endpoint.subscribe to subscribe to the "participant:" <> participant_id topic and act on the presence_diff messages.
Or one can set up an external monitor. See How to detect if a user left a Phoenix channel due to a network disconnect?

Send multiply messages in websocket using threads

I'm making a Ruby server using the em-websocket gem. When a client sends some message (e.g. "thread"), the server creates two different threads and sends two answers to the client in parallel (I'm actually studying multithreading and websockets). Here's my code:
require 'em-websocket'

EM.run {
  EM::WebSocket.run(:host => "0.0.0.0", :port => 8080) do |ws|
    ws.onmessage { |msg|
      puts "Received message: #{msg}"
      if msg == "thread"
        threads = []
        threads << Thread.new {
          sleep(1)
          puts "1"
          ws.send("Message sent from thread 1")
        }
        threads << Thread.new {
          sleep(2)
          puts "2"
          ws.send("Message sent from thread 2")
        }
        threads.each { |aThread| aThread.join }
      end
    }
  end
}
How it executes:
I send the "thread" message to the server.
After one second I see the string "1" printed in my console. After another second I see "2".
Only after that are both messages sent to the client, simultaneously.
The problem is that I want each message to be sent at exactly the moment its debug output ("1" or "2") is printed.
My Ruby version is 1.9.3p194.
I don't have experience with EM, so take this with a pinch of salt.
However, at first glance, it looks like "aThread.join" is actually blocking the "onmessage" method from completing and thus also preventing the "ws.send" from being processed.
Have you tried removing the "threads.each" block?
Edit:
After having tested this on Arch Linux with both Ruby 1.9.3 and 2.0.0 (using "test.html" from the em-websocket examples), I am sure that even if removing the "threads.each" block doesn't fix the problem for you, you will still have to remove it, as Thread#join suspends the current thread until the joined threads have finished.
If you follow the function call of "ws.onmessage" through the source code, you will end up at the Connection#send_data method of the Eventmachine module and find the following within the comments:
Call this method to send data to the remote end of the network connection. It takes a single String argument, which may contain binary data. Data is buffered to be sent at the end of this event loop tick (cycle).
As "onmessage" is blocked by the "join" until both "send" methods have run, the event loop tick cannot finish until both sets of data are buffered and thus, all the data cannot be sent until this time.
If it is still not working for you after removing the "threads.each" block, make sure that you have restarted your eventmachine and try setting the second sleep to 5 seconds instead. I don't know how long a typical event loop takes in eventmachine (and I can't imagine it to be as long as a second), however, the documentation basically says that if several "send" calls are made within the same tick, they will all be sent at the same time. So increasing the time difference will make sure that this is not happening.
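The Thread#join behaviour this answer relies on is easy to check in plain Ruby, independent of EventMachine; this small sketch mirrors the handler's structure and shows that the joins return only after the slowest thread:

```ruby
t0 = Time.now

threads = []
threads << Thread.new { sleep 0.1 }   # stands in for "thread 1"
threads << Thread.new { sleep 0.2 }   # stands in for "thread 2"

# Just like the onmessage handler above: this line does not return until
# BOTH threads are done, so anything after it (in EM, the end of the
# event-loop tick that flushes buffered sends) waits for the slowest one.
threads.each { |a_thread| a_thread.join }

elapsed = Time.now - t0   # roughly 0.2s: the longer of the two sleeps
```

This is why the buffered ws.send data only goes out after both sleeps: the tick cannot end until the joins return.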
I think the problem is that you are calling sleep method, passing 1 to the first thread and 2 to the second thread.
Try removing sleep call on both threads or passing the same value on each call.

Ruby Sinatra with consumer thread and job queue

I’m trying to create a very simple RESTful server. When it receives a request, I want it to push a new job onto a queue that another thread handles, while the current thread returns a response to the client.
I looked at Sinatra, but haven't got too far.
require 'sinatra'
require 'thread'

queue = Queue.new

set :port, 9090

get '/' do
  queue << 'item'
  length = queue.size
  puts 'QUEUE LENGTH %d', length
  'Message Received'
end

consumer = Thread.new do
  5.times do |i|
    value = queue.pop(true) rescue nil
    puts "consumed #{value}"
  end
end

consumer.join
In the above example, I know the consumer thread would only run a few times (as opposed to the life of the application), but even this isn't working for me.
Is there a better approach?
Your main problem is your call to Queue#pop. You’re passing true, which causes it not to suspend the thread but to raise an exception instead, which you rescue with nil. Your consumer thread therefore loops five times before anything else can happen.
You need to change that line to
value = queue.pop
so that the thread waits for new data being pushed onto the queue.
You’ll also need to remove the consumer.join line from the end, since that will cause deadlock once you’ve changed the call to pop.
(Also, it’s not part of your main problem, but it looks like you want printf rather than puts when you print the queue length.)
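With those changes applied, the consumer pattern looks like the sketch below. It is plain Ruby, with the Sinatra route reduced to simple pushes so the behaviour is easy to follow (the `item-...` names are illustrative):

```ruby
require 'thread'

queue = Queue.new
consumed = []

consumer = Thread.new do
  5.times do
    value = queue.pop        # no `true` argument: blocks until an item arrives
    consumed << value
    puts "consumed #{value}"
  end
end

# In the real app these pushes happen inside Sinatra's `get '/'` handler.
5.times { |i| queue << "item-#{i}" }

# Joining is safe here only because exactly five items were pushed; in the
# Sinatra version you would drop the join entirely and let Sinatra's own
# server loop keep the process alive.
consumer.join

printf("QUEUE LENGTH %d\n", queue.size)   # printf, as noted above
```

The blocking pop is what makes the consumer a proper worker: it sleeps whenever the queue is empty instead of spinning through its iterations.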

Graceful shutdown while processing messages from queue with Bunny gem

I'm using the Bunny gem to consume messages via AMQP. My app subscribes to a queue in a never-ending blocking call (via a subscribe block). I'd like it to shut down gracefully when the process is interrupted (e.g. Ctrl+C in the terminal): if it's in the middle of processing a message, it should finish that message and then jump out of the block. What's the proper way to do this?
code example:
trap("INT") do
  puts "Stopping now"
  Indexer.client.stop # ???
end

module Indexer
  extend self

  def run
    client.queue('indexer.index').subscribe do |msg|
      # omitted
    end
  end

  def client
    @client ||= Bunny.new.tap(&:start)
  end
end

Indexer.run # runs forever
I know this question is 2 years old, and you've probably figured out something by now. That said, the way I'd handle this would be to put a 'should I quit?' check at the end of your subscribe loop, and then have your SIGINT trap toggle the variable. Bunny itself is pretty good about cleaning up all the AMQP connection stuff, so you really would only need to worry about your own bits on exiting.
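That idea can be sketched in plain Ruby, with a ::Queue standing in for Bunny's delivery loop (in the real app the `trap("INT")` handler would set the flag, and you would close the Bunny channel/connection after the loop):

```ruby
require 'thread'

queue = Queue.new
processed = []
shutting_down = false   # in the real app: trap("INT") { shutting_down = true }

queue << "msg-1"
queue << "msg-2"
shutting_down = true    # simulate Ctrl+C arriving before the next delivery

worker = Thread.new do
  loop do
    msg = queue.pop            # stands in for Bunny handing us a delivery
    processed << msg           # finish handling the current message first...
    break if shutting_down     # ...then check the flag and leave the loop
  end
  # here you would close the Bunny channel/connection
end

worker.join
```

Because the flag is only checked after a message is fully handled, the in-flight message is never cut off; the remaining "msg-2" stays queued for the next run.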

Posting large number of messages to AMQP queue

Using v0.7.1 of the Ruby amqp library and Ruby 1.8.7, I am trying to post a large number (millions) of short (~40 bytes) messages to a RabbitMQ server. My program's main loop (well, not really a loop, but still) looks like this:
AMQP.start(:host => '1.2.3.4',
           :username => 'foo',
           :password => 'bar') do |connection|
  channel = AMQP::Channel.new(connection)
  exchange = channel.topic("foobar", {:durable => true})
  i = 0

  EM.add_periodic_timer(1) do
    print "\rPublished #{i} commits"
  end

  results = get_results # <- Returns an array

  processor = proc do
    if x = results.shift then
      exchange.publish(x, :persistent => true,
                       :routing_key => "test.#{i}")
      i += 1
      EM.next_tick processor
    end
  end

  EM.next_tick(processor)
  AMQP.stop { EM.stop }
end
The code starts processing the results array just fine, but after a while (usually, after 12k messages or so) it dies with the following error
/Library/Ruby/Gems/1.8/gems/amqp-0.7.1/lib/amqp/channel.rb:807:in `send':
The channel 1 was closed, you can't use it anymore! (AMQP::ChannelClosedError)
No messages are stored on the queue. The error seems to be happening just when network activity from the program to the queue server starts.
What am I doing wrong?
First mistake is that you didn't post the RabbitMQ version that you are using. Lots of people are running old obsolete version 1.7.2 because that is what is in their OS package repositories. Bad move for anyone sending the volume of messages that you are. Get RabbitMQ 2.5.1 from the RabbitMQ site itself and get rid of your default system package.
Second mistake is that you did not tell us what is in the RabbitMQ logs.
Third mistake is that you said nothing about what is consuming the messages. Is there another process running somewhere that has declared a queue and bound it to the exchange? There is NO message queue unless somebody declares it to RabbitMQ and binds it to an exchange. Even then, messages will only flow if the binding key for the queue matches the routing key that you publish with.
Fourth mistake: you have routing keys and binding keys mixed up. The routing key is a string such as topic.test.json.echos, and the binding key (used to bind a queue to an exchange) is a pattern like topic.# or topic.*.json.
Updated after your clarifications
Regarding versions, I'm not sure when it was fixed, but there was a problem in 1.7.2 where large numbers of persistent messages caused RabbitMQ to crash when it rolled over its persistence log; after crashing, it was unable to restart until someone manually undid the rollover.
When you say that a connection is being opened and closed, I hope that it is not per message. That would be a strange way to use AMQP.
Let me repeat. Producers do NOT write messages to queues. They write messages to exchanges which then route the messages to queues based on the routing key (string) and the queue's binding key (pattern). In your example I misread the use of the # sign, but I see nothing which declares a queue and binds it to the exchange.
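The routing-key/binding-key distinction can be made concrete with a small matcher. This is a simplified sketch of AMQP topic matching written for illustration (it treats # as one-or-more words, ignoring the zero-word case that real RabbitMQ matching also allows):

```ruby
# '*' matches exactly one dot-separated word; '#' matches one or more words
# (simplified: real AMQP also lets '#' match zero words).
def topic_match?(binding_key, routing_key)
  pattern = binding_key.split('.').map do |part|
    case part
    when '*' then '[^.]+'
    when '#' then '[^.]+(?:\.[^.]+)*'
    else Regexp.escape(part)
    end
  end.join('\.')

  /\A#{pattern}\z/.match?(routing_key)
end

topic_match?('topic.#', 'topic.test.json.echos')   # true:  '#' soaks up the rest
topic_match?('topic.*.json', 'topic.test.json')    # true:  '*' matches one word
topic_match?('topic.*.json', 'topic.a.b.json')     # false: '*' cannot span two words
```

Publishing with routing key "test.0" therefore delivers nothing unless some queue was bound to the exchange with a pattern that matches it, such as "test.#".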
