How to push messages from unacked to ready - ruby

My question is similar to one asked previously, but that one has no answer. I have a consumer that processes each message by calling a web service. If the web service does not respond for some reason, I want the consumer not to acknowledge the RabbitMQ message, but to queue it again so it can be processed later. My consumer is the following:
require File.expand_path('../config/environment.rb', __FILE__)

conn = Rabbit.connect
conn.start
ch = conn.create_channel
x = ch.exchange("d_notification_ex", :type => "x-delayed-message", :arguments => { "x-delayed-type" => "direct" })
q = ch.queue("d_notification_q", :durable => true)
q.bind(x)
p 'Wait ....'
q.subscribe(:manual_ack => true, :block => true) do |delivery_info, properties, body|
  datos = JSON.parse(body)
  if datos['status'] == 'request'
    # Call a web service and process the JSON
    result = Notification.send_payment_notification(datos.to_json)
  else
    # Call a web service and process the body
    result = Notification.send_payment_notification(body)
  end
  # If the web server is down, the call returns nil and the message is
  # never acked, so it sits in UNACKED and is not processed later --
  # when what I want is for it to go back to the queue and be evaluated afterwards.
  unless result.nil?
    ch.ack(delivery_info.delivery_tag)
  end
end
[Screenshot of the RabbitMQ management UI showing the message stuck in the Unacked column.]
Is there some way, in the statement ch.ack(delivery_info.delivery_tag), to have the message kept in the queue and processed later instead of being removed? Any ideas? Thanks

The RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
Try this:
if result.nil?
  # nack with requeue = true sends the message back to the Ready state;
  # Bunny's signature is nack(delivery_tag, multiple = false, requeue = false),
  # so the requeue flag must be passed explicitly
  ch.nack(delivery_info.delivery_tag, false, true)
else
  ch.ack(delivery_info.delivery_tag)
end

I decided to send the data back to the queue in a "producer within the consumer" style; my code now looks like this:
if result.eql? 'ok'
  ch.ack(delivery_info.delivery_tag)
else
  if datos['count'] < 5
    datos['count'] += 1
    d_time = 1000 # delay in milliseconds before the copy is redelivered
    x.publish(datos.to_json, :persistent => true, :headers => { "x-delay" => d_time })
  end
  # ack the original delivery; the delayed copy (if any) will come back through the exchange
  ch.ack(delivery_info.delivery_tag)
end
However, I was forced to include one more attribute in the JSON, count, so that the message does not stay in an infinite cycle.
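For this to work, whatever first publishes the message has to seed the counter, otherwise datos['count'] is nil and the comparison fails. A minimal sketch of the publishing side, assuming the same exchange as above (the payload fields other than count are illustrative, not from the original code):

# Seed the retry counter on first publish so the consumer can increment it
payload = { 'status' => 'request', 'amount' => 100, 'count' => 0 }
x.publish(payload.to_json, :persistent => true)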

Related

How can I send messages to a specific client using Faye WebSockets?

I've been working on a web application which is essentially a web messenger, using Sinatra. My goal is to have all messages encrypted using PGP and to have full-duplex communication between clients using faye-websocket.
My main problem is being able to send messages to a specific client using Faye. On top of this, every message in a chatroom is saved twice, once per participant, since it is PGP-encrypted.
So far I've thought of starting up a new socket object for every client and storing them in a hash. I do not know if this approach is the most efficient one. I have seen that socket.io, for example, allows you to emit to a specific client, but it seems faye-websocket does not? I am also considering a pub/sub model, but once again I am not sure.
Any advice is appreciated, thanks!
I am iodine's author, so I might be biased in my approach.
I would consider naming a channel by the user ID (i.e., user1...user201983) and sending the message to the user's channel.
I think Faye will support this. I know that when using iodine's native WebSockets and built-in pub/sub, this is quite effective.
So far I've thought of starting up a new socket object for every client and storing them in a hash...
This is a very common mistake, often seen in simple examples.
It works only in single-process environments, and then you will have to recode the whole logic in order to scale your application.
The channel approach allows you to scale using Redis or any other Pub/Sub service without recoding your application's logic.
Here's a quick example you can run from the Ruby terminal (irb). I'm using plezi.io just to make it a bit shorter to code:
require 'plezi'

class Example
  def index
    "Use Websockets to connect."
  end

  def pre_connect
    if !params[:id]
      puts "an attempt to connect without credentials was made."
      return false
    end
    return true
  end

  def on_open
    subscribe channel: params[:id]
  end

  def on_message data
    begin
      msg = JSON.parse(data)
      if !msg["to"] || !msg["data"]
        puts "JSON message error", data
        return
      end
      msg["from"] = params[:id]
      publish channel: msg["to"].to_s, message: msg.to_json
    rescue => e
      puts "JSON parsing failed!", e.message
    end
  end
end

Plezi.route "/", Example
Iodine.threads = 1
exit # in irb, exiting starts the server
To test this example, use a JavaScript client, maybe something like this:
// in browser tab 1
var id = 1;
ws = new WebSocket("ws://localhost:3000/" + id);
ws.onopen = function(e) { console.log("opened connection"); };
ws.onclose = function(e) { console.log("closed connection"); };
ws.onmessage = function(e) { console.log(e.data); };
ws.send_to = function(to, data) {
  this.send(JSON.stringify({to: to, data: data}));
}.bind(ws);

// in browser tab 2
var id = 2;
ws = new WebSocket("ws://localhost:3000/" + id);
ws.onopen = function(e) { console.log("opened connection"); };
ws.onclose = function(e) { console.log("closed connection"); };
ws.onmessage = function(e) { console.log(e.data); };
ws.send_to = function(to, data) {
  this.send(JSON.stringify({to: to, data: data}));
}.bind(ws);

ws.send_to(1, "hello!");

Is it reasonable to use Resque (Ruby) to manage external long-running commands (and log tasks)?

I have to run bash heavy-job.sh <data-num> (which takes 0.5~2 days) frequently on my computer to process data located at ~/a/data/num. The script calls a few sub-processes sequentially and writes a log to ~/a/result/num.log. I have done this manually until now.
I wanted to visualize the processed tasks and their status (success or fail), etc., as an HTML table. I wrote a simple Sinatra app to render a table that shows:
the list of ~/a/data/num to be processed
whether ~/a/result/num.log exists (process not-launched/processing/done)
its status (whether the log file contains the word "error" or not)
I found it would be convenient if I could launch bash heavy-job.sh <data-num> from the Sinatra app, log the tasks (and info like time, date, etc.) and their args (heavy-job takes some optional args), and show them as an HTML table.
So I need something that manages jobs and logs them to files (or a db).
First I wrote code like the snippet below as a test (! just for testing, not integrated with my system yet !), but later I found that Resque is what I wanted. I am a beginner and not sure if my decision is reasonable.
My questions are:
is it reasonable to use Resque to manage external long-running commands (and log tasks),
or should I use another tool (not necessarily a Ruby tool)?
(extra:) should the task manager and the Sinatra app work separately (and communicate with each other over REST or something), or not?
The jobs are not critical, since I can retry tasks manually later if they fail.
I am not good at English and my question may be misleading. I appreciate any help :)
class TaskSpawn
  def initialize
    @pids = []
  end

  def spawn(command, options = {})
    # default to putting each job in its own process group
    options = { :pgroup => true }.merge(options)
    @pids << Kernel.spawn(command, options)
  end

  def pids
    @pids.clone
  end

  def waitany_nohang
    delete_idx = nil
    ret = nil
    @pids.each_with_index do |p, idx|
      pid, status = Process.waitpid2(p, Process::WNOHANG)
      unless pid.nil?
        delete_idx = idx
        ret = [pid, status]
        break
      end
    end
    if delete_idx
      @pids.delete_at(delete_idx)
      ret
    else
      # no task finished
      nil
    end
  end

  def waitall
    ret = Process.waitall
    raise "internal error" if ret.size != pids.size
    ret
  end
end
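As a point of comparison with the TaskSpawn approach, here is a minimal sketch of the same work expressed as a Resque job; the class name, queue name, and log-path handling are my assumptions, not details from the question:

require 'resque'

# Hypothetical job wrapping heavy-job.sh; enqueue from the Sinatra app with
# Resque.enqueue(HeavyJob, 42)
class HeavyJob
  @queue = :heavy_jobs

  def self.perform(data_num)
    log_path = File.expand_path("~/a/result/#{data_num}.log")
    # Redirect stdout and stderr of the script into the per-task log file
    ok = system("bash", "heavy-job.sh", data_num.to_s, [:out, :err] => log_path)
    # Raising marks the job as failed, so resque-web shows its status
    raise "heavy-job.sh failed for #{data_num}" unless ok
  end
end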

Bunny::AccessRefused message when trying to read messages

I'm trying to read messages from a queue using Bunny. I only have read permissions on the RabbitMQ server, but it seems the code I'm using tries to create the queue, even though I can see the queue already exists with queue_exists?().
There must be a way in Bunny to simply read messages off an existing queue? Here's the code I'm using:
require 'bunny'

class ExampleConsumer < Bunny::Consumer
  def cancelled?
    @cancelled
  end

  def handle_cancellation(_)
    @cancelled = true
  end
end

conn = Bunny.new("amqp://xxx:xxx@xxx", automatic_recovery: false)
conn.start
ch = conn.create_channel
q = ch.queue("a_queue") # this declares the queue, which needs configure permission
consumer = ExampleConsumer.new(ch, q)
When I execute the above I receive:
/Users/jamessmith/.rvm/gems/ruby-1.9.3-p392/gems/bunny-1.7.1/lib/bunny/channel.rb:1915:in `raise_if_continuation_resulted_in_a_channel_error!': ACCESS_REFUSED - access to queue 'a_queue' in vhost '/' refused for user 'xxx' (Bunny::AccessRefused)
In most RMQ configurations I've seen, the consumer will have permission to create the queue that it needs.
If you must have your permissions set up so that you can't create the queue from your consumer, I'd suggest opening an issue with the Bunny gem; it doesn't look like that is supported.
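One thing that may be worth trying, though: Bunny's queue method accepts a :passive option, which turns the declaration into a passive declare that only checks the queue exists instead of creating it. Whether that gets past a read-only permission set is worth verifying against your broker's configuration. A minimal sketch:

# Passive declare: raises Bunny::NotFound if the queue does not exist,
# and never attempts to create it
q = ch.queue("a_queue", :passive => true)

q.subscribe(:manual_ack => true, :block => true) do |delivery_info, _properties, body|
  puts "received: #{body}"
  ch.ack(delivery_info.delivery_tag)
end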

Daemon-kit process one amqp job at a time

We've used daemon-kit to create an AMQP worker which should receive a job and then ask for a new one, but not before the first job is finished. The problem is that daemon-kit forks the job and immediately starts a new one if there is another in the RabbitMQ queue.
Is there a formal way to force one-job-at-a-time behaviour in daemon-kit? Or how else can we achieve this?
This is a short version of how we start the AMQP worker and process jobs. When a job finishes with a result, it publishes this back to the RabbitMQ server.
# Run an event-loop for processing
DaemonKit::AMQP.run do |connection|
  connection.on_tcp_connection_loss do |client, settings|
    DaemonKit.logger.debug("AMQP connection status changed: #{client.status}")
    client.reconnect(false, 1)
  end

  amq = AMQP::Channel.new
  amq.queue(engine_key).subscribe do |metadata, msg|
    msg_decode = JSON.parse(msg)
    job = REFxEngineRunnerAPI10.new msg_decode
    result = job.run(metadata.correlation_id)

    amq.queue(metadata.reply_to, :auto_delete => false)
    xc = amq.default_exchange
    xc.publish JSON.dump(result), :routing_key => metadata.reply_to, :correlation_id => metadata.correlation_id
  end
end
UPDATE
I found this to work for us:
DaemonKit::AMQP.run do |connection|
  amq = AMQP::Channel.new(connection, prefetch: 1)
  # I need this extra line because I use a RabbitMQ version newer than 2.3.6
  amq.qos(0, 1)
  # be sure to add :ack => true
  amq.queue(engine_key).subscribe(:ack => true) do |metadata, msg|
    #### run the long job, one at a time
    # Tell RabbitMQ I finished the job and can now receive a new one
    metadata.ack
  end
end
I'm taking a stab in the dark here, since this sounds to me exactly how the protocol should behave. You can, however, use QoS or prefetching to limit the number of messages the broker sends down to a subscriber, using something like this:
amq = AMQP::Channel.new(connection, prefetch: 1)
According to the example, this should give you the behaviour you desire.

Ruby + AMQP: processing queue in parallel

Since most of my tasks depend on the network, I want to process my queue in parallel, not just one message at a time.
So I'm using the following code:
#!/usr/bin/env ruby
# encoding: utf-8
require "rubygems"
require 'amqp'

EventMachine.run do
  connection = AMQP.connect(:host => '127.0.0.1')
  channel = AMQP::Channel.new(connection)
  channel.prefetch 5

  queue = channel.queue("pending_checks", :durable => true)
  exchange = channel.direct('', :durable => true)

  queue.subscribe(:ack => true) do |metadata, payload|
    time = rand(3..9)
    puts 'waiting ' + time.to_s + ' for message ' + payload
    sleep(time)
    puts 'done with ' + payload
    metadata.ack
  end
end
Why is it not using my prefetch setting? I guess it should get 5 messages and process them in parallel, no?
Prefetch is the maximum number of messages that may be sent to you in advance before you ack.
In other words, the prefetch size does not limit the transfer of single messages to a client, only the sending in advance of more messages while the client still has one or more unacknowledged messages. (From the AMQP docs)
QoS Prefetching Messages
RabbitMQ AMQP Reference
EventMachine is single-threaded and event-based. For parallel jobs on different threads or processes, see EM::Deferrable, then Thread or spawn.
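For instance, here is a rough sketch of the subscribe block above reworked with EM.defer, EventMachine's built-in thread-pool helper, so the blocking sleep no longer stalls the reactor and up to the prefetch window (5 here) can be in flight at once (this is my illustration, not code from the thread):

queue.subscribe(:ack => true) do |metadata, payload|
  work = proc do
    # Runs on a thread from EventMachine's pool, so blocking here
    # does not stop the reactor from delivering the other prefetched messages
    time = rand(3..9)
    puts "waiting #{time} for message #{payload}"
    sleep(time)
  end
  done = proc do |_result|
    # Runs back on the reactor thread once the work finishes
    puts "done with #{payload}"
    metadata.ack
  end
  EM.defer(work, done)
end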
Also see Hot Bunnies, a fast DSL on top of the RabbitMQ Java client:
https://github.com/ruby-amqp/hot_bunnies
(Thanks for info from Michael Klishin on Google Groups, and stoyan on blogger)
