RabbitMQ queues not being consumed when using fanout - Ruby

I've been trying to fan out messages to these queues using a very simple Ruby consumer: it just subscribes to the exchange/queue and receives the messages. The problem is that whenever a new message is published, the consumers don't receive it, which means they are not consuming, and no consumers are listed for the queue. Once you rebind the queue to the exchange and restart the Ruby app, it starts consuming again, and then it goes back into limbo! Sometimes restarting the Ruby app a few times makes it work. Any ideas?
Code used for the consumer is below:
#!/usr/bin/env ruby
# encoding: utf-8

require "rubygems"
require "amqp"

EventMachine.run do
  connection = AMQP.connect(:host => '127.0.0.1', :port => 5672, :user => "user", :pass => "pass", :vhost => "/", :ssl => false, :frame_max => 131072)
  puts "Connected to AMQP broker. Running #{AMQP::VERSION} version of the gem..."

  channel  = AMQP::Channel.new(connection)
  exchange = channel.fanout("p_cmds.p1")

  channel.queue("p1_queue").bind(exchange).subscribe do |payload|
    puts "#{payload} => p1"
  end
end
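For reference (this is not part of the question), a publisher feeding this consumer might look roughly like the sketch below, assuming the same broker credentials and the fanout exchange name "p_cmds.p1"; the real publisher code is not shown here:
require "rubygems"
require "amqp"

EventMachine.run do
  connection = AMQP.connect(:host => '127.0.0.1', :port => 5672, :user => "user", :pass => "pass", :vhost => "/")
  channel    = AMQP::Channel.new(connection)
  exchange   = channel.fanout("p_cmds.p1")

  # A fanout exchange ignores routing keys and copies the message to every queue bound to it.
  exchange.publish("hello p1")

  # Give the publish a moment to go out, then close the connection and stop the reactor.
  EM.add_timer(1) { connection.close { EventMachine.stop } }
end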

Related

Using WebSockets with EM and IRC to Send "Connection Successful" Message

I'm attempting to write an IRC client using WebSockets. The IRC client I found on GitHub uses EventMachine, but I'm trying to use WebSockets as well, to notify any connected clients when they're connected. However, I don't think I'm quite understanding EventMachine, because although the client successfully connects and joins the IRC channel, neither the puts 'Connected...' line nor the subsequent line gets executed.
I assume this is because of a fundamental misunderstanding of EventMachine on my part.
EM.run {
  EventMachine::WebSocket.start(:host => '0.0.0.0', :port => 8080) do |websocket|
    websocket.onopen {
      irc = Net::YAIL.new(
        :address   => 'irc.my-example-server.net',
        :port      => 6667,
        :username  => 'MyExample',
        :realname  => 'My Example Bot',
        :nicknames => ['MyExample1', 'MyExample2', 'MyExample3']
      )

      irc.on_welcome proc { |event|
        irc.join('#ExampleChannel')

        EM.next_tick {
          puts 'Connected...'
          websocket.send({ :message => 'Connected' })
        }
      }

      irc.start_listening!
    }
  end
}
I think I've answered my own question after a night of research. Essentially it has nothing to do with a misunderstanding of EventMachine: the IRC client I was attempting to use simply ran an infinite loop, so nothing else could interrupt it. After a further couple of hours researching an EventMachine-compatible IRC client, I came across Ponder: https://github.com/tbuehlmann/ponder, so hopefully I can now continue creating my application!
Shameless plug: https://github.com/Wildhoney/Banter.js
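To make the blocking point concrete, here is a contrived sketch (not the original code) of the difference between running a blocking loop on the reactor thread and pushing it onto EventMachine's thread pool:
require 'eventmachine'

EM.run do
  EM.add_periodic_timer(1) { puts "reactor is alive" }

  # Stand-in for a blocking IRC listen loop. Run via EM.defer (EM's thread pool) the
  # timer above keeps firing; run it as EM.next_tick { loop { sleep 1 } } instead and
  # the reactor is starved, so scheduled blocks like the one in the question never run.
  EM.defer do
    loop { sleep 1 }
  end
end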

Publish/subscribe messaging with Redis and Ruby

I looked over this documentation:
http://redis.io/topics/pubsub
It states:
When you subscribe to a channel, you will get a message that is represented as a multi-bulk reply with three elements. The first element of a message is the kind of message (e.g. SUBSCRIBE or UNSUBSCRIBE). The second element of the message is the name of the given channel you are subscribing or unsubscribing to. The third element of the message is the number of channels you are currently subscribed to:
> SUBSCRIBE first second
*3 #three elements in this message: “subscribe”, “first”, and 1
$9 #number of bytes in the element
subscribe #kind of message
$5 #number of bytes in the element
first #name of channel
:1 #number of channels we are subscribed to
That's cool: you can see the number of channels you are subscribed to as part of the multi-bulk reply from subscribing to a channel. Now I try to get this reply back when using Ruby:
require 'rubygems'
require 'redis'
require 'json'

redis = Redis.new(:timeout => 0)

redis.subscribe('chatroom') do |on|
  on.message do |channel, msg, total_channels|
    data = JSON.parse(msg)
    puts "##{channel} - [#{data['user']}]: #{data['msg']} - channels subscribed to: #{total_channels}"
  end
end
However, I do not get that kind of reply at all. What it gives me is the name of the channel and the data published to that channel, and then total_channels is nil, because no third parameter is sent back.
So where is this "multi-bulk reply" that redis speaks of?
Actually, the protocol sends a subscribe reply message as the first message just after a subscribe operation. You do not get the number of subscribed channels in every message you receive, only in the replies to subscribe/unsubscribe.
With the current version of redis-rb, you need a separate handler to process subscribe/unsubscribe reply messages:
require 'rubygems'
require 'redis'
require 'json'

redis = Redis.new(:timeout => 0)

redis.subscribe('chatroom') do |on|
  on.subscribe do |channel, subscriptions|
    puts "Subscribed to ##{channel} (#{subscriptions} subscriptions)"
  end

  on.message do |channel, msg|
    data = JSON.parse(msg)
    puts "##{channel} - [#{data['user']}]: #{data['msg']}"
  end
end
Please note that in your example the number of subscriptions will always be 1, since you subscribe to a single channel.
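For example (a small sketch building on the code above, with a second, made-up channel name), subscribing to several channels in one call shows the count going up, one subscribe reply per channel:
redis = Redis.new(:timeout => 0)

redis.subscribe('chatroom', 'lobby') do |on|
  on.subscribe do |channel, subscriptions|
    puts "Subscribed to ##{channel} (#{subscriptions} subscriptions)"
    # prints "... #chatroom (1 subscriptions)" then "... #lobby (2 subscriptions)"
  end

  on.message do |channel, msg|
    puts "##{channel}: #{msg}"
  end
end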

can't convert Symbol into Integer (TypeError) when popping a queue with the Bunny client, only on Windows

Though I have searched plenty, I could not find anything that resolves this issue. I am very new to Ruby, so please pardon me if I am missing something very basic.
While executing the code below on Windows I am getting this error:
C:/Users/shhashmi/workspace/rabbitmqSender/sender.rb:36:in `[]': can't convert Symbol into Integer (TypeError)
        from C:/Users/shhashmi/workspace/rabbitmqSender/sender.rb:36:in `getItem'
        from C:/Users/shhashmi/workspace/rabbitmqSender/sender.rb:59:in `<main>'
However, the code works fine on CentOS.
require "bunny"
require "net/http"

# Rabbit MQ
@host = "test.host"
@queue = "test.queue"
#@host = "localhost"
#@queue = "TEST"

# Put your target machine here
@target = "http://localhost:3000/"

def getItem
  b = Bunny.new(:host => @host, :port => 5672)

  # start a communication session with the amqp server
  b.start

  # declare a queue
  q = b.queue(@queue, :auto_delete => true)

  # declare default direct exchange which is bound to all queues
  e = b.exchange("")

  # publish a message to the exchange which then gets routed to the queue
  #e.publish("Hello, everybody! 211", :key => @queue)
  #e.publish("Hello, everybody! 311", :key => @queue)

  # get message from the queue
  msg = q.pop[:payload]

  puts "This is the message: " + msg + "\n\n"

  # close the connection
  b.stop

  return msg
end

getItem
I was able to reproduce the error with your code. The problem is with this line:
msg = q.pop[:payload]
The Queue#pop method returns an array of 3 items after popping the message from the queue, so the call should look something like this:
delivery_info, message_properties, msg = q.pop
Now you should see your message payload in the 'msg' variable. You could inspect the other two results to glean any useful information (e.g. the number of remaining messages in the queue), or ignore them altogether if you don't need them.
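For example, a quick sketch of inspecting those extra values (assuming Bunny 0.9 or later, where Queue#pop returns the triple described above):
delivery_info, message_properties, msg = q.pop

if msg.nil?
  puts "The queue was empty"
else
  puts "Payload:      #{msg}"
  puts "Exchange:     #{delivery_info.exchange}"
  puts "Routing key:  #{delivery_info.routing_key}"
  puts "Content type: #{message_properties.content_type}"
end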

Suggested Redis driver for use within Goliath?

There seem to be several options for establishing Redis connections for use within EventMachine, and I'm having a hard time understanding the core differences between them.
My goal is to implement Redis within Goliath.
The way I establish my connection now is through em-synchrony:
require 'em-synchrony'
require 'em-synchrony/em-redis'

config['redis'] = EventMachine::Synchrony::ConnectionPool.new(:size => 20) do
  EventMachine::Protocols::Redis.connect(:host => 'localhost', :port => 6379)
end
What is the difference between the above, and using something like em-hiredis?
If I'm using Redis for sets and basic key:value storage, is em-redis the best solution for my scenario?
We use em-hiredis very successfully inside Goliath. Here's a sample of how we coded publishing:
config/example_api.rb
# These give us direct access to the redis connection from within the API
config['redisUri'] = 'redis://localhost:6379/0'
config['redisPub'] ||= EM::Hiredis.connect('')
example_api.rb
class ExampleApi < Goliath::API
  use Goliath::Rack::Params             # parse & merge query and body parameters
  use Goliath::Rack::Formatters::JSON   # JSON output formatter
  use Goliath::Rack::Render             # auto-negotiate response format

  def response(env)
    env.logger.debug "\n\n\nENV: #{env['PATH_INFO']}"
    env.logger.debug "REQUEST: Received"
    env.logger.debug "POST Action received: #{env.params} "

    # processing of requests from browser goes here
    resp =
      case env.params["action"]
      when 'SOME_ACTION'    then process_action(env)
      when 'ANOTHER_ACTION' then process_another_action(env)
      else
        # skip
      end

    env.logger.debug "REQUEST: About to respond with: #{resp}"
    [200, {'Content-Type' => 'application/json', 'Access-Control-Allow-Origin' => "*"}, resp]
  end

  # process an action
  def process_action(env)
    # extract message data
    data = Hash.new
    data["user_id"], data["object_id"] = env.params['user_id'], env.params['object_id']

    publishData = { "action" => 'SOME_ACTION_RECEIVED',
                    "data"   => data }
    redisPub.publish("Channel_1", Yajl::Encoder.encode(publishData))

    return data
  end

  # process another action
  def process_another_action(env)
    # extract message data
    data = Hash.new
    data["user_id"], data["widget_id"] = env.params['user_id'], env.params['widget_id']

    publishData = { "action" => 'SOME_OTHER_ACTION_RECEIVED',
                    "data"   => data }
    redisPub.publish("Channel_1", Yajl::Encoder.encode(publishData))

    return data
  end
end
Handling subscriptions is left as an exercise for the reader.
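If it helps, a rough sketch of the subscribing side with em-hiredis (using its pubsub API, available in em-hiredis 0.2 and later; the connection name and the channel here just mirror the publish example above) might look like:
# A separate connection for subscribing: a Redis connection in subscribe mode
# can't issue normal commands, so don't reuse the publishing connection.
config['redisSub'] ||= EM::Hiredis.connect('redis://localhost:6379/0')

pubsub = config['redisSub'].pubsub
pubsub.subscribe('Channel_1')
pubsub.on(:message) do |channel, message|
  payload = Yajl::Parser.parse(message)
  # forward the decoded event to connected clients, update state, etc.
  puts "##{channel}: #{payload['action']}"
end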
What em-synchrony does is patch the em-redis gem so it can be used with fibers, which effectively allows it to run inside Goliath.
Here is a project using Goliath + Redis which can guide you on how to make all this work: https://github.com/igrigorik/mneme
Here is an example with em-hiredis. What Goliath does is wrap your request in a fiber, so one way to test it is:
require 'rubygems'
require 'bundler/setup'
require 'em-hiredis'
require 'em-synchrony'

EM::run do
  Fiber.new do
    ## this is what you can use in goliath
    redis = EM::Hiredis.connect
    p EM::Synchrony.sync redis.keys('*')
    ## end of goliath block
  end.resume
end
and the Gemfile I used:
source :rubygems
gem 'em-hiredis'
gem 'em-synchrony'
If you run this example, you will get the list of keys defined in your Redis database printed on screen.
Without the EM::Synchrony.sync call you would get a deferrable, but here the fiber is suspended until the call returns and you get the result.
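For comparison, a short sketch of the same lookup without EM::Synchrony.sync, working directly with the deferrable that em-hiredis returns:
require 'em-hiredis'

EM.run do
  redis = EM::Hiredis.connect
  deferrable = redis.keys('*')

  deferrable.callback do |keys|
    p keys   # the result arrives asynchronously, here
    EM.stop
  end

  deferrable.errback do |error|
    warn "Redis error: #{error}"
    EM.stop
  end
end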

Am I using EventMachine in the right way?

I am using ruby-smpp and Redis to build a queue-based background worker that sends SMPP messages.
And I am wondering if I am using EventMachine in the right way. It works, but it doesn't feel right.
#!/usr/bin/env ruby
# Sample SMS gateway that can receive MOs (mobile originated messages) and
# DRs (delivery reports), and send MTs (mobile terminated messages).
# MTs are, in the name of simplicity, entered on the command line in the format
# <sender> <receiver> <message body>
# MOs and DRs will be dumped to standard out.
require 'smpp'
require 'redis/connection/hiredis'
require 'redis'
require 'yajl'
require 'time'
LOGFILE = File.dirname(__FILE__) + "/sms_gateway.log"
PIDFILE = File.dirname(__FILE__) + '/worker_test.pid'
Smpp::Base.logger = Logger.new(LOGFILE)
#Smpp::Base.logger.level = Logger::WARN
REDIS = Redis.new
class MbloxGateway
  # MT id counter.
  @@mt_id = 0

  # expose SMPP transceiver's send_mt method
  def self.send_mt(sender, receiver, body)
    if sender =~ /[a-z]+/i
      source_addr_ton = 5
    else
      source_addr_ton = 2
    end

    @@mt_id += 1
    @@tx.send_mt(('smpp' + @@mt_id.to_s), sender, receiver, body, {
      :source_addr_ton => source_addr_ton
      # :service_type => 1,
      # :source_addr_ton => 5,
      # :source_addr_npi => 0 ,
      # :dest_addr_ton => 2,
      # :dest_addr_npi => 1,
      # :esm_class => 3 ,
      # :protocol_id => 0,
      # :priority_flag => 0,
      # :schedule_delivery_time => nil,
      # :validity_period => nil,
      # :registered_delivery=> 1,
      # :replace_if_present_flag => 0,
      # :data_coding => 0,
      # :sm_default_msg_id => 0
      #
    })
  end

  def logger
    Smpp::Base.logger
  end

  def start(config)
    # Write this worker's pid to a file
    File.open(PIDFILE, 'w') { |f| f << Process.pid }

    # The transceiver sends MT messages to the SMSC. It needs a storage with Hash-like
    # semantics to map SMSC message IDs to your own message IDs.
    pdr_storage = {}

    # Run EventMachine in loop so we can reconnect when the SMSC drops our connection.
    loop do
      EventMachine::run do
        @@tx = EventMachine::connect(
          config[:host],
          config[:port],
          Smpp::Transceiver,
          config,
          self # delegate that will receive callbacks on MOs and DRs and other events
        )

        # Let the connection start before we check for messages
        EM.add_timer(3) do
          # Maybe there is some better way to do this. IDK, but it works!
          EM.defer do
            loop do
              # Pop a message
              message = REDIS.lpop 'messages:send:queue'
              if message # If there is a message, process it and check the queue again
                message = Yajl::Parser.parse(message, :check_utf8 => false) # Parse the message from JSON to a Ruby hash
                if !message['send_after'] or (message['send_after'] and Time.parse(message['send_after']) < Time.now)
                  self.class.send_mt(message['sender'], message['receiver'], message['body']) # Send the message
                  REDIS.publish 'log:messages', "#{message['sender']} -> #{message['receiver']}: #{message['body']}" # Push the message to the redis queue so we can listen to the channel
                else
                  REDIS.lpush 'messages:queue', Yajl::Encoder.encode(message)
                end
              else # If there is no message, sleep for a second
                sleep 1
              end
            end
          end
        end
      end
      sleep 2
    end
  end

  # ruby-smpp delegate methods
  def mo_received(transceiver, pdu)
    logger.info "Delegate: mo_received: from #{pdu.source_addr} to #{pdu.destination_addr}: #{pdu.short_message}"
  end

  def delivery_report_received(transceiver, pdu)
    logger.info "Delegate: delivery_report_received: ref #{pdu.msg_reference} stat #{pdu.stat}"
  end

  def message_accepted(transceiver, mt_message_id, pdu)
    logger.info "Delegate: message_accepted: id #{mt_message_id} smsc ref id: #{pdu.message_id}"
  end

  def message_rejected(transceiver, mt_message_id, pdu)
    logger.info "Delegate: message_rejected: id #{mt_message_id} smsc ref id: #{pdu.message_id}"
  end

  def bound(transceiver)
    logger.info "Delegate: transceiver bound"
  end

  def unbound(transceiver)
    logger.info "Delegate: transceiver unbound"
    EventMachine::stop_event_loop
  end
end
# Start the Gateway
begin
  puts "Starting SMS Gateway. Please check the log at #{LOGFILE}"

  # SMPP properties. These parameters work well with the Logica SMPP simulator.
  # Consult the SMPP spec or your mobile operator for the correct settings of
  # the other properties.
  config = {
    :host => 'server.com',
    :port => 3217,
    :system_id => 'user',
    :password => 'password',
    :system_type => 'type', # default given according to SMPP 3.4 Spec
    :interface_version => 52,
    :source_ton => 0,
    :source_npi => 1,
    :destination_ton => 1,
    :destination_npi => 1,
    :source_address_range => '',
    :destination_address_range => '',
    :enquire_link_delay_secs => 10
  }

  gw = MbloxGateway.new
  gw.start(config)
rescue Exception => ex
  puts "Exception in SMS Gateway: #{ex} at #{ex.backtrace.join("\n")}"
end
Some easy steps to make this code more EventMachine-ish:
Get rid of the blocking Redis driver, use em-hiredis
Stop using defer. Pushing work out to threads with the Redis driver will make things even worse as it relies on locks around the socket it's using.
Get rid of the add_timer(3)
Get rid of the inner loop, and replace it by rescheduling a block for the next event loop iteration using EM.next_tick. The outer loop is also unnecessary: you shouldn't loop around EM.run. It's cleaner to properly handle a disconnect by reconnecting in your unbound method, by calling @@tx.reconnect, instead of stopping and restarting the event loop (a rough sketch of this reconnect handling is included after the code below).
Don't sleep, just wait. EventMachine will tell you when new things come in on a network socket.
Here's how the core code around EventMachine would look with some of the improvements:
def start(config)
  File.open(PIDFILE, 'w') { |f| f << Process.pid }
  pdr_storage = {}

  EventMachine::run do
    @@tx = EventMachine::connect(
      config[:host],
      config[:port],
      Smpp::Transceiver,
      config,
      self
    )

    REDIS = EM::Hiredis.connect

    pop_message = lambda do
      REDIS.lpop 'messages:send:queue' do |message|
        if message # If there is a message, process it and check the queue again
          message = Yajl::Parser.parse(message, :check_utf8 => false) # Parse the message from JSON to a Ruby hash
          if !message['send_after'] or (message['send_after'] and Time.parse(message['send_after']) < Time.now)
            self.class.send_mt(message['sender'], message['receiver'], message['body'])
            REDIS.publish 'log:messages', "#{message['sender']} -> #{message['receiver']}: #{message['body']}"
          else
            REDIS.lpush 'messages:queue', Yajl::Encoder.encode(message)
          end
        end

        EM.next_tick(&pop_message) # check the queue again on the next reactor iteration
      end
    end

    pop_message.call # kick off the first pop
  end
end
Not perfect, and it could use some cleaning up too, but this is more what it should look like in an EventMachine manner: no sleeps, avoid using defer if possible, don't use network drivers that potentially block, and implement the traditional loop by rescheduling things on the next reactor iteration. In terms of Redis, the difference is not that big, but it's more EventMachine-y this way, imho.
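The reconnect suggested above isn't shown in that snippet; a rough sketch of it (assuming the config hash passed to start is also stored in @config, which the original code does not do) could be:
def unbound(transceiver)
  logger.info "Delegate: transceiver unbound, reconnecting in 5 seconds"
  # Reuse the existing connection object instead of stopping the reactor and
  # looping around EM.run; EM::Connection#reconnect needs the host and port again.
  EM.add_timer(5) { @@tx.reconnect(@config[:host], @config[:port]) }
end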
Hope this helps. Happy to explain further if you still have questions.
You're doing blocking Redis calls in EM's reactor loop. It works, but isn't the way to go. You could take a look at em-hiredis to properly integrate Redis calls with EM.
