I'm following the Ruby SDK guide.
I can publish successfully, but when I subscribe, nothing happens when I send a message to the channel from the PubNub console.
When I run the code, it finishes and exits; no async work happens.
pubnub = Pubnub.new(
  subscribe_key: 'demo',
  publish_key: 'demo',
  connect_callback: lambda { |msg|
    pubnub.publish(channel: 'demo', message: 'Hello from PubNub Ruby SDK!!', http_sync: true)
  }
)

pubnub.subscribe(channel: 'demo') do |envelope|
  puts envelope.message
end
Your program finishes because the main thread completes its work and exits before the async code receives any messages. Add a sleep at the end, or run the code in a pry console.
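For example, a minimal sketch (sleeping forever here; in practice any duration long enough for your test will do):

pubnub.subscribe(channel: 'demo') do |envelope|
  puts envelope.message
end

# Keep the main thread alive so the async subscribe loop can deliver messages.
sleep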
[Note: I "fixed" this problem by creating and using a new gemset. I'm still curious why the problem occurred but it is no longer blocking me.]
[I am aware that there is a similar issue at Deadlock in Ruby join(), but I have tried the timeout parameter suggested there and it does not help. I suspect there is a pry-specific problem not covered there.]
I am getting the error below when running the code below, but only when it is executed within a pry session. The code has not changed and has been working fine for quite a while, and I have no idea why it has become an issue just now. I am using pry 0.11.3 on Ruby 2.5.1. The code also works fine when pasted directly into pry; it fails only in my wifi-wand application, which launches pry in the context of one of its objects (gem install wifi-wand to install; https://github.com/keithrbennett/wifiwand is the project page).
domains = %w(google.com baidu.com)
puts "Calling dig on domains #{domains}..." if @verbose_mode

threads = domains.map do |domain|
  Thread.new do
    output = `dig +short #{domain}`
    output.length > 0
  end
end

threads.each(&:join)
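For reference, the join-with-timeout variant I tried (per the linked answer) was essentially the sketch below; the timeout value is arbitrary, and the deadlock error occurred regardless:

# Hypothetical 5-second limit per thread; this did not help.
threads.each { |thread| thread.join(5) }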
[1] pry(#<WifiWand::CommandLineInterface>)> ci
Calling dig on domains ["google.com", "baidu.com"]...
fatal: No live threads left. Deadlock?
3 threads, 3 sleeps current:0x00007fbd13d0c5a0 main thread:0x00007fbd13d0c5a0
* #<Thread:0x00007fbd14069c20 sleep_forever>
rb_thread_t:0x00007fbd13d0c5a0 native:0x00007fff89c2b380 int:0
/Users/kbennett/work/wifi-wand/lib/wifi-wand/models/base_model.rb:89:in `join'
/Users/kbennett/work/wifi-wand/lib/wifi-wand/models/base_model.rb:89:in `each'
/Users/kbennett/work/wifi-wand/lib/wifi-wand/models/base_model.rb:89:in `block in connected_to_internet?'
/Users/kbennett/work/wifi-wand/lib/wifi-wand/models/base_model.rb:126:in `connected_to_internet?'
/Users/kbennett/work/wifi-wand/lib/wifi-wand/command_line_interface.rb:264:in `cmd_ci'
Strangely, the problem was fixed by uninstalling and reinstalling the awesome_print gem. I have no idea why.
We have a problem when we run our Nightwatch tests in parallel and something is wrong with the setup, for example when the Selenium grid is not available: the tests finish very quickly and we get no error messages.
Started child process for: folder1/test1
Started child process for: folder1/test2
Started child process for: folder1/test3
Started child process for: folder1/test4
>> folder1/test1 finished.
>> folder1/test2 finished.
>> folder1/test3 finished.
>> folder1/test4 finished.
But when I run the tests serially, I get a helpful error message like:
Error retrieving a new session from the selenium server
Connection refused! Is selenium server started?
{ status: 13,
value: { message: 'Error forwarding the new session Empty pool of VM for setup Capabilities [{acceptSslCerts=true, name=Test1, browserName=chrome, javascriptEnabled=true, uuid=ab54872b-10ee-43a1-bf65-7676262fa647, platform=ANY}]',
class: 'org.openqa.grid.common.exception.GridException' } }
Why don't I get this helpful error message when running in parallel mode? Is there something I can change so that I do?
By setting live_output: true in your Nightwatch config file, you'll see logs while running in parallel.
More information: config-basic
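For example, a minimal nightwatch.json fragment (live_output is the relevant addition; the test_workers block is shown only for context, assuming you already run tests in parallel):

{
  "live_output": true,
  "test_workers": {
    "enabled": true,
    "workers": "auto"
  }
}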
I get the following error when I try to use the method "read_nonblock" from the "socket" library
IO::EAGAINWaitReadable: Resource temporarily unavailable - read would block
But when I try it in IRB in the terminal, it works fine.
How can I make it read the buffer?
This is expected behaviour when the data isn't ready in the buffer yet. The IO::EAGAINWaitReadable exception was introduced in Ruby 2.1.0; on older versions you must rescue IO::WaitReadable instead, wait for the socket with IO.select, and retry. So do as advised in the Ruby documentation:
begin
  result = io.read_nonblock(maxlen)
rescue IO::WaitReadable
  IO.select([io])
  retry
end
On newer versions of Ruby you should also rescue IO::EAGAINWaitReadable, retrying the read either until a timeout expires or indefinitely. I haven't found the example in the docs, but as I remember it, it did not use IO.select:
begin
  result = io.read_nonblock(maxlen)
rescue IO::EAGAINWaitReadable
  retry
end
However, my own investigation suggests it is better to wait with IO.select on IO::EAGAINWaitReadable as well, which gives:
begin
  result = io.read_nonblock(maxlen)
rescue IO::WaitReadable, IO::EAGAINWaitReadable
  IO.select([io])
  retry
end
To make that code work on both old and new Rubies, just define IO::EAGAINWaitReadable yourself when the constant is missing, guarded by an if clause:
if !::IO.const_defined?(:EAGAINWaitReadable)
  # Pre-2.1 Rubies lack this constant; define a placeholder so the combined
  # rescue clause parses. It never matches there, but IO::WaitReadable still
  # catches the EAGAIN case on those versions.
  class ::IO::EAGAINWaitReadable < StandardError; end
end
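Here is a self-contained sketch of the pattern in action, using a local socket pair (the names and buffer size are illustrative; it assumes a Ruby new enough to define IO::EAGAINWaitReadable, or the shim above):

require 'socket'

reader, writer = Socket.pair(:UNIX, :STREAM)

# Simulate data arriving a little later on the other end.
Thread.new do
  sleep 0.1
  writer.write('hello')
  writer.close
end

begin
  result = reader.read_nonblock(1024)
rescue IO::WaitReadable, IO::EAGAINWaitReadable
  IO.select([reader]) # block until the socket is readable
  retry
end

puts result # => "hello"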
I'm building a web application that connects to a server via WebSockets. The server component is a small Ruby application based on sinatra, redis, and faye-websocket, running on Phusion Passenger. A separate updater daemon constantly pulls updates from various sources and publishes them to Redis (using the redis gem and Redis#publish).
In order to push the updates to the clients I tried the following in my Sinatra app:
get '/' do
  if Faye::WebSocket.websocket?(request.env)
    store = Redis.new
    ws = Faye::WebSocket.new(request.env)

    ws.on(:open) do |event|
      store.incr('connection_count')
      puts 'Client connected (connection count: %s)' % store.get('connection_count')
    end

    ws.on(:close) do |event|
      store.decr('connection_count')
      puts 'Client disconnected (connection count: %s)' % store.get('connection_count')
    end

    ws.rack_response

    store.subscribe(:updates) do |on|
      on.message do |ch, payload|
        puts "Got update"
        ws.send(payload) if payload
      end
    end
  end
end
This works only partially. A client can connect successfully and also receives updates, but the store.incr and store.decr calls don't work. Also, the connections don't seem to be closed correctly: when I fire up multiple clients, the connections pile up and the Passenger server eventually stops working.
Log output:
devserver_1 | App 614 stdout: Got update
devserver_1 | App 614 stdout: Got update
devserver_1 | App 614 stdout: Got update
When I comment out the following block, keeping track of the connections suddenly works:
store.subscribe(:updates) do |on|
  on.message do |ch, payload|
    puts "Got update"
    ws.send(payload) if payload
  end
end
Log output:
devserver_1 | App 1028 stdout: Client connected (connection count: 1)
devserver_1 | App 1039 stdout: Client connected (connection count: 2)
devserver_1 | App 1039 stdout: Client disconnected (connection count: 1)
devserver_1 | App 1028 stdout: Client disconnected (connection count: 0)
So using Redis#subscribe seems to somehow interfere with the WebSocket connection.
How can I solve this?
Phusion Passenger version 4.0.58
ruby 2.2.0p0 (2014-12-25 revision 49005) [x86_64-linux-gnu]
sinatra (1.4.6)
faye-websocket (0.9.2)
I think the problem here is that Faye uses EventMachine, which means there is a reactor running on your thread, handling events and calling your callbacks (ws.on(:open) and ws.on(:close)).
Now when you hit
store.subscribe(:updates) do |on|
  on.message do |ch, payload|
    puts "Got update"
    ws.send(payload) if payload
  end
end
this is a blocking operation: it ties up the current thread entirely. If your current thread is blocked, the reactor can't listen for events and call your callbacks.
One solution is to run your store.subscribe on a different thread, so it doesn't matter that it blocks that thread.
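A minimal sketch of that workaround (note the dedicated Redis connection, since a connection in subscribe mode cannot issue other commands, and the send deferred back onto the reactor with EM.next_tick):

Thread.new do
  Redis.new.subscribe(:updates) do |on|
    on.message do |_channel, payload|
      # Hand the message back to the EventMachine reactor thread.
      EM.next_tick { ws.send(payload) } if payload
    end
  end
end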
But I think a better solution is to use a non-blocking Redis client that integrates with EventMachine, such as em-hiredis. Adapted from its documentation (the subscribe block is invoked once per message published to the channel):
redis = EM::Hiredis.connect
pubsub = redis.pubsub
pubsub.subscribe('updates') do |payload|
  puts "Got update"
  ws.send(payload) if payload
end
Both of these (em-hiredis and Faye) register with the EventMachine reactor loop, so it can dispatch events to both.
How can I use EventMachine.connect_unix_domain while running Thin as a service (using the init script excerpt and configuration below)? The code directly below is the problem: I get an eventmachine not initialized: evma_connect_to_unix_server error. The second code example works, but I don't think it allows me to daemonize Thin. Doesn't Thin already have a running instance of EventMachine?
UPDATE: Oddly enough, stopping the server (with service thin stop) seems to get into the config.ru file and run the app, so it works until the stop command times out and kills the process. What happens when Thin stops that could be causing this behavior?
Problematic Code
class Server < Sinatra::Base
  # Webserver code removed
end

module Handler
  def receive_data(data)
    $received_data_changed = 1
    $received_data = data
  end
end

$sock = EventMachine.connect_unix_domain("/tmp/mysock.sock", Handler)
Working Code
EventMachine.run do
  class Server < Sinatra::Base
    # Webserver code removed
  end

  module Handler
    def receive_data(data)
      $received_data_changed = 1
      $received_data = data
    end
  end

  $sock = EventMachine.connect_unix_domain("/tmp/mysock.sock", Handler)

  Server.run!(:port => 4567)
end
Init Script excerpt
DAEMON=/usr/local/bin/thin
SCRIPT_NAME=/etc/init.d/thin
CONFIG_PATH=/etc/thin
# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0
case "$1" in
start)
$DAEMON start --all $CONFIG_PATH
;;
Thin Config
---
chdir: /var/www
environment: development
timeout: 30
log: log/thin.log
pid: tmp/pids/thin.pid
max_conns: 1024
max_persistent_conns: 512
require: []
wait: 30
servers: 1
socket: /tmp/thin.server.sock
daemonize: true
Thin is built on top of EventMachine, so I think you should use EventMachine for serving your app. Try to debug further why Thin won't daemonize (what version are you using?). You can also run Thin on another port, such as 4000, and pass that as the upstream server to your proxy-forwarding server, if that is what you want to achieve.
What I ended up doing was removing the EventMachine.run do ... end block and simply wrapping the socket connection in EM.next_tick { $sock = EventMachine.connect_unix_domain("/tmp/mysock.sock", Handler) }.
Could swear I tried this once before... but it works now.
EDIT: Idea for next_tick came from here.
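For completeness, the resulting arrangement sketched out (the same code as above, minus the EventMachine.run wrapper, since Thin's reactor is already running):

class Server < Sinatra::Base
  # Webserver code removed
end

module Handler
  def receive_data(data)
    $received_data_changed = 1
    $received_data = data
  end
end

# Defer the connect until Thin's EventMachine reactor is actually ticking.
EM.next_tick do
  $sock = EventMachine.connect_unix_domain("/tmp/mysock.sock", Handler)
end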