I'm attempting to write a Ruby SDK for the Stream Deck, a product that is essentially a hardware AutoHotkey: the user programs buttons with customized icons that do whatever they please, and can nest them in folders for an effectively unlimited number of buttons organized to their liking. It has a language-agnostic API: it launches your script or compiled app with the arguments -port, -pluginUUID, -registerEvent, and -info, runs a WebSocket server on localhost at the specified port, and expects you to send a JSON string containing the given event and UUID as soon as you open a connection.
I've gotten Ruby 3.0.5 running within a plugin with console output, but I'm having trouble getting it to talk to the websocket. I'm using SDPL to load my script (intended only for testing):
#!/usr/bin/env ruby
require "json"
require "async/websocket/client"
require "async/http/endpoint"
include Async
include Async::HTTP
# Parse arguments
_, port, _, UUID, _, REGISTER_EVENT, _, *info = ARGV
PORT = port.to_i
INFO = JSON.parse info.join(" ")
# Debug output prints properly
p PORT
p UUID
p REGISTER_EVENT
p INFO
Async do |task|
  WebSocket::Client.connect(Endpoint.parse "http://localhost:#{PORT}") do |ws|
    ws.write({ event: REGISTER_EVENT, uuid: UUID }.to_json)
    ws.flush
    puts "Opened!"
    while msg = ws.read
      puts "Message:"
      puts msg
    end
  end
end
The arguments print as expected, then it hangs. If this code is run in WSL (with modifications to hardcode the port and an open plugin UUID), it talks to the Stream Deck as expected. Is this possibly an issue with the module on Windows 10? On RubyInstaller 3.1.2, the situation is even worse: it crashes with the following error:
0.0s warn: Async::Task [oid=0x280] [ec=0x294] [pid=32436] [2022-12-13 00:36:46 -0500]
| Task may have ended with unhandled exception.
| Errno::EBADF: Bad file descriptor
| → <internal:io> 63
| C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/io-event-1.1.4/lib/io/event/selector/select.rb 206
Using Ruby (tested with versions 2.6.9, 2.7.5, 3.0.3, 3.1.1) and forking processes to handle socket communication, there seems to be a huge difference between macOS and Debian Linux.
While running on Debian, the forked processes are called in a balanced manner: with 10 TCP server forks and 100 client calls, each fork gets 10 calls. The order of the PID call stack is also always the same, even though it is not ordered by PID (caused by load when instantiating the forks).
Doing the same on macOS (Catalina), the forked processes are not called in a balanced manner: "pid A" might get called 23 or however many times, while e.g. "pid G" is never used.
Sample code (originally from: https://relaxdiego.com/2017/02/load-balancing-sockets.html)
#!/usr/bin/env ruby
# server.rb
require 'socket'
# Open a socket
socket = TCPServer.open('0.0.0.0', 9999)
puts "Server started ..."
# For keeping track of children pids
wpids = []
# Forward any relevant signals to the child processes.
[:INT, :QUIT].each do |signal|
  Signal.trap(signal) {
    wpids.each { |wpid| Process.kill(:KILL, wpid) }
  }
end
5.times {
  wpids << fork do
    loop {
      connection = socket.accept
      connection.puts "Hello from #{ Process.pid }"
      connection.close
    }
  end
}
Process.waitall
Run some netcat against the server from a second terminal:
for i in {1..20}; do nc -d localhost 9999; done
As said: when running on Linux, each forked process gets 4 calls; doing the same on macOS, usage per forked process is random.
Is there a solution or correction to make it work in a balanced manner on macOS as well?
The problem is that the default socket backlog size is 5 on macOS and 128 on Linux. You can change the backlog size by passing it to TCPServer#listen:
socket.listen(128)
Or you can use the backlog size from the environment variable SOMAXCONN:
socket.listen(ENV.fetch('SOMAXCONN', 128).to_i)
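Applied to the sample server above, it might look like this (a sketch; Socket::SOMAXCONN is the Ruby constant exposing the kernel's maximum backlog):
require 'socket'
socket = TCPServer.open('0.0.0.0', 9999)
# Deepen the accept queue before forking; the macOS default of 5 is easily
# overrun by a burst of connections, which skews which fork wins the accept.
socket.listen(Socket::SOMAXCONN)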
I've implemented a very simple kind of server in Ruby, using TCPServer. I have a Server class with a serve method:
def serve
  # Do the actual serving in a child process
  @pid = fork do
    # Trap signal sent by #stop or by pressing ^C
    Signal.trap('INT') { exit }
    # Create a new server on port 2835 (1 ounce = 28.35 grams)
    server = TCPServer.new('localhost', 2835)
    @logger.info 'Listening on http://localhost:2835...'
    loop do
      socket = server.accept
      request_line = socket.gets
      @logger.info "* #{request_line}"
      socket.print "message"
      socket.close
    end
  end
end
and a stop method:
def stop
  @logger.info 'Shutting down'
  Process.kill('INT', @pid)
  Process.wait
  @pid = nil
end
I run my server from the command line, using:
if __FILE__ == $0
  server = Server.new
  server.logger = Logger.new(STDOUT)
  server.logger.formatter = proc { |severity, datetime, progname, msg| "#{msg}\n" }
  begin
    server.serve
    Process.wait
  rescue Interrupt
    server.stop
  end
end
The problem is that, sometimes, when I run ruby server.rb from my terminal, the server starts, but when I try to make a request on localhost:2835, it fails. Only after several requests does it start serving some pages. In other cases, I need to stop and start the server again for it to serve pages properly. Why is this happening? Am I doing something wrong? I find this very weird...
The same thing applies to my specs: I have some specs defined, including some Capybara specs. Before each test I create and start a server, and after each test I stop it. And the problem persists: tests sometimes pass, sometimes fail because the requested page could not be found.
Is there something fishy going on with my forking?
Would appreciate any answer because I have no more place to look...
Your code is not an HTTP server. It is a TCP server that sends the string "message" over the socket after receiving a newline.
The reason that your code isn't a valid HTTP server is that it doesn't conform to the HTTP protocol. One of the many requirements of the HTTP protocol is that the server respond with a message of the form
HTTP/1.1 <code> <reason>
Where <code> is a number and <reason> is a human-readable status, like "OK" or "Server Error" or something along those lines. The string "message" obviously does not conform to this requirement.
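As a minimal illustration, here is a sketch of a response that does conform (status line, headers, a blank line, then the body), dropped into the serve loop from the question:
require 'socket'
server = TCPServer.new('localhost', 2835)
loop do
  socket = server.accept
  request_line = socket.gets  # still available for logging, as in the question
  body = 'message'
  # Status line, headers, blank line, body: the minimal HTTP/1.1 shape.
  socket.print "HTTP/1.1 200 OK\r\n"
  socket.print "Content-Type: text/plain\r\n"
  socket.print "Content-Length: #{body.bytesize}\r\n"
  socket.print "Connection: close\r\n"
  socket.print "\r\n"
  socket.print body
  socket.close
end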
Here is a simple introduction to how you might build a HTTP server in ruby: https://practicingruby.com/articles/implementing-an-http-file-server
Situation
I connect to a WebSocket with Chrome's Remote Debugging Protocol, using a Rails application and a class that includes Celluloid, or more specifically, celluloid-websocket-client.
The problem is that I don't know how to disconnect the WebSocket cleanly.
When an error happens inside the actor while the main program keeps running, Chrome somehow still considers the WebSocket in use and doesn't allow me to attach again.
Code Example
Here's the code, completely self-contained:
require 'celluloid/websocket/client'
class MeasurementConnection
  include Celluloid
  def initialize(url)
    @ws_client = Celluloid::WebSocket::Client.new url, Celluloid::Actor.current
  end
  # When WebSocket is opened, register callbacks
  def on_open
    puts "Websocket connection opened"
    # @ws_client.close to close it
  end
  # When raw WebSocket message is received
  def on_message(msg)
    puts "Received message: #{msg}"
  end
  # Send a raw WebSocket message
  def send_chrome_message(msg)
    @ws_client.text JSON.dump msg
  end
  # When WebSocket is closed
  def on_close(code, reason)
    puts "WebSocket connection closed: #{code.inspect}, #{reason.inspect}"
  end
end
MeasurementConnection.new ARGV[0].strip.gsub("\"","")
while true
  sleep
end
What I've tried
When I uncomment @ws_client.close, I get:
NoMethodError: undefined method `close' for #<Celluloid::CellProxy(Celluloid::WebSocket::Client::Connection:0x3f954f44edf4)
But I thought this was delegated? The .text method works, at least.
When I call terminate instead (to quit the actor), the WebSocket stays open in the background.
When I call terminate on the MeasurementConnection object that I create in the main code, it makes the Actor appear dead, but still does not free the connection.
How to reproduce
You can test this yourself by starting Chrome with --remote-debugging-port=9222 as command-line argument, then checking curl http://localhost:9222/json and using the webSocketDebuggerUrl from there, e.g.:
ruby chrome-test.rb $(curl http://localhost:9222/json 2>/dev/null | grep webSocket | cut -d ":" -f2-)
If no webSocketDebuggerUrl is available, then something is still connected to it.
It used to work when I was using EventMachine, similar to this example but with em-websocket-client instead of faye/websocket-client. There, upon stopping the EM loop (with EM.stop), the WebSocket would become available again.
I figured it out. I was using version 0.0.1 of the celluloid-websocket-client gem, which did not delegate the close method.
Using 0.0.2 worked, and the code would look like this:
In MeasurementConnection:
def close
  @ws_client.close
end
In the main code:
m = MeasurementConnection.new ARGV[0].strip.gsub("\"","")
m.close
while m.alive?
  m.terminate
  sleep(0.01)
end
I'm attempting to use Ruby SNMP to capture SNMP traps from various devices. In order to test it, I'm sending them from my laptop using the snmptrap command. I can see in packet captures that the traps are being sent and arriving at my server (the server is the manager), and the snmptrapd utility also sees them when I run it. I'm using the following example code, exactly as it appears in the documentation's demo, to set up a TrapListener.
require 'snmp'
require 'logger'
log = Logger.new(STDOUT)
m = SNMP::TrapListener.new do |manager|
  manager.on_trap_default do |trap|
    log.info trap.inspect
  end
end
m.join
I'm sending an SNMPv2c trap, and nothing ever appears on the screen...
Here is the command I'm using to send a test SNMP trap, in the event that it's useful:
snmptrap -v 2c -c public hostname_goes_here SNMP-NOTIFICATION-MIB::snmpNotifyType SNMPv2-MIB::sysLocation
Any suggestions appreciated! Thanks!
I was stuck on this for a long time as well. It turns out that by default, TrapListener only listens on 127.0.0.1. To make it listen on ALL interfaces on the port you specified (or the default port 162), pass a :Host option: 0 makes it listen on ALL interfaces, or you can provide a specific IP address.
require 'snmp'
require 'logger'
log = Logger.new(STDOUT)
m = SNMP::TrapListener.new(:Host => 0) do |manager|
  manager.on_trap_default do |trap|
    log.info trap.inspect
  end
end
m.join
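Note that the default port 162 is privileged on Unix-like systems, so the listener may need root. For testing without root, a high port can be used; the sketch below assumes the gem also accepts a :Port option (an assumption; verify against your version's TrapListener defaults):
require 'snmp'
require 'logger'
log = Logger.new(STDOUT)
# :Host => 0 listens on all interfaces; :Port => 10162 avoids needing root.
# (:Port is assumed from the gem's default config; check your version.)
m = SNMP::TrapListener.new(:Host => 0, :Port => 10162) do |manager|
  manager.on_trap_default do |trap|
    log.info trap.inspect
  end
end
m.join
The snmptrap test command would then need to target that port as well (e.g. hostname_goes_here:10162).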
Here are four code snippets:
Code A: (Perl TCP Server)
prompt> perl -e '
use IO::Socket;
$s = IO::Socket::INET->new(LocalPort => 8080, Type => SOCK_STREAM, Reuse => 1, Listen => 10, Proto => "tcp") or die "$!";
while ($c = $s->accept) {
  print while <$c>;
}'
Code B: (Perl TCP Client)
prompt> perl -e '
use IO::Socket;
$c = IO::Socket::INET->new(PeerAddr => "localhost:8080") or die "$!";
while (<>) {
  print $c $_
}'
Code C: (Ruby TCP Server)
prompt> ruby -e '
require "socket"
s = TCPServer.new("localhost", 8080)
while c = s.accept
  while l = c.gets
    puts l
  end
end'
Code D: (Ruby TCP Client)
prompt> ruby -e '
require "socket"
c = TCPSocket.new("localhost", 8080)
while l = gets
  c.puts l
end'
The following issues confused me:
Code A and Code B can be run simultaneously. I thought the latter process would throw an "Address already in use" error when it starts, since it binds to the same TCP port as the former process.
Two (maybe more than two) instances of Code C can be run simultaneously, while I can't run two instances of Code A.
While Code A and Code C were running simultaneously, I visited "http://localhost:8008" via Google Chrome; Code C printed the HTTP messages, while Code A did not.
While I run Code C by itself, Code B cannot connect to it.
While I run Code A by itself, Code D can connect to it.
While Code A and Code C were running simultaneously, Code D connected to C and Code B connected to A.
Code A and Code B can be run simultaneously. I thought the latter process would throw an "Address already in use" error when it starts, since it binds to the same TCP port as the former process.
The addresses wouldn't conflict in this situation, only ports could, and more specifically, only if the source ports were the same. Destination ports are regularly the same, as they define where a service may exist.
Ex: HTTP servers generally use source port 80 with destination ports "randomized". HTTP clients generally use destination port 80 with source ports "randomized". (Not truly random, but that's beyond the scope of the question.)
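A quick way to see this from Ruby (a sketch; the ephemeral port will vary per run):
require 'socket'
server = TCPServer.new('127.0.0.1', 8080)
client = TCPSocket.new('127.0.0.1', 8080)
# The client's local (source) port is ephemeral; only its destination is 8080,
# so the two sockets never claim the same (address, port) pair.
p client.local_address.ip_port   # => e.g. 52814
p client.remote_address.ip_port  # => 8080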
While Code A and Code C were running simultaneously, I visited "http://localhost:8008" via Google Chrome; Code C printed the HTTP messages, while Code A did not.
This statement in particular leads me to believe the code above wasn't actually what was run. Specifically, this line:
s = TCPServer.new("localhost", 8080)
This would explain most of the issues you're describing. Try putting each of these into files and running them. You'll lessen the possibility of typos from one run to the next.
The only remaining unsolved issue is this guy:
Two (maybe more than two) instances of Code C can be run simultaneously, while I can't run two instances of Code A.
Try running lsof -Pni:8080 (or something similar in your environment), to see what services are listening on that port.
There appears to be a dual-stack issue with the Ruby script. It defaults to the IPv6 localhost, then IPv6 site-local, and lastly IPv4 localhost. It looks as if it's specifying the source address internally.
The Perl script is functioning correctly. It's likely opening a socket with in6addr_any and listening comfortably on both v4 and v6.
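One way to take name resolution out of the picture is to bind the Ruby server to an explicit address (a sketch):
require 'socket'
# Bind to the IPv4 loopback explicitly instead of the name "localhost",
# which may resolve to ::1 (IPv6) first on dual-stack systems.
s = TCPServer.new('127.0.0.1', 8080)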
Code A and Code B can be run simultaneously. I thought the latter process would throw an "Address already in use" error when it starts, since it binds to the same TCP port as the former process.
The server is bound to 127.0.0.1:8080, and the client is bound to 127.0.0.1:<some free port> (since you didn't request to bind the client to a specific port). 127.0.0.1:8080 != 127.0.0.1:<some free port>, so there's no problem.
Two (maybe more than two) instances of Code C can be run simultaneously, while I can't run two instances of Code A.
You can't run two working instances of "C". It's impossible to have two sockets bound to the same IP address and port. It's like trying to give two people the same mailing address.
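You can see the failure directly (a sketch, assuming nothing else holds the port and SO_REUSEPORT is not set):
require 'socket'
a = TCPServer.new('127.0.0.1', 9090)
b = TCPServer.new('127.0.0.1', 9090)  # raises Errno::EADDRINUSE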
While Code A and Code C were running simultaneously, I visited "http://localhost:8008" via Google Chrome; Code C printed the HTTP messages, while Code A did not.
Of course. Because "C" managed to bind to 127.0.0.1:8080, "A" can't and dies.
While I run Code C by itself, Code B cannot connect to it.
I don't see why. What error do you get?
When I tried it in files, with one modification to the code above:
Ruby client <-> Ruby server: success
Perl client <-> Perl server: success
Ruby client <-> Perl server: success
Perl client <-> Ruby server: FAILED
I got the error below:
"No connection could be made because the target machine actively
refused it. at pc.pl line 4."
So I was wondering what was going on! I checked with a different client program, and it still could not connect to the Ruby server. So I decided to focus on the Ruby server first.
sample Socket Tester:
http://sourceforge.net/projects/sockettest/
Ruby Server:
s = TCPServer.new("localhost", 20000)
while( c = s.accept)
while l = c.gets
puts l
end
end
Perl Client:(pc.pl)
#!/usr/bin/perl
use IO::Socket::INET;
print "Client\n";
$c = IO::Socket::INET->new(PeerAddr => "localhost:20000") or die "$!";
while (<>) {
  print $c $_
}
The other code for reference:
Perl Server:
#!/usr/bin/perl
use IO::Socket::INET;
print "Server\n";
$s = IO::Socket::INET->new(LocalPort => 20000, Type => SOCK_STREAM, Reuse => 1, Listen => 10, Proto => "tcp") or die "$!";
while ($c = $s->accept) {
  print while <$c>;
}
Ruby Client:
require "socket"
c = TCPSocket.new("localhost", 20000)
while l = gets
  c.puts l
end
The FIX
I had 3 systems connected in my house, and replacing localhost with the actual IP of the current PC resolved the issue. (Use ipconfig on Windows to get the IP.)
Ruby Server:
require "socket"
s = TCPServer.new("192.168.1.3", 20000)
while c = s.accept
  while l = c.gets
    puts l
  end
end
Perl Client:
#!/usr/bin/perl
use IO::Socket::INET;
print "Client\n";
$c = IO::Socket::INET->new(PeerAddr => "192.168.1.3:20000") or die "$!";
while (<>) {
  print $c $_
}
Thx Abraham