Elixir/Phoenix websocket error

I'm trying to connect from a JS application to a websocket in Phoenix/Elixir, but no matter what I try, I get this error:
[info] GET /socket/websocket
[debug] ** (Phoenix.Router.NoRouteError) no route found for GET /socket/websocket (AirtameZap.Router)
From the JavaScript client I connect in the following way:
socket = new Socket("ws://example/socket", {});
socket.connect();
socket.channel("zap:all")
...
Here are the source files for the backend service:
/lib/airtame_zap/endpoint.ex
defmodule AirtameZap.Endpoint do
  use Phoenix.Endpoint, otp_app: :airtame_zap

  socket "/socket", AirtameZap.ZapSocket
  ...
/web/channels/zap_socket.ex
defmodule AirtameZap.ZapSocket do
  use Phoenix.Socket

  channel "zap:*", AirtameZap.ZapChannel

  transport :websocket, Phoenix.Transports.WebSocket
  ...

  def connect(_params, socket) do
    {:ok, socket}
  end

  def id(_socket), do: nil
end
/web/channels/zap_channel.ex
defmodule AirtameZap.ZapChannel do
  use Phoenix.Channel

  def join("zap:all", _message, socket) do
    {:ok, socket}
  end
end
Any ideas? I found the reply "Phoenix: Trying to connect to Channel but getting a not Route found for GET /websocket error", but it didn't help.

Related

Access to SSL context in faye-websocket+eventmachine connection

I would like to get a wire dump of a secure websocket connection where I am the client.
I am using the faye-websocket gem in ruby to connect to a secure websocket service. This works well. To understand a specific issue, I need to get a wire dump of the communication. I typically use wireshark for this (running on the same machine as the client). To decrypt the SSL connection, I need to extract the master key to pass it to wireshark. I know how to extract the master key if I have direct access to the socket, but I fail to get access to it when using the faye-websocket gem.
The code to run faye-websocket is pretty standard:
EM.run {
  ws = Faye::WebSocket::Client.new('wss://...')

  ws.on :open do |event|
    p [:open]
    ### authentication
  end

  ws.on :message do |event|
    p [:message, event.data]
    ### message - response loop here
  end

  ws.on :close do |event|
    p [:close, event.code, event.reason]
    ws = nil
  end
}
Inspecting the content of ws, it has a @socket member, but I fail to retrieve it (instance_variable_get returns nil).
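One way to hunt for what the client object actually wraps is plain Ruby reflection from inside one of the handlers. This is only a sketch: the instance variable names are faye-websocket implementation details, may differ between versions, and may not lead directly to the OpenSSL socket:

ws.on :open do |event|
  # Dump the client's instance variables to see what it wraps internally.
  # Whether any of these leads to the underlying SSL socket depends on
  # faye-websocket / EventMachine internals and may change between versions.
  ws.instance_variables.each do |name|
    value = ws.instance_variable_get(name)
    p [name, value.class]
  end
end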
For the record, once I have the SSL context, I would use the code from
https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/how-to-decrypt-ruby-ssl-communications-with-wireshark/
to extract the master key and pass it to Wireshark:
ssl_socket.session.to_text.each_line do |line|
  if match = line.match(/Session-ID\s*: (?<session_id>.*)/)
    session_id = match[:session_id]
  end
  if match = line.match(/Master-Key\s*: (?<master_key>.*)/)
    master_key = match[:master_key]
  end
end
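For completeness, a minimal sketch of how the extracted values could then be handed to Wireshark, assuming the session-key log format Wireshark can read (lines of the form RSA Session-ID:<hex> Master-Key:<hex>); the file path is a placeholder that you would point Wireshark's (Pre)-Master-Secret log file setting at:

# Append the extracted session ID and master key in a format Wireshark
# can read as a (pre-)master secret log file.
File.open('/tmp/ssl_keys.log', 'a') do |f|
  f.puts "RSA Session-ID:#{session_id} Master-Key:#{master_key}"
end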
Does someone have a solution to get access to the underlying socket and the SSL context?

Elm application stops receiving phoenix channel broadcasts

Elm, Phoenix, and Elixir are quite new to me, so I thought I would make a simple example app to test the use of Phoenix channels. The app has other stuff in it too because it's made from old "parts", but bear with me on this.
The idea is that you have several GenServers making HTTP calls to a Phoenix endpoint. Basically they are just updating a list held in an Agent process.
That list is displayed in an Elm app through a Phoenix channel. The goal was just to see what happens if the agent state is updated frequently by several processes.
So this is what I have so far: the Phoenix site with the Elm app set up, and a separate Elixir app with GenServers making the updates. Everything works fine for about 20 seconds, but then the channel connection is cut and not re-established unless I hit refresh in the browser. I can see from logging that the backend is still working fine, and there is no error in the browser console either. So what's the deal here? I thought the channel connection should automatically reconnect if lost, and why is it disconnecting anyway?
I'm guessing that the problem is with elm-phoenix-socket. Here is how it is set up in the Elm app:
socketServer : String
socketServer =
    "ws://localhost:4000/socket/websocket"

initPhxSocket : Phoenix.Socket.Socket Msg
initPhxSocket =
    Phoenix.Socket.init socketServer
        |> Phoenix.Socket.withDebug
        |> Phoenix.Socket.on "new:heartbeats" "heartbeats:lobby" ReceiveHeartbeats
Here is how the broadcast is done on the backend:
defmodule AbottiWeb.ApiController do
  use AbottiWeb.Web, :controller

  def index(conn, _params) do
    beats = AbottiWeb.HeartbeatAgent.get()
    json conn, beats
  end

  def heartbeat(conn, %{"agent" => agent}) do
    AbottiWeb.HeartbeatAgent.update(agent)
    beats = AbottiWeb.HeartbeatAgent.get()
    AbottiWeb.Endpoint.broadcast("heartbeats:lobby", "new:heartbeats", beats)
    json conn, :ok
  end
end
So in essence the GenServers are constantly making calls to that heartbeat endpoint. I doubt the problem is here, though. Another place the problem might lie is the channel setup, which looks like this:
user_socket.ex:
defmodule AbottiWeb.UserSocket do
  use Phoenix.Socket

  channel "heartbeats:*", AbottiWeb.HeartbeatChannel

  transport :websocket, Phoenix.Transports.WebSocket

  def connect(_params, socket) do
    {:ok, socket}
  end

  def id(_socket), do: nil
end
and heartbeat_channel.ex:
defmodule AbottiWeb.HeartbeatChannel do
  use AbottiWeb.Web, :channel

  require Logger

  def join("heartbeats:lobby", payload, socket) do
    Logger.debug "Hearbeats:lobby joined: #{inspect payload}"

    if authorized?(payload) do
      {:ok, socket}
    else
      {:error, %{reason: "unauthorized"}}
    end
  end

  # Channels can be used in a request/response fashion
  # by sending replies to requests from the client
  def handle_in("ping", payload, socket) do
    {:reply, {:ok, payload}, socket}
  end

  # It is also common to receive messages from the client and
  # broadcast to everyone in the current topic (heartbeats:lobby).
  def handle_in("shout", payload, socket) do
    broadcast socket, "shout", payload
    {:noreply, socket}
  end

  # This is invoked every time a notification is being broadcast
  # to the client. The default implementation is just to push it
  # downstream but one could filter or change the event.
  def handle_out(event, payload, socket) do
    Logger.debug "Broadcasting #{inspect event} #{inspect payload}"
    push socket, event, payload
    {:noreply, socket}
  end

  # Add authorization logic here as required.
  defp authorized?(_payload) do
    true
  end
end
So any ideas what the problem is? I'm guessing it is something really simple.
OK, I know now that the socket transport times out. But why does it do that?
Well, I solved it with this:
transport :websocket, Phoenix.Transports.WebSocket,
  timeout: :infinity
I don't know how harmful that is, but this being a test app it doesn't really matter.

Faye WebSocket, reconnect to socket after close handler gets triggered

I have a super simple script that has pretty much what's on the Faye WebSocket GitHub page for handling closed connections:
ws = Faye::WebSocket::Client.new(url, nil, :headers => headers)

ws.on :open do |event|
  p [:open]
  # send ping command
  # send test command
  #ws.send({command: 'test'}.to_json)
end

ws.on :message do |event|
  # here is the entry point for data coming from the server.
  p JSON.parse(event.data)
end

ws.on :close do |event|
  # connection has been closed callback.
  p [:close, event.code, event.reason]
  ws = nil
end
Once the client is idle for 2 hours, the server closes the connection. I can't seem to find a way to reconnect to the server once ws.on :close is triggered. Is there an easy way of going about this? I just want it to trigger ws.on :open after :close goes off.
Looking at the Faye::WebSocket client implementation, there is a ping option which sends some data to the server periodically, preventing the connection from going idle.
# Send ping data each minute
ws = Faye::WebSocket::Client.new(url, nil, headers: headers, ping: 60)
However, if you don't want to rely on the server's behaviour, since it can close the connection even if you are sending some data periodically, you can just put the client setup inside a method and start all over again whenever the server closes the connection.
def start_connection
  ws = Faye::WebSocket::Client.new(url, nil, headers: headers, ping: 60)

  ws.on :open do |event|
    p [:open]
  end

  ws.on :message do |event|
    # here is the entry point for data coming from the server.
    p JSON.parse(event.data)
  end

  ws.on :close do |event|
    # connection has been closed callback.
    p [:close, event.code, event.reason]
    # restart the connection
    start_connection
  end
end
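A minimal usage sketch, assuming the method above; the url and headers values here are placeholders, not from the original post, and are defined as methods so that start_connection can see them (local variables would not be visible inside the def):

require 'faye/websocket'
require 'eventmachine'
require 'json'

# Placeholder endpoint and headers.
def url
  'wss://example.com/socket'
end

def headers
  { 'Authorization' => 'Bearer placeholder-token' }
end

# The client has to run inside an EventMachine reactor.
EM.run { start_connection }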

Is Ruby em-websocket blocking?

I'm writing a Ruby program that has two threads: one that listens on an incoming UDP connection, and another that broadcasts on a websocket from which browsers on the client side read. I'm using the em-websocket gem. However, my UDP listener thread never gets called, and it looks like the code stays within the websocket initialization code. I'm guessing this is because em-websocket is blocking, but I haven't been able to find any info online that suggests that. Is it an error on my side? I'm kinda new to Ruby, so I'm not able to figure out what I'm doing wrong.
require 'json'
require 'em-websocket'
require 'socket'

socket = nil
text = "default"
$x = 0

EventMachine.run do
  EventMachine::WebSocket.start(:host => "0.0.0.0", :port => 8080) do |ws|
    ws.onopen {
      ws.send "Hello Client!"
      socket = ws
      $x = 1
    }
    ws.onmessage { |msg| socket.send "Pong: #{msg}" }
    ws.onclose { puts "WebSocket closed" }
  end
end

def listen()
  puts "listening..."
  s = UDPSocket.new
  s.bind(nil, 3000)
  while 1 < 2 do
    text, sender = s.recvfrom(1024)
    puts text
    if $x == 1 then
      socket.send text
    end
  end
end

t2 = Thread.new { listen() }
t2.join
em-websocket is non-blocking; however, UDPSocket#recvfrom is. It might be better to just use EventMachine's open_datagram_socket instead.
Another thing to note: you should not expose socket as a "global" variable. Every time somebody connects, the reference to the previously connected client will be lost. Maybe make some sort of repository for socket connections, or use an observer pattern to broadcast messages when something comes in. What I would do is have a dummy object act as an observer, and whenever a socket connects/disconnects you register/unregister it with the observer:
require 'observer'

class Dummy
  include Observable

  def receive_data data
    changed true
    notify_observers data
  end
end

# ... later on ...
$broadcaster = Dummy.new

class UDPHandler < EventMachine::Connection
  def receive_data data
    $broadcaster.receive_data data
  end
end
EventMachine.run do
  EM.open_datagram_socket "0.0.0.0", 3000, UDPHandler

  EM::WebSocket.start :host => "0.0.0.0", :port => 8080 do |ws|
    ws.onopen do
      # Dispatch notifications via ws.send (Observable calls #update by default,
      # which the websocket connection does not define).
      $broadcaster.add_observer ws, :send
    end
    ws.onclose do
      $broadcaster.delete_observer ws
    end
    # ...
  end
end
The whole point of EventMachine is to abstract away from the basic socket and threading structure, and handle all the asynchronous bits internally. It's best not to mix the classical libraries like UDPSocket or Thread with EventMachine stuff.

How to disconnect redis client in websocket eventmachine

I'm trying to build a websocket server where each client establishes its own Redis connections, used for publish and subscribe.
When the Redis server is running, I can see the two new connections being established when a client connects to the websocket server, and I can also publish data to the client. But when the client drops the connection to the websocket server, I also want to disconnect from Redis. How can I do this?
Maybe I'm doing it wrong, but this is my code:
#require 'redis'
require 'em-websocket'
require 'em-hiredis'
require 'json'

CLIENTS = Hash.new

class PubSub
  def initialize(client)
    @socket = client.ws
    # These clients can only be used for pub sub commands
    @publisher = EM::Hiredis.connect  # Later I would like to disconnect this
    @subscriber = EM::Hiredis.connect # Later I would like to disconnect this
    client.connections << @publisher << @subscriber
  end

  def subscribe(channel)
    @channel = channel
    @subscriber.subscribe(channel)
    @subscriber.on(:message) { |chan, message|
      @socket.send message
    }
  end

  def publish(channel, msg)
    @publisher.publish(channel, msg).errback { |e|
      puts [:publisherror, e]
    }
  end

  def unsubscribe()
    @subscriber.unsubscribe(@channel)
  end
end

class Client
  attr_accessor :connections, :ws

  def initialize(ws)
    @connections = []
    @ws = ws
  end
end

EventMachine.run do
  # Creates a websocket listener
  EventMachine::WebSocket.start(:host => '0.0.0.0', :port => 8081) do |ws|
    ws.onopen do
      # I instantiated above
      puts 'Client connected. Creating socket'
      @client = Client.new(ws)
      CLIENTS[ws] = @client
    end

    ws.onclose do
      # Upon the close of the connection I remove it from my list of running sockets
      puts 'Client disconnected. Closing socket'
      @client.connections.each do |con|
        # do something to disconnect from redis
      end
      CLIENTS.delete ws
    end

    ws.onmessage { |msg|
      puts "Received message: #{msg}"
      result = JSON.parse(msg)
      if result.has_key? 'channel'
        ps = PubSub.new(@client)
        ps.subscribe(result['channel'])
      elsif result.has_key? 'publish'
        ps = PubSub.new(ws)
        ps.publish(result['publish']['channel'], result['publish']['msg'])
      end
    }
  end
end
This fork of em-hiredis supports closing the connection: https://github.com/whatupdave/em-hiredis
Here is how I would do this (and have done it many times):
Instead of always opening and closing connections for each client, you can keep one connection open per Thread/Fiber, depending on what you are basing your concurrency on. That way, if you are using a pool of Threads/Fibers, once each one of them has its connections it will keep and reuse them.
I have not worked much with websockets until now (I was waiting for a standard implementation), but I am sure you can apply that thinking to them too.
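For example, a lazily created, thread-local connection is one minimal way to express that reuse (in Ruby, Thread#[] is actually fiber-local, so this also works per Fiber); just a sketch assuming em-hiredis:

# One Redis connection per thread/fiber, created on first use and then reused.
def redis_connection
  Thread.current[:redis] ||= EM::Hiredis.connect
end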
You can also do what rails/activerecord does: keep a pool of Redis connections; each time you need a connection you request one, use it, and release it. It could look like this:
def handle_request(request)
  @redis_pool.get_connection do |c|
    # [...]
  end
end
Before yielding to the block, a connection is taken from the available ones, and after the block returns the connection is marked as free again.
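A minimal sketch of what such a pool could look like (the RedisPool class and get_connection name are illustrative, not part of em-hiredis; it assumes everything runs on a single EventMachine reactor thread, so no locking is done):

require 'em-hiredis'

class RedisPool
  def initialize(size = 5)
    @size = size
    @available = []  # connections not currently checked out
    @created = 0
  end

  # Check a connection out, yield it, and mark it free again afterwards.
  def get_connection
    conn = @available.pop || create_connection
    yield conn
  ensure
    @available.push(conn) if conn
  end

  private

  def create_connection
    raise 'pool exhausted' if @created >= @size
    @created += 1
    EM::Hiredis.connect
  end
end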
This was added to em-hiredis: https://github.com/mloughran/em-hiredis/pull/6
