Rails 5 ActionCable receive not called - websocket

I have an ActionCable channel that is working well in terms of subscribing and receiving data. Everything works except that the receive(data) method is never called when messages are sent down the channel. Does anyone know why?
My cable channel:
class MyChannel < ApplicationCable::Channel
  def connect
  end

  def subscribed
    current_user = @authorized_user
    stream_for current_user
  end

  def receive(data)
    Rails.logger.error "received: #{data.inspect}"
    puts "received: #{data.inspect}"
  end

  def unsubscribed
    # Any cleanup needed when channel is unsubscribed
  end
end
When I call the broadcast_to method, my consumer correctly receives the message; this all works fine:
MyChannel.broadcast_to User.find(4), "test"
But my receive(data) method is never called (I can't see my logging anywhere). As far as the Rails documentation goes, this should just work, but when I look at the Cable channel methods I can't see receive at all. What am I missing?

Figured it out!
You can specify the channel action you wish to hit within the cable message itself. In my example, if I want my data to hit the receive method, my cable message looks like this:
{
  "command": "message",
  "identifier": "{\"channel\":\"MyChannel\"}",
  "data": "{\"someinfo\":\"hello world this is a test\",\"action\":\"receive\"}"
}
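As far as I can tell, the same routing works for any public method on the channel: ActionCable dispatches to whatever method the "action" key names. A minimal sketch (the speak action and its payload are made up for illustration):
class MyChannel < ApplicationCable::Channel
  def subscribed
    stream_for current_user
  end

  # Reached by a cable frame whose data JSON contains "action": "speak",
  # e.g. "data": "{\"action\":\"speak\",\"body\":\"hi\"}"
  def speak(data)
    Rails.logger.info "speak: #{data['body']}"
  end
end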

Related

Elm application stops receiving phoenix channel broadcasts

Elm, Phoenix, and Elixir are quite new to me, so I thought I would make a simple example app to test the use of Phoenix channels. The app has other stuff in it too because it's made from old "parts", but bear with me on this.
The idea is that you have several GenServers making HTTP calls to a Phoenix endpoint. Basically they are just updating a list held in an agent process.
That list is displayed in an Elm app through a Phoenix channel. The goal was just to see what happens if the agent state is updated frequently by several processes.
So this is what I have so far: the Phoenix site with the Elm app set up, and a separate Elixir app with GenServers making the updates. Everything works fine for about 20 seconds, but then the channel connection is cut and not re-established unless I hit refresh in the browser. I can see from logging that the backend is still working fine, and there is no error in the browser console either. So what's the deal here? I thought the channel connection should automatically reconnect if lost, and why is it disconnecting anyway?
I'm guessing that the problem is with elm-phoenix-socket. Here is how it is set up in the Elm app:
socketServer : String
socketServer =
    "ws://localhost:4000/socket/websocket"


initPhxSocket : Phoenix.Socket.Socket Msg
initPhxSocket =
    Phoenix.Socket.init socketServer
        |> Phoenix.Socket.withDebug
        |> Phoenix.Socket.on "new:heartbeats" "heartbeats:lobby" ReceiveHeartbeats
Here is how the broadcast is done on the backend:
defmodule AbottiWeb.ApiController do
  use AbottiWeb.Web, :controller

  def index(conn, _params) do
    beats = AbottiWeb.HeartbeatAgent.get()
    json conn, beats
  end

  def heartbeat(conn, %{"agent" => agent}) do
    AbottiWeb.HeartbeatAgent.update(agent)
    beats = AbottiWeb.HeartbeatAgent.get()
    AbottiWeb.Endpoint.broadcast("heartbeats:lobby", "new:heartbeats", beats)
    json conn, :ok
  end
end
So in essence the GenServers are constantly making calls to that heartbeat endpoint. I doubt the problem is there, though. Another possibility is that the problem lies in the channel setup, which looks like this:
user_socket.ex:
defmodule AbottiWeb.UserSocket do
  use Phoenix.Socket

  channel "heartbeats:*", AbottiWeb.HeartbeatChannel

  transport :websocket, Phoenix.Transports.WebSocket

  def connect(_params, socket) do
    {:ok, socket}
  end

  def id(_socket), do: nil
end
and heartbeat_channel.ex:
defmodule AbottiWeb.HeartbeatChannel do
  use AbottiWeb.Web, :channel

  require Logger

  def join("heartbeats:lobby", payload, socket) do
    Logger.debug "Heartbeats:lobby joined: #{inspect payload}"
    if authorized?(payload) do
      {:ok, socket}
    else
      {:error, %{reason: "unauthorized"}}
    end
  end

  # Channels can be used in a request/response fashion
  # by sending replies to requests from the client.
  def handle_in("ping", payload, socket) do
    {:reply, {:ok, payload}, socket}
  end

  # It is also common to receive messages from the client and
  # broadcast to everyone in the current topic (heartbeats:lobby).
  def handle_in("shout", payload, socket) do
    broadcast socket, "shout", payload
    {:noreply, socket}
  end

  # This is invoked every time a notification is being broadcast
  # to the client. The default implementation is just to push it
  # downstream, but one could filter or change the event.
  def handle_out(event, payload, socket) do
    Logger.debug "Broadcasting #{inspect event} #{inspect payload}"
    push socket, event, payload
    {:noreply, socket}
  end

  # Add authorization logic here as required.
  defp authorized?(_payload) do
    true
  end
end
So, any ideas what the problem is? I'm guessing it is something really simple.
OK, I know now that the socket transport times out. But why does it do that?
Well, I solved it with this:
transport :websocket, Phoenix.Transports.WebSocket,
  timeout: :infinity
I don't know how harmful that is, but since this is a test app it doesn't really matter.

Popping messages from Ruby Queue in a proper way

Here is my code to send messages from the Queue:
def initialize
  @messages = Queue.new
end

def send_messages_from_queue
  while msg = @messages.pop(true)
    # push data through the websocket
  end
rescue ThreadError
  # raised if the queue is empty
end
It works but has one drawback: if there are no channel subscribers, the message is not sent but is still taken out of the Queue, and is therefore lost.
How can I take a message from the Queue without removing it until it has been sent?
I could expose the inner queue:
@messages.class.module_eval { attr_reader :que }
or
@messages.define_singleton_method(:first_message) do
  @que.first
end
but that is not the right way, and it also has its own drawback:
when I take a message out of the Queue in one thread using pop, it won't be available to another thread a few milliseconds later; but when I use the custom first_message method, the message stays available to any other thread until I call pop.
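One way to avoid losing messages is to keep popping as before, but push a message back onto the queue if delivery fails. A minimal sketch, assuming a hypothetical push_to_websocket method that raises a hypothetical DeliveryError when there are no subscribers:
def send_messages_from_queue
  loop do
    msg = @messages.pop(true)   # non-blocking pop; raises ThreadError when empty
    begin
      push_to_websocket(msg)    # hypothetical delivery call
    rescue DeliveryError        # hypothetical "no subscribers" error
      @messages << msg          # put the message back so it is not lost
      break
    end
  end
rescue ThreadError
  # queue is empty
end
Note that re-queuing appends to the back of the queue, so strict ordering is not preserved; checking for subscribers before popping avoids that, at the cost of a small race window.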

Catching client connection disconnect in redis subscription

I'm trying to build a notifications system with Redis and Sinatra streams. However, I can't seem to catch when the client connection closes, so the blocking Redis subscribe call never shuts down. What is the best way to achieve this?
get '/user/:id/next_notification' do
  stream :keep_open do |out|
    $redis.subscribe("notifications:#{params[:id]}") { |on|
      on.message { |channel, msg|
        $redis.unsubscribe
        out << msg
      }
    }
    out.callback {
      puts "unsub"
      # $redis.unsubscribe
    }
    out.errback {
      puts "unsub"
      # $redis.unsubscribe
    }
  end
end
Redis subscription is a blocking call, so you need to execute it in a separate thread. I don't know how to do that in Ruby, but I'm sure Ruby has a threading library.
Wrap the blocking call in a try..catch (begin/rescue in Ruby) and you will know when the connection has closed from the server side.
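A minimal sketch of that suggestion using the redis gem and the streaming helpers from the question; the per-request Redis connection is an assumption (a subscribed connection cannot issue other commands), and killing the thread on disconnect is a blunt but simple cleanup:
get '/user/:id/next_notification' do
  stream :keep_open do |out|
    subscriber = Thread.new do
      begin
        redis = Redis.new   # dedicated connection just for the blocking subscribe
        redis.subscribe("notifications:#{params[:id]}") do |on|
          on.message do |_channel, msg|
            out << msg
            redis.unsubscribe   # returns from the blocking subscribe after one message
          end
        end
      rescue => e
        puts "subscriber stopped: #{e.message}"
      end
    end
    # When the client goes away, stop the subscriber thread.
    out.callback { subscriber.kill }
    out.errback  { subscriber.kill }
  end
end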

EventMachine and em-websocket - reading from a queue and pushing to a channel

I'm using EventMachine to read from a HornetQ topic and push to a Channel which is subscribed to by EM websocket connections. I need to prevent the @topic.receive loop from blocking, so I have created a proc and am calling EventMachine.defer with no callback; this will run indefinitely, and it works fine. I could also have just used Thread.new.
My question is: is this the correct way to read from a stream/queue and pass the data to the channel, or is there a better way to do this?
require 'em-websocket'
require 'torquebox-messaging'

class WebsocketServer
  def initialize
    @channel = EM::Channel.new
    @topic = TorqueBox::Messaging::Topic.new('/topics/mytopic')
  end

  def start
    EventMachine.run do
      topic_to_channel = proc do
        while true
          msg = @topic.receive
          @channel.push msg
        end
      end

      EventMachine.defer(topic_to_channel)

      EventMachine::WebSocket.start(:host => "127.0.0.1", :port => 8081, :debug => false) do |connection|
        connection.onopen do
          sid = @channel.subscribe { |msg| connection.send msg }
          connection.onclose do
            @channel.unsubscribe(sid)
          end
        end
      end
    end
  end
end

WebsocketServer.new.start
This is OK, but EM.defer runs its work on a pool of 20 threads, so I would avoid it for your use case. In general I would avoid EM entirely, especially the Java reactor, as we never finished it.
TorqueBox has a native STOMP-over-WebSockets solution that would be a much better way to go in this context, and it solves a bunch of other encapsulation challenges for you.
If you really want to stick with EM for this, I'd use Thread.new instead of defer, so as to avoid having 19 idle threads taking up extra RAM for no reason.
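A minimal sketch of that last suggestion, keeping the rest of the question's server unchanged; using EventMachine.schedule to hand the push back to the reactor thread is an assumption about the safest way to combine a plain Thread with EM::Channel:
def start
  EventMachine.run do
    # One dedicated thread instead of EM.defer's 20-thread pool.
    Thread.new do
      loop do
        msg = @topic.receive                          # blocking receive, off the reactor thread
        EventMachine.schedule { @channel.push msg }   # push from the reactor thread
      end
    end

    EventMachine::WebSocket.start(:host => "127.0.0.1", :port => 8081) do |connection|
      connection.onopen do
        sid = @channel.subscribe { |msg| connection.send msg }
        connection.onclose { @channel.unsubscribe(sid) }
      end
    end
  end
end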

How does the Thin async app example buffer the response body?

Hi, I have been going through the documentation on Thin. I am reasonably new to EventMachine, but I am aware of how Deferrables work. My goal is to understand how Thin works when the body is deferred and streamed part by part.
The following is the example that I'm working with and trying to get my head around.
class DeferrableBody
  include EventMachine::Deferrable

  def call(body)
    body.each do |chunk|
      @body_callback.call(chunk)
    end
    # @body_callback.call()
  end

  def each &blk
    @body_callback = blk
  end
end

class AsyncApp
  # This is a template async response. N.B. Can't use string for body on 1.9
  AsyncResponse = [-1, {}, []].freeze

  puts "Async testing #{AsyncResponse.inspect}"

  def call(env)
    body = DeferrableBody.new

    # Get the headers out there asap, let the client know we're alive...
    EventMachine::next_tick do
      puts "Next tick running....."
      env['async.callback'].call [200, {'Content-Type' => 'text/plain'}, body]
    end

    # Semi-emulate a long db request, instead of a timer, in reality we'd be
    # waiting for the response data. Whilst this happens, other connections
    # can be serviced.
    # This could be any callback based thing though, a deferrable waiting on
    # IO data, a db request, an http request, an smtp send, whatever.
    EventMachine::add_timer(2) do
      puts "Timer started.."
      body.call ["Woah, async!\n"]

      EventMachine::add_timer(5) {
        # This could actually happen any time, you could spawn off to new
        # threads, pause as a good looking lady walks by, whatever.
        # Just shows off how we can defer chunks of data in the body, you can
        # even call this many times.
        body.call ["Cheers then!"]
        puts "Succeed Called."
        body.succeed
      }
    end

    # throw :async # Still works for supporting non-async frameworks...
    puts "Async response sent."
    AsyncResponse # May end up in Rack :-)
  end
end

# The additions to env for async.connection and async.callback absolutely
# destroy the speed of the request if Lint is doing its checks on env.
# It is also important to note that an async response will not pass through
# any further middleware, as the async response notification has been passed
# right up to the webserver, and the callback goes directly there too.
# Middleware could possibly catch :async, and also provide a different
# async.connection and async.callback.

# use Rack::Lint
run AsyncApp.new
The part I don't clearly understand is what happens within the DeferrableBody class, in the call and each methods.
I get that each receives, as a block stored in @body_callback, the code that handles chunks of data once the timer fires, and that when succeed is called on the body, the body is sent. But when are yield or call invoked on those blocks, and how does the result become a single message when sent?
I feel I don't understand closures well enough to understand what's happening. Would appreciate any help on this.
Thank you.
OK, I think I figured out how the each block works.
Thin, on post_init, seems to generate a @request and a @response object when the connection comes in. The response object needs to respond to an each method; this is the method we override.
env['async.callback'] is a closure assigned to a method called post_process in Thin's connection.rb, which is where the data is actually sent to the connection. That code looks like this:
@response.each do |chunk|
  trace { chunk }
  puts "-- [THIN] sending data #{chunk} ---"
  send_data chunk
end
And here is how the response object's each is defined:
def each
  yield head

  if @body.is_a?(String)
    yield @body
  else
    @body.each { |chunk| yield chunk }
  end
end
So our env['async.callback'] is basically the post_process method defined in connection.rb, accessed via method(:post_process), which allows the method to be handled like a closure that retains access to the @response object. When the reactor starts, it first sends the header data in the next_tick, where it yields the head; the body is empty at this point, so nothing else gets yielded.
After this, our each method overrides the old implementation owned by the @response object, so when the add_timer blocks fire, the post_process that gets triggered sends the data we supplied via body.call(["Woah..."]) to the browser (or wherever).
Completely in awe of macournoyer and the team committing to Thin. Please correct my understanding if you feel this is not how it works.
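To make the closure part concrete, here is the same mechanism stripped of Thin entirely (ChunkRelay and its names are purely illustrative):
class ChunkRelay
  # Store the block that knows how to write one chunk to the connection.
  def each(&blk)
    @writer = blk
  end

  # Whoever produces data later hands chunks to that stored block.
  def call(chunks)
    chunks.each { |chunk| @writer.call(chunk) }
  end
end

relay = ChunkRelay.new
relay.each { |chunk| print chunk }   # the server registers its writer once
relay.call ["Woah, async!\n"]        # the app pushes data whenever it becomes available
relay.call ["Cheers then!"]
Each call to call simply reuses the stored block, so the chunks are written out one at a time rather than being buffered into a single message.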
