I'm following along with the PragProg book "Programming Phoenix", and I'm currently on the chapters about Phoenix's channels.
At some point there's an example about setting up a simple channel with one topic and handling in/out messages between the client and server. No fancy stuff, everything worked as advertised.
Then I started investigating Phoenix.Channel's API and found the broadcast_from function.
After a bit of research it was clear to me that calling broadcast_from (from the channel) would send the message to all connected clients except the one whose message I was currently handling.
My current code is:
defmodule Rumbl.VideoChannel do
  use Rumbl.Web, :channel

  def join("videos:" <> video_id, _params, socket) do
    :timer.send_interval(5000, :ping)
    {:ok, %{status: "successful join"}, assign(socket, :video_id, String.to_integer(video_id))}
  end

  def handle_info(:ping, socket) do
    count = socket.assigns[:count] || 1
    # push socket, "ping", %{count: count}
    broadcast_from! socket, "test", %{id: 1, status: :critical}

    {:noreply, assign(socket, :count, count + 1)}
  end
end
I expected that, upon a client's connection, the client would not receive the "test" messages. And that was, indeed, the outcome. Until I opened another browser window and connected to the channel. At that point both windows started receiving the "test" messages. The same thing happened when the second window was opened from another device (such as an iPhone).
Is that the normal behaviour or is it me misusing/misunderstanding the documentation?
Thanks in advance for your support.
I think it's the normal behaviour. From the broadcast_from docs:
The channel that owns the socket will not receive the published message.
It's a bit confusing. As I understand the docs, when you open another window, join creates another socket, and therefore another channel process (pid), and it's from that other pid that the test message is broadcast to your first window. With one window you will see no test messages. With two windows, each window receives the test messages broadcast from the other socket's pid.
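To make the difference concrete, here is a minimal sketch contrasting the two calls inside the handle_info from the question (same module and payload; this is an illustration, not the book's code):

def handle_info(:ping, socket) do
  # broadcast!/3 delivers to every subscriber of the topic,
  # including the channel process doing the broadcasting:
  broadcast! socket, "test", %{id: 1, status: :critical}

  # broadcast_from!/3 delivers to every subscriber except this
  # channel process. With two windows joined to the same topic,
  # each window's process pings, and each ping is received only
  # by the other window, so both windows end up seeing "test":
  broadcast_from! socket, "test", %{id: 1, status: :critical}

  {:noreply, socket}
end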
I need to perform some actions when the user leaves a channel (in most cases they close the tab voluntarily, but there may also be a connection loss, timeout, etc.).
According to posts like https://elixirforum.com/t/phoenix-presence-run-some-code-when-user-leaves-the-channel/17739 and How to detect if a user left a Phoenix channel due to a network disconnect?, intercepting the "presence_diff" event from Presence seems to be a foolproof way to go, as it should also cover the cases where the connection terminates abnormally.
Strangely, the presence_diff event seems to only be triggered when I track the user via Presence.track, but not when the user leaves.
Meanwhile, adding a terminate(reason, socket) callback in my channel correctly catches the leave event.
I wonder what could be wrong in my configuration. Or did I not understand the use of Presence correctly?
Example code:
def join("participant:" <> participant_id, _payload, socket) do
if socket.assigns.participant_id == participant_id do
send(self(), :after_participant_join)
{:ok, socket}
else
{:error, %{reason: "unauthorized"}}
end
end
def handle_info(:after_participant_join, socket) do
experiment_id = socket.assigns.experiment_id
Presence.track(socket, experiment_id, %{
# keys to track
})
# Broadcast something
# broadcast(socket, ...)
{:noreply, socket}
end
intercept(["presence_diff"])
def handle_out("presence_diff", payload, socket) do
# Only gets triggered at Presence.track, but not when the connection is closed.
IO.puts("presence_diff triggered, payload is #{inspect(payload)}")
leaves = payload.leaves
for {experiment_id, meta} <- leaves do
IO.puts("Leave information: #{meta}")
# Do stuffs
end
end
# This works, however.
def terminate(reason, socket) do
IO.puts("terminated. #{inspect(reason)}")
# Do stuffs.
end
OK, I think I know what happened: each "participant:" <> participant_id topic is, as its name suggests, subscribed to by only one participant. Therefore, when that participant quits, the channel process also dies, and nobody is left to act on the presence_diff message.
A separate process is still needed. One can call MyApp.Endpoint.subscribe from that process to subscribe to the "participant:" <> participant_id topic and act on the presence_diff messages.
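A minimal sketch of such a process, assuming the endpoint is MyApp.Endpoint (the module name is made up, and supervision/start_link wiring is left out):

defmodule MyApp.ParticipantWatcher do
  use GenServer

  def start_link(participant_id) do
    GenServer.start_link(__MODULE__, participant_id)
  end

  def init(participant_id) do
    # Subscribe this long-lived process to the same topic the channel
    # uses; it survives when the participant's channel process dies.
    MyApp.Endpoint.subscribe("participant:" <> participant_id)
    {:ok, participant_id}
  end

  # Presence broadcasts arrive here as %Phoenix.Socket.Broadcast{} messages.
  def handle_info(%Phoenix.Socket.Broadcast{event: "presence_diff", payload: payload}, state) do
    for {id, meta} <- payload.leaves do
      IO.puts("#{id} left: #{inspect(meta)}")
      # Do stuffs
    end

    {:noreply, state}
  end

  def handle_info(_msg, state), do: {:noreply, state}
end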
Or one can set up an external monitor. See How to detect if a user left a Phoenix channel due to a network disconnect?
I'm developing a WebSocket server using the Phoenix Framework that needs to send real-time messages to my clients.
The basic idea is that a client can subscribe to some type of information and expect to receive only that; other clients never receive it unless they subscribe to it too. The same information is broadcast in real time to every client subscribed to it, and only to those clients.
This information is separated into categories and subcategories, going down four levels.
So, for example, let's say I have two root categories, CatA and CatB. Each category can have subcategories, so CatA can have the subcategories CatA.SubCatA and CatA.SubCatB; each subcategory can have further subcategories, and so on.
This information is generated by services, one for each root category (they handle all the information for the subcategories too), so we have CatAService and CatBService. These services need to run from the moment the server starts, continuously generating new information and broadcasting it to anyone subscribed to it.
Now, I have clients that will try to subscribe to this information. My solution for now is to have a channel for each information type available, so a client can join a channel to receive information of the channel's type.
For that I have something like this in the JS code:
let channel = socket.channel("CatA:SubCatA:SubSubCatA", {})
channel.join()
channel.on("new_info", (payload) => { ... })
In this case, I would have a channel that all clients interested in SubSubCatA of SubCatA of CatA can join, and a service for CatA that generates and broadcasts the information for all its subcategories, and so on.
I'm not sure if I was able to explain exactly what I want, so if something is not clear, please tell me and I will explain it better. I also made this (very bad) image as an example of how all the communication would happen: https://ibb.co/fANKPb .
Also, note that I could have just one channel for each root category and broadcast all the subcategory information to everyone who joined that category's channel, but I'm very concerned about performance and network bandwidth, so my objective is to send the information only to the clients that requested it.
Doing some tests here, it seems that if the client joins the channel as shown in the JS code above, I can do this:
MyServerWeb.Endpoint.broadcast "CatA:SubCatA:SubSubCatA", "new_info", message
and that client (and all the other clients listening to that channel, but only them) will receive that message.
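For reference, a minimal sketch of the server side of that test (the module names and the "use" line are my assumptions; the wildcard route lets a single channel module own the whole CatA tree):

# In the user socket module:
channel "CatA:*", MyServerWeb.CatAChannel

defmodule MyServerWeb.CatAChannel do
  use MyServerWeb, :channel

  # Any subtopic under "CatA:" can be joined; the full topic string is
  # what MyServerWeb.Endpoint.broadcast/3 must match exactly.
  def join("CatA:" <> _subtopic, _params, socket) do
    {:ok, socket}
  end
end

# In CatAService, whenever new information is produced:
MyServerWeb.Endpoint.broadcast("CatA:SubCatA:SubSubCatA", "new_info", %{body: "..."})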
So, my question is divided in two parts. The first is more generic: what are the correct ways to achieve what I described above?
The second is whether the solution I already came up with is a good way to solve this, since I'm not sure if the length of a string like "CatA:SubCatA:SubSubCatA" creates overhead when the server parses it, or if there is some other limitation I'm not aware of.
Thanks!
You have to make separate channels for each class of clients, and depending on the IDs you receive, you can broadcast messages after checking which clients have joined the channel:
def join("groups:" <> group_slug, _params, socket) do
%{team_id: team_id, current_user: user} = socket.assigns
case Repo.get_by(Group, slug: group_slug, team_id: team_id) do
nil ->
{:error, %{message: "group not found"}}
group ->
case GroupAuthorization.can_view?(group.id, user.id) do
true ->
messages = MessageQueries.group_latest_messages(group.id, user)
json = MessageView.render("index.json", %{messages: messages})
send self(), :after_join
{:ok, %{messages: json}, assign(socket, :group, group)}
false ->
{:error, %{message: "unauthorized"}}
end
end
end
This is an example of sending messages only to the users who are authorized for and have joined a group. Hope this helps.
I have read through the zguide but haven't found the kind of pattern I'm looking for:
There is one central server (with known endpoint) and many clients (which may come and go).
Clients keep sending heartbeats to the server, but they don't want the server to reply.
Server receives heartbeats, but it does not reply to clients.
Heartbeats sent while clients and server are disconnected should somehow be dropped, to prevent a heartbeat flood when they go back online.
The closest I can think of is the DEALER-ROUTER pattern, but since this is meant to be used as an async REQ-REP pattern (no?), I'm not sure what would happen if the server just keeps silent on incoming "requests." Also, the DEALER socket would block rather than start dropping heartbeats when the send high-water mark is reached, which would still result in a heartbeat flood.
The PUSH/PULL pattern should give you what you need.
# Client example
import zmq

class Client(object):
    def __init__(self, client_id):
        self.client_id = client_id
        ctx = zmq.Context.instance()
        self.socket = ctx.socket(zmq.PUSH)
        self.socket.connect("tcp://localhost:12345")

    def send_heartbeat(self):
        self.socket.send(str(self.client_id))

# Server example
import zmq

class Server(object):
    def __init__(self):
        ctx = zmq.Context.instance()
        self.socket = ctx.socket(zmq.PULL)
        self.socket.bind("tcp://*:12345")

    def receive_heartbeat(self):
        return self.socket.recv()  # returns the client_id of the message's sender
This PUSH/PULL pattern works with multiple clients, as you wish. The server should keep an administration of the received heartbeats (i.e. a dictionary like {client_id: last_received}) that is updated with datetime.utcnow() on each received message, and implement some housekeeping function to periodically check the administration for clients with old timestamps.
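A rough sketch of that administration on the server side (the class name, the check_stale helper, and the 10-second threshold are my own choices, not part of any zmq API):

import datetime

import zmq

class MonitoredServer(object):
    def __init__(self):
        ctx = zmq.Context.instance()
        self.socket = ctx.socket(zmq.PULL)
        self.socket.bind("tcp://*:12345")
        self.last_received = {}  # client_id -> datetime of the last heartbeat

    def receive_heartbeat(self):
        # Record when each client was last heard from.
        client_id = self.socket.recv()
        self.last_received[client_id] = datetime.datetime.utcnow()
        return client_id

    def check_stale(self, max_age_seconds=10):
        # Housekeeping: return the clients whose last heartbeat is too old.
        now = datetime.datetime.utcnow()
        return [cid for cid, ts in self.last_received.items()
                if (now - ts).total_seconds() > max_age_seconds]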
Fairly new to ZeroMQ and trying to get a basic pub/sub to work. When I run the following (sub started before pub), the publisher finishes but the subscriber hangs, having not received all the messages - why?
I think the socket is being closed before all the messages have been sent. Is there a way of ensuring all messages are received?
Publisher:
import zmq
import random
import time
import tnetstring

context = zmq.Context()
socket = context.socket(zmq.PUB)
socket.bind("tcp://*:5556")

y = 0
for x in xrange(5000):
    st = random.randrange(1, 10)
    data = []
    data.append(random.randrange(1, 100000))
    data.append(int(time.time()))
    data.append(random.uniform(1.0, 10.0))
    s = tnetstring.dumps(data)
    print 'Sending ...%d %s' % (st, s)
    socket.send("%d %s" % (st, s))
    print "Messages sent: %d" % x
    y += 1

print '*** SERVER FINISHED. # MESSAGES SENT = ' + str(y)
Subscriber:
import sys
import zmq
import tnetstring

# Socket to talk to server
context = zmq.Context()
socket = context.socket(zmq.SUB)
socket.connect("tcp://localhost:5556")

filter = ""  # get all messages
socket.setsockopt(zmq.SUBSCRIBE, filter)

x = 0
while True:
    topic, data = socket.recv().split()
    print "Topic: %s, Data = %s. Total # Messages = %d" % (topic, data, x)
    x += 1
In ZeroMQ, clients and servers always try to reconnect; they won't go down if the other side disconnects (because in many cases you'd want them to resume talking if the other side comes up again). So in your test code, the client will just wait until the server starts sending messages again, unless you stop recv()ing messages at some point.
In your specific instance, you may want to investigate using socket.close() and context.term(); term() will block until all the messages have been sent. You also have the slow-joiner problem: you can add a sleep after the bind, but before you start publishing. This works in a test case, but you will want to really understand what is a solution versus a band-aid.
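A minimal sketch of both fixes applied to the publisher above (the one-second sleep is an arbitrary value for the band-aid):

import time

import zmq

context = zmq.Context()
socket = context.socket(zmq.PUB)
socket.bind("tcp://*:5556")

time.sleep(1)  # slow-joiner band-aid: give subscribers time to connect

for x in xrange(5000):
    socket.send("%d payload" % x)

socket.close()  # close the socket when done sending...
context.term()  # ...term() blocks until pending messages are flushed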
You need to think of the PUB/SUB pattern like a radio. The sender and receiver are both asynchronous. The Publisher will continue to send even if no one is listening. The subscriber will only receive data if it is listening. If the network goes down in the middle, the data will be lost.
You need to understand this in order to design your messages. For example, if you design your messages to be "idempotent", it doesn't matter if you lose data. An example of this would be a status type message. It doesn't matter if you have any of the previous statuses. The latest one is correct and message loss doesn't matter. The benefits to this approach is that you end up with a more robust and performant system. The downsides are when you can't design your messages this way.
Your example includes a type of message where no loss is acceptable. Another example would be transactional messages: if you only sent the deltas of what changed in your system, you could not afford to lose any of them. Database replication is often managed this way, which is why DB replication is often so fragile.
To provide guarantees, you need to do a couple of things. First, add a persistent cache: each message sent needs to be logged in it, and each message needs to be assigned a unique ID (preferably a sequence) so that the clients can determine if they are missing a message. Second, add a socket (ROUTER/REQ) for the client to request the missing messages individually. Alternatively, you could just use the secondary socket to request resending over the PUB/SUB socket; the clients would then all receive the messages again (which works for the multicast version) and would ignore the messages they had already seen. NOTE: this follows the MAJORDOMO pattern found in the ZeroMQ guide.
An alternative approach is to create your own broker using the ROUTER/DEALER sockets. When the ROUTER socket sees each DEALER connect, it stores its ID. When the ROUTER needs to send data, it iterates over all client IDs and publishes the message. Each message should contain a sequence number so that the clients can tell which missing messages to request. NOTE: this is a sort of reimplementation of Kafka from LinkedIn.
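A rough sketch of that broker idea (the port, the framing, and the hello message are my assumptions; in practice the ROUTER learns a DEALER's identity from the first frame of anything that DEALER sends):

import zmq

ctx = zmq.Context.instance()
router = ctx.socket(zmq.ROUTER)
router.bind("tcp://*:5557")

clients = set()
seq = 0

while True:
    # Clients announce themselves (or heartbeat); the first frame is the
    # DEALER identity that ROUTER prepends to every received message.
    ident, payload = router.recv_multipart()
    clients.add(ident)

    # "Publish" by sending to every known client, tagging each message
    # with a sequence number so clients can detect gaps and re-request.
    seq += 1
    for c in clients:
        router.send_multipart([c, str(seq), "some update"])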
So, I'm trying to simulate some basic HTTP persistent connections using sockets and Ruby - for a college class.
The point is to build a server - able to handle multiple clients - that receives a file path and gives back the file content - just like an HTTP GET.
The current server implementation loops listening for clients and fires a new thread when there's an incoming connection, reading the file paths from this socket. It's very dumb, but it works fine when working with non-persistent connections - one request per connection.
But they should be persistent.
Which means the client shouldn't worry about closing the connection. In the non-persistent version the server echoes the response and closes the connection - goodbye client, farewell.
But being persistent means the server thread should loop and wait for more incoming requests until... well, until there are no more requests. How does the server know that? It doesn't! Some sort of timeout is needed. I tried to do that with Ruby's Timeout, but it didn't work.
Googling for some solutions - besides being thoroughly advised to avoid using the Timeout module - I've seen a lot of posts about the IO.select method, which should handle this socket-waiting issue way better than using threads and stuff (which really sounds cool, considering how Ruby threads (don't) work). I'm trying to understand how IO.select works, but I still wasn't able to make it work in the current scenario.
So I ask basically two things:
how can I efficiently handle this timeout issue on the server side, either using some thread-based solution, low-level socket options, or some IO.select magic?
how can the client side know that the server has closed its side of the connection?
Here's the current code for the server:
require 'date'
require 'json'
require 'socket'

module Sockettp
  class Server
    def initialize(dir, port = Sockettp::DEFAULT_PORT)
      @dir = dir
      @port = port
    end

    def start
      puts "Starting Sockettp server..."
      puts "Serving #{@dir.yellow} on port #{@port.to_s.green}"

      Socket.tcp_server_loop(@port) do |socket, client_addrinfo|
        handle socket, client_addrinfo
      end
    end

    private

    def handle(socket, addrinfo)
      Thread.new(socket) do |client|
        log "New client connected"
        begin
          loop do
            if client.eof?
              puts "#{'-' * 100} end connection"
              break
            end

            input = client.gets.chomp

            body = content_for(input)

            response = {}
            if body
              response.merge!({
                status: 200,
                body: body
              })
            else
              response.merge!({
                status: 404,
                body: Sockettp::STATUSES[404]
              })
            end

            log "#{addrinfo.ip_address} #{input} -- #{response[:status]} #{Sockettp::STATUSES[response[:status]]}".send(response[:status] == 200 ? :green : :red)

            client.puts(response.to_json)
          end
        ensure
          socket.close
        end
      end
    end

    def content_for(path)
      path = File.join(@dir, path)

      return File.read(path) if File.file?(path)
      return Dir["#{path}/*"] if File.directory?(path)
    end

    def log(msg)
      puts "#{Thread.current} -- #{DateTime.now.to_s} -- #{msg}"
    end
  end
end
Update
I was able to simulate the timeout behaviour using the IO.select method, but the implementation doesn't feel good when combined with a couple of threads for accepting new connections and another couple for handling requests. The concurrency makes the situation mad and unstable, and I'm probably not sticking with it unless I can figure out a better way of using this solution.
Update 2
Seems like Timeout is still the best way to handle this. I'm sticking with it till I find a better option.
I still don't know how to deal with zombie client connections.
Solution
I ended up using IO.select (got inspired when looking at the webrick code). You can check the final version here (lib/http/server/client_handler.rb).
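For reference, the core of that approach is a per-client read timeout via IO.select (a sketch under my own naming, not the exact code from the repo):

# Wait up to a fixed number of seconds for the client socket to become
# readable; IO.select returns nil on timeout, which we treat as a
# dead/idle (zombie) connection the caller should close.
TIMEOUT = 10 # seconds; arbitrary value for this sketch

def read_request(client)
  ready = IO.select([client], nil, nil, TIMEOUT)
  return nil unless ready
  client.gets
end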
You should implement something like heartbeat packets. The client side should send special packets every few secs/mins to ensure that the server doesn't time out the connection on the client's end. You just avoid doing anything in this call.
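A minimal sketch of that idea on the client side (the port, the 30-second interval, and the "ping" payload are assumptions; the server above would also need a branch that recognizes "ping" and simply skips responding):

require 'socket'

socket = TCPSocket.new('localhost', 8080)

# Background heartbeat: periodically send a packet the server is
# expected to ignore, so the connection never looks idle.
heartbeat = Thread.new do
  loop do
    sleep 30
    socket.puts('ping')
  end
end

# ... issue real requests on the same socket here ...

heartbeat.kill
socket.close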