WebSockets from Common Lisp

I want my web application to push live update notifications to its clients.
I use Common Lisp and Hunchentoot on CCL.
What libraries should I use?
I have found clws and hunchensockets.
The latter is not recommended for production use, and I need production-level code.
For the first one, clws, there is an example on GitHub, but I could not figure out how to send data to the client without the client first sending a message, i.e. just by the client opening the socket connection.
Seemingly there is not much difference from the classical HTTP style: if the client requests, then the server responds. What am I missing here?

Here's a trick for finding example code:
https://github.com/search?l=common-lisp&q=defsystem+clws&ref=searchresults&type=Code
Of course, these examples vary in quality.
A similar approach may work on other large code-hosting services.

Use write-to-client-text or write-to-clients-text to send server-initiated messages to the client, for a single client or for several/all of them respectively.
You first need the list of clients connected to the resource created in the examples, which you get by defining a class for the resource like this:
(defclass echo-resource (ws-resource)
  ((clients :initform () :accessor clients)))
What the examples there don't mention is that you should bind the resource instance to a name once it is defined, so you can refer to it later:
(setf res1 (make-instance 'echo-resource))
(register-global-resource "/echo"
                          res1
                          (origin-prefix "http://127.0.0.1" "http://localhost"))
Then you can get the list of clients connected to this resource through the clients accessor of the echo-resource class:
(clients res1)
Now the functions mentioned at the top can be used from this package like this:
(write-to-client-text (car (clients res1)) "new message to one")
(write-to-clients-text (clients res1) "<p id='messagetoall'>new message to all</p>")
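For the clients list to actually be populated, the resource has to record connections as they come and go. A minimal sketch, assuming the clws generics resource-client-connected and resource-client-disconnected (as used in the clws README example) together with the clients accessor defined above:

(defmethod resource-client-connected ((res echo-resource) client)
  ;; remember the client so server-initiated pushes can reach it later
  (push client (clients res))
  t)

(defmethod resource-client-disconnected ((res echo-resource) client)
  ;; forget clients that disconnect so we don't write to dead sockets
  (setf (clients res) (remove client (clients res))))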

Related

How do I expose my own Elixir websocket using WebSockex

I see the basic example for Elixir's WebSockex library here but it doesn't really explain how I can expose my own websocket to the internet. This question has answers which explain how to chat to an existing websocket externally, but I want to expose my own websocket. I'm actually using websockex as part of a Phoenix application, so perhaps bits of Phoenix might help here?
I obviously know the ip:port combo of my Phoenix application, so given those, how do I expose a WebSockex websocket on that ip:port? In other words, what should I pass as the url in this basic example code?
defmodule WebSocketExample do
  use WebSockex

  def start_link(url, state) do
    WebSockex.start_link(url, __MODULE__, state)
  end

  def handle_frame({type, msg}, state) do
    IO.puts "Received Message - Type: #{inspect type} -- Message: #{inspect msg}"
    {:ok, state}
  end

  def handle_cast({:send, {type, msg} = frame}, state) do
    IO.puts "Sending #{type} frame with payload: #{msg}"
    {:reply, frame, state}
  end
end
Please note that I need to expose a raw websocket, not a Phoenix channel, as the consumer doesn't understand Phoenix channels. If Phoenix can expose a raw websocket then I'll consider that a solution too.
If neither Phoenix nor WebSockex can help, what are my options?
WebSockex is a client library; I don't think it has any code for exposing a websocket. Since you're already using Phoenix, you can probably do what you need with Phoenix channels.
If you're on cowboy (and you probably are, since it's the default), then you can also use it to expose a raw websocket. However, it requires some fiddling with routing. You will need to replace YourAppWeb.Endpoint with a manual configuration of cowboy:
{
  Plug.Cowboy,
  scheme: :http,
  plug: YourAppWeb.Endpoint,
  options: endpoint_options(),
  dispatch: [
    _: [
      # Dispatch paths beginning with /ws to a websocket handler
      {"/ws/[...]", YourApp.WebsocketHandler, []},
      # Dispatch other paths to the phoenix endpoint
      {:_, Plug.Cowboy.Handler, {YourAppWeb.Endpoint, endpoint_options()}}
    ]
  ]
}
I have honestly only done this with raw plug, so you might need to convert the endpoint to be a Plug instead of a Phoenix.Endpoint. Then, you need to implement YourApp.WebsocketHandler to conform to cowboy's API and perform a websocket upgrade (and handle sending/receiving messages), as described in cowboy docs. You can also see this gist for a more fleshed-out example.
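For reference, here is a minimal sketch of what YourApp.WebsocketHandler could look like with cowboy 2.x's :cowboy_websocket behaviour. The module name comes from the dispatch rule above; everything else is an assumption, and the exact return formats should be checked against the cowboy version you are running:

defmodule YourApp.WebsocketHandler do
  @behaviour :cowboy_websocket

  # a plain HTTP request arrives here; upgrade it to a websocket
  def init(req, state) do
    {:cowboy_websocket, req, state}
  end

  def websocket_init(state) do
    {:ok, state}
  end

  # frames coming from the client
  def websocket_handle({:text, msg}, state) do
    {:reply, {:text, "echo: " <> msg}, state}
  end

  def websocket_handle(_frame, state) do
    {:ok, state}
  end

  # Erlang messages sent to this process from elsewhere in the app
  def websocket_info({:push, text}, state) do
    {:reply, {:text, text}, state}
  end

  def websocket_info(_info, state) do
    {:ok, state}
  end
end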
WebSockex implements many callbacks, including, but not limited to, WebSockex.handle_connect/2. It holds a WebSockex.Conn in its state and passes it to all callbacks.
WebSockex.Conn is a plain old struct with a socket field.
So from any callback (I'd do it from WebSockex.handle_connect/2) you might share this socket with the process that needs it and use it from there.
Also, you can borrow some internals and check how the connection is being created.
You'll see it uses WebSockex.Conn.new/2, which returns an initialized connection that, in turn, holds a socket. In that case, you'll be obliged to supervise the process that holds the socket manually.
The power of OSS is that all answers are one mouse click away from the questions.

How to architect a web-socket server with client subscription to specific responses in Phoenix?

I'm developing a web-socket server with Phoenix Framework that needs to send real-time messages to my clients.
The basic idea of my web-socket server is that a client can subscribe to some type of information and expect to receive only that; other clients would never receive it unless they subscribe to it too, and the same information is broadcast in real time to every (and only the) clients subscribed to it.
Also, this information is separated into categories and subcategories, going down to 4 levels of categories.
So, for example, let's say I have 2 types of category information, CatA and CatB. Each category can have subcategories, so CatA can have CatA.SubCatA and CatA.SubCatB subcategories, each subcategory can also have other subcategories, and so on.
This information is generated by services, one for each root category (they handle all the information for the subcategories too), so we have CatAService and CatBService. These services need to run as the server starts, always generating new information and broadcasting it to anyone that is subscribed.
Now, I have clients that will try to subscribe to this information. My solution for now is to have a channel for each available information type, so a client can join a channel to receive information of that channel's type.
For that I have something like this in the JS code:
let channel = socket.channel("CatA:SubCatA:SubSubCatA", {})
channel.join()
channel.on("new_info", (payload) => { ... })
In this case, I would have a channel that all clients interested in SubSubCatA from SubCatA from CatA can join, and a service for CatA that would generate and broadcast the information for all its subcategories, and so on.
I'm not sure if I was able to explain exactly what I want, but if something is not clear, please tell me so I can explain it better. Also, I made this (very bad) image as an example of how all the communication would happen: https://ibb.co/fANKPb
Also, note that I could have just one channel for each category and broadcast all the subcategory information to everyone that joined that category channel, but I'm very concerned about performance and network bandwidth, so my objective is to send the information only to the clients that requested it.
Doing some tests here, it seems that if the client joins the channel as shown in the JS code above, I can do this:
MyServerWeb.Endpoint.broadcast "CatA:SubCatA:SubSubCatA", "new_info", message
and that client (and all the other clients listening to that channel, but only then) will receive that message.
So, my question is divided into two parts. The first, more general one: what are the correct ways to achieve what I described above?
The second: is the solution I already came up with a good way to solve this? I'm not sure whether the length of the string "CatA:SubCatA:SubSubCatA" creates overhead when the server parses it, or whether there is some other limitation I'm not aware of.
Thanks!
You have to make separate channels for each class of clients and, depending on the ids you are getting, you can broadcast the messages after checking which clients have joined the channel:
def join("groups:" <> group_slug, _params, socket) do
%{team_id: team_id, current_user: user} = socket.assigns
case Repo.get_by(Group, slug: group_slug, team_id: team_id) do
nil ->
{:error, %{message: "group not found"}}
group ->
case GroupAuthorization.can_view?(group.id, user.id) do
true ->
messages = MessageQueries.group_latest_messages(group.id, user)
json = MessageView.render("index.json", %{messages: messages})
send self(), :after_join
{:ok, %{messages: json}, assign(socket, :group, group)}
false ->
{:error, %{message: "unauthorized"}}
end
end
end
This is an example of sending messages only to the users who have subscribed to and joined the group. Hope this helps.
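For the category layout from the question, the same idea maps onto wildcard topics: route every topic under a root category to one channel module and let each service broadcast only to the exact topic a client joined. A rough sketch, where the module names and the :channel clause of the generated web module are assumptions:

# in user_socket.ex: all CatA topics go to one channel module
channel "CatA:*", MyServerWeb.CategoryChannel

defmodule MyServerWeb.CategoryChannel do
  use MyServerWeb, :channel

  # a client joins exactly the (sub)category topic it cares about,
  # e.g. "CatA:SubCatA:SubSubCatA"
  def join("CatA:" <> _subtopic, _params, socket) do
    {:ok, socket}
  end
end

# CatAService pushes new data only to subscribers of that exact topic
# (payload is whatever map the service produced)
MyServerWeb.Endpoint.broadcast("CatA:SubCatA:SubSubCatA", "new_info", payload)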

Golang "import cycle not allowed" after splitting up my program into subpackages

I have a large Go program that is spread across 50+ miscellaneous Go files in the root of my package folder. I know that this is considered terrible, so I've decided to embark upon splitting up the program into some subpackages for better organization.
Unfortunately, after splitting off the logical parts of my programs into subpackages, I'm running into the dreaded "import cycle not allowed" error. This is because the Go compiler refuses to compile anything with circular imports. But the different logical parts of my program need to communicate with each other...
I've done some research online and found some excellent resources, like this excellent StackOverflow question that attempts to explain what to think about to solve this problem at a high level.
My apologies, but this post is way over my head, and I was wondering if someone could spell out an exact solution for my specific code situation, and hopefully in simpler language aimed at a complete beginner to Go.
A brief description of how my code is organized and what it does:
It connects to 3 different servers using 3 different protocols (Twitch.tv, Discord, and a custom WebSocket server).
It seems obvious to make 3 subpackages, one for each server type, and then initialize all of them in a main.go file.
Each subpackage is not just an interface; it contains a collection of global variables (that track the connection + other things) and a bunch of functions. (Note that I can refactor this such that it's all contained within one giant interface, if necessary.)
95% of the time, the subpackages receive messages from their individual servers and send messages back to their individual servers, so the subpackages are mostly compartmentalized.
However, sometimes the Twitch.tv module needs to send a message to the Discord server, and the Discord server needs to send a message to the Twitch.tv server. So the Discord server needs to be able to call the "Send()" functions inside the Twitch.tv subpackage, and the Twitch.tv subpackage needs to be able to call the "Send()" function of the Discord subpackage! So this is where my circular problem comes from.
It looks like you want to keep your protocol-specific code in separate packages.
If you don't want to refactor much, I'd suggest creating a package with a dispatcher. Each server imports the dispatcher package and registers a handler for its specific protocol. When it needs to call another server, it just sends a message via the dispatcher to the specified handler.
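A minimal sketch of that dispatcher idea in Go; the package layout, the Message fields, and the function names are illustrative assumptions, not an existing library:

// Package dispatcher lets the protocol packages talk to each other
// without importing one another.
package dispatcher

import "sync"

type Message struct {
    Target string // e.g. "twitch", "discord", "web"
    Body   string
}

type Handler func(Message)

var (
    mu       sync.RWMutex
    handlers = map[string]Handler{}
)

// Register is called by each protocol package when it starts up.
func Register(name string, h Handler) {
    mu.Lock()
    defer mu.Unlock()
    handlers[name] = h
}

// Send routes a message to whichever package registered for the target.
func Send(m Message) {
    mu.RLock()
    h, ok := handlers[m.Target]
    mu.RUnlock()
    if ok {
        h(m)
    }
}

Each subpackage registers itself (for example dispatcher.Register("discord", handleDiscordMessage)) and calls dispatcher.Send(...) to reach the others; since everyone imports only the dispatcher package, no cycle appears.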
In addition to the channel-based approaches proposed by TechSphinX and Oleg, you can use an interface-based approach and simple dependency injection.
You can use a setup function, probably in or called from main(), that creates instances of each service client. These should each implement Send() and have fields for the other clients they need to use. Create a Sender interface in its own package, and put your message struct in there as well.
After creating the instances, you can then set the other clients on each instance. This way they can send to whatever they need to send to, without circular dependencies. You can even put all the clients into a struct to make the injection easier.
For example:
// pkg sender
type Sender interface {
    Send(m Message) error // or whatever it needs to be
}

type Message struct {
    // Whatever goes in a message
}

type Dispatcher struct {
    TwitchClient  Sender
    DiscordClient Sender
    WebClient     Sender
}

// pkg main
func setup() {
    // create the concrete clients first so their Dispatcher fields
    // can be set once the Dispatcher itself is built
    twitchClient := twitch.New()
    discordClient := discord.New()
    webClient := web.New()

    d := sender.Dispatcher{
        TwitchClient:  twitchClient,
        DiscordClient: discordClient,
        WebClient:     webClient,
    }

    twitchClient.Dispatcher = d
    discordClient.Dispatcher = d
    webClient.Dispatcher = d
}

// pkg twitch
type TwitchClient struct {
    Dispatcher sender.Dispatcher
    // other fields ...
}

func New() *TwitchClient {
    return new(TwitchClient) // or whatever
}

func (t *TwitchClient) Send(m sender.Message) error {
    // send twitch message...
    // Need to send a Discord message?
    return t.Dispatcher.DiscordClient.Send(m)
}
Tailored to your particular case: from what you describe, the only reason for the packages to import each other is that they need to call each other's Send() functions.
Channels to communicate
Create channels in main and give them to both packages on init. Then they can communicate with each other without knowing of each other's existence, as sketched below.
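A rough single-file sketch of that wiring; in the real program runTwitch and runDiscord would live in the twitch and discord subpackages, and the message type would be richer than a string (all names here are assumptions):

package main

import (
    "fmt"
    "time"
)

// Each "package" only knows about the channels handed to it from main,
// so neither would need to import the other.

func runTwitch(inbox <-chan string, toDiscord chan<- string) {
    for msg := range inbox {
        // pretend this arrived from the Twitch connection and
        // needs to be relayed to Discord
        toDiscord <- "relayed from twitch: " + msg
    }
}

func runDiscord(inbox <-chan string, toTwitch chan<- string) {
    _ = toTwitch // would carry traffic in the other direction
    for msg := range inbox {
        fmt.Println("discord sends to its server:", msg)
    }
}

func main() {
    toTwitch := make(chan string)
    toDiscord := make(chan string)

    go runTwitch(toTwitch, toDiscord)
    go runDiscord(toDiscord, toTwitch)

    toTwitch <- "hello"
    time.Sleep(100 * time.Millisecond) // let the toy example finish printing
}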
It sounds like the server/protocol packages are useful on their own, and the requirement to send a message from one kind of a server to another kind is a feature of your specific application. In other words, the server/protocol packages don't need to send messages to each other, your application needs to.
I usually put application-specific functionality into an app package. Package app can import all your protocol packages.
You can also do this in package main, but I've found an app package to be a more useful instrument. (My main package is usually just the single main.go file.)

Pubnub chat application with storage

I'm looking to develop a chat application with PubNub where I want to make sure all the chat messages that are sent are stored in the database, and I also want to be able to send messages in the chat.
I found out that I can use Parse with PubNub to provide storage options, but I'm not sure how to set those two up so that the messages and images sent in the chat are stored in the database.
Has anyone done this before with PubNub and Parse? Are there any other easy options available to use with PubNub instead of Parse?
Sutha,
What you are seeking is not trivial unless you are talking about a limited number of end users. So I wouldn't say there are "easy" solutions, but there are solutions.
The reason is that your server would need to listen (subscribe) to every chat channel that is active and store the messages being sent into your database. Imagine your app scaling to 1 million users (it doesn't even need to get that big, but that number should help you realize how this can get tricky to scale, with several server instances listening to channels in a non-overlapping manner, or with overlap but using a server queue implementation and de-duping messages).
That said, yes, there are PubNub customers that have implemented such a solution - Parse not being the key to making this happen, by the way.
You have three basic options for implementing this:
1. Implement a solution that will allow many instances of your server to subscribe to all of the channels as they become active and store the messages as they come in. There are a lot of details to making this happen, so if you are not up to this then this is not likely where you want to go.
2. There is a way to monitor all channels that become active or inactive with PubNub Presence webhooks (enable Presence on your keys). You would use this to keep a list of all channels from which your server would pull history (enable Storage & Playback on your keys) in an on-demand (not completely realtime) fashion.
For every channel that goes active or inactive, your server will receive these events via a REST call (an endpoint that you implement on your server - your Parse server in this case):
channel active: record a "start chat" timetoken in your Parse db
channel inactive: record an "end chat" timetoken in your Parse db
the inactive event is the kickoff for a process that uses the start/end timetokens you recorded for that channel to fetch its history from PubNub: pubnub.history({channel: channelName, start: startTT, end: endTT})
you will need to iterate on this history call until you receive < 100 messages (100 is the max number of messages you can retrieve at a time); see the paging sketch below
as you retrieve these messages you will save them to your Parse db
New Presence Webhooks have been added:
We now have webhooks for all presence events: join, leave, timeout, state-change.
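A rough sketch of the on-demand history paging described above, modeled on PubNub's v3 JavaScript history API. The recursion and cursor handling (reverse: true, continuing from the returned page-end timetoken) are assumptions; check the exact paging semantics against the PubNub history docs:

// fetch every message between startTT and endTT for one channel,
// 100 at a time, oldest first, saving each page to your Parse db
function fetchAndStore(channelName, startTT, endTT) {
    pubnub.history({
        channel : channelName,
        start   : startTT,
        end     : endTT,
        count   : 100,
        reverse : true, // walk the chat from its beginning forward
        callback: function (result) {
            var messages = result[0];
            messages.forEach(function (m) {
                // save m to your Parse db here, as in the publish example below
            });
            if (messages.length === 100) {
                // a full page means more may remain: continue from the
                // timetoken of the last message in this page
                fetchAndStore(channelName, result[2], endTT);
            }
        }
    });
}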
3. Finally, you could just save each message to the Parse db on success of every pubnub.publish call. I am not a Parse expert and barely know all of its capabilities, but I believe they have some sort of store-locally-then-sync-to-cloud-db option (like StackMob when that was a product); but even if not, you can save the message to the Parse cloud db directly.
The code would look something like this (not complete, likely with errors; figure it out or ask PubNub support for details) in your JavaScript client (in the browser).
var pubnub = PUBNUB({
    publish_key   : your_pub_key,
    subscribe_key : your_sub_key
});

var msg = ... // get the message from your UI text box or whatever

pubnub.publish({
    // this is some variable you set up when you enter a chat room
    channel: chat_channel,
    message: msg,
    callback: function(event){
        // DISCLAIMER: code pulled from a Parse example
        // but there are some object creation details
        // left out here and msg object is not
        // fully fleshed out in this sample code
        var ChatMessage = Parse.Object.extend("ChatMessage");
        var chatMsg = new ChatMessage();
        chatMsg.set("message", msg);
        chatMsg.set("user", uuid);
        chatMsg.set("channel", chat_channel);
        chatMsg.set("timetoken", event[2]);
        // this ChatMessage object can be
        // whatever you want it to be
        chatMsg.save();
    },
    error: function (error) {
        // Handle error here, like retry until success, for example
        console.log(JSON.stringify(error));
    }
});
You might even just store the entire set of publishes (on both ends of the conversation) based on a time interval, number of publishes, or total data size, but be careful: either user could exit the chat and the browser without notice and you would fail to save. So the per-publish save is probably best practice, if a bit noisy.
I hope you find one of these techniques as a means to get started in the right direction. There are details left out so I expect you will have follow up questions.
Just some other links that might be helpful:
http://blog.parse.com/learn/building-a-killer-webrtc-video-chat-app-using-pubnub-parse/
http://www.pubnub.com/blog/realtime-collaboration-sync-parse-api-pubnub/
https://www.pubnub.com/knowledge-base/discussion/293/how-do-i-publish-a-message-from-parse
And we have a PubNub Parse SDK, too. :)

RPCs on Websockets with Scala and JS (like SignalR)

I want to implement an application based on Scala and Play! 2.1 where all data transport is handled through websockets in real-time. Since the application supports collaboration of several users, I want to be able to call methods on a) the server, b) one client, c) all clients.
For example, let's say there are Users Bob, Jane and Carl.
Carl creates a "note" which is sent through the socket and then, if successfully stored, added to the DOM through basic Javascript (let's say addNote(note)) on all clients.
A sample call could look like this:
// sends message type createCard to the server, takes <form id="card"> as data and receives a JSON object as response
mysocket.send("createCard", $('#card').serialize(), { success: function(data) {
    var card = data.card;
    mysocket.allClients().addCard(card); // appends <div id="card"> to the DOM
}});
Is this possible or am I going about this the wrong way entirely?
See SignalJ - a port of SignalR ideas to PlayFramework and Akka.
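If you'd rather stay on plain Play 2.1 without SignalJ, the Iteratee-based WebSocket API already gives you the "send to all clients" building block. A minimal Scala sketch (the controller object, action name, and pushToAll helper are assumptions; this is plain broadcast, not SignalR-style RPC):

package controllers

import play.api.mvc._
import play.api.libs.iteratee.{Concurrent, Iteratee}

object NoteSocket extends Controller {

  // one shared enumerator: every connected client receives whatever is pushed here
  private val (out, channel) = Concurrent.broadcast[String]

  def socket = WebSocket.using[String] { request =>
    // frames arriving from one client are re-broadcast to all clients
    val in = Iteratee.foreach[String] { msg =>
      channel.push(msg)
    }
    (in, out)
  }

  // server-initiated push, callable from anywhere in the application
  def pushToAll(json: String): Unit = channel.push(json)
}

Targeting an individual client (the mysocket.allClients() vs. single-client distinction above) would mean keeping one channel per connection instead of a single broadcast enumerator.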
