What is typically the best way for integrating HTML5 Websockets into an Ember.js application?
I've used Pusher.com in the past and have used a similar setup to this: http://blog.pusher.com/backbone-js-now-realtime-with-pusher/
I'm looking for the equivalent approach for Ember.js.
Thanks guys!
I have set up an application using sockets. The way I do it is that my main App file has a socket object, and it receives messages from socket.io and puts them into an array. Any controllers that are interested in certain socket messages bind to the array and pick up new messages as they come in. For example, my chatController cares about chat messages, and my eventController cares about showing or hiding images based on socket events that are fired.
Below is some CoffeeScript code showing how I set this up.
window.App = Ember.Application.create
  init: ->
    @_super()
    @setSocketIO()

  ##
  # socket - socket.io object for communicating with the server
  socket: null
  ##
  # socketMessages - array used to store any socket.io messages emitted from the server
  socketMessages: []
  ##
  # setSocketIO - set up the socket.io connection and message endpoints
  setSocketIO: ->
    @set 'socket', io.connect()
    ##
    # if the socket errors out, then reconnect it
    App.socket.on "error", (err) ->
      App.socket.socket.reconnect()
    ##
    # receive new messages
    App.socket.on "event-message-receive", (data) =>
      @createSocketMessage 'event-message-receive', data
    ##
    # image url received
    App.socket.on "event-image-set-receive", (data) =>
      @createSocketMessage 'event-image-set-receive', data
    ##
    # hide image received
    App.socket.on "event-image-hide-receive", () =>
      @createSocketMessage 'event-image-hide-receive'
and then in my chat controller I only listen for new messages received
App.ChatController = Ember.ArrayController.extend
  ##
  # chat messages
  chatMessages: []
  ##
  # socketMessagesBinding - bind to the App.socketMessages message queue for receiving new messages from the server
  socketMessagesBinding: 'App.socketMessages'
  ##
  # socketMessageAdded - called whenever a socket.io message is sent from the server
  socketMessageAdded: (->
    # get the newest item on the stack
    socketMessage = @socketMessages[@socketMessages.length - 1]
    if socketMessage.type == 'event-message-receive'
      @chatMessages.pushObject socketMessage.data
  ).observes('socketMessages.@each')
and in my event controller I listen for image show and image hide
App.EventController = Ember.ArrayController.extend
  ##
  # property to show or hide an img on the page
  showImage: false
  ##
  # socketMessagesBinding - bind to the App.socketMessages message queue for receiving new messages from the server
  socketMessagesBinding: 'App.socketMessages'
  ##
  # socketMessageAdded - called whenever a socket.io message is sent from the server
  socketMessageAdded: (->
    # get the newest item on the stack
    socketMessage = @socketMessages[@socketMessages.length - 1]
    if socketMessage.type == 'event-image-set-receive'
      @set 'showImage', true
    else if socketMessage.type == 'event-image-hide-receive'
      @set 'showImage', false
  ).observes('socketMessages.@each')
You can have a look at the following on GitHub from 8 months ago, but at the moment there is no WebSocketsAdapter, as much as I'd love to see one. For the most part, people seem to be making ad-hoc WebSocketAdapters for their own scenarios.
I imagine that once EmberJS releases version 1.0, you'll begin seeing a lot more third-party add-ons for it. As EmberJS, and in particular EmberJS's DS (DataStore), are changing so rapidly from one day to the next, it would seem a little premature to begin creating a WebSocketAdapter unless you were fully committed to keeping it up-to-date.
I need a horizontally scalable WebSocket connection server for a chat-like system, where browser clients connected to different WebSocket servers could exchange messages within separate chat rooms.
Clients                HaProxy          WebSocket server1      WebSocket server2    Redis/ZeroMQ
                          |                     |                      |                  |
client A -----------------=-------------------->o<---------------------|----------------->|
                          |                     |                      |                  |
client B -----------------=---------------------|--------------------->o<---------------->|
                          |                     |                      |                  |
Here client A and client B are connected through HaProxy to two different WebSocket servers, which exchange messages through a Redis/ZeroMQ backend, as in that and that question.
Before building that architecture myself, I wonder whether an open-source analog already exists. What such project would you suggest looking at?
Look into the Plezi Ruby framework. I'm the author, and it has automatic Redis scalability built in.
(you just set ENV['PL_REDIS_URL'] to the Redis URL)
As for the architecture to achieve this, it's fairly simple... I think.
Each server instance "subscribes" to two channels: a global channel for "broadcasting" (messages sent to all users or a large "family" of users) and a unique channel for "unicasting" (messages intended for a specific user connected to the server).
Each server manages its internal broadcasting system, so that messages are routed either to a specific user, to a "family" of connections, or to all users, as per their target audience.
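To illustrate the idea, here is a generic Python/redis-py sketch - not Plezi's actual internals; the channel names, SERVER_ID, and the user_locations hash are all made up for the example:

import json
import redis

SERVER_ID = "server-1"  # hypothetical unique id for this server instance

r = redis.Redis(host="localhost", port=6379)
pubsub = r.pubsub()
# one global channel shared by every instance, one private channel per instance
pubsub.subscribe("broadcast", "unicast:" + SERVER_ID)

def publish_to_all(message):
    # every server instance receives this and relays it to its local sockets
    r.publish("broadcast", json.dumps(message))

def publish_to_user(user_id, message):
    # instances register user_id -> server_id in a shared hash on connect,
    # so a unicast can be routed to the one instance holding the connection
    target = r.hget("user_locations", user_id)
    if target:
        r.publish("unicast:" + target.decode(), json.dumps(message))

for event in pubsub.listen():
    if event["type"] == "message":
        # relay the payload to the locally connected websockets here
        print(event["channel"], event["data"])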
You can find the source code here. The Redis integration is handled using this code together with the websocket object code.
Websocket broadcasts are handled using the websocket object's on_broadcast callback. The Iodine server handles the inner broadcasting within each server instance using its websocket implementation.
I already posted the inner-process architecture details as an answer to this question.
I think socket.io has cross server support as well.
Edit (some code)
Due to the comment, I thought I'd put in some code... if you edit your question and add more specifications about the feature you're looking for, I can edit the code here.
I'm using the term "room" since this is what you referred to, although I didn't envision Plezi as just a "chat" framework; chat is simply a very easy use case for demonstrating its real-time abilities.
If you're using Ruby, you can run the following in the irb terminal (make sure to install Plezi first):
require 'plezi'

class MultiRoom
  def on_open
    return close unless params[:room] && params[:name]
    @name = params[:name]
    puts "connected to room #{params[:room]}"
    # # if you use JSON to get room data,
    # # you can use room arrays like so:
    # params[:room] = params[:room].split(',') unless params[:room].is_a?(Array)
  end

  def on_message data
    to_room = params[:room]
    # # if you use JSON you can try:
    # to_room = JSON.parse(data)['room'] rescue nil
    # # we can use class `broadcast`, to broadcast also to self
    MultiRoom.broadcast :got_msg, to_room, data, @name if to_room
  end

  protected

  def got_msg room, data, from
    write ::ERB::Util.html_escape("#{from}: #{data}") if params[:room] == room
    # # OR, on JSON, with room arrays, try something like:
    # write data if params[:room].include?(room)
  end
end

class EchoConnection
  def on_message data
    write data
    MultiRoom.broadcast :got_msg, "myroom", "Echo?", "system" if data =~ /^test/i
  end
end

route '/echo', EchoConnection
route '/:name/(:room)', MultiRoom

# # add Redis auto-scaling with:
# ENV['PL_REDIS_URL'] = "redis://:password@my.host:6389/0"

exit # if running in terminal, using irb
You can test it out by connecting to: ws://localhost:3000/nickname/myroom
To connect to multiple "rooms" (you would need to rewrite the code for JSON and multi-room support), try: ws://localhost:3000/nickname/myroom,your_room
Test the echo by connecting to ws://localhost:3000/echo
Notice that the echo acts differently, and that this approach allows you to have different websockets for different concerns - e.g., one connection for updates and messages using JSON, and another for uploading multiple files as raw binary data over websockets.
I have read through the zguide but haven't found the kind of pattern I'm looking for:
There is one central server (with known endpoint) and many clients (which may come and go).
Clients keep sending heartbeats to the server, but they don't want the server to reply.
Server receives heartbeats, but it does not reply to clients.
Heartbeats sent while clients and server are disconnected should somehow be dropped, to prevent a heartbeat flood when they come back online.
The closest I can think of is the DEALER-ROUTER pattern, but since this is meant to be used as an async REQ-REP pattern (no?), I'm not sure what would happen if the server just kept silent on incoming "requests." Also, the DEALER socket would block rather than start dropping heartbeats when the send high-water mark is reached, which would still result in a heartbeat flood.
The PUSH/PULL pattern should give you what you need.
# Client example
import zmq

class Client(object):
    def __init__(self, client_id):
        self.client_id = client_id
        ctx = zmq.Context.instance()
        self.socket = ctx.socket(zmq.PUSH)
        self.socket.connect("tcp://localhost:12345")

    def send_heartbeat(self):
        # send_string encodes to bytes (a plain send(str(...)) only works on Python 2)
        self.socket.send_string(str(self.client_id))

# Server example
import zmq

class Server(object):
    def __init__(self):
        ctx = zmq.Context.instance()
        self.socket = ctx.socket(zmq.PULL)
        self.socket.bind("tcp://*:12345")

    def receive_heartbeat(self):
        return self.socket.recv()  # returns the client_id of the message's sender
This PUSH/PULL pattern works with multiple clients, as you wish. The server should keep an administration of the received messages, i.e. a dictionary like {client_id: last_received} which is updated with datetime.utcnow() on each received message, and implement some housekeeping function that periodically checks the administration for clients with stale timestamps.
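A minimal sketch of that administration (the 10-second timeout and the class layout are assumptions for illustration):

import datetime
import zmq

HEARTBEAT_TIMEOUT = datetime.timedelta(seconds=10)  # assumed value

class MonitoringServer(object):
    def __init__(self):
        ctx = zmq.Context.instance()
        self.socket = ctx.socket(zmq.PULL)
        self.socket.bind("tcp://*:12345")
        self.last_seen = {}  # {client_id: last_received}

    def poll_heartbeats(self):
        # drain whatever heartbeats are waiting, without blocking
        while True:
            try:
                client_id = self.socket.recv(zmq.NOBLOCK)
            except zmq.Again:
                break
            self.last_seen[client_id] = datetime.datetime.utcnow()

    def dead_clients(self):
        # housekeeping: call this periodically
        now = datetime.datetime.utcnow()
        return [cid for cid, ts in self.last_seen.items()
                if now - ts > HEARTBEAT_TIMEOUT]

As for the flood concern: on the client side, passing zmq.NOBLOCK to send() makes the PUSH socket raise zmq.Again instead of blocking once the high-water mark fills during a disconnect, so you can catch that and simply drop the heartbeat; the ZMQ_CONFLATE socket option (where your libzmq version supports it for PUSH sockets) goes further and keeps only the most recent message.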
I'm looking to develop a chat application with PubNub where I want to make sure all the chat messages that are sent are stored in the database, and I also want to send messages in the chat.
I found out that I can use Parse with PubNub to provide storage options, but I'm not sure how to set those two up in a way where the messages and images sent in the chat are stored in the database.
Has anyone done this before with PubNub and Parse? Are there any other easy options available to use with PubNub instead of Parse?
Sutha,
What you are seeking is not a trivial solution unless you are talking about a limited number of end users. So I wouldn't say there are "easy" solutions, but there are solutions.
The reason is that your server would need to listen (subscribe) to every chat channel that is active and store the messages being sent into your database. Imagine your app scaling to 1 million users (it doesn't even need to get that big, but that number should help you realize how tricky this gets to scale, with several server instances listening to channels in a non-overlapping manner, or with overlap but using a server queue implementation and de-duping messages).
That said, yes, there are PubNub customers that have implemented such a solution - Parse not being the key to making this happen, by the way.
You have three basic options for implementing this:
Implement a solution that will allow many instances of your server to subscribe to all of the channels as they become active and store the messages as they come in. There are a lot of details to making this happen, so if you are not up to it, this is not likely where you want to go.
There is a way to monitor all channels that become active or inactive with PubNub Presence webhooks (enable Presence on your keys). You would use this to keep a list of all channels, from which your server would pull history (enable Storage & Playback on your keys) in an on-demand (not completely realtime) fashion.
For every channel that goes active or inactive, your server will receive these events via a REST call (an endpoint that you implement on your server - your Parse server in this case):
channel active: record "start chat" timetoken in your Parse db
channel inactive: record "end chat" timetoken in your Parse db
the inactive event is the kickoff for a process that uses the start/end timetokens that you recorded for that channel to get that channel's history from PubNub: pubnub.history({channel: channelName, start: startTT, end: endTT})
you will need to iterate on this history call until you receive fewer than 100 messages (100 is the max number of messages you can retrieve at a time); see the sketch after this list
as you retrieve these messages you will save them to your Parse db
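A rough sketch of that paging loop (Python for consistency with the other examples here; fetch_history is a hypothetical stand-in for whichever PubNub SDK history call you use, assumed to return a page of messages plus the timetoken of the last one):

def pull_channel_history(fetch_history, channel, start_tt, end_tt):
    # collect every message published between the recorded timetokens
    messages = []
    cursor = start_tt
    while True:
        page, last_tt = fetch_history(channel=channel, start=cursor,
                                      end=end_tt, count=100)
        messages.extend(page)
        if len(page) < 100:  # fewer than the max means the range is drained
            break
        cursor = last_tt  # resume paging just after the last message seen
    return messages

Each message in the returned list would then be written to your Parse db.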
New Presence Webhooks have been added:
We now have webhooks for all presence events: join, leave, timeout, state-change.
Finally, you could just save each message to your Parse db on success of every pubnub.publish call. I am not a Parse expert and barely know all of its capabilities, but I believe they have some sort of store-locally-then-sync-to-cloud-db option (like StackMob when that was a product); even if not, you can save the msg to the Parse cloud db directly.
The code would look something like this (not complete, likely errors, figure it out or ask PubNub support for details) in your JavaScript client (on the browser).
var pubnub = PUBNUB({
    publish_key   : your_pub_key,
    subscribe_key : your_sub_key
});

var msg = ... // get the message from your UI text box or whatever

pubnub.publish({
    // this is some variable you set up when you enter a chat room
    channel: chat_channel,
    message: msg,
    callback: function(event){
        // DISCLAIMER: code pulled from a Parse example,
        // but there are some object creation details
        // left out here and the msg object is not
        // fully fleshed out in this sample code
        var ChatMessage = Parse.Object.extend("ChatMessage");
        var chatMsg = new ChatMessage();
        chatMsg.set("message", msg);
        chatMsg.set("user", uuid);
        chatMsg.set("channel", chat_channel);
        chatMsg.set("timetoken", event[2]);
        // this ChatMessage object can be
        // whatever you want it to be
        chatMsg.save();
    },
    error: function (error) {
        // Handle error here, like retry until success, for example
        console.log(JSON.stringify(error));
    }
});
You might even just store the entire set of publishes (on both ends of the conversation) based on a time interval, number of publishes, or total data size, but be careful: either user could exit the chat and the browser without notice, and you would fail to save. So the per-publish save is probably best practice, if a bit noisy.
I hope you find one of these techniques as a means to get started in the right direction. There are details left out so I expect you will have follow up questions.
Just some other links that might be helpful:
http://blog.parse.com/learn/building-a-killer-webrtc-video-chat-app-using-pubnub-parse/
http://www.pubnub.com/blog/realtime-collaboration-sync-parse-api-pubnub/
https://www.pubnub.com/knowledge-base/discussion/293/how-do-i-publish-a-message-from-parse
And we have a PubNub Parse SDK, too. :)
I know there are threads out there on this topic, but they don't seem to answer quite what I am looking for. I have never done any push technology before, so some guidance here is appreciated. I understand that when something changes, that change triggers the push to any browser that is listening, but I don't think that quite fits the scenario I am looking at.
We are rebuilding the web application our users use to track shipments. We will be allowing the users to build their own searches that match how they do their job. For example, some will look for any shipment that is scheduled to deliver today, others look for shipments that are to be picked up today, and still others look for shipments that need to be scheduled for pickup. So when they come in and open the application, I can give them a count for each of the work tasks they need to do today. Now what I want is for those counts to change as the SQL is re-run, without the user having to refresh the page.
How do I have this SQL run and push the current count to any browser that is using it? What is the mechanism that automatically re-runs the SQL? Keep in mind that I will have 50 or more of these unique SQL queries that need to be executed and their counts pushed.
Thanks for your guidance!
I think this falls pretty cleanly into AJAX's role. AJAX will allow you to make GET and POST requests to the server, which will process the query and return results to a JS function. At risk of jQuery evangelism, its API makes this sort of thing extremely easy and standard, and you can have pretty much any event you like trigger it.
This has a few concerns, namely client-side inputs and SQL injection. If you're sending any input through a POST request, you have to be VERY careful to sanitize everything. Use prepared statements, don't perform query-string concatenation and execution, and generally assume the user will try to send text that you don't want them to. Give some server-side bounds on which inputs will be acknowledged successfully (e.g. if the options are "Left" or "Right" and they give "Bottom", either default it or drop it).
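For example, a parameterized query keeps user input out of the SQL text entirely (Python's sqlite3 is used here purely for illustration; the table and column names are made up):

import sqlite3

conn = sqlite3.connect("shipments.db")  # hypothetical database

def count_for_search(status, sched_date):
    # the ? placeholders are filled in by the driver, so user-supplied
    # values are never spliced into the SQL string itself
    cur = conn.execute(
        "SELECT COUNT(*) FROM shipments WHERE status = ? AND sched_date = ?",
        (status, sched_date),
    )
    return cur.fetchone()[0]

# never do this - the query text now contains raw user input:
# conn.execute("SELECT COUNT(*) FROM shipments WHERE status = '%s'" % status)

The overall request flow then looks like this: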
1. Client activates request (timed or event)
2. JS makes AJAX call to server (optionally with parameters)
3. Server validates any inputs and processes the query
4. Server sends results back
5. JS uses results to modify the DOM
AJAX polling is one solution, although others exist that might be better suited and save you resources*...
For example, having a persistent Websocket connection would help minimize the cost of establishing new connections and having repeated requests that are mostly redundant.
As a result, your server should have a lower workload and your application would require less bandwidth (if these are important to you).
Even using a Websocket connection just to tell your client when to send an AJAX request can sometimes save resources.
Here's a quick Websocket Push demo
You can use many different Websocket solutions. I wrote a quick demo using the Plezi framework because it's super easy to implement, but there are other ways to go about this.
The Plezi framework is a Ruby framework that runs its own HTTP and Websocket server, independent of Rack.
The example code includes a controller for a model (DemoController), a controller for the root index page (DemoIndex) and a controller for the Websocket connection (MyWSController).
The example code seems longer because it's all in one script - even the HTML page used as a client... but it's really quite easy to read.
The search requirements are sent from the client to the web server (the search requires that the model's object ID is between 0 and 50).
Every time an object is created (or updated), an alert is sent to all the connected clients, first running each client's searches and then sending any updates.
The rest of the time the server is resting (except pinging every 45 seconds or so, to keep the websocket connection alive).
To see the demo in action, just copy and paste the following code inside your IRB terminal** and visit the demo's page:
require 'plezi'

class MyWSController
  def on_open
    # save the user data / register them / whatever
    @searches = []
  end

  def on_message data
    # get data from the user
    data = JSON.parse data
    # sanitize data, create search parameters...
    raise "injection attempt: #{data}" unless data['id'].match(/^\([\d]+\.\.[\d]+\)\z/)
    # save the search
    @searches << data
  end

  def _alert options
    # should check @searches here
    @searches.each do |search|
      if eval(search['id']).include? options[:info][:id]
        # update
        response << {event: 'alert'}.merge(options).to_json
      else
        response << "A message wouldn't be sent for id = #{options[:info][:id]}.\nSaved resources."
      end
    end
  end
end

class DemoController
  def index
    "use POST to post data here"
  end

  # called when a new object is created using POST
  def save
    # ... save data posted in params ... then:
    _send_alert message: 'created', info: params
  end

  # called when an existing object is posted using POST or UPDATE
  def update
    # ... update data posted in params ... then:
    _send_alert message: 'updated', info: params
  end

  def demo_update
    _send_alert message: 'info has been entered', info: params.update(id: rand(100), test: 'true')
    " This is a demo for what happens when a model is updated.\n
    Please have a look at the Websocket log for the message sent."
  end

  # sends an alert to all connected websocket clients
  def _send_alert alert_data
    MyWSController.broadcast :_alert, alert_data
  end
end

class DemoIndex
  def index search = '(0..50)'
    response['content-type'] = 'text/html'
    <<-FINISH
      <html>
        <head>
          <style>
            html, body {height: 100%; width:100%;}
            #output {margin:0 5%; padding: 1em 2em; background-color:#ddd;}
            #output li {margin: 0.5em 0; color: #33f;}
          </style>
        </head><body>
          <h1> Welcome to your Websocket Push Client </h1>
          <p>Please open the following link in a <b>new</b> tab or browser, to simulate a model being updated: <a href='#{DemoController.url_for id: :demo_update, name: 'John Smith', email: 'john@gmail.com'}' target='_blank'>update simulation</a></p>
          <p>Remember to keep this window open to view how a simulated update affects this page.</p>
          <p>You can also open a new client (preferably in a new tab, window or browser) that will search only for id's between 50 and 100: <a href='#{DemoIndex.url_for :alt_search}'>alternative search</a></p>
          <p>Websocket messages received from the server should appear below:</p>
          <ul id='output'>
          </ul>
          <script>
            var search_1 = JSON.stringify({id: '#{search}'})
            output = document.getElementById("output");
            websocket = new WebSocket("#{request.base_url 'ws'}/ws");
            websocket.onmessage = function(e) { output.innerHTML += "<li>" + e.data + "</li>" }
            websocket.onopen = function(e) { websocket.send(search_1) }
          </script>
        </body></html>
    FINISH
  end

  def alt_search
    index '(50..100)'
  end
end

listen
route '/model', DemoController
route '/ws', MyWSController
route '/', DemoIndex

exit
To view this demo, visit localhost:3000 and follow the on-screen instructions.
The demo will instruct you to open a number of browser windows, simulating different people accessing your server and doing different things.
As you can see, both the client-side javascript and the server-side handling aren't very difficult to write, while Websockets provide a very high level of flexibility and allow for better resource management (for instance, the search parameters need not be sent to the server over and over again).
* the best solution for your application depends on your specific design. I'm just offering another point of view.
** ruby's terminal is run using irb from bash; make sure to install the plezi gem first, using gem install plezi
Using v0.7.1 of the Ruby amqp library and Ruby 1.8.7, I am trying to post a large number (millions) of short (~40 bytes) messages to a RabbitMQ server. My program's main loop (well, not really a loop, but still) looks like this:
AMQP.start(:host => '1.2.3.4',
           :username => 'foo',
           :password => 'bar') do |connection|
  channel  = AMQP::Channel.new(connection)
  exchange = channel.topic("foobar", {:durable => true})
  i = 0
  EM.add_periodic_timer(1) do
    print "\rPublished #{i} commits"
  end
  results = get_results # <- Returns an array
  processor = proc do
    if x = results.shift then
      exchange.publish(x, :persistent => true,
                          :routing_key => "test.#{i}")
      i += 1
      EM.next_tick processor
    end
  end
  EM.next_tick(processor)
  AMQP.stop { EM.stop }
end
The code starts processing the results array just fine, but after a while (usually after 12k messages or so) it dies with the following error:
/Library/Ruby/Gems/1.8/gems/amqp-0.7.1/lib/amqp/channel.rb:807:in `send':
The channel 1 was closed, you can't use it anymore! (AMQP::ChannelClosedError)
No messages are stored on the queue. The error seems to be happening just when network activity from the program to the queue server starts.
What am I doing wrong?
First mistake is that you didn't post the RabbitMQ version that you are using. Lots of people are running old obsolete version 1.7.2 because that is what is in their OS package repositories. Bad move for anyone sending the volume of messages that you are. Get RabbitMQ 2.5.1 from the RabbitMQ site itself and get rid of your default system package.
Second mistake is that you did not tell us what is in the RabbitMQ logs.
Third mistake is that you said nothing about what is consuming the messages. Is there another process running somewhere that has declared a queue and bound it to the exchange? There is NO message queue unless somebody declares it to RabbitMQ and binds it to an exchange. Even then, messages will only flow if the binding key for the queue matches the routing key that you publish with.
Fourth mistake. You have routing keys and binding keys mixed up. The routing key is a string such as topic.test.json.echos and the binding key (used to bind a queue to an exchange) is a pattern like topic.# or topic.*.json.
Updated after your clarifications
Regarding versions, I'm not sure when it was fixed but there was a problem in 1.7.2 with large numbers of persistent messages causing RabbitMQ to crash when it rolled over its persistence log, and after crashing it was unable to restart until someone manually undid the rollover.
When you say that a connection is being opened and closed, I hope that it is not per message. That would be a strange way to use AMQP.
Let me repeat. Producers do NOT write messages to queues. They write messages to exchanges which then route the messages to queues based on the routing key (string) and the queue's binding key (pattern). In your example I misread the use of the # sign, but I see nothing which declares a queue and binds it to the exchange.
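To make that concrete, here is a minimal consumer sketch that declares a queue and binds it to your exchange with a pattern matching your test.<n> routing keys (Python with pika is used for illustration, since your producer is Ruby; the queue name is made up):

import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("1.2.3.4"))
ch = conn.channel()

# the exchange the producer publishes to (matches the Ruby code above)
ch.exchange_declare(exchange="foobar", exchange_type="topic", durable=True)

# without a queue bound to the exchange, published messages are simply discarded
ch.queue_declare(queue="test_messages", durable=True)  # hypothetical queue name
ch.queue_bind(queue="test_messages", exchange="foobar",
              routing_key="test.#")  # binding pattern matching test.0, test.1, ...

def on_message(channel, method, properties, body):
    print(body)
    channel.basic_ack(delivery_tag=method.delivery_tag)

ch.basic_consume(queue="test_messages", on_message_callback=on_message)
ch.start_consuming()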