Realtime display using Redis pub/sub in Ruby?

I have a stream of data coming in via HTTP hits, and I want to display it in realtime. I have started pushing the incoming data to a Redis pub/sub channel; now I want to show it to users.
I want to update each user's screen as soon as data arrives on the Redis channel, and I want to use Ruby, as that is the language I am comfortable with.

I would use Sinatra's stream feature coupled with EventSource on the client side. That leaves IE out, though.
Here's some mostly functional server-side code, adapted from https://github.com/redis/redis-rb/blob/master/examples/pubsub.rb (another option is https://github.com/pietern/hiredis-rb):
require 'sinatra'
require 'redis'

get '/the_stream', provides: 'text/event-stream' do
  stream :keep_open do |out|
    # subscribe blocks its connection, so each stream needs its own Redis client
    redis = Redis.new
    redis.subscribe(:channel1, :channel2) do |on|
      on.message do |channel, msg|
        out << "data: #{msg}\n\n" # This is an EventSource message
      end
    end
  end
end
Client side. Most modern browsers support EventSource, except IE:
var stream = new EventSource('/the_stream');
stream.onmessage = function(e) {
  alert("I just got this from the server: " + e.data);
};
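The `data: ...\n\n` framing the server writes is the Server-Sent Events wire format: each line of the payload gets a `data: ` prefix, and a blank line terminates the event. A minimal plain-Ruby sketch of that framing (the `sse_frame` helper is illustrative, not part of Sinatra or redis-rb):

```ruby
# Sketch of the SSE framing used in the stream above.
# Multi-line payloads need a "data: " prefix per line; a blank line ends the event.
def sse_frame(msg, event: nil)
  frame = +""
  frame << "event: #{event}\n" if event
  msg.each_line { |line| frame << "data: #{line.chomp}\n" }
  frame << "\n"
end

puts sse_frame("hello")            # => "data: hello\n\n"
puts sse_frame("a\nb", event: "tick")
```

If you forget the trailing blank line, the browser buffers the event and `onmessage` never fires.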

As far as I know you can do this via Faye; check the link out.
There are a couple of approaches you can try:
I remember building a long-polling server using Thin and Sinatra to achieve something like this; if you wish, you can do the same.
I know of a few, like this and this Flash client, that you can use to connect directly to Redis.
There is an EventMachine websocket implementation you can use and hook up with HTML5, with Flash for non-HTML5 browsers:
Websocket-Rack
Other approaches you can try (just a suggestion, since most of them aren't written in Ruby):
Juggernaut (I don't think it's based on the Redis pub/sub thing; there also used to be a Ruby client, though I'm not sure about now)
Socket.io
Webd.is
NULLMQ (not Redis pub/sub, but a ZeroMQ implementation in JavaScript)
There are a few other approaches you can find if you Google around. :)
Hope this helps.

Related

How to send an email with multiple attachments from Gmail using API client library for .NET

My app uses the Google API client library for .NET to send emails with attachments.
When using Send(), I'm running into limitations on attachment file size, so I guess switching to resumable upload as the upload method may help. But it's pretty much undocumented.
Looking into the source code, I guess using a different Send() overload may be the way forward, but I can't figure out how to use it properly.
So, instead of attaching the files into message and calling it like this:
var gmailResult = gmail.Users.Messages.Send(new Message
{
    Raw = base64UrlEncodedMessage
}, "me").Execute();
I should not attach the files to the message, and instead do something like the following?
var gmailResult = gmail.Users.Messages.Send(new Message
{
    Raw = base64UrlEncodedMessage
}, "me", fileStream, contentType).Upload();
The second version does not return any API error, but does nothing. I'm obviously missing something here.
How do I attach more than one attachment?
This is kind of an old question, but putting an answer here just in case anyone else needs it:
I was able to achieve this by converting my mime message into a stream (attachment(s) included), and then calling this overload on Send:
UsersResource.MessagesResource.SendMediaUpload googleSendRequest =
    service.Users.Messages.Send(null, "youremail@gmail.com", mimeMessageStream, "message/rfc822");
IUploadProgress created = googleSendRequest.Upload();
This will upload all of the attachments with the email message content and then send it off. I was able to send two 5 megabyte attachments in an email. Previously I was not able to send even one of those via the other Send method that takes in a base64 encoded mime message.

Should I use HTTP or xmlhttprequest on node.js? When?

I'm still exploring REST, node.js, and web development in general. What I've found is that XMLHttpRequest is mostly (if not always) used for AJAX. As I learned, AJAX stands for Asynchronous JavaScript and XML. So my question is: should I be using XMLHttpRequest in my node.js project only when I want asynchronous parts on my webpage? Or does node's HTTP module also support asynchronous JavaScript? How can I balance the use of HTTP and XMLHttpRequest (or AJAX) so that my REST API code doesn't get too messy?
P.S. I kind of don't want to use AJAX because of the XML. I've heard that XML is much heavier than JSON and isn't worth using anymore. Is that true? What would you recommend?
Non-async on node?
You're building an API endpoint, so all the cases for not using async should be thrown out the window. As soon as you have a single piece of non-async code in your node.js project, it freezes the entire process until it completes. Remember that node.js runs a single thread (theoretically), which means all your other concurrent users get frozen too. That's one way to make people really upset.
Say, for instance, you need to read a file from your node.js server on a GET request from a client (let's say a browser): you want to use a callback/promise. Never do non-async work in an API server; there is just no reason to (in your case).
Example below:
import * as express from "express";
import * as fs from "fs";

let app = express();

// async: other requests keep being served while the file is read
app.get('/getFileInfo', function(req, res) {
  fs.readFile('filePath', 'utf-8', function(err, data) {
    if (err) {
      console.log(err);
      res.json({error: err});
    } else {
      res.json({data: data});
    }
  });
});

// users will freeze while the file is read, until it is done reading
app.get('/nonasync', function(req, res) {
  let data = fs.readFileSync('path', 'utf-8');
  res.json({data: data});
});
The exact same idea applies in the browser: if you do something non-async in the browser's JavaScript, the entire web application becomes unresponsive, because it runs in the same manner. There is one main loop, and unless work happens in callbacks/promises/observables, the website will freeze.
AJAX is a much neater/nicer way to implement post/get/put/delete/get:id requests to a server than raw XMLHttpRequest, and both can send and receive JSON, not only XML. AJAX wrappers are also safer with respect to browser compatibility, as raw XMLHttpRequest has some limitations in IE and Safari, I believe.
NOTE: if you're not using a framework with node.js, you should; it helps keep your endpoints neat and testable, and lets you pass the project on to others without them having to learn the way you implemented your req/res structure.
Some frameworks for node:
Express 4 (my preference; the API doc is really good and the community is strong)
Restify (used by Netflix; really light)
Hapi (never used it, but I've heard of it)
Some frameworks for web browsers you might like:
Angular 2 (my preference, as I'm from a MEAN stack)
React (created by Facebook)
Knockout (simple and easy)
All the browser frameworks have their own implementation of RESTful API calls, but more are leaning towards observable objects.

ActiveMQ with Ruby Stomp gem - subscribing fails

New to ActiveMQ, using the ruby stomp gem. I believe I'm successfully publishing messages to the server, as I see them in the queue in my browser admin client. But on subscribe nothing happens: no error, no output. The "in subscribe" test text from puts never appears in stdout, nor does the msg.
Should I be using a different naming format for the queues?
require 'stomp'

port = 61613
client = Stomp::Client.new('admin', 'admin', '127.0.0.1', port)
client.publish("/queue/mine2", "hello world!")

puts "about to subscribe"
client.subscribe("/queue/mine2") do |msg|
  puts "in subscribe"
  puts msg
end
client.close
I believe you are closing the client before it gets a chance to receive anything.
If there is no preemption between client.subscribe and client.close, the background thread that listens for new messages never gets to run.
You should try adding
client.join
before closing it.
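To see why this matters without needing a broker: the stomp gem delivers messages on a background listener thread, and `join` keeps the main thread alive until that thread has had a chance to run. A plain-Ruby analogy (the `Queue`/`Thread` names here are stand-ins, not stomp API):

```ruby
# Plain-Ruby analogy of the stomp listener thread (no broker needed).
received = Queue.new
listener = Thread.new { received << "hello world!" } # stands in for the background listener

listener.join # like client.join: without this, the main thread may exit first
msg = received.pop
puts msg # => "hello world!"
```

In the question's script, `client.close` runs immediately after `client.subscribe` returns, so the listener thread is torn down before the broker ever delivers the message.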
Although client.join did successfully pull down the first message or two for me, after it ran, the code completely stopped working, and the subscriber would simply hang again. I was starting my client in a very similar way (just lacking creds):
client = Stomp::Client.new('localhost', 61613)
But I was able to get it working by using a URL instead:
client = Stomp::Client.new('stomp://localhost:61613')
With creds, it would look something like:
client = Stomp::Client.new('stomp://login:passcode@host:port')
Hope this helps the next person with this issue.

Faye does not publish when using a browser on another computer in the network

I have a Faye implementation in my Rails application. The publish method works correctly when both browsers are on the same computer. When I access the application from a browser on another computer, it only works from client to server and does not publish to the other clients. The publish event also does not push to clients when there are changes in the browser on the server.
Controller publish code:
def publish(channel, data)
  message = {
    :channel => channel,
    :data => data,
    :ext => {:faye_token => FAYE_OUTGOING_AUTH_TOKEN}
  }
  uri = URI.parse('http://localhost:9292/faye')
  Net::HTTP.post_form(uri, :message => message.to_json)
end
Command to run faye:
rackup faye.ru -s thin -E production -d
Example:
A: Server,
B: Client1,
C: Client2
A B and C are different computers in same network, and are all subscribed to the same channel.
If I input data on B, A will see the data but C will not see the data until I refresh the page (Which is getting the data from db).
If I input data on A, it does not get published to the other clients.
If I input data on C, to a channel that only C and B are subscribed to, only C gets to see the data, and it is not published to B.
If A, B, and C were different browsers on the same computer, all the above cases would work.
I have run this in development mode, and have tried WEBrick, Unicorn, and Thin.
Any help would be appreciated.
Thanks.
To resolve the issue I replaced all instances of "localhost" with the address of the server on which Faye is running. This includes for subscribing clients to channels as well.
Hope it helps,
Cheers!
Hey Babak: I am also facing the same kind of problem. I am using nodejs + express + faye, so should I add ip_addr:port to each subscribe/client?
var client = new Faye.Client('/faye', {
  endpoints: {
    websocket: 'http://ws.chat-yummyfoods.rhcloud.com'
  },
  timeout: 20
});
client.disable('websocket');
console.log("client:" + client);

var subscription = client.subscribe('/channel', function(message) {
  console.log("Message:" + message.text);
  $('#messages').append('<p>' + message.text + '</p>');
});

subscription.then(function() {
  console.log('subscribe is active');
  alert('subscribe is active');
});

Looking for a Ruby Web Socket library that falls back to long polling [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 7 years ago.
I'm currently using em-websocket with Event Machine. It works great, but I also want to provide long polling and/or Flash fall-backs for browsers that don't support Web Sockets (and also so I can run it on Heroku).
I'm basically looking for a Ruby version of Socket.IO, or enough libraries to piece together to get the features I want. I've seen some examples that use Socket.IO, Redis, and a Ruby library that interacts with the Redis DB, but I'd rather keep it simple and just keep it all in Event Machine, rather than having to manage 3 applications instead of one.
Check out Faye - https://github.com/faye/faye.
You can do this with Socket.IO on the client side and em-websocket with async_sinatra and Thin on the server-side. See here for some info on the topic.
I was searching for the same thing and ended up writing the Plezi websocket framework, which I wanted to make easier and more intuitive to use... You can even use it inside your Rails/Sinatra app (it will replace your Rack server with Iodine if you do so, and both apps will share the same network connection and process)...
A simple websocket chat/echo server - running against the websocket.org echo sample page - can look something like this:
require 'plezi'

class BroadcastCtrl
  def index
    redirect_to 'http://www.websocket.org/echo.html'
  end

  def on_message data
    # the following two lines are the same as:
    #     self.class.broadcast :_send_message, data
    broadcast :_send_message, data
    _send_message data
  end

  def _send_message data
    response << data
  end
end

route '/', BroadcastCtrl
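To make the `broadcast` call above concrete, here is a plain-Ruby sketch of the fan-out it performs (no Plezi required; the names and the array-based "connections" are purely illustrative):

```ruby
# Stand-ins for three connected clients, each with its own output buffer.
clients = []
3.times { clients << [] }

# Broadcasting invokes the same write on every connected client,
# like `broadcast :_send_message, data` in the controller above.
def broadcast(clients, data)
  clients.each { |response| response << data }
end

broadcast(clients, "hi")
p clients # => [["hi"], ["hi"], ["hi"]]
```

In Plezi the framework tracks the open websocket connections for you; the point is simply that one incoming message is echoed to every client, including the sender (via the extra `_send_message data` call).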
This is very comfortable as a long-polling fallback position, as the framework supports both RESTful HTTP and HTTP streaming.
You can also look into the Plezi client, or use Plezi's auto-dispatch feature to auto-dispatch any JSON event to a method. This makes it super easy to write an API for both websockets and AJAX (AJAJ, actually).
Here's a more complicated example, showcasing auto-dispatching, some recursive method calling (using broadcasting to write data to all the connected clients), AJAX vs. websocket recognition, HTTP-only requests and websocket-only events.
require 'plezi'

class BroadcastCtrl
  @auto_dispatch = true

  def index event = nil
    { event: 'update', target: 'body',
      data: 'my content' }.to_json
  end

  def chat event = nil, broadcast = false
    if broadcast # it's recursive broadcasting
      return write(event.to_json)
    end
    if event == nil # it's AJAX
      msg = params[:msg]
    else # it's websockets
      msg = event[:msg]
    end
    self.class.broadcast :chat, ({event: 'chat', msg: msg}), true
  end

  def http_only
    { event: 'foo', data: 'bar' }.to_json
  end

  protected

  def websocket_only event
    { event: 'foo', data: 'bar' }.to_json
  end
end

route '/', BroadcastCtrl
The framework also supports easy and native Redis integration, so that broadcasts could propagate through different processes or machines seamlessly.
It also supports slim, haml, sass, coffee-script and erb templates, so it's possible to move the whole application to one framework, instead of running Sinatra/Rails with a parallel real-time solution (via middleware, a different app, or a different port).
...but, to each their own, I guess.
