PubSubHubbub and Ruby on Rails: subscription verification - heroku

I am trying to implement Superfeedr subscriptions using PubSubHubbub and Ruby on Rails. The problem is, the subscriptions are never confirmed, even though my callback prints out the hub.challenge string, which it successfully receives.
def push
  feed = Feed.find(params[:id])
  if feed.present?
    if params['hub.mode'].present? and params['hub.verify_token'] == feed.secret
      feed.update_attribute(:is_active, (params['hub.mode'] == 'subscribe'))
      render text: params['hub.challenge']
      return
    elsif params['hub.secret'] == feed.secret
      parse(feed, request.raw_post)
    end
  end
  render nothing: true
end
It sets feed.is_active = true, but Superfeedr Analytics shows no sign of a subscription.
I am using a single-dyno Heroku setup and the async verification method.

The first thing you should check is the HTTP status code and the response body of your subscription request. I expect the code to be 422, indicating that the subscription failed, but the body will help us know exactly what is going on.
Also, do you see the verification request in the logs?
A common issue with Heroku is that if you use hub.verify=sync, you will need 2 dynos, because you have two concurrent requests in this case...
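To get those status and body details, you can issue the subscription request by hand and print what the hub returns. Below is a minimal sketch using Ruby's stdlib; the hub URL, topic, callback and token are placeholders, not values from the question:

```ruby
require 'net/http'
require 'uri'

# Sketch only: build the subscription request yourself so you can inspect
# the hub's status code and body. All parameter values are placeholders.
hub = URI('https://push.superfeedr.com/')
req = Net::HTTP::Post.new(hub)
req.set_form_data(
  'hub.mode'         => 'subscribe',
  'hub.topic'        => 'http://example.com/feed.xml',
  'hub.callback'     => 'https://yourapp.example.com/feeds/1/push',
  'hub.verify'       => 'async',
  'hub.verify_token' => 'your-secret'
)
# res = Net::HTTP.start(hub.host, hub.port, use_ssl: true) { |h| h.request(req) }
# puts res.code  # 202/204 = accepted; 422 = rejected, and the body says why
# puts res.body
```

Running this from a console (with the real values) shows immediately whether the hub rejected the subscription and why, independent of anything Rails is doing.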


How can I push data to a browser where the data is based on a SQL statement? [closed]

I know there are threads out there on this topic, but they do not seem to answer quite what I am looking for. I have never done any push technology before, so some guidance here is appreciated. I understand how a change triggers the push to any browser that is listening, but I do not think that quite fits the scenario I am looking at.
We are rebuilding our users' web application where they track shipments. We will be allowing the users to build their own searches that match how they do their job. For example, some will look for any shipment that is scheduled to deliver today, where others look for shipments that are to be picked up today, and still others look for shipments that need to be scheduled for pickup. So when they come in and open the application, I can give them a count for each of the work tasks that they need to do today. Now what I want is for the count to change based on the SQL being re-run, but without the user having to refresh the page to see the new count.
How do I have this SQL run and push the current count to any browser that is using it? What is the mechanism that automatically re-runs the SQL? Keep in mind that I will have 50 or more of these unique SQL queries that will need to be executed and their counts pushed.
Thanks for your guidance!
I think this falls pretty cleanly into AJAX's role. AJAX will allow you to make GET and POST requests to the server, which will process the query and return the results to a JS function. At the risk of jQuery evangelism, its API makes this sort of thing extremely easy and standard, and you can have pretty much any event you want activate it.
This has a few concerns, namely client-side inputs and SQL injection. If you're sending any input through a POST request, you have to be VERY careful to sanitize everything. Use prepared statements, don't concatenate and execute query strings, and generally assume the user will try to send text you don't want them to. Give some server-side bounds to which inputs will be acknowledged successfully (e.g. if the options are "Left" or "Right" and they send "Bottom", either default it or drop it).
Client activates request (timed or event)
JS makes AJAX call to server (optionally with parameters)
Server validates any inputs and processes query
Server sends results back
JS uses results to modify DOM
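The server side of those steps can be sketched in plain Ruby: whitelist the task name, run a prepared count query, and return JSON for the JS callback to render. `ALLOWED_TASKS`, the task names, and the query are assumptions standing in for the real application:

```ruby
require 'json'

# Hypothetical endpoint body for the AJAX polling flow: validate the task
# name against a whitelist, run a prepared count query, return JSON.
ALLOWED_TASKS = %w[deliver_today pickup_today needs_scheduling].freeze

def count_response(task)
  # server-side bounding: reject anything not in the whitelist
  return JSON.generate(error: 'unknown task') unless ALLOWED_TASKS.include?(task)
  # count = db.prepare('SELECT COUNT(*) FROM shipments WHERE status = ?').execute(task)
  count = 42 # placeholder for the prepared-statement result
  JSON.generate(task: task, count: count)
end
```

In a Sinatra route this would simply be the return value of a `get` block, with the JS side polling it on a timer and writing the count into the DOM.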
AJAX polling is one solution, although others exist that might be better suited and save you resources*...
For example, having a persistent Websocket connection would help minimize the cost of establishing new connections and having repeated requests that are mostly redundant.
As a result, your server should have a lower workload and your application would require less bandwidth (if these are important to you).
Even using a Websocket connection just to tell your client when to send an AJAX request can sometimes save resources.
Here's a quick Websocket Push demo
You can use many different Websocket solutions. I wrote a quick demo using the Plezi framework because it's super easy to implement, but there are other ways to go about this.
The Plezi framework is a Ruby framework that runs its own HTTP and Websocket server, independent of Rack.
The example code includes a controller for a model (DemoController), a controller for the root index page (DemoIndex) and a controller for the Websocket connection (MyWSController).
The example code seems longer because it's all in one script - even the HTML page used as a client... but it's really quite easy to read.
The search requirements are sent from the client to the web server (the search requires that the model's object ID is between 0 and 50).
Every time an object is created (or updated), an alert is sent to all the connected clients, first running each client's searches and then sending any updates.
The rest of the time the server is resting (except pinging every 45 seconds or so, to keep the websocket connection alive).
To see the demo in action, just copy and paste the following code inside your IRB terminal** and visit the demo's page:
require 'plezi'
class MyWSController
  def on_open
    # save the user data / register them / whatever
    @searches = []
  end
  def on_message data
    # get data from the user
    data = JSON.parse data
    # sanitize data, create search parameters...
    raise "injection attempt: #{data}" unless data['id'].match(/^\([\d]+\.\.[\d]+\)\z/)
    # save the search
    @searches << data
  end
  def _alert options
    # should check @searches here
    @searches.each do |search|
      if eval(search['id']).include? options[:info][:id]
        # update
        response << {event: 'alert'}.merge(options).to_json
      else
        response << "A message wouldn't be sent for id = #{options[:info][:id]}.\nSaved resources."
      end
    end
  end
end
class DemoController
  def index
    "use POST to post data here"
  end
  # called when a new object is created using POST
  def save
    # ... save data posted in params ... then:
    _send_alert
  end
  # called when an existing object is posted using POST or UPDATE
  def update
    # ... update data posted in params ... then:
    _send_alert
  end
  def demo_update
    _send_alert message: 'info has been entered', info: params.update(id: rand(100), test: 'true')
    " This is a demo for what happens when a model is updated.\n
    Please have a look at the Websocket log for the message sent."
  end
  # sends an alert to all connected websocket clients
  def _send_alert alert_data
    MyWSController.broadcast :_alert, alert_data
  end
end
class DemoIndex
  def index search = '(0..50)'
    response['content-type'] = 'text/html'
    <<-FINISH
    <html>
    <head>
    <style>
      html, body {height: 100%; width:100%;}
      #output {margin:0 5%; padding: 1em 2em; background-color:#ddd;}
      #output li {margin: 0.5em 0; color: #33f;}
    </style>
    </head><body>
    <h1> Welcome to your Websocket Push Client </h1>
    <p>Please open the following link in a <b>new</b> tab or browser, to simulate a model being updated: <a href='#{DemoController.url_for id: :demo_update, name: 'John Smith', email: 'john@gmail.com'}' target='_blank'>update simulation</a></p>
    <p>Remember to keep this window open to view how a simulated update affects this page.</p>
    <p>You can also open a new client (preferably in a new tab, window or browser) that will search only for id's between 50 and 100: <a href='#{DemoIndex.url_for :alt_search}'>alternative search</a></p>
    <p>Websocket messages received from the server should appear below:</p>
    <ul id='output'>
    </ul>
    <script>
      var search_1 = JSON.stringify({id: '#{search}'})
      output = document.getElementById("output");
      websocket = new WebSocket("#{request.base_url 'ws'}/ws");
      websocket.onmessage = function(e) { output.innerHTML += "<li>" + e.data + "</li>" }
      websocket.onopen = function(e) { websocket.send(search_1) }
    </script>
    </body></html>
    FINISH
  end
  def alt_search
    index '(50..100)'
  end
end
listen
route '/model', DemoController
route '/ws', MyWSController
route '/', DemoIndex
exit
To view this demo visit localhost:3000 and follow the on screen instructions.
The demo will instruct you to open a number of browser windows, simulating different people accessing your server and doing different things.
As you can see, both the client-side javascript and the server-side handling aren't very difficult to write, while Websockets provide a very high level of flexibility and allow for better resource management (for instance, the search parameters need not be sent over and over again to the server).
* the best solution for your application depends on your specific design. I'm just offering another point of view.
** Ruby's terminal is run using irb from bash; make sure to install the plezi gem first using gem install plezi

The Google API ruby client apparently does not fetch all events

I have used the Google API Ruby client example repo on Github as a starting point for my calendar app.
It works well and usually without issues. Lately, however, I noticed that my app in production does not fetch all events. I present events in a table after a few calendars have been requested through the ruby client. On the initial call there are just the dates which I generate with my app; no data is fetched from the calendars at all. Upon a second request some calendars return data, and after reloading the app a couple more times all calendars return events.
So to me it seems like there is some caching going on. If data is not returned quickly enough, an empty response is sent back to me. Requesting again sends some data, and the more I request, the more data is returned.
Maybe there is something wrong with my setup, which I assume is not perfect:
get '/' do
  # fetch all calendars
  result = api_client.execute(:api_method => calendar_api.calendar_list.list,
                              :authorization => user_credentials)
  cals = result.data.items.map do |i|
    # skip if calendar does not belong to owner or is the "private" primary one
    next if i.primary || i.accessRole != "owner"
    i.summary
  end
  cals.compact!
  # save all users mentioned in calendars in a set
  events_list = result.data.items.map do |i|
    # skip calendar if primary or not owned by user (cannot be changed anyway)
    next if i.primary || i.accessRole != "owner"
    r = api_client.execute(:api_method => calendar_api.events.list,
                           :parameters => {'calendarId' => i.id},
                           :timeMax => DateTime.now.next_month,
                           :timeMin => DateTime.now.beginning_of_month)
    # capture all calendars and their events and map it to an Array
    r.data.items.delete_if { |item| item.status == "cancelled" }
  end
  # remove skipped entries (=nil)
  events_list.compact!
  slim :home, :locals => { :title => Konfig::TITLE, :events_list => events_list, :cals => cals }
end
BTW: :timeMax and :timeMin are not working as expected either but that's a different question I suppose.
Code does not seem to be the issue here.
I would guess one of the following is happening
You are being rate limited? (Unlikely to hit limits with a sample app, but still easy to check with response codes.)
Loading is taking more than 2 seconds (the default Net::HTTP response timeout is 2 seconds, and the default Faraday setup uses Net::HTTP).
What would I do in this situation? I would do the following before deciding on next steps:
Print the status code and HTTP headers from the API client's response object. I would look for the caching headers in the response and for the status code.
If you have ngrep on your production machine, just let it print and show the entire HTTP request/response instead of printing it in the gem.
If the response takes more than 2 seconds, increase the timeout settings for Net::HTTP and check again.
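Raising the timeouts on a plain Net::HTTP object looks like the sketch below; the values and URL are illustrative, and the google-api-client's underlying Faraday connection exposes similar options:

```ruby
require 'net/http'

# Sketch: configure more generous timeouts before making calendar requests.
uri = URI('https://www.googleapis.com/calendar/v3/users/me/calendarList')
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true
http.open_timeout = 5   # seconds allowed to establish the connection
http.read_timeout = 15  # seconds allowed to wait for the response
```

If the empty responses disappear with a longer read timeout, that points at slow calendar fetches rather than caching.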

How Can I Use xmpp4r To Detect The Online/Offline Status Of A Given Jabber ID?

What is the proper xmpp4r way to know if a given contact is online before sending them a message?
Can you post sample xmpp4r code for doing this?
Here is my use case:
If contact online, send :normal message
Else, email contact
Here are things I have working code for:
Send messages of various types
Get a roster/contact list
Register a call back to detect changes in presence
However, I can't find a place that directly addresses a work flow like this:
Loop through each JID in your roster
If jid.is_online? == true, send IM
Else, send email
I've read that you should send a JID a message of type :headline and if that fails, you know the user is offline. In my tests, if the user is ONLINE, they'll receive a message of type headline. This is suboptimal, as users should only receive messages to read, not noise to determine online status.
I've read that on sign on, all of your contacts will bounce a presence status back at you, and that status is the sole indication that they are online - assuming that there isn't a disconnect or presence change you've yet to receive. So you should register a presence call back, record the initial users who ping you back, and then add or remove from the list based on your running roster presence callback.
If this is truly the way to do it:
Can I get some example code of how to collect all the "I'm here" presence confirmations on sign on via xmpp4r?
Why, oh why, was xmpp designed this way and why is this better than offering an "is_online_and_available" method?
So the answer here is adding a message callback and checking the type inside the block:
cl.add_message_callback do |msg|
  # register the callback before sending, so a fast error reply isn't missed
  if msg.type == :error
    puts "type: #{msg.type}"
  else
    puts "not an error"
  end
end
m = Message.new(to, body)
cl.send(m)
This requires threading as you have to be listening for the response.
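To collect the initial "I'm here" presences the question asks about, one approach is to keep a small bookkeeping object and feed it from xmpp4r's presence callback. The class below is a hypothetical sketch, not part of xmpp4r; only the commented wiring touches the library:

```ruby
# Minimal presence bookkeeping (hypothetical class, not part of xmpp4r).
# Feed it every stanza from add_presence_callback: a presence type of nil
# means available, :unavailable means the contact went offline.
class OnlineTracker
  def initialize
    @online = {}
  end

  def update(bare_jid, type)
    if type == :unavailable
      @online.delete(bare_jid)
    else
      @online[bare_jid] = true
    end
  end

  def online?(bare_jid)
    @online.key?(bare_jid)
  end
end

# Wiring sketch (assumes cl is a connected Jabber::Client):
# tracker = OnlineTracker.new
# cl.add_presence_callback do |pres|
#   tracker.update(pres.from.strip.to_s, pres.type)
# end
```

With this in place, the workflow becomes: loop over the roster, call `tracker.online?(jid)`, and send an IM or an email accordingly.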

Using the xmpp4r Ruby gem, how can I synchronously discover if a contact is online?

I'm new to XMPP and the xmpp4r library, so please forgive my noob question if this is obviously documented somewhere.
What's the most straightforward way, in a synchronous manner, to find out if a given JID is online? (so that I can call something like is_online?(jid) in an if statement)
My details:
I'm writing a Sinatra app that will attempt to send a message to a user when a particular url gets requested on the web server, but it should only try to send the message to the user if that user is currently online. Figuring out if a given JID is online is my problem.
Now, I know that if I connect and wait a few seconds for all the initial presence probe responses to come back to the Roster helper, then I can inspect any of those presences from my Roster and call #online? on them to get the correct value. But I don't know when all of the presence updates have been sent, so there's a race condition: sometimes calling #online? on a presence from my roster returns false simply because I haven't received that presence probe response yet.
So, my current thinking is that the most straightforward way to find out if someone is online is to construct a new Presence message of type :probe and send that out to the JID that I'm interested in. Here's how I'm doing it right now:
# jabber is the result of Client::new
# email is the JID I'm interested in polling
def is_online?(jabber, email)
  online = false
  p = Presence.new
  p.set_to(email)
  p.set_from(jabber.jid)
  p.set_type(:probe)
  pres = jabber.send(p) do |returned_presence|
    online = returned_presence.nil?
  end
  return online
end
Now, this works in cases where the user is actually online, but when the user is offline, it looks like the presence probe message that comes back is being caught by some other presence_callback handler that doesn't know what to do with it, and my is_online? function never finishes returning a value.
Can anyone help me by providing a simple example is_online? function that I can call, or point me in the right direction for how I can detect when the roster is done getting all the initial presence updates before I try checking a presence for #online?
As it turns out, there's not a synchronous way to ask for a JID presence. You've just got to ask for what you want, then wait for your response handler to fire when the response arrives.
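That said, you can turn the asynchronous callback into a blocking call yourself with a Queue and a timeout. This is a generic sketch (wiring the handler into xmpp4r is left as a comment); a background thread stands in for the server's reply:

```ruby
require 'timeout'

# Generic sketch: block until an async handler fires or a timeout expires.
# In an xmpp4r app the handler would be invoked from a presence callback;
# the caller's block decides how to trigger the async work.
def await_async(timeout_s = 2)
  q = Queue.new
  handler = ->(result) { q << result } # hand this to your callback registration
  yield handler                        # trigger the async work
  Timeout.timeout(timeout_s) { q.pop }
rescue Timeout::Error
  nil # no answer in time; e.g. treat the contact as offline
end

# Simulated async reply arriving after 0.1s:
status = await_async { |h| Thread.new { sleep 0.1; h.call(:online) } }
```

The timeout is essential: as the question shows, an offline contact may never produce the response your handler is waiting for.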

What's the fastest way for a true sinatra(ruby/rack) after_filter?

Okay, it's a simple task: after I render HTML to the client, I want to execute a db call with information from the request.
I am using Sinatra because it's a lightweight microframework, but really I'm up for anything in Ruby, if it's faster/easier (Rack?). I just want to get the url and redirect the client somewhere else based on it.
So how does one go about getting a real after_filter with rack/sinatra? And by after_filter I mean one that runs after the response is sent to the client. Or is that just not doable without threads?
I forked sinatra and added after filters, but there is no way to flush the response; even send_data, which is supposed to stream files (which is obviously for binary), waits for the after_filters.
I've seen this question: Multipart-response-in-ruby, but the answer is for rails. And I am not sure if it really flushes the response to the client and then allows for processing afterwards.
Rack::Callbacks has some before and after callbacks, but even those look like they would run before the response is ever sent to the client. Here's Rack::Callbacks' implementation (comment added by me):
def call(env)
  @before.each {|c| c.call(env) }
  response = @app.call(env)
  @after.each {|c| c.call(env) }
  response
  # I am guessing that when this method returns, the response is sent to the client.
end
So I know I could call a background task through the shell with rake, but it would be nice not to have to... Also there is NeverBlock, but is that good for executing a separate process without delaying the response, or would it still make the app wait as a whole (I think it would)?
I know this is a lot, but in short: a simple after_filter that really runs after the response is sent, in ruby/sinatra/rack.
Thanks for reading or answering my question! :-)
A modified port of run_later from Rails does the trick; the file is available here:
http://github.com/pmamediagroup/sinatra_run_later/tree/master
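The idea behind run_later can be sketched as Rack-style middleware that returns the response triple first and hands the env to a background thread, so the server can flush while the db call runs. The class and block names here are mine, not from the plugin:

```ruby
# Rack-style middleware sketch: run a block in a background thread after the
# app has built its response, so the client is not kept waiting on the work.
class AfterResponse
  def initialize(app, &work)
    @app  = app
    @work = work
  end

  def call(env)
    status, headers, body = @app.call(env)
    # The triple is returned to the server immediately; the thread runs
    # concurrently with (or after) the response being flushed.
    Thread.new(env.dup) { |e| @work.call(e) }
    [status, headers, body]
  end
end

# Usage sketch: AfterResponse.new(app) { |env| log_visit(env['PATH_INFO']) }
```

The usual caveat applies: a bare Thread dies with the process, so on Heroku-style dynos a job queue is safer for work that must not be lost.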
