The Google API Ruby client apparently does not fetch all events

I have used the Google API Ruby client example repo on GitHub as a starting point for my calendar app.
It works well and usually without issues. Lately, however, I noticed that my app in production does not fetch all events. I present events in a table after a few calendars have been requested through the Ruby client. On the initial call only the dates my app generates are shown; no data is fetched from the calendars at all. On a second request some calendars return data, and after reloading the app a couple more times all calendars return events.
So it seems to me like some caching is going on: if data is not returned quickly enough, an empty response is sent back to me. Requesting again returns some data, and the more I request, the more data is returned.
Maybe there is something wrong with my setup, which I assume is not perfect:
get '/' do
  # fetch all calendars
  result = api_client.execute(:api_method => calendar_api.calendar_list.list,
                              :authorization => user_credentials)
  cals = result.data.items.map do |i|
    # skip if calendar does not belong to owner or is the "private" primary one
    next if i.primary || i.accessRole != "owner"
    i.summary
  end
  cals.compact!
  # fetch the events of every calendar the user owns
  events_list = result.data.items.map do |i|
    # skip calendar if primary or not owned by user (cannot be changed anyway)
    next if i.primary || i.accessRole != "owner"
    r = api_client.execute(:api_method => calendar_api.events.list,
                           :parameters => {'calendarId' => i.id},
                           :timeMax => DateTime.now.next_month,
                           :timeMin => DateTime.now.beginning_of_month)
    # keep this calendar's events, dropping cancelled ones
    r.data.items.delete_if { |item| item.status == "cancelled" }
  end
  # remove skipped entries (= nil)
  events_list.compact!
  slim :home, :locals => { :title => Konfig::TITLE, :events_list => events_list, :cals => cals }
end
BTW: :timeMax and :timeMin are not working as expected either, but that's a different question, I suppose.
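In case the two issues are related, here is an untested guess at that call: the Calendar API expects timeMin/timeMax as RFC3339 strings inside the request parameters, not as top-level options to execute.

r = api_client.execute(:api_method => calendar_api.events.list,
                       :parameters => {
                         'calendarId' => i.id,
                         'timeMin'    => DateTime.now.beginning_of_month.rfc3339,
                         'timeMax'    => DateTime.now.next_month.rfc3339
                       })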

The code does not seem to be the issue here.
I would guess one of the following is happening:
You are being rate limited? (Unlikely to hit limits with a sample app, but still easy to check with response codes.)
Loading is taking more than 2 seconds (the default Net::HTTP response timeout is 2 seconds, and the default Faraday setup uses Net::HTTP).
What would I do in this situation? I would do the following before deciding on next steps:
Print the error codes and HTTP headers from the API client gem's response object. I would be looking for the caching headers in the response and for the status code.
If you have ngrep on your production machine, just let it print the entire HTTP request/response instead of printing it in the gem.
If the response takes more than 2 seconds, increase the timeout settings for Net::HTTP and check again.
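A rough sketch of those checks, assuming the legacy google-api-client (~0.8) from the question, which fronts Faraday with the Net::HTTP adapter. The result accessors and whether the gem picks up Faraday.default_connection vary by gem version, so treat this as a starting point to verify, not a drop-in fix:

result = api_client.execute(:api_method => calendar_api.calendar_list.list,
                            :authorization => user_credentials)

# 1) status code and headers: 403/429 points at rate limiting,
#    Cache-Control / Age headers point at caching
puts result.status
puts result.headers.inspect

# 2) raise the HTTP timeouts; Faraday accepts per-connection request options
#    (assumption: the gem builds its requests on Faraday's default connection)
Faraday.default_connection = Faraday.new(request: { timeout: 10, open_timeout: 5 })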


Is there a way to delay cache revalidation in service worker?

I am currently working on performance improvements for a React-based SPA. Most of the more basic stuff is already done, so I started looking into more advanced things such as service workers.
The app makes quite a lot of requests on each page (most of the calls are not to REST endpoints but to an endpoint that basically makes different SQL queries to the database, hence the number of calls). The data in the DB is not updated too often, so we have a local cache for the responses, but it obviously gets lost when a user refreshes the page. This is where I wanted to use a service worker: to keep the responses either in the cache store or in IndexedDB (I went with the latter). And of course a cache-first approach does not fit well here, as there is still a chance the data may become stale. So I tried to implement the stale-while-revalidate strategy: fetch the data once; then, if the response for a given request is already in the cache, return it, but also make a real request and update the cache just in case.
I tried the approach from Jake Archibald's offline cookbook, but it seems the app still waits for the real requests to resolve even when there is a cache entry to return (I see those responses in the Network tab).
Basically the sequence seems to be: request > cache entry found! > update the cache > only then show the data. Updating immediately is unnecessary in my case, so I was wondering if there is any way to delay it? Or, alternatively, not to wait for the "real" response to resolve?
Here's the code that I currently have (serializeRequest, cachePut and cacheMatch are helper functions I wrote to communicate with IndexedDB):
self.addEventListener('fetch', (event) => {
  // some checks to get out of the event handler if certain conditions don't match...
  event.respondWith(
    serializeRequest(event.request).then((serializedRequest) => {
      return cacheMatch(serializedRequest, db.post_cache).then((response) => {
        const fetchPromise = fetch(event.request).then((networkResponse) => {
          // cache the fresh network response (not the possibly-undefined cached one)
          cachePut(serializedRequest, networkResponse.clone(), db.post_cache);
          return networkResponse;
        });
        // cache hit: respond immediately; the fetch above still runs to revalidate
        return response || fetchPromise;
      });
    })
  );
})
Thanks in advance!
EDIT: Could this be due to the fact that I put the responses into IndexedDB instead of the cache? I am sort of forced to use IndexedDB, because those "magic endpoints" use POST instead of GET (they require a body), and POST responses cannot be inserted into the Cache Storage...

How can I push data to a browser where the data is based on a SQL statement? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I know there are threads out there on this topic, but they do not seem to answer quite what I am looking for. I have never done any push technology before, so some guidance here is appreciated. I understand how a change can trigger a push to any browser that is listening, but I do not think that quite fits the scenario I am trying to handle.
We are rebuilding our users' web application, where they track shipments. We will be allowing the users to build their own searches that match how they do their job. For example, some will look for any shipment that is scheduled to deliver today, others look for shipments that are to be picked up today, and still others look for shipments that need to be scheduled for pickup. So when they come in and open the application, I can give them a count for each of the work tasks they need to do today. What I want is for each count to change as its SQL is re-run, without the user having to refresh the page to see the new count.
How do I have this SQL run and push the current count to any browser that is using it? What is the mechanism that automatically re-runs the SQL? Keep in mind that I will have 50 or more of these unique SQL statements that will need to be executed and their counts pushed.
Thanks for your guidance!
I think this falls pretty cleanly into AJAX's role. AJAX will allow you to make GET and POST requests to the server, which processes the query and returns the results to a JS function. At the risk of jQuery evangelism, its API makes this sort of thing extremely easy and standard, and you can have pretty much any event you want activate it.
This has a few concerns, namely client-side inputs and SQL injection. If you're sending any input through a POST request, you have to be VERY careful to sanitize everything. Use prepared statements, don't concatenate and execute query strings, and generally assume the user will try to send text that you don't want them to. Put server-side bounds on which inputs will be acknowledged (e.g. if the options are "Left" or "Right" and they send "Bottom", either default it or drop it). The flow looks like this (a server-side sketch follows the list):
Client activates request (timed or event)
JS makes AJAX call to server (optionally with parameters)
Server validates any inputs and processes query
Server sends results back
JS uses results to modify DOM
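Since this page's other answers use Ruby, here is a minimal sketch of the server side of that flow as a Sinatra endpoint. The path, table, and column names are invented for the example, and the asker's real stack is unknown, so treat this purely as an illustration of the pattern:

require 'sinatra'
require 'sqlite3'
require 'json'

DB = SQLite3::Database.new('shipments.db')

# server-side bounding: only these task types are acknowledged
ALLOWED_TASKS = %w[deliver pickup schedule].freeze

get '/api/task_count' do
  task = params['task']
  # drop anything outside the known set instead of trusting the input
  halt 400, 'unknown task' unless ALLOWED_TASKS.include?(task)

  # prepared statement: the user input never becomes part of the SQL string
  count = DB.get_first_value(
    'SELECT COUNT(*) FROM shipments WHERE status = ? AND due_date = DATE(?)',
    [task, Time.now.strftime('%Y-%m-%d')]
  )

  content_type :json
  { task: task, count: count }.to_json
end

On the client, a timed XHR to /api/task_count?task=deliver would read the JSON and write the returned count into the DOM.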
AJAX polling is one solution, although others exist that might be better suited and save you resources*...
For example, a persistent Websocket connection would help minimize the cost of establishing new connections and avoid repeated requests that are mostly redundant.
As a result, your server should have a lower workload and your application would require less bandwidth (if these are important to you).
Even using a Websocket connection just to tell your client when to send an AJAX request can sometimes save resources.
Here's a quick Websocket Push demo
You can use many different Websocket solutions. I wrote a quick demo using the Plezi framework because it's super easy to implement, but there are other ways to go about this.
The Plezi framework is a Ruby framework that runs its own HTTP and Websocket server, independent of Rack.
The example code includes a controller for a model (DemoController), a controller for the root index page (DemoIndex) and a controller for the Websocket connection (MyWSController).
The example code seems longer because it's all in one script - even the HTML page used as a client... but it's really quite easy to read.
The search requirements are sent from the client to the web server (the search requires that the model's object ID is between 0 and 50).
Every time an object is created (or updated), an alert is sent to all the connected clients, first running each client's searches and then sending any updates.
The rest of the time the server is resting (except pinging every 45 seconds or so, to keep the websocket connection alive).
To see the demo in action, just copy and paste the following code inside your IRB terminal** and visit the demo's page:
require 'plezi'
class MyWSController
  def on_open
    # save the user data / register them / whatever
    @searches = []
  end
  def on_message data
    # get data from the user
    data = JSON.parse data
    # sanitize data, create search parameters...
    raise "injection attempt: #{data}" unless data['id'].match(/^\([\d]+\.\.[\d]+\)\z/)
    # save the search
    @searches << data
  end
  def _alert options
    # run each of this client's saved searches against the update
    @searches.each do |search|
      if eval(search['id']).include? options[:info][:id]
        # update
        response << {event: 'alert'}.merge(options).to_json
      else
        response << "A message wouldn't be sent for id = #{options[:info][:id]}.\nSaved resources."
      end
    end
  end
end
class DemoController
  def index
    "use POST to post data here"
  end
  # called when a new object is created using POST
  def save
    # ... save data posted in params ... then:
    _send_alert
  end
  # called when an existing object is posted using POST or UPDATE
  def update
    # ... update data posted in params ... then:
    _send_alert
  end
  def demo_update
    _send_alert message: 'info has been entered', info: params.update(id: rand(100), test: 'true')
    " This is a demo for what happens when a model is updated.\n
    Please have a look at the Websocket log for the message sent."
  end
  # sends an alert to all connected websocket clients
  # (default argument added so the bare calls above don't raise)
  def _send_alert alert_data = {}
    MyWSController.broadcast :_alert, alert_data
  end
end
class DemoIndex
  def index search = '(0..50)'
    response['content-type'] = 'text/html'
    <<-FINISH
    <html>
      <head>
        <style>
          html, body {height: 100%; width:100%;}
          #output {margin:0 5%; padding: 1em 2em; background-color:#ddd;}
          #output li {margin: 0.5em 0; color: #33f;}
        </style>
      </head><body>
        <h1> Welcome to your Websocket Push Client </h1>
        <p>Please open the following link in a <b>new</b> tab or browser, to simulate a model being updated: <a href='#{DemoController.url_for id: :demo_update, name: 'John Smith', email: 'john@gmail.com'}' target='_blank'>update simulation</a></p>
        <p>Remember to keep this window open to view how a simulated update affects this page.</p>
        <p>You can also open a new client (preferably in a new tab, window or browser) that will search only for ids between 50 and 100: <a href='#{DemoIndex.url_for :alt_search}'>alternative search</a></p>
        <p>Websocket messages received from the server should appear below:</p>
        <ul id='output'>
        </ul>
        <script>
          var search_1 = JSON.stringify({id: '#{search}'})
          output = document.getElementById("output");
          websocket = new WebSocket("#{request.base_url 'ws'}/ws");
          websocket.onmessage = function(e) { output.innerHTML += "<li>" + e.data + "</li>" }
          websocket.onopen = function(e) { websocket.send(search_1) }
        </script>
      </body></html>
    FINISH
  end
  def alt_search
    index '(50..100)'
  end
end
listen
route '/model', DemoController
route '/ws', MyWSController
route '/', DemoIndex
exit
To view this demo visit localhost:3000 and follow the on screen instructions.
The demo will instruct you to open a number of browser windows, simulating different people accessing your server and doing different things.
As you can see, neither the client-side JavaScript nor the server-side handling is very difficult to write, while Websockets provide a very high level of flexibility and allow for better resource management (for instance, the search parameters need not be sent over and over again to the server).
* the best solution for your application depends on your specific design. I'm just offering another point of view.
** Ruby's terminal is run using irb from bash; make sure to install the plezi gem first, using gem install plezi

Bypass personal voicemail to Twilio Voicemail Twimlet

I have a basic call forward using Twilio's <Dial> verb, but am running into trouble with voicemail. Instead of the receiving user's personal voicemail, I want to reroute to a Twimlet that records the voicemail and emails it. With my current code, I'm adjusting the 'timeout' parameter between 3 and 10 seconds with mixed results: sometimes the Twimlet voicemail picks up first, and sometimes the call gets picked up and then the Twimlet fires off on a live call. Is there some way to detect that a voicemail is about to pick up and redirect to the Twimlet consistently?
post '/number/forward/:sid' do
  @number = Number.find_by_twilio_sid(params[:sid])
  @forward = Number.find_by_parent_id(@number.id)
  if @forward.extension == nil
    Twilio::TwiML.build do |r|
      r.Dial @forward.number, :callerId => @number.number, :timeout => '7',
             :action => "http://twimlets.com/voicemail?Email=email%40gmail.com&Message=Thank%20you%20for%20calling%2C%20please%20leave%20a%20message.&T",
             :method => "GET"
    end
  end
end
Twilio developer evangelist here.
Whilst we do have experimental answering machine detection in the REST API, there is no affordance for that within TwiML. There was another question/answer on SO that answers this better. Please see: Use IfMachine in TwiML when using <Dial>
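For reference, a hedged sketch of that REST-API route: the legacy IfMachine parameter on an outbound call created with twilio-ruby. The client setup, numbers, and TwiML URL are placeholders, the snake_case parameter name assumes the v3/v4-era library, and this only applies to calls your server originates via the REST API, not to <Dial>:

require 'twilio-ruby'

# placeholders: use your real account SID / auth token
client = Twilio::REST::Client.new 'ACCOUNT_SID', 'AUTH_TOKEN'

client.account.calls.create(
  from: '+15005550006',             # placeholder numbers
  to:   '+15005550001',
  url:  'http://example.com/twiml', # TwiML to run when the call is answered
  if_machine: 'Hangup'              # experimental AMD: 'Continue' or 'Hangup'
)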

facebook type status and comment using ajax, jquery in asp.net mvc3

I am working on a status update and commenting application in ASP.NET MVC3, like the Facebook wall and comments. A user can comment on my wall and all the other stuff, like Facebook.
http://demos.99points.info/facebook_wallpost_system/
Like the demo above, I want to create my application.
How can I do that using MVC3 and AJAX?
I successfully save a user's status update to the database, but I can't get all status updates of the same user; I want to use a partial view to display all of the user's status updates below the status textarea.
And if the user writes a status and shares it, the status message should be saved to the database and reflected back into the same view asynchronously.
How can I do that using AJAX?
I'm going to answer this in broad terms, as I don't have any knowledge of ASP.NET or MVC3. However, it seems to me that you're looking for more architectural direction.
You will need to set up an endpoint which generates the status page, call it /status.asp. This will create the textarea and load the existing status messages from your database.
Then create a second endpoint, say /api/status.asp. This is not a view, but an API in your application that lets a user create (and, if you want, retrieve/modify/delete) a status.
When a user hits enter in the textarea, fire off an XHR request to /api/status.asp with the new status. It is common, but not required, to do this as a POST request (read up on REST - http://en.wikipedia.org/wiki/Representational_State_Transfer). This API should then save the new status to your DB and return the UID of the status along with the status message, perhaps as JSON (or XML or YAML if you prefer; it's up to you). For example, in JSON:
{
  "status": [
    {
      "uid": "1234567890987654321",
      "msg": "Hello World"
    }
  ]
}
(To send the XHR request it's easiest to use a JS library like Dojo ( http://dojotoolkit.org/reference-guide/dojo/xhr.html#dojo-xhr ) or jQuery ( http://api.jquery.com/jQuery.ajax/ ).)
When your XHR request returns, check that the status is 200 (everything went OK), then read the data returned. Write some Javascript to create a new DOM node, inject the status message into that DOM node, and add it to the bottom of the previous status nodes.
Bonus Points:
If you want person B to be looking at person A's /status.asp page and for that page to auto-update when person A posts a new status, you'll need to do a little more work. Firstly, modify /api/status.asp to return a list of the last x (say, 10?) status updates when called via HTTP GET. Include the UID of each status along with the status text.
Call your /api/status.asp API repeatedly* (perhaps with a timestamp of the last time you called it, and get your API to only return status posts after that time), loop over the results, and check whether each post is already included in the user's page (perhaps by having an id on each DOM node matching the UID of the status). If not, add it to the page.
*you have a number of options for doing this. For example, simply set up a JS timeout (easy, but not very efficient), or use Comet (e.g. http://cometd.org/ ) or WebSockets ( http://websocket.org/ ). I'd go for a timeout first, get it working, and then figure out if a better technology is required.

PubSubHubbub and Ruby on Rails: subscription verification

I am trying to implement Superfeedr subscriptions using PubSubHubbub and Ruby on Rails. The problem is, the subscriptions are never confirmed, even though my callback prints out the hub.challenge string, which it successfully receives.
def push
  feed = Feed.find(params[:id])
  if feed.present?
    if params['hub.mode'].present? and params['hub.verify_token'] == feed.secret
      feed.update_attribute(:is_active, (params['hub.mode'] == 'subscribe'))
      render text: params['hub.challenge']
      return
    elsif params['hub.secret'] == feed.secret
      parse(feed, request.raw_post)
    end
  end
  render nothing: true
end
It sets feed.is_active = true, but Superfeedr Analytics shows no sign of subscription.
I am using 1 dyno Heroku hosting and async verification method.
The first thing you should check is the HTTP status code and the response body of your subscription request. I expect the code to be 422, indicating that the subscription failed, but the body will tell us exactly what is going on.
Also, do you see the verification request in the logs?
A common issue with Heroku is that if you use hub.verify=sync you will need 2 dynos, because you have two concurrent requests in this case...
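A minimal sketch of that first check from a Ruby console, assuming the standard PubSubHubbub subscription parameters and Superfeedr's hub endpoint; the credentials, topic, and callback URL are placeholders:

require 'net/http'
require 'uri'

uri = URI('https://push.superfeedr.com/')
req = Net::HTTP::Post.new(uri)
req.basic_auth 'superfeedr-login', 'superfeedr-token'   # placeholders
req.set_form_data(
  'hub.mode'         => 'subscribe',
  'hub.topic'        => 'http://example.com/feed.xml',
  'hub.callback'     => 'https://your-app.herokuapp.com/feeds/1/push',
  'hub.verify'       => 'async',
  'hub.verify_token' => 'the-feed-secret'
)
res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }

puts res.code  # 202 = accepted, verification pending; 422 = rejected
puts res.body  # on 422 the body usually says why it was rejected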
