Run a JS function on Server every 5 minutes - ajax

I apologize if the way I'm asking this is why I haven't found an answer yet, but I've got a simple JS script that makes an AJAX request and gets data from an API and stores it.
I'd like to put that script on a server and have it run every 5 minutes, not client-side, but server-side.
I've found a resource called Later.js but I am not sure how to set it up on a server to automatically initialize and run.
Any help is greatly appreciated!!!

It's hard to know without your exact code.
You should be able to do something with plain JavaScript, like:
function foo () {
  // your code here
}
foo(); // run function once on startup
setInterval(foo, 5 * 60 * 1000); // and again every five minutes
Later.js is a neat library for parsing human-readable schedules, but for a simple "every 5 minutes" it doesn't buy you much. If you have installed and required it via NPM, you could use it like:
var schedule = later.parse.text('every 5 min');
later.setInterval(foo, schedule);
As you can see, you might as well just use standard JS/jQuery to solve your issue.
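If the goal is to run this on a server rather than in a browser, the same pattern works under Node.js. Here's a minimal sketch, assuming Node 18+ (for the built-in fetch) and a made-up API URL and output file:
// fetch-every-5-min.js (run with: node fetch-every-5-min.js)
const fs = require('fs');

const API_URL = 'https://api.example.com/data'; // placeholder endpoint

async function fetchAndStore() {
  const res = await fetch(API_URL);  // global fetch is available in Node 18+
  const data = await res.json();
  fs.writeFileSync('data.json', JSON.stringify(data, null, 2));
}

fetchAndStore();                           // run once on startup
setInterval(fetchAndStore, 5 * 60 * 1000); // and again every five minutes
On a server you could also skip the long-running process entirely and let cron (or a similar scheduler) invoke the script every five minutes.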

Related

Has Google Apps Script slowed down in the last few years?

I have a Google Apps Script that uses onEdit() to add a datestamp to Column B any time Column A is edited. It's pretty much the exact scenario asked in How to ensure onEdit functions do not miss-fire.
Even with a completely empty/new spreadsheet and no other processes in the script, the execution duration is about 1 second per event trigger. And it actually takes close to 2 seconds before I see the datestamp appear in Column B.
In the solution provided in the above linked question (from 2019), the runtime was reported to be about 0.06 seconds. That's almost 20 times faster than what I'm experiencing. I see the same slow (~1 sec/event) speed even when using the exact code supplied in that solution (see below).
Has GAS slowed down in the last few years? Is there something else that might be going on that would cause the slower runtime? I know 1 second isn't exactly "slow", but Column A is frequently edited--sometimes faster than once/second.
function onEdit(event) {
  var sh = event.source.getActiveSheet();
  if (sh.getName() === 'Dolly Returns') {
    var col = event.range.getColumn();
    if (col === 2) {
      var row = event.range.getRow();
      sh.getRange(row, 1).setValue(new Date());
    }
  }
}
Google Apps Script has not slowed down over the years.
Apps Script runs on Google's servers, and the resources available on those servers vary all the time. Further, the "0.06" seconds you quote was most likely timed through a script that runs on a server, while the "2 seconds" you mention is likely the time you perceive when you are looking at the Google Sheets user interface. It takes time for script updates to show up in your browser. That probably explains almost all of the difference.
Apps Script is nowadays based on the V8 JavaScript engine, which is much faster than the Rhino engine of the days of yore. However, SpreadsheetApp and Sheets API calls remain very slow, and those calls are typically where a Sheets script project spends almost all of its runtime.
The onEdit(e) function you quote is inefficient because it calls two API methods every time any value in the spreadsheet is edited. When the edit happens on the 'Dolly Returns' sheet, it calls yet another API method, and when it happens in column B in that sheet, it calls yet another API method before doing its thing.
To optimize it, use the event object, like this:
function onEdit(e) {
  let sheet;
  if (e.range.columnStart !== 2
      || (sheet = e.range.getSheet()).getName() !== 'Dolly Returns') {
    return;
  }
  sheet.getRange(e.range.rowStart, 1).setValue(new Date());
}
This way, the function will not call any API methods for edits that happen outside of column B. See these onEdit(e) best practices.
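If you want to see how much of the delay is actual script runtime versus the time the browser takes to refresh the cell, here is a rough sketch (the timing lines are only for illustration; the output appears in the project's execution log):
function onEdit(e) {
  const start = Date.now();
  if (e.range.columnStart !== 2) return;
  const sheet = e.range.getSheet();
  if (sheet.getName() !== 'Dolly Returns') return;
  sheet.getRange(e.range.rowStart, 1).setValue(new Date());
  // Compare this number to the delay you perceive in the Sheets UI.
  console.log('onEdit ran in ' + (Date.now() - start) + ' ms');
}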
Try this:
function onEdit(e) {
  var sh = e.range.getSheet();
  if (sh.getName() == 'Dolly Returns' && e.range.columnStart == 2) {
    sh.getRange(e.range.rowStart, 1).setValue(new Date());
  }
}
Using the event object, as opposed to calling methods to get the row and column, is much faster since the data is already in the event object.

Cypress cy.wait() only waits for the first network call, need to wait for all calls

I would like to wait until the webpage is loaded with items. Each is retrieved with a GET.
And I would like to wait on all these items until the page is fully loaded. I already made an interception for these, named 4ItemsInEditorStub.
I have tried cy.wait('@4ItemsInEditorStub.all')
But this gives a timeout error at the end.
How can I let Cypress wait until all "4ItemsInEditorStub" interceptions have completed?
Trying to wait on alias.all won't work -- Cypress has no idea what .all means in this context, or what value it should have. Even after your 4 expected calls are completed, there could be a fifth call after that (Cypress doesn't know). alias.all should only be used with cy.get(), to retrieve all yielded calls by that alias.
Instead, if you know that it will always be four calls, you can just wait four times.
cy.wait('@4ItemsInEditorStub')
  .wait('@4ItemsInEditorStub')
  .wait('@4ItemsInEditorStub')
  .wait('@4ItemsInEditorStub');
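If you would rather not repeat yourself, the same four waits can be written with the Lodash times helper that Cypress bundles (the same Cypress._.times used in the last answer below):
Cypress._.times(4, () => {
  cy.wait('@4ItemsInEditorStub');
});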
You can either hard-code a long enough wait (i.e. cy.wait(3_000)) to cover the triggered request time and then use cy.get('@4ItemsInEditorStub.all')
cy.wait(10_000)
cy.get('@4ItemsInEditorStub.all')
// do some checks with the calls
or you can use unique intercepts and aliases to wait on all 4
cy.intercept('/your-call').as('4ItemsInEditorStub1')
cy.intercept('/your-call').as('4ItemsInEditorStub2')
cy.intercept('/your-call').as('4ItemsInEditorStub3')
cy.intercept('/your-call').as('4ItemsInEditorStub4')
cy.visit('')
cy.wait([
  '@4ItemsInEditorStub1',
  '@4ItemsInEditorStub2',
  '@4ItemsInEditorStub3',
  '@4ItemsInEditorStub4',
])
There is a package cypress-network-idle that makes the job simple
cy.waitForNetworkIdlePrepare({
  method: 'GET',
  pattern: '**/api/item/*',
  alias: 'calls',
})
cy.visit('/')
// now wait for the "@calls" to finish
cy.waitForNetworkIdle('@calls', 2000) // no further requests after 2 seconds
Installation
# install using NPM
npm i -D cypress-network-idle
# install using Yarn
yarn add -D cypress-network-idle
In cypress/support/e2e.js
import 'cypress-network-idle'
Network idle testing looks good, but you might find it difficult to set the right time period, which may change each time you run (depending on network speed).
Take a look at my answer here Test that an API call does NOT happen in Cypress.
Using a custom command, you can wait for a maximum number of calls without failing if there are actually fewer calls.
For example, if you have 7 or 8 calls, setting the maximum to 10 ensures you wait for all of them.
Cypress.Commands.add('maybeWaitAlias', (selector, options) => {
  const waitFn = Cypress.Commands._commands.wait.fn
  return waitFn(cy.currentSubject(), selector, options)
    .then((pass) => pass, (fail) => fail)
})
cy.intercept(...).as('allNetworkCalls')
cy.visit('/');
// up to 10 calls
Cypress._.times(10, () => {
  cy.maybeWaitAlias('@allNetworkCalls', { timeout: 1000 }) // only need short timeout
})
// get array of all the calls
cy.get('@allNetworkCalls.all')
  .then(calls => {
    console.log(calls)
  })

Cypress: how to wait for all requests to finish

I am using cypress to test our web application.
On certain pages there are different endpoint requests that are executed multiple times [e.g. GET /A, GET /B, GET /A].
What would be the best practice in Cypress to wait for all requests to finish and guarantee that the page has fully loaded?
I don't want to use a ton of cy.wait() commands to wait for all requests to be processed (there are a lot of different sets of requests on each page).
You can use the cy.route() feature from Cypress. Using this you can intercept all your GET requests and wait until all of them are executed:
cy.server()
cy.route('GET', '**/users').as('getusers')
cy.visit('/')
cy.wait('@getusers')
I'm sure this is not recommended practice but here's what I came up with. It effectively waits until there's no response for a certain amount of time:
function debouncedWait({ debounceTimeout = 3000, waitTimeout = 4000 } = {}) {
  cy.intercept('/api/*').as('ignoreMe');
  let done = false;
  const recursiveWait = () => {
    if (!done) {
      // set a timeout so if no response within debounceTimeout
      // send a dummy request to satisfy the current wait
      const x = setTimeout(() => {
        done = true; // end recursion
        fetch('/api/blah');
      }, debounceTimeout);
      // wait for a response
      cy.wait('@ignoreMe', { timeout: waitTimeout }).then(() => {
        clearTimeout(x); // cancel this wait's timeout
        recursiveWait(); // wait for the next response
      });
    }
  };
  recursiveWait();
}
According to the Cypress FAQ there is no definitive way, but I will share some solutions I use:
Use the jQuery syntax supported by Cypress:
$(document).ready(function() {
  // Code to run after it is ready
});
The problem is that after the initial load, some action on the page can initiate a second load.
Select an element, like an image or a select, and wait for it to load. The problem with this method is that some other element might need more time (see the sketch after this list).
Decide on a mandatory time you will wait for the API requests (I personally use 4000 for my app) and place a cy.wait(mandatoryWaitTime) where you need your page to be loaded.
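For the element-based option above, a minimal sketch (the selector and timeout are made up for illustration):
// Wait for an element that only renders once its data has arrived.
cy.get('[data-cy="items-list"] img', { timeout: 10000 })
  .should('be.visible')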
I faced the same issue with our large Angular application doing tens of requests as you navigate through it.
At first I tried what you are asking: to automatically wait for all requests to complete. I used https://github.com/bahmutov/cypress-network-idle as suggested by @Xiao Wang in this post. This worked and did the job, but I eventually realized I was over-optimizing my tests. Tests became slow, waiting for all kinds of calls to finish, even those that weren't needed at that point in time (like 3rd-party analytics etc.).
So I'd suggest not trying to wait for everything at a step, but instead finding the key API calls (you don't need to know the full path, even api/customers is enough) in your test step, use cy.intercept() and create an alias for it. Then use cy.wait() with your alias. The result is that you are waiting only when needed and only for the calls that really matter.
// At this point, there are lots of GET requests that need to finish in order to continue the test
// Intercept calls that contain a GET request with a request path containing /api/customer/
cy.intercept({ method: 'GET', url: '**/api/customer/**' }).as("customerData");
// Wait for all the GET requests with path containing /api/customer/ to complete
cy.wait("#customerData");
// Continue my test knowing all requested data is available..
cy.get(".continueMyTest").click()

Programmatically change database for heroku dataclips

We just upgraded our Heroku postgres database using the follower changeover method. We have over 50 dataclips attached to the old database, and now we need to move them over to the new database. However, doing them one by one will take a lot of time.
Is there a programmatic way to update the database a dataclip is attached to, perhaps with the CLI tools?
At least once the old database has been deprovisioned, you can now (as of March 2016) reattach them to another database:
Go to https://dataclips.heroku.com/clips/recoverable. It will display your old database and a set of 'orphaned' dataclips and you can choose to transfer them to another database (in my case the promoted follower from the changeover).
Note that this only affects the dataclips that you created, it does not affect the dataclips one of your team members created and that you only had access to. So they will have to go through this process as well.
Official devcenter article: https://devcenter.heroku.com/articles/dataclips#dataclip-recovery
Thanks to Heroku CSRF measures, programmatically updating data clips is much more difficult than you might expect. You'll need to suck it up and start clicking buttons by hand, or beg their support team to do it for you, which is just as difficult.
There is no official support for programmatically moving the dataclips. That being said, you can script it out against their HTTP API.
The base URL is https://dataclips.heroku.com/api/v1/. There are three relevant endpoints:
clips: /clips
resources (databases): /heroku_resources
move clip: /clips/:slug/move
Find the slug of the clip you want to move, find the resource id of the new database, and make a post to the move clip endpoint:
POST /api/v1/clips/fjhwieufysdufnjqqueyuiewsr/move
Content-Type: application/json
{"heroku_resource_id":"resource123456789@heroku.com"}
I had over 300 dataclips to move. I used the following technique to update them all (essentially reverse engineering the dataclips API).
Open Chrome with Web Developer tools, Network tab.
Log into Heroku Dataclips
Observe the network call which returns all the dataclips, in JSON (https://dataclips.heroku.com/api/v1/clips). Take this response and extract out all dataclip slugs.
Update the database for one dataclip. Observe the network call which does this (https://dataclips.heroku.com/api/v1/clips/:slug/move). Right click, Copy as cURL. This is the easiest way to get all the correct parameters, since the API uses cookies for authentication.
Write a script that loops through each dataclip slug, and shells out to curl. In Ruby, this looks like:
slugs = <paste ids here>.split("\n")
slugs.each do |slug|
  command = %Q(curl -v 'https://dataclips.heroku.com/api/v1/clips/#{slug}/move' -H 'Cookie: ...' --data '{"heroku_resource_id":"resource1234567@heroku.com"}')
  puts command
  system(command)
end
You can contact Heroku support, and they will bulk transfer the dataclips to your new database for you.
Batch working on dataclips
I've finally found a solution to work on my dataclips as a batch, using the JavaScript console and some scraping techniques. I needed it to retrieve every dataclip, but I guess it can be adapted as such:
// Go to the dataclip listing (https://data.heroku.com/dataclips).
// Then execute this script in your console.
// Be careful, this will focus a new window every 4 seconds, preventing
// you from working 4 seconds times the number of dataclips you have.
// Retrieve urls and titles
let dataclips = Array
  .from(document.querySelectorAll('.rt-td:first-child a'))
  .map(el => ({ url: el.href, title: el.innerText }))
/**
 * Allows waiting for a given timeout before execution.
 * @param {number} seconds
 */
const timeout = function(seconds) {
  return new Promise(resolve => {
    setTimeout(() => {
      resolve()
    }, seconds);
  })
}
/**
 * Here are all the changes you want to apply to every single
 * dataclip.
 * @param {object} window
 */
const applyChanges = function(window) {
}
// With a fast connection, 4 seconds is OK. Dial it down if you
// have errors.
const expectedLoadTime = 4000 // ms
// This is the main loop, windows are opened one by one to ensure focus and a
// correct loading time.
for (const dataclip of dataclips) {
  // This opens another window from the script, having access to its DOM.
  // See https://github.com/buonomo/kazoo for a funnier example usage!
  // And don't be shy to star and share :D
  const externWindow = window.open(dataclip.url)
  // A hack to wait for loading, this could be improved for sure.
  await timeout(expectedLoadTime)
  applyChanges(externWindow)
  externWindow.close()
}
You'd still have to implement applyChanges yourself, which I concede is a bit tedious, and I don't have time to do it now (if someone does, please share!). But at least it can be run on all of your dataclips in a single function.
For an example usage of this script, you can take a look at the gist I made to scrape every dataclip and its related errors.

Efficient way of executing 1 million AJAX requests

I want to call 1 million different URLs with AJAX.
What I did is (Javascript, jQuery used):
var numbers = [1, 2, 3...1000000]; // numbers.length = 1000000
$(function () {
  $.each(numbers, function(key, val) {
    $.ajax({
      url: '/getter.php',
      data: { id: val },
      success: function () {
        console.info(val);
      }
    });
  });
});
I loop over 1 million integers, passing each of them to my getter.php (which does something cool with those numbers).
The problem is that after ~1.5k requests, Google Chrome dies.
I know I'm doing this in an inefficient way, which is why I'm asking for help: how do I actually do it right? How do I send a GET request to a PHP script 1 million times (not necessarily with JavaScript!)?
You could use a persistent connection between your PHP script and the client requesting the data.
I think you are bumping into a limit on the time to live of each single request you are making. Also, HTTP works as a request-response cycle, so when you put it in your foreach statement, each request is processed on its own, like:
1. iteration: GET /getter.php with value 1 .... wait... Oh, there's a response
2. iteration: GET /getter.php with value 2 .... wait... Oh, another response
This is a long and wasteful process, as you might already have figured out.
Another approach would be to set up a persistent socket, which functions more like the TCP protocol:
1. open the connection
2. send all the data
3. close the connection
Have you considered trying a WebSocket?
Here are a few tutorials:
HTML5 WebSocket:
http://www.tutorialspoint.com/html5/html5_websocket.htm
PHP socket:
http://www.phpbuilder.com/articles/application-architecture/optimization/creating-real-time-applications-with-php-and-websockets.html
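As a rough client-side sketch of the persistent-connection idea (the endpoint URL and message format are made up, and the server would need a matching WebSocket handler instead of getter.php):
// Open one persistent connection and push all ids over it,
// instead of creating a million separate HTTP requests.
var socket = new WebSocket('ws://example.com/getter'); // hypothetical endpoint

socket.onopen = function () {
  for (var id = 1; id <= 1000000; id++) {
    socket.send(JSON.stringify({ id: id }));
    // In practice you would throttle here using socket.bufferedAmount
    // so the browser's send buffer doesn't grow unbounded.
  }
};

socket.onmessage = function (event) {
  console.info(event.data); // whatever the server sends back per id
};

socket.onerror = function (err) {
  console.error('WebSocket error', err);
};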
EDIT:
Also see this article on the difference between AJAX and websocket.
"AJAX is great if you aren’t in a hurry, but if you’re moving a high volume of data then the overhead of creating an HTTP connection every time is going to be a bottleneck. You need a persistent connection instead. In addition, AJAX always has to poll the server for data rather than receive it via push from the server. If you want speed and efficiency you need WebSockets."
http://blog.safe.com/2014/08/websockets-ajax-webhooks-comparison/
