How to trigger Shiny App - rstudio

I have a Shiny app with a chunk of code that preloads required data. This process takes a long time, but it only needs to run once each day.
The problem is that the shiny_preload_data() function only gets triggered when the first user accesses the app, and that user has to wait a long time for the data to be ready.
Is there a way to trigger shiny_preload_data() before the first user opens a browser to access the app?
Inside my server.R file, the code structure looks like this:
shiny_preload_data()

shinyServer(function(input, output, clientData, session) {
    ....
})

rm(list = ls())
library(shiny)

# Check every 10 seconds; session = NULL makes the timer app-wide
autoInvalidate <- reactiveTimer(10000, session = NULL)

GetData <- function() {
  if (!exists("nextCall")) {
    Data <<- mtcars
    # 86400 would be + 1 day; 120 seconds here for demonstration
    nextCall <<- Sys.time() + 120
  } else if (Sys.time() >= nextCall) {
    Data <<- iris
    # 86400 would be + 1 day; 120 seconds here for demonstration
    nextCall <<- Sys.time() + 120
    message(paste0("Next call at: ", nextCall))
  } else {
    return()
  }
}

ui <- fluidPage(mainPanel(tableOutput("table")))

server <- function(input, output, session) {
  observeEvent(autoInvalidate(), {
    GetData()
  })
  output$table <- renderTable({
    autoInvalidate()
    Data
  })
}

shinyApp(ui = ui, server = server)
As pointed out by @Taegost, it's best to do this via a separate mechanism such as a cron job; see the sketch below for one way to set that up.
If you want your app to refresh the data every x hours or minutes, or daily, you can write a function similar to mine above. I simply check whether the nextCall object exists and compare its timestamp with the current time.
For demonstration purposes I set the check interval (the reactiveTimer) to 10 seconds and fetch new data every 2 minutes.
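For the cron route, a minimal sketch, assuming the preload logic lives in a standalone R script; the script path, log path, and schedule are hypothetical:

# Hypothetical crontab entry: rebuild the preloaded data at 06:00 every day,
# before the first user arrives
0 6 * * * Rscript /path/to/shiny_preload_data.R >> /var/log/shiny_preload.log 2>&1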

Related

Vercel serverless function timeout, using functions that take more than 10 sec to execute

I am building a pet project, a multiplayer quiz, using Next.js deployed on Vercel.
Everything works perfectly on localhost, but when I deploy it on Vercel as a cloud function (in an API route) I run into the problem that a serverless function can only run for 10 seconds.
So I want to understand the best practice for handling this problem.
A version of the game loop in an API route looks like this:
export async function quizGameProcess(
    roomInitData: InitGameData,
    questions: QuestionInDB[],
) {
    let questionNumber = 0;
    let maxPlayerPoints = 0;
    const pointsToWin = 10;
    while (maxPlayerPoints < pointsToWin) {
        const currentQuestion = questions[questionNumber];
        // Wait 5 seconds before start
        await new Promise(resolve => setTimeout(resolve, 5000));
        // Show question to players for 15 seconds
        await questionShowInRoom(roomInitData, currentQuestion);
        await new Promise(resolve => setTimeout(resolve, 15000)); // <====== Everything works great until this moment
        // Show the right answer for 5 seconds
        await AnswerPushInRoom(roomInitData);
        await new Promise(resolve => setTimeout(resolve, 5000));
        maxPlayerPoints = await countPlayerPoints(roomInitData);
        // ...
        questionNumber++;
    }
}
So I need 15 seconds to show players the question, and the cloud function returns an error while invoking it.
The questionShowInRoom() function just changes a string in the database from:
room = {activeWindow: prepareToStart}
to
room = {activeWindow: question}
after 15 seconds it must change it to:
room = {activeWindow: showAnswer}
So the function must return something within 10 seconds, but once it returns, the route stops executing.
I can't use a VPS because the project must stay a single Next.js project folder, must be easy to maintain in one place, and must be free.
So if I split the code and create some 'worker', how should it be invoked? By some other route? Isn't that bad practice?
Or should the frontend just poll every second, invoking the route until the timestamp difference exceeds 15 seconds? That looks like a strange design.
So what is the best practice to handle this problem?
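For what it's worth, the polling variant from the previous paragraph could be sketched roughly like this; the client would call this route every second, and getRoom/updateRoom are hypothetical database helpers, not part of the original code:

type Room = { activeWindow: string; phaseStartedAt: number };

// Hypothetical persistence helpers, assumed to exist for this sketch:
declare function getRoom(roomId: string): Promise<Room>;
declare function updateRoom(roomId: string, patch: Partial<Room>): Promise<void>;

// Each invocation is short-lived: it advances the game's state machine based on
// elapsed time and returns immediately, so no call approaches the 10-second limit.
export async function advanceGame(roomId: string): Promise<string> {
    const room = await getRoom(roomId);
    const elapsed = Date.now() - room.phaseStartedAt;
    if (room.activeWindow === 'question' && elapsed >= 15000) {
        await updateRoom(roomId, { activeWindow: 'showAnswer', phaseStartedAt: Date.now() });
    } else if (room.activeWindow === 'showAnswer' && elapsed >= 5000) {
        await updateRoom(roomId, { activeWindow: 'question', phaseStartedAt: Date.now() });
    }
    return room.activeWindow;
}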

Starting Activity Indicator while Running a database download in a background thread

I am running a database download in a background thread. The threads work fine, and I execute a group wait before continuing.
The problem I have is that I need to start an activity indicator, and it seems that due to the dispatch_group_wait it gets blocked.
Is there a way to run such a heavy process, ensuring that all threads complete, while still allowing the activity indicator to run?
I start the activity indicator with (I also tried starting the indicator without the dispatch_async):
dispatch_async(dispatch_get_main_queue(), {
    activityIndicator.startAnimating()
})
After which, I start the thread group:
let group: dispatch_group_t = dispatch_group_create()
let queue: dispatch_queue_t = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0) // also tried QOS_CLASS_BACKGROUND
while iter > 0 {
    iter--
    dispatch_group_enter(group)
    dispatch_group_async(group, queue, {
        do {
            print("in queue \(iter)")
            temp += try query.findObjects()
            query.skip += query.limit
        } catch let error as NSError {
            print("Fetch failed: \(error.localizedDescription)")
        }
        dispatch_group_leave(group)
    })
}
// Wait for all threads to finish and proceed
dispatch_group_wait(group, DISPATCH_TIME_FOREVER)
As I am using Parse, I have modified the code as follows (pseudocode for simplicity):
trigger the activity indicator with startAnimating()
call the function that hits Parse
set an observer in the Parse class on an int to trigger an action when the value reaches 0
get the count of new objects in Parse
calculate how many loop iterations I need to pull all the data (using max objects per query = 1000, which is Parse's max)
while iterations > 0 {
    create a Parse query object
    set the query skip value
    use query.findObjectsInBackgroundWithBlock({
        pull objects and add to a temp array
        observer--
    })
    iterations--
}
When the observer hits 0, trigger a delegate to return to the caller
Works like a charm.
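A minimal Swift sketch of that pattern, assuming the Swift 2-era Parse API; the class name "MyClass" and the onBatchDone() completion hook are hypothetical:

var pending = iterations                        // the observed counter from the pseudocode
var temp = [PFObject]()
for i in 0..<iterations {
    let query = PFQuery(className: "MyClass")   // hypothetical class name
    query.limit = 1000                          // Parse's per-query maximum
    query.skip = i * 1000
    query.findObjectsInBackgroundWithBlock { objects, error in
        // Parse invokes this block on the main thread, so no extra locking is needed
        if let objects = objects {
            temp += objects
        }
        pending -= 1
        if pending == 0 {
            onBatchDone()                       // hypothetical: notify the delegate/caller
        }
    }
}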

Why does fetching data from SQLite block Node.js?

I want to fetch a huge amount of archive data (5-12 million rows) from an SQLite database and export it to a CSV file. While doing this, the whole server is blocked: no other connection can be handled by the server (for example, I couldn't open the website in another browser tab).
Node.js server part:
function exportArchiveData(response, query){
    response.setHeader('Content-type', 'text/csv');
    response.setHeader('Content-disposition', 'attachment; filename=archive.csv');
    db.fetchAllArchiveData(
        query.ID,
        function(error, data){
            if(!error)
                response.write(data.A + ';' + data.B + ';' + data.C + '\n');
        },
        function(error, retrievedRows){
            response.end();
        });
};
SQLite DB module:
module.exports.SS.prototype.fetchAllArchiveData = function (a, callback, complete) {
    var self = this;
    // self.sensorSqliteDb.all(
    self.sensorSqliteDb.each(
        'SELECT A, B, C ' +
        'FROM AD WHERE ' +
        ' A="' + a + '"' +
        ' ORDER BY C ASC' +
        ';',
        callback,
        complete
    );
};
I also created an index on AD with CREATE INDEX IAD ON AD(A, C), and EXPLAIN QUERY PLAN shows that this index is used by the SQLite engine.
Still, when I call exportArchiveData the server sends the data properly, but no other action can be performed in the meantime. I have a huge amount of data (5-12 million rows) to send, so it takes ~3 minutes.
How can I prevent this from blocking the whole server?
I thought that using each() with its callbacks would keep the server more responsive. Memory usage is also huge (about 3 GB and sometimes more). Can I prevent this somehow?
In answer to the comments, I would like to add some clarifications:
I use node-sqlite3 from developmentseed. It should be asynchronous and non-blocking, and it is: while the statement is being prepared I can request the main page. But when the server starts serving data, Node.js is blocked. I guess that's because the home page is a single request with a single callback, while there are millions of callback invocations via each() handling the archive data.
If I use the sqlite3 tool from the Linux command line I do not get rows immediately, but that is not the problem as long as node-sqlite3 is non-blocking.
Yes, I'm maxing out the CPU. What is worse, when I request twice as much data all the memory is used up, and then the server freezes forever.
OK, I handled this problem in the following way.
Instead of using Database#each I use Database#prepare with multiple Statement#get calls.
What is more, I found that running out of memory was caused by the response buffer filling up. So now I ask for the next row only once the previous one has arrived and the response buffer has room for new data. It works perfectly, and the server is no longer blocked (except while preparing the statement).
SQLite module:
module.exports.SS.prototype.fetchAllArchiveData = function (a) {
    var self = this;
    var statement = self.Db.prepare(
        'SELECT A, B, C ' +
        'FROM AD WHERE ' +
        ' A="' + a + '"' +
        ' ORDER BY C ASC' +
        ';',
        function(error){
            if(error != null){
                console.log(error);
            }
        }
    );
    return statement;
};
Server side:
function exportArchiveData(response, query){
    var respRet = null;
    var i = 0;
    var statement = db.fetchAllArchiveData(query.ID);
    var getcallback = function(err, row){
        if(err != null){
            console.log(err);
            return;
        }
        if(typeof(row) != 'undefined'){
            respRet = response.write(row.A + ';' + row.B + ';' + row.C + '\n');
            console.log(i++ + ' ' + respRet);
            if(respRet){
                statement.get(getcallback);
            }else{
                console.log('should wait on drain');
                // Wait for the response buffer to empty before fetching the next row;
                // once() removes the listener automatically after it fires
                response.once('drain', function(){
                    statement.get(getcallback);
                });
            }
        }else{
            response.end();
        }
    };
    statement.get(function(err, row){
        response.setHeader('Content-type', 'text/csv');
        response.setHeader('Content-disposition', 'attachment; filename=archive.csv');
        getcallback(err, row);
    });
};

Is measuring js execution time a way to tell how quickly the app is responding to requests?

I have something like a microtime() function at the very start of my Node.js / Express app.
function microtime (get_as_float) {
    // Returns either a string or a float containing the current time in seconds and microseconds
    //
    // version: 1109.2015
    // discuss at: http://phpjs.org/functions/microtime
    // + original by: Paulo Freitas
    // * example 1: timeStamp = microtime(true);
    // * results 1: timeStamp > 1000000000 && timeStamp < 2000000000
    var now = new Date().getTime() / 1000;
    var s = parseInt(now, 10);
    return (get_as_float) ? now : (Math.round((now - s) * 1000) / 1000) + ' ' + s;
}
The code of the actual app looks something like this:
application.post('/', function(request, response) {
    var t1 = microtime(true);
    //code
    //code
    response.send(something);
    console.log("Time elapsed: " + (microtime(true) - t1));
});
Time elapsed: 0.00599980354309082
My question is, does this mean that the time from when a POST request hits the server to when a response is sent out is, give or take, ~0.005 s?
I've measured it client-side but my internet is pretty slow so I think there's some lag that has nothing to do with the application itself. What's a quick and easy way to check how quickly the requests are being processed?
Shameless plug here. I've written an agent that tracks the time usage for every Express request.
http://blog.notifymode.com/blog/2012/07/17/profiling-express-web-framwork-with-notifymode/
In fact when I first started writing the agent, I took the same approach. But I soon realized that it is not accurate. My implementation tracks the time difference between request and the response by substituting the Express router. That allowed me to add tracker functions. Feel free to give it a try.
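For comparison, a rough sketch of the plain-middleware approach (not the agent's actual implementation) that times every request without touching each handler:

var express = require('express');
var app = express();

app.use(function (req, res, next) {
    var start = process.hrtime();           // high-resolution start time
    res.on('finish', function () {          // fired once the response has been sent
        var diff = process.hrtime(start);
        var ms = diff[0] * 1e3 + diff[1] / 1e6;
        console.log(req.method + ' ' + req.url + ' took ' + ms.toFixed(3) + ' ms');
    });
    next();
});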

Django Session Persistent but Losing Data

I have been working for hours trying to understand the following problem: a user sends an Ajax request to dynamically add a form, and I record that the number of forms to read on submission has increased. Toward this end I use request.session['editing_foos'] = { 'prefix_of_form_elements' : pkey } so that I can associate them with the database for saving and loading (-1 is for new forms that haven't been saved yet).
However, when I use the following code (see bottom) I get the following bizarre output:
1st Click:
{} foousername
next_key 1
1
{u'1-foo': -1}
2nd Click:
{} foousername
next_key 1
1
{u'1-foo': -1}
3rd Request:
{} foousername
next_key 1
1
{u'1-foo': -1}
What the heck is going on?
id_fetcher = re.compile(r'\d')

@login_required
def ajax_add_foo(request):
    def id_from_prefix(key):
        return int(id_fetcher.search(key).group(0))

    if 'editing_foos' not in request.session:
        print "reinitializing"
        request.session['editing_foos'] = {}
    print request.session['editing_foos'], request.user
    keys = request.session['editing_foos'].keys()
    if len(keys) == 0:
        next_key = 1
    else:
        print [id_from_prefix(key) for key in keys]
        next_key = max([id_from_prefix(key) for key in keys]) + 1
    print "next_key", next_key
    fooform = FooForm(prefix=next_key)
    print next_key
    request.session['editing_foos'].update({create_prefix(FooForm, next_key): -1})  # This foo is new and has no pkey
    print request.session['editing_foos']
    return render_to_response('bar/foo_fragment.html',
                              {'fooform': fooform},
                              context_instance=RequestContext(request))
Thank you all very much!
Note: This is a followup to a previous question concerning the same source code.
I don't think I completely understand the question, but you may want to take a look at which session engine you're using.
If you're using the cache session engine, you need to make sure you have caching properly set up (for instance, the dummy cache would just throw out your session data).
Another possibility is that your session isn't being saved because you're not changing the session itself; you're changing a mutable object that is stored in the session. You can try forcing the session to save by adding this somewhere in your view:
request.session.modified = True
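To illustrate the mutable-object pitfall, a minimal sketch adapted to the view above, with prefix standing in for the computed key:

# Mutating a dict stored in the session does NOT mark the session as dirty,
# so the change is silently dropped on the next request:
request.session['editing_foos'].update({prefix: -1})

# Either force a save:
request.session.modified = True

# ...or reassign the key so Django detects the change by itself:
editing_foos = request.session['editing_foos']
editing_foos[prefix] = -1
request.session['editing_foos'] = editing_foos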
