I have a Google Apps Script that uses onEdit() to add a datestamp to Column B any time Column A is edited. It's pretty much the exact scenario asked in How to ensure onEdit functions do not miss-fire.
Even with a completely empty/new spreadsheet and no other processes in the script, the execution duration is about 1 second per event trigger. And it actually takes close to 2 seconds before I see the datestamp appear in Column B.
In the solution provided in the above linked question (from 2019), the runtime was reported to be about 0.06 seconds. That is almost 20 times faster than what I'm experiencing. I see the same slow (~1 sec/event) speed even when using the exact code supplied in that solution (see below).
Has GAS slowed down in the last few years? Is there something else going on that would cause the slower runtime? I know 1 second isn't exactly "slow", but Column A is frequently edited, sometimes faster than once per second.
function onEdit(event) {
  var sh = event.source.getActiveSheet();
  if (sh.getName() === 'Dolly Returns') {
    var col = event.range.getColumn();
    if (col === 2) { // react only to edits in the watched column
      var row = event.range.getRow();
      sh.getRange(row, 1).setValue(new Date()); // stamp the edited row
    }
  }
}
Google Apps Script has not slowed down over the years.
Apps Script runs on Google's servers, and the resources available on those servers vary all the time. Further, the "0.06" seconds you quote was most likely timed through a script that runs on a server, while the "2 seconds" you mention is likely the time you perceive when you are looking at the Google Sheets user interface. It takes time for script updates to show up in your browser. That probably explains almost all of the difference.
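For what it's worth, you can measure the server-side duration yourself instead of judging by the UI; a minimal sketch (the same duration also shows up in the Apps Script executions dashboard):
function onEdit(e) {
  const start = Date.now();
  // ... the actual handler body ...
  console.log('onEdit ran for ' + (Date.now() - start) + ' ms');
}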
Apps Script is nowadays based on the V8 JavaScript engine, which is much faster than the Rhino engine of the days of yore. However, SpreadsheetApp and Sheets API calls remain very slow, and those calls are where a Sheets script project typically spends almost all of its runtime.
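The usual mitigation is batching: one call that moves a block of data is much cheaper than many calls that move single cells. An illustrative sketch (not from the question; the range is arbitrary):
function readColumnA() {
  const sheet = SpreadsheetApp.getActiveSheet();
  // Slow: one API round trip per cell.
  // for (let r = 1; r <= 100; r++) { values.push(sheet.getRange(r, 1).getValue()); }
  // Fast: a single batched read of the same block.
  const values = sheet.getRange(1, 1, 100, 1).getValues();
  return values;
}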
The onEdit(e) function you quote is inefficient because it calls two API methods every time any value in the spreadsheet is edited. When the edit happens on the 'Dolly Returns' sheet, it calls yet another API method, and when it happens in column B in that sheet, it calls yet another API method before doing its thing.
To optimize it, use the event object, like this:
function onEdit(e) {
  let sheet;
  if (e.range.columnStart !== 2
      || (sheet = e.range.getSheet()).getName() !== 'Dolly Returns') {
    return;
  }
  sheet.getRange(e.range.rowStart, 1).setValue(new Date());
}
This way, the function will not call any API methods for edits that happen outside of column B. See these onEdit(e) best practices.
Try this:
function onEdit(e) {
  var sh = e.range.getSheet();
  if (sh.getName() == 'Dolly Returns' && e.range.columnStart == 2) {
    sh.getRange(e.range.rowStart, 1).setValue(new Date());
  }
}
Using the event object, as opposed to method calls, to get the row and column is much faster, since the data is already in the event object.
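For illustration, the difference is between reading plain properties that arrive with the event and making method calls against the spreadsheet:
function onEdit(e) {
  // Already present on the event object; no round trip needed.
  var row = e.range.rowStart;
  var col = e.range.columnStart;
  // By contrast, each of these is a SpreadsheetApp method call:
  // var row = e.range.getRow();
  // var col = e.range.getColumn();
}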
I am currently making a turn-based strategy game with Laravel (MySQL with the InnoDB engine) and want to make sure that I don't have bugs due to race conditions, duplicate requests, bad actors, etc.
Because these kinds of bugs are hard to test, I wanted to get some clarification.
Many actions in the game can only occur once per turn, like buying a new unit. Here is a simplified bit of code for purchasing a unit.
$player = Player::find($player_id);

if ($player->gold >= $unit_price && $player->has_purchased == false) {
    $player->has_purchased = true;
    $player->gold -= $unit_price;
    $player->save();

    $unit = new Unit();
    $unit->player_id = $player->id;
    $unit->save();
}
So my concern would be if two threads both made it past the if statement and then executed the block of code at the same time.
Is this a valid concern?
And would the solution be to wrap everything in a database transaction like https://betterprogramming.pub/using-database-transactions-in-laravel-8b62cd2f06a5 ?
This means that a good portion of my code will be wrapped in database transactions, because I have a lot of instances that are variations of the above code for different actions.
Also, there is a situation where multiple users will be able to update a value in the database, so I want to avoid a case where two users increment the value at the same time and it only gets incremented once.
Since you are using Laravel to presumably develop a web-based game, you can expect multiple concurrent connections to occur. A transaction is just one part of the equation. Transactions ensure operations are performed atomically; in your case, it ensures that both the player and unit saves succeed or both fail together, so you won't have a situation where the gold is deducted but the unit is not granted.
However, there is another facet to this: if there is a real possibility of two separate requests for the same player coming in concurrently, then you may also encounter a race condition. A transaction is not a lock, so two transactions can happen at the same time. The implication (in your case) is that two checks happen on the same player instance to ensure enough gold is available, both succeed, and both deduct the same gold, yet two distinct units are granted at the end (i.e. item duplication). To avoid this, you'd use a lock to prevent other threads from obtaining the same player row/model, so your full code would be:
DB::transaction(function () use ($player_id, $unit_price) {
    // lockForUpdate() issues SELECT ... FOR UPDATE, so concurrent
    // transactions block here until this one commits or rolls back
    $player = Player::where('id', $player_id)->lockForUpdate()->first();

    if ($player->gold >= $unit_price && $player->has_purchased == false) {
        $player->has_purchased = true;
        $player->gold -= $unit_price;
        $player->save();

        $unit = new Unit();
        $unit->player_id = $player->id;
        $unit->save();
    }
});
This will ensure any other threads trying to retrieve the same player will need to wait until the lock is released (which happens when the first transaction commits or rolls back).
There are more nuances to deal with here as well, like a player sending a duplicate request by double-clicking, for example, and that can get a bit more complex.
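For example, one common first line of defense against double-clicks is a client-side guard that drops duplicate submissions while a request is in flight. A minimal sketch in JavaScript (the endpoint and names are illustrative; the server-side lock above remains the real safety net):
let purchaseInFlight = false;

async function buyUnit(unitId) {
  if (purchaseInFlight) return; // ignore duplicate clicks
  purchaseInFlight = true;
  try {
    await fetch('/units/buy', { // hypothetical endpoint
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ unitId: unitId }),
    });
  } finally {
    purchaseInFlight = false;
  }
}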
For your purchase system, it's advisable to use DB::transaction, since it protects you from inconsistent records. Check out the Laravel docs for more information on this: https://laravel.com/docs/9.x/database#database-transactions As for reactive data you need to keep track of, bind a variable to that data in your front end, then use the variable to update your DB records.
Use this pattern when you need to exit if any exception or error occurs: if an exception is thrown, the data is not saved and the whole transaction is rolled back. I recommend using transactions wherever you can. The basic format is:
DB::beginTransaction();

try {
    // database actions like create, update etc.
    DB::commit(); // finally commit to database
} catch (\Exception $e) {
    DB::rollBack(); // roll back if any error occurs
    // something went wrong
}
See the laravel docs here
My Google Apps Script web app has recently been hitting QPS limits. What would be a better way to improve performance?
I have about 50 active users. I use a 15,000-row Google spreadsheet as a database, and my app serves JSON data requested by users from this spreadsheet. I use long polling to keep the connection alive for 5 minutes and close it if no update to the spreadsheet happens; then the client reconnects. The web app is published to be executed as me.
My polling works like this:
function doGet(e) {
  var cache = CacheService.getScriptCache();
  var userVersion = parseInt(e.parameter.userVersion, 10);
  var runningTime = 0;
  while (runningTime < 300001) { // long-poll for up to 5 minutes
    var currentServerVersion = parseInt(cache.get("currentVersion"), 10);
    if (userVersion < currentServerVersion) {
      var returnData = [];
      for (var i = userVersion + 1; i <= currentServerVersion; i++) {
        var newData = cache.get(String(i));
        if (newData != null) { returnData.push(JSON.parse(newData)); }
      }
      return ContentService
        .createTextOutput(JSON.stringify({ currentServerVersion: currentServerVersion, data: returnData }))
        .setMimeType(ContentService.MimeType.JSON);
    } else {
      Utilities.sleep(20000); // wait 20 s before checking again
    }
    runningTime = calculateRunningTime(); // helper defined elsewhere
  }
}
What I have tried so far:
1) I optimized requests with CacheService to reduce calls to the spreadsheet. It helped for a few months, but now I'm getting QPS errors more and more often.
2) Asking the Google team about quotas. They explained to me that there are no published quotas/limits for simultaneous executions and that they are subject to change without notice. They advised further use of CacheService and better error handling.
I'm thinking of switching from long polling to short polling, but it feels like a step backwards. Should I try to further optimize performance or move to another service?
Would trying to use "execute app as user accessing the app" help? (Users should use the same database.)
Is the Apps Script API Executable different from a Web App? It looks like it might fit, but I'm not sure if they share the same QPS quotas.
I'm also considering GAE service, but I'd like to avoid going over free quota.
Any advice will be much appreciated!
I think that the following part can be improved. When data is retrieved from the cache service, getAll() is more effective than get(). I have measured the difference: it was about 890 times faster than get(). If the amount of data retrieved from the cache service is large, I think that improving this part is important for performance.
Your script:
var returnData = [];
for (var i = userVersion + 1; i <= currentServerVersion; i++) {
  var newData = cache.get(String(i));
  if (newData != null) { returnData.push(JSON.parse(newData)); }
}
Improved script:
var keys = [];
for (var i = userVersion + 1; i <= currentServerVersion; i++) {
  keys.push(String(i)); // cache keys are strings
}
var values = cache.getAll(keys); // one round trip instead of one get() per key
var returnData = Object.keys(values).map(function (k) {
  return JSON.parse(values[k]); // getAll() omits missing keys, so no null check is needed
});
Since I cannot see your data, I could not test this, so if errors occur, please tell me.
If I misunderstand your question, I'm sorry.
I apologize if the way I'm asking this is why I haven't found an answer yet, but I've got a simple JS script that makes an AJAX request and gets data from an API and stores it.
I'd like to put that script on a server and have it run every 5 minutes, not client-side, but server-side.
I've found a resource called Later.js but I am not sure how to set it up on a server to automatically initialize and run.
Any help is greatly appreciated!!!
It's hard to know without your exact code.
You should be able to do something like this with plain JavaScript:
function foo() {
  // your code here
}

foo(); // run function once on startup
setInterval(foo, 5 * 60 * 1000); // and again every five minutes
Later.js is a cool library for describing schedules in plain text and computing when they should occur. If you have installed and required it via npm, you could use it like:
var sched = later.parse.text('every 5 mins');
later.setInterval(foo, sched); // later computes the occurrences and runs foo on schedule
As you can see, for a simple fixed interval you might as well just use standard JavaScript to solve your issue.
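For the server-side part of your question: if the script runs under Node.js, the same setInterval approach works as-is. Alternatively, a cron-style scheduler keeps the timing declarative; a minimal sketch assuming the node-cron package and a hypothetical fetchAndStore() wrapper around your existing AJAX/storage logic:
const cron = require('node-cron');

async function fetchAndStore() {
  const res = await fetch('https://api.example.com/data'); // placeholder URL; global fetch (Node 18+)
  const data = await res.json();
  // ...store the data...
}

cron.schedule('*/5 * * * *', fetchAndStore); // every 5 minutes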
I need to design a rate limiter service for throttling requests.
For every incoming request, a method will check whether the requests per second have exceeded the limit. If they have, it will return the amount of time the request needs to wait before being handled.
Looking for a simple solution which just uses the system tick count and RPS (requests per second). It should not use queues or complex rate-limiting algorithms and data structures.
Edit: I will be implementing this in C++. Also, note I don't want to use any data structures to store the requests currently being executed.
API would be like:
if (!RateLimiter.Limit())
{
    // do work
    RateLimiter.Done();
}
else
{
    // reject request
}
The most common algorithm used for this is token bucket. There is no need to invent a new thing, just search for an implementation on your technology/language.
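A minimal sketch of the idea, in JavaScript for brevity (the names are illustrative, and the same arithmetic ports directly to C++ using a monotonic tick count):
class TokenBucket {
  constructor(ratePerSecond, burst) {
    this.rate = ratePerSecond; // tokens refilled per second
    this.capacity = burst;     // maximum bucket size
    this.tokens = burst;       // start full
    this.lastRefill = Date.now();
  }

  // Returns 0 if the request may run now, otherwise the approximate
  // number of milliseconds to wait before retrying.
  limit() {
    const now = Date.now();
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((now - this.lastRefill) / 1000) * this.rate
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return 0;
    }
    return Math.ceil(((1 - this.tokens) / this.rate) * 1000);
  }
}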
If your app is highly available / load balanced, you might want to keep the bucket information in some sort of persistent storage. Redis is a good candidate for this.
I wrote Limitd, which is a different approach: a daemon for limits. The application asks the daemon, using a limitd client, whether the traffic is conformant. The limit is configured on the limitd server, and the app is agnostic to the algorithm.
Since you give no hint of language or platform, I'll just give some pseudocode.
Things you are going to need:
a list of currently executing requests
a way to get notified when a request is finished
And the code can be as simple as:
var ListOfCurrentRequests; // a list of the start times of current requests
var MaxAmountOfRequests;   // just a limit
var AverageExecutionTime;  // if the execution time is non-deterministic, the best we can do is keep an average

// for each request, either execute it or return the PROBABLE amount of time to wait
function OnNewRequest(Identifier)
{
    if (count(ListOfCurrentRequests) < MaxAmountOfRequests) // if we have room
    {
        Struct Tracker;
        Tracker.Request = Identifier;
        Tracker.StartTime = Now; // save the start time
        AddToList(Tracker);      // add to list
    }
    else
    {
        return CalculateWaitTime(); // return the PROBABLE time until a 'slot' is available
    }
}

// when a request has ended, release a 'slot' and update the average execution time
function OnRequestEnd(Identifier)
{
    Tracker = RemoveFromList(Identifier);
    UpdateAverageExecutionTime(Now - Tracker.StartTime);
}

function CalculateWaitTime()
{
    // the one that started first is PROBABLY the first to finish
    Tracker = GetTheOneThatIsRunningTheLongest(ListOfCurrentRequests);
    // assume it will finish once the average execution time has elapsed
    ProbableTimeToFinish = AverageExecutionTime - (Now - Tracker.StartTime);
    return ProbableTimeToFinish;
}
But keep in mind that there are several problems with this:
it assumes that, given the wait time, the client will issue a new request after that time has passed; since the time is an estimate, you cannot use it to delay execution, or you can still overflow the system
since you are not keeping a queue and delaying the requests, a client can end up waiting longer than it needs to
and lastly, since you do not want to keep a queue to prioritize and delay the requests, you can have a livelock, where you tell a client to return later, but when it returns someone has already taken its spot, and it has to return again.
So the ideal solution would be an actual execution queue, but since you don't want one... I guess this is the next best thing.
According to your comments, you just want a simple (not very precise) requests-per-second flag. In that case the code can be something like this:
var CurrentRequestCount;
var MaxAmountOfRequests;
var CurrentTimestampWithPrecisionToSeconds;

function CanRun()
{
    if (Now.AsSeconds > CurrentTimestampWithPrecisionToSeconds) // a second has passed; reset the counter
    {
        CurrentRequestCount = 0;
        CurrentTimestampWithPrecisionToSeconds = Now.AsSeconds;
    }

    if (CurrentRequestCount >= MaxAmountOfRequests)
        return false;

    CurrentRequestCount++;
    return true;
}
It doesn't seem like a very reliable method of control, but I believe it's what you asked for.
I'm developing a WinJS/HTML Windows Store application.
I have to run some checks periodically, so let me explain my need.
When I navigate to a specific page, I have to keep testing a condition (there is no specific time known in advance, so it is a loop).
When the condition is verified, the page should render a Flyout (popup) and then exit from the promise. (setTimeout needs a specific time, but I need to verify periodically.)
I have read MSDN but I can't fulfill this goal.
If someone has an idea how to do it, I will be thankful.
Any help will be appreciated.
setInterval can be used.
var timerId = setInterval(function () {
    // do your work
}, 2000); // timer event every 2 s

// invoke this when the timer needs to be stopped, or when you move
// out of the page (that is, in the unload() method)
clearInterval(timerId);
Instead of polling at specific intervals, you should check whether you can adapt your code to use events or databinding.
In WinJS you can use databinding to bind input values to a view model and then check in its setter functions if your condition has been fulfilled.
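A minimal sketch of that idea, assuming a view model wrapped with WinJS.Binding.as (the property name, threshold, and flyout id are illustrative):
// Hypothetical observable view model.
var vm = WinJS.Binding.as({ value: 0 });

// The handler runs whenever 'value' changes, so the condition is
// checked on updates instead of on a timer.
vm.bind("value", function (newValue) {
    if (newValue > 10) { // the condition to verify
        document.getElementById("myFlyout").winControl.show();
    }
});

// Elsewhere in the page, assignments trigger the check:
vm.value = 42;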
Generally speaking, setInterval et al should be avoided for anything that's not really time-related domain logic (clocks, countdowns, timeouts or such). Of course there are situations when there's no other way (like polling remote services), so this may not apply to your situation at hand.