My Google Apps Script web app has recently been hitting QPS limits. What would be a good way to improve performance?
I have about 50 active users. I use a 15,000-row Google Spreadsheet as a database, and my app serves JSON data requested by users from this spreadsheet. I use long-polling to keep the connection alive for 5 minutes and close it if no update to the spreadsheet happens; then the client reconnects. The web app is published to be executed as me.
My polling works like this:
function doGet(e) {
  var cache = CacheService.getScriptCache();
  var userVersion = parseInt(e.parameter.userVersion, 10);
  var runningTime = 0;
  while (runningTime < 300001) {
    var currentServerVersion = parseInt(cache.get("currentVersion"), 10);
    if (userVersion < currentServerVersion) {
      var returnData = [];
      for (var i = userVersion + 1; i <= currentServerVersion; i++) {
        var newData = cache.get(i);
        if (newData != null) { returnData.push(JSON.parse(newData)); }
      }
      return ContentService
        .createTextOutput(JSON.stringify({ currentServerVersion: currentServerVersion, data: returnData }))
        .setMimeType(ContentService.MimeType.JSON);
    } else {
      Utilities.sleep(20000);
    }
    runningTime = calculateRunningTime();
  }
}
What I have tried so far:
1) I optimized requests with CacheService to reduce calls to the Spreadsheet (a sketch of the pattern is below). It helped for a few months, but now I'm getting QPS errors more and more often.
2) Asking the Google team about quotas. They explained to me that there are no published quotas/limits for simultaneous executions and that they are subject to change without notice. They advised further use of CacheService and better error handling.
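For context, here is roughly how the cache gets filled on my side (a simplified sketch, not my exact code; getChangedRowAsObject is an illustrative placeholder for reading the edited row):
function onSheetChange(e) {
  var cache = CacheService.getScriptCache();
  var lock = LockService.getScriptLock();
  lock.waitLock(30000); // serialize version bumps across concurrent changes
  try {
    var version = parseInt(cache.get("currentVersion"), 10) || 0;
    var newVersion = version + 1;
    var row = getChangedRowAsObject(e); // placeholder: builds a JSON-able object from the changed row
    cache.put(String(newVersion), JSON.stringify(row), 21600); // 21600 s = 6 h, the CacheService maximum
    cache.put("currentVersion", String(newVersion), 21600);
  } finally {
    lock.releaseLock();
  }
}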
I'm thinking of switching from long-polling to short-polling, but that feels like a step backwards. Should I try to optimize performance further, or move to another service?
Would using "execute the app as the user accessing the app" help? (All users should share the same database.)
Is the Apps Script API Executable different from a web app? It looks like it might fit, but I'm not sure whether they share the same QPS quotas.
I'm also considering a GAE service, but I'd like to avoid going over the free quota.
Any advice will be much appreciated!
I think the following part can be improved. When data is retrieved from CacheService, getAll() is more efficient than get(). I have measured the difference before: it was about 890 times faster than calling get() in a loop. If the number of values retrieved from the cache is large, I think improving this part is important for performance.
Your script:
var returnData = []
for (var i = userVersion + 1; i <= currentServerVersion; i++) {
  var newData = cache.get(i)
  if (newData != null) { returnData.push(JSON.parse(newData)) }
}
Improved script:
var keys = [];
for (var i = userVersion + 1; i <= currentServerVersion; i++) {
  keys.push(String(i)); // cache keys are strings, so the version numbers are converted
}
var all = cache.getAll(keys); // one round trip instead of one get() per key
var returnData = [];
for (var key in all) {
  if (all[key] != null) { returnData.push(JSON.parse(all[key])); }
}
Since I cannot see your data, I could not verify this by running it, so if errors occur, please tell me.
If I have misunderstood your question, I apologize.
Related
I have a Google Apps Script that uses onEdit() to add a datestamp to Column A any time Column B is edited. It's pretty much the exact scenario asked in How to ensure onEdit functions do not miss-fire.
Even with a completely empty/new spreadsheet and no other processes in the script, the execution duration is about 1 second per event trigger, and it actually takes close to 2 seconds before I see the datestamp appear in Column A.
In the solution provided in the above linked question (from 2019), the runtime was reported to be about 0.06 seconds, almost 20 times faster than what I'm experiencing. I see the same slow speed (~1 sec/event) even when using the exact code supplied in that solution (see below).
Has GAS slowed down in the last few years? Is there something else that might be going on that would cause the slower runtime? I know 1 second isn't exactly "slow", but Column B is frequently edited, sometimes faster than once per second.
function onEdit(event) {
var sh = event.source.getActiveSheet();
if (sh.getName() === 'Dolly Returns') {
var col = event.range.getColumn();
if (col === 2) {
var row = event.range.getRow();
sh.getRange(row, 1).setValue(new Date());
}
}
}
Google Apps Script has not slowed down over the years.
Apps Script runs on Google's servers, and the resources available on those servers vary all the time. Further, the "0.06 seconds" you quote was most likely timed by a script running on a server, while the "2 seconds" you mention is likely the time you perceive when looking at the Google Sheets user interface. It takes time for script updates to show up in your browser, and that probably explains almost all of the difference.
Apps Script is nowadays based on the V8 JavaScript engine, which is much faster than the Rhino engine of the days of yore. However, SpreadsheetApp and Sheets API calls remain very slow, and those calls are what a Sheets script project typically spends almost all of its runtime on.
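To see how much the call overhead matters, compare reading cells one at a time with one batched getValues() call (a generic sketch; the sheet name and range are just examples):
// Slow: one SpreadsheetApp call per cell.
function sumColumnSlow() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Sheet1'); // example name
  var total = 0;
  for (var row = 1; row <= 1000; row++) {
    total += sheet.getRange(row, 1).getValue(); // 1000 API calls
  }
  return total;
}
// Fast: one call for the whole range, then plain JavaScript.
function sumColumnFast() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Sheet1'); // example name
  var values = sheet.getRange(1, 1, 1000, 1).getValues(); // 1 API call
  return values.reduce(function (sum, row) { return sum + row[0]; }, 0);
}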
The onEdit(e) function you quote is inefficient because it calls two API methods every time any value in the spreadsheet is edited. When the edit happens on the 'Dolly Returns' sheet, it calls yet another API method, and when it happens in column B in that sheet, it calls yet another API method before doing its thing.
To optimize it, use the event object, like this:
function onEdit(e) {
let sheet;
if (e.range.columnStart !== 2
|| (sheet = e.range.getSheet()).getName() !== 'Dolly Returns') {
return;
}
sheet.getRange(e.range.rowStart, 1).setValue(new Date());
}
This way, the function will not call any API methods for edits that happen outside of column B. See these onEdit(e) best practices.
Try this:
function onEdit(e) {
var sh = e.range.getSheet();
if (sh.getName() == 'Dolly Returns' && e.range.columnStart == 2) {
sh.getRange(e.range.rowStart, 1).setValue(new Date());
}
}
Using the event object, rather than calling methods to look up the row and column, is much faster, since that data is already in the event object.
I have a WCF named-pipe service that receives a byte array and writes it into an SQLite DB.
When I moved the SQLite insert logic into the WCF service, the write performance decreased by almost half.
I went through various recommendations online, but nothing seems to help.
My current configuration looks like this:
pipeBinding.MaxBufferPoolSize = 5000000;
pipeBinding.MaxBufferSize = 5000000;
pipeBinding.MaxReceivedMessageSize = 5000000;
pipeBinding.ReaderQuotas.MaxArrayLength = 5000000;
pipeBinding.Security.Transport.ProtectionLevel = ProtectionLevel.None;
More tweaking recommendations would be more than welcome.
Using protobuf helped increase the speed; however, the most time-consuming action was a SUM over the SQLite table, so I had to change the structure of my DB.
Could someone help me read this New Relic summary and these trace details? The following screenshots show the trace for a single transaction that does not issue any queries to the database. It is just a simple request with a few lines of Scala template code, which renders an HTML page and returns it to the client. This is just one transaction currently running in production; production has plenty of more complex transactions running which make lots of external calls to Mongo, Maria, a queue, etc.
Does the trace reveal anything about where the bottleneck could be? Are we, for example, running out of threads or workers? As I said, most of the transactions make lots of external web calls, which might tie up a single thread for quite a long time. How can one actually find out whether a Play application is running out of threads or workers? We are using Play 2.1.4.
What actually happens in the following calls?
Promise.apply 21.406ms
Async Wait 21.406ms
Actor.tell 48.366ms
PlayDefaultUpstreamHandler 6.292ms
Edit:
What is the purpose of the following calls? They have very high average call times.
scala.concurrent.impl.CallbackRunnable.run()
scala.concurrent.impl.Future$PromiseCompletingRunnable.run()
org.jboss.netty.handler.codec.http.HttpRequestDecoder.unfoldAndFireMessageReceived()
Edit:
play {
akka {
event-handlers = ["akka.event.slf4j.Slf4jEventHandler"]
loglevel = WARNING
actor {
default-dispatcher = {
fork-join-executor {
parallelism-min = 350
parallelism-max = 350
}
}
exports = {
fork-join-executor {
parallelism-min = 10
parallelism-max = 10
}
}
}
}
}
I'm not sure if this will help you one year later, but I think the performance problems you were hitting are not related to Play, Akka, or Netty.
The problem will be in your business logic or in the database access. The big times that you see for PromiseCompletingRunnable and unfoldAndFireMessageReceived are misleading; these times are reported by New Relic in a wrong and misleading way. Please read this post:
Extremely slow play framework 2.3 request handling code
I faced a similar problem, and mine was in the database, but New Relic reported big times in Netty.
I hope this helps you even now.
I need to design a rate-limiter service for throttling requests.
For every incoming request, a method will check whether the number of requests per second has exceeded the limit. If it has, the method will return the amount of time the request needs to wait before being handled.
I'm looking for a simple solution which just uses the system tick count and RPS (requests per second). It should not use a queue or complex rate-limiting algorithms and data structures.
Edit: I will be implementing this in C++. Also, note that I don't want to use any data structures to store the requests currently being executed.
API would be like:
if (!RateLimiter.Limit())
{
do work
RateLimiter.Done();
}
else
reject request
The most common algorithm used for this is the token bucket. There is no need to invent something new; just search for an implementation for your technology/language.
If your app is highly available / load balanced, you might want to keep the bucket information in some sort of persistent storage. Redis is a good candidate for this.
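For illustration, a minimal in-process token bucket is just a counter and a timestamp (a sketch in JavaScript to keep it short; the same arithmetic ports directly to C++, and the rate/capacity values are examples):
function TokenBucket(ratePerSecond, capacity) {
  this.rate = ratePerSecond; // tokens added per second
  this.capacity = capacity;  // maximum burst size
  this.tokens = capacity;
  this.lastRefill = Date.now();
}
// Returns 0 if the request may proceed, otherwise the estimated
// number of milliseconds to wait until a token becomes available.
TokenBucket.prototype.tryAcquire = function () {
  var now = Date.now();
  this.tokens = Math.min(this.capacity,
      this.tokens + ((now - this.lastRefill) / 1000) * this.rate);
  this.lastRefill = now;
  if (this.tokens >= 1) {
    this.tokens -= 1;
    return 0;
  }
  return Math.ceil(((1 - this.tokens) / this.rate) * 1000);
};
// Usage: var bucket = new TokenBucket(100, 100); if (bucket.tryAcquire() === 0) { /* handle request */ }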
I wrote Limitd, which is a different approach: a daemon for limits. The application asks the daemon, through a limitd client, whether the traffic is conformant. The limit is configured on the limitd server, and the app is agnostic to the algorithm.
Since you give no hint of language or platform, I'll just give out some pseudocode.
Things you are going to need:
a list of currently executing requests
a way to get notified when a request is finished
And the code can be as simple as:
var ListOfCurrentRequests; //a list of the start times of current requests
var MaxAmountOfRequests; //just a limit
var AverageExecutionTime; //if the execution time is non-deterministic, the best we can do is keep an average
//for each request, either execute it or return the PROBABLE amount of time to wait
function OnNewRequest(Identifier)
{
    if(count(ListOfCurrentRequests) < MaxAmountOfRequests) //if we have room
    {
        Struct Tracker
        Tracker.Request = Identifier;
        Tracker.StartTime = Now; //save the start time
        AddToList(Tracker) //add to list
    }
    else
    {
        return CalculateWaitTime() //return the PROBABLE time it will take for a 'slot' to be available
    }
}
//when a request has ended, release a 'slot' and update the average execution time
function OnRequestEnd(Identifier)
{
    Tracker = RemoveFromList(Identifier);
    UpdateAverageExecutionTime(Now - Tracker.StartTime);
}
function CalculateWaitTime()
{
    //the one that started first is PROBABLY the first to finish
    Tracker = GetTheOneThatIsRunningTheLongest(ListOfCurrentRequests);
    //assume it will finish once the average execution time has elapsed
    ProbableTimeToFinish = AverageExecutionTime - (Now - Tracker.StartTime);
    return ProbableTimeToFinish
}
but keep in mind that there are several problems with this
it assumes that, after being told the wait time, the client will issue a new request once that time has passed. Since the time is an estimate, you cannot use it to delay execution, or you can still overflow the system.
since you are not keeping a queue and delaying the requests, a client can end up waiting longer than it actually needs to.
and lastly, since you do not want to keep a queue to prioritize and delay the requests, you can get a livelock, where you tell a client to return later, but when it returns someone else has already taken its spot, and it has to return yet again.
so the ideal solution would be an actual execution queue, but since you don't want one... I guess this is the next best thing.
According to your comments, you just want a simple (not very precise) requests-per-second gate. In that case the code can be something like this:
var CurrentRequestCount;
var MaxAmountOfRequests;
var CurrentTimestampWithPrecisionToSeconds
function CanRun()
{
    if(Now.AsSeconds > CurrentTimestampWithPrecisionToSeconds) //a second has passed, reset the counter
    {
        CurrentRequestCount = 0;
        CurrentTimestampWithPrecisionToSeconds = Now.AsSeconds; //remember which second we are counting
    }
    if(CurrentRequestCount >= MaxAmountOfRequests)
        return false;
    CurrentRequestCount++
    return true;
}
This doesn't seem like a very reliable way to control anything, but I believe it's what you asked for.
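A minimal sketch of how this per-second counter could sit behind the Limit()/Done() API from the question (JavaScript for brevity, but it ports trivially to C++; Done() is a no-op here because this approach tracks no in-flight requests, and the limit value is an example):
var maxRequestsPerSecond = 100; // example limit
var currentSecond = 0;
var currentRequestCount = 0;
var RateLimiter = {
  // Returns true when the request must be rejected, so callers can
  // keep the if (!RateLimiter.Limit()) { ... } pattern from the question.
  Limit: function () {
    var nowSeconds = Math.floor(Date.now() / 1000);
    if (nowSeconds > currentSecond) { // a new second has started: reset
      currentSecond = nowSeconds;
      currentRequestCount = 0;
    }
    if (currentRequestCount >= maxRequestsPerSecond) {
      return true; // over the limit: reject
    }
    currentRequestCount++;
    return false; // under the limit: proceed
  },
  Done: function () {} // nothing to release in this approach
};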
Titanium SDK version: 1.6.1
iPhone SDK version: 4.2
I am trying to build a solution to cache JSON calls. I have made a first attempt that does the job, but is there a better solution? I am using text files to save the JSON output; is this OK performance-wise?
http://pastie.org/1734763
Thankful for all feedback!
I think that'd be OK. As long as the files aren't massive in number/size, it should perform quite well.
The other approach you could try, if you decide you're not happy with performance or want to maintain less code, is to use app properties storage, which persists data beyond app sessions:
var parsed = JSON.parse(this.responseText); // the expires field lives inside the JSON, so parse it first
Titanium.App.Properties.setString('jsonResponse', this.responseText);
Titanium.App.Properties.setInt('expires', parsed.expires);
Then, before you make your request, you can check whether the cache is stale:
var expires = Titanium.App.Properties.getInt('expires');
var current_time = new Date().getTime(); // current time in milliseconds; make sure 'expires' uses the same unit
if (expires > current_time) {
  // Cache is still valid
  var response = Titanium.App.Properties.getString('jsonResponse');
  var obj = JSON.parse(response);
}
else {
  // Cache is stale - query for new data
}
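Putting the two halves together, the whole read-through flow could look something like this (a sketch; SOME_URL is a placeholder, and the response JSON is assumed to carry an expires timestamp in milliseconds):
function getData(onData) {
  var expires = Titanium.App.Properties.getInt('expires') || 0;
  if (expires > new Date().getTime()) {
    // Cache is still valid: serve the stored response
    onData(JSON.parse(Titanium.App.Properties.getString('jsonResponse')));
    return;
  }
  // Cache is stale: fetch fresh data, store it, then serve it
  var client = Titanium.Network.createHTTPClient({
    onload: function () {
      var parsed = JSON.parse(this.responseText);
      Titanium.App.Properties.setString('jsonResponse', this.responseText);
      Titanium.App.Properties.setInt('expires', parsed.expires);
      onData(parsed);
    }
  });
  client.open('GET', 'SOME_URL'); // placeholder URL
  client.send();
}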