I am trying to develop a webapp using Google Apps Script to be embedded into a Google Site which simply displays the contents of a Google Sheet and filters it using some simple parameters. For the time being, at least. I may add more features later.
I got a functional app, but found that filtering could often take a while; the client sometimes had to wait up to 5 seconds for a response from the server. I decided this was most likely because I was loading the spreadsheet by ID with the SpreadsheetApp class on every call.
I decided to cache the spreadsheet values in my doGet function using the CacheService and retrieve the data from the cache each time instead.
However, for some reason this has meant that what was a 2-dimensional array is now treated as a 1-dimensional array. And so, when displaying the data in an HTML table, I end up with a single column, with each cell occupied by a single character.
This is how I have implemented the caching; as far as I can tell from the API reference I am not doing anything wrong:
function doGet() {
  CacheService.getScriptCache().put('data', SpreadsheetApp
      .openById('####')
      .getActiveSheet()
      .getDataRange()
      .getValues());
  return HtmlService
      .createTemplateFromFile('index')
      .evaluate()
      .setSandboxMode(HtmlService.SandboxMode.IFRAME);
}
function getData() {
  return CacheService.getScriptCache().get('data');
}
This is my first time developing a proper application using GAS (I have used it in Sheets before). Is there something very obvious I am missing? I didn't see any type restrictions on the CacheService reference page...
CacheService stores Strings, so objects such as your two-dimensional array will be coerced to Strings, which may not meet your needs.
Use the JSON utility to take control of the results.
myCache.put( 'tag', JSON.stringify( myObj ) );
...
var cachedObj = JSON.parse( myCache.get( 'tag' ) );
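Applied to the functions from the question, a minimal sketch might look like this (same placeholder ID as in the question; a real version should also handle a cache miss):

function doGet() {
  var data = SpreadsheetApp
      .openById('####')
      .getActiveSheet()
      .getDataRange()
      .getValues();
  // CacheService only stores strings, so serialize the 2-D array first.
  CacheService.getScriptCache().put('data', JSON.stringify(data));
  return HtmlService
      .createTemplateFromFile('index')
      .evaluate()
      .setSandboxMode(HtmlService.SandboxMode.IFRAME);
}

function getData() {
  var cached = CacheService.getScriptCache().get('data');
  // get() returns null on a cache miss (e.g. after expiry), so guard before parsing.
  return cached ? JSON.parse(cached) : null;
}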
Cache expires. The put method without an expirationInSeconds parameter expires entries after 10 minutes. If you need your data to stay alive for more than 10 minutes, you need to specify an expirationInSeconds, and the maximum is 6 hours (21600 seconds). So, if you specifically do NOT want the data to expire, Cache might not be the best fit.
You can use Cache for something like controlling how long a user can be logged in.
You could also try using a global variable, which some people would tell you to never use. To declare a global variable, define the variable outside of any function.
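For instance, a minimal sketch of the global-variable approach (note that Apps Script re-initializes globals on every execution, so this only avoids repeated loads within a single request):

var cachedValues = null; // global: declared outside of any function

function getData() {
  // Load the sheet once per execution and reuse it afterwards.
  if (!cachedValues) {
    cachedValues = SpreadsheetApp.openById('####')
        .getActiveSheet()
        .getDataRange()
        .getValues();
  }
  return cachedValues;
}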
I am working on some caching using Redis in Node.js.
Here is my code implementation:
const cached = await redis.get(key);
if (!cached) {
  // Redis stores strings, so serialize the object before setting it
  await redis.set(key, JSON.stringify({ SomeValue: "SomeValue", SomeAnotherValue: "SomeAnotherValue" }));
}
return redis.get(key);
Up to here, everything works well.
But consider a situation where I need to get a value from a function call, set it in Redis, and then keep reading that same value from Redis whenever I want; in that case I don't need to call the function again and again to get the value.
But suppose the value has changed, or some more values have been added by my actual API call; now I need to call that function again to update the values in Redis under that same key.
But I don't know how I can do this.
Any help would be appreciated.
Thank you in advance.
First, your initial code has a bug: you should use the set-if-not-exists functionality that Redis provides natively (SET with the NX option, or the older SETNX command) instead of doing separate check-and-set calls.
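For example, a sketch using the NX option (assuming the ioredis client; node-redis supports an equivalent { NX: true } option, and someValue stands in for whatever you want to cache):

// SET ... NX only writes the key if it does not already exist, which
// avoids the race between the separate get and set calls above.
const wasSet = await redis.set(key, JSON.stringify(someValue), 'NX');
// wasSet is 'OK' if the key was written, or null if it already existed
const current = await redis.get(key);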
What you are describing is called cache invalidation, and it is one of the hardest problems in software development.
You need to 'notify' the fetchers in some way when the value changes so that they know it is time to grab the most up-to-date value.
One simple way would be to keep a dirty boolean flag that is set to true whenever the value is updated. When fetching, check that flag: if it is dirty, get the value from Redis and set the flag to false; otherwise return the previously fetched value.
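A minimal sketch of that idea (hypothetical updateValue/fetchValue names, assuming a Promise-based Redis client):

let dirty = true;      // true whenever the source value has changed
let lastValue = null;  // the value fetched from Redis on the last read

async function updateValue(redis, key, newValue) {
  await redis.set(key, JSON.stringify(newValue));
  dirty = true; // mark the local copy stale so the next fetch re-reads Redis
}

async function fetchValue(redis, key) {
  if (dirty) {
    lastValue = JSON.parse(await redis.get(key));
    dirty = false;
  }
  return lastValue;
}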
When I create a function like this:
v8::Function::New(<Isolate>, <C_Function>, <Data_Value>);
The Data_Value that I supply is useful for many things and I can access that when the function is called, with something like FunctionCallbackInfo->GetData().
But I have found no way to get back this data in a different scenario. Let's say I store that Function in a Persistent object, and then I would like to read which data is currently bound to it. Any ideas?
I don't think it's exposed via the API.
But there's an alternative:
manually construct a v8::FunctionTemplate
set its ->InstanceTemplate()->SetInternalFieldCount(num_fields)
get the v8::Function from the template with template->GetFunction(context),
now you should have function->InternalFieldCount() == num_fields
you can use function->SetInternalField(index, value) and function->GetInternalField(index) to store any data you want.
For complete examples, search for "SetInternalFieldCount" in V8's test-api.cc.
I’m experimenting with scripting a batch of OmniFocus tasks in JXA and running into some big speed issues. I don't think the problem is specific to OmniFocus or JXA; rather I think this is a more general misunderstanding of how getting objects works - I'm expecting it to work like a single SQL query that loads all objects in memory but instead it seems to do each operation on demand.
Here’s a simple example - let’s get the names of all uncompleted tasks (which are stored in a SQLite DB on the backend):
var tasks = Application('OmniFocus').defaultDocument.flattenedTasks.whose({completed: false})
var totalTasks = tasks.length
for (var i = 0; i < totalTasks; i++) {
  tasks[i].name()
}
[Finished in 46.68s]
Actually getting the list of 900 tasks takes ~7 seconds - already slow - but then looping and reading basic properties takes another 40 seconds, presumably because it's hitting the DB for each one. (Also, tasks doesn't behave like an array - it seems to be recomputed every time it's accessed.)
Is there any way to do this quickly - to read a batch of objects and all their properties into memory at once?
Introduction
With AppleEvents, the IPC technology that JavaScript for Automation (JXA) is built upon, the way you request information from another application is by sending it an "object specifier," which works a little bit like dot notation for accessing object properties, and a little bit like a SQL or GraphQL query.
The receiving application evaluates the object specifier and determines which objects, if any, it refers to. It then returns a value representing the referred-to objects. The returned value may be a list of values, if the referred-to object was a collection of objects. The object specifier may also refer to properties of objects. The values returned may be strings, or numbers, or even new object specifiers.
Object specifiers
An example of a fully-qualified object specifier written in AppleScript is:
a reference to the name of the first window of application "Safari"
In JXA, that same object specifier would be expressed:
Application("Safari").windows[0].name
To send an IPC request to Safari to ask it to evaluate this object specifier and respond with a value, you can invoke the .get() function on an object specifier:
Application("Safari").windows[0].name.get()
As a shorthand for the .get() function, you can invoke the object specifier directly:
Application("Safari").windows[0].name()
A single request is sent to Safari, and a single value (a string in this case) is returned.
In this way, object specifiers work a little bit like dot notation for accessing object properties. But object specifiers are much more powerful than that.
Collections
You can effectively perform maps or comprehensions over collections. In AppleScript this looks like:
get the name of every window of application "Safari"
In JXA it looks like:
Application("Safari").windows.name.get()
Even though this requests multiple values, it requires only a single request to be sent to Safari, which then iterates over its own windows, collecting the name of each one, and then sends back a single list value containing all of the name strings. No matter how many windows Safari has open, this statement only results in a single request/response.
For-loop anti-pattern
Contrast that approach to the for-loop anti-pattern:
var nameOfEveryWindow = []
var everyWindowSpecifier = Application("Safari").windows
var numberOfWindows = everyWindowSpecifier.length
for (var i = 0; i < numberOfWindows; i++) {
  var windowNameSpecifier = everyWindowSpecifier[i].name
  var windowName = windowNameSpecifier.get()
  nameOfEveryWindow.push(windowName)
}
This approach may take much longer, as it requires length+1 requests to get the collection of names.
(Note that the length property of collection object specifiers is handled specially, because collection object specifiers in JXA attempt to behave like native JavaScript Arrays. No .get() invocation is needed (or allowed) on the length property.)
Filtering, and why your code example is slow
The really interesting part of AppleEvents is the so-called "whose clause". This allows you to provide criteria with which to filter the objects whose values will be returned.
In the code you included in your question, tasks is an object specifier that refers to a collection of objects that have been filtered, using a whose clause, to include only uncompleted tasks. Note that this is still just a reference at this point; until you call .get() on the object specifier, it's just a pointer to something, not the thing itself.
The code you included then implements the for-loop anti-pattern, which is probably why your observed performance is so slow. You are sending length+1 requests to OmniFocus. Each invocation of .name() results in another AppleEvent.
Furthermore, you're asking OmniFocus to re-filter the collection of tasks every time, because the object specifier you're sending each time contains a whose clause.
Try this instead:
var taskNames = Application('OmniFocus').defaultDocument.flattenedTasks.whose({completed: false}).name.get()
This should send a single request to OmniFocus, and return an array of the names of each uncompleted task.
Another approach to try would be to ask OmniFocus to evaluate the "whose clause" once, and return an array of object specifiers:
var taskSpecifiers = Application('OmniFocus').defaultDocument.flattenedTasks.whose({completed: false})()
Iterating over the returned array of object specifiers and invoking .name.get() on each one would likely be faster than your original approach.
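A sketch of that second approach (same specifier as above):

var taskSpecifiers = Application('OmniFocus').defaultDocument.flattenedTasks.whose({completed: false})()
var names = []
for (var i = 0; i < taskSpecifiers.length; i++) {
  // Each element is already a concrete specifier, so the whose clause
  // is not re-evaluated on every iteration.
  names.push(taskSpecifiers[i].name.get())
}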
Answer
While JXA can get arrays of single properties of collections of objects, it appears that due to an oversight on the part of the authors, JXA doesn't support getting all of the properties of all of the objects in a collection.
So, to answer your actual question: with JXA, there is no way to read a batch of objects and all their properties into memory at once.
That said, AppleScript does support it:
tell app "OmniFocus" to get the properties of every flattened task of default document whose completed is false
With JXA, you have to fall back to the for-loop anti-pattern if you really want all of the properties of the objects, but we can avoid evaluating the whose clause more than once by pulling its evaluation outside of the for loop:
var tasks = []
var taskSpecifiers = Application('OmniFocus').defaultDocument.flattenedTasks.whose({completed: false})()
var totalTasks = taskSpecifiers.length
for (var i = 0; i < totalTasks; i++) {
  tasks[i] = taskSpecifiers[i].properties()
}
Finally, it should be noted that AppleScript also lets you request specific sets of properties:
get the {name, zoomable} of every window of application "Safari"
But there is no way with JXA to send a single request for multiple properties of an object, or collection of objects.
Try something like:
tell app "OmniFocus"
tell default document
get name of every flattened task whose completed is false
end tell
end tell
Apple event IPC is not OOP, it’s RPC + simple first-class relational queries. AppleScript obfuscates this, and JXA not only obfuscates it even worse but cripples it too; but once you learn to see through the faux-OO syntactic nonsense it makes a lot more sense. This and this may give a bit more insight.
[ETA: Omni recently implemented its own embedded JavaScriptCore-based scripting support in its apps; if JS is your thing you might find that a better bet.]
I'm trying to improve the performance of a website written in classic ASP.
It supports multiple languages the problem lies in how this was implemented. It has the following method:
GetTranslation(id,language)
Which is called all over the shop like this:
<%= GetTranslation([someid],[thelanguage]) %>
That method just looks up the ID and language in SQL and returns the translation. Simple.
But it's incredibly inefficient: on each page load, there are around 300 independent SQL calls, each fetching an individual translation.
I have already significantly improved performance for many scenarios:
A C# tool that scans the .asp files and picks up references to GetTranslation
The tool then builds up a "bulk-cache" method that (depending on the page) takes all the IDs it finds and in one fell swoop caches the results in a dictionary.
The GetTranslation method was then updated to check the dictionary for the requested entry and only go to SQL if it's not already in there (caching its own result if necessary)
This only goes so far.
When translation IDs are stored in the database, I can't pick these up (at least not easily).
Ideally the GetTranslation method would, on each call, build up one big SQL string that would only be executed at the end of the page request.
Is this possible in ASP? Can the result of a <%= ... %> be a reference to something that is resolved later?
I would also sincerely appreciate any other creative ways I might improve the performance of this old, ugly beast.
I don't think you can do delayed execution in Classic ASP. As for suggestions on improving the performance, you could use a class like this:
Class TranslationManager
    Private Sub Class_Initialize
    End Sub

    Private Sub Class_Terminate
    End Sub

    Private Function ExistsInCache(id, language)
        ExistsInCache = _
            Not IsEmpty(Application("Translation-" & id & "-" & language))
    End Function

    Private Function GetFromCache(id, language)
        GetFromCache = Application("Translation-" & id & "-" & language)
    End Function

    Private Function GetFromDB(id, language)
        ' GET THE RESULT FROM DB into resultFromDB
        Application("Translation-" & id & "-" & language) = resultFromDB
        GetFromDB = resultFromDB
    End Function

    Public Default Function GetTranslation(id, language)
        If ExistsInCache(id, language) Then
            GetTranslation = GetFromCache(id, language)
        Else
            GetTranslation = GetFromDB(id, language)
        End If
    End Function
End Class
And use it like this in your code
Set tm = New TranslationManager
translatedValue = tm([someid], [thelanguage])
Set tm = Nothing
This would definitely reduce the calls to the DB. But you need to be very careful about how much data you put into the Application object; you don't want to run out of memory. It's best to also track how long the translations stay in memory and expire them (delete them from the Application object) when they have not been accessed for some time.
We use an i18n class with a namespace and language attribute in our e-commerce system. The class has a default function called 'translate' which basically performs a dictionary lookup. This dictionary is loaded, using the memento pattern, from a text file containing all the translations for the namespace and language.
The skeletons for these translation files are generated by a custom tool (written in VBScript, actually) which parses the ASPs for i18n($somestring) calls. The file names are based on the namespace and language, e.g. "shoppingcart_step_1_FR.txt". The tool can actually update/extend existing translation files when we add new translatable strings to the ASP code, which is very important for maintenance.
The performance overhead of this method is minimal. Due to the segmentation, our largest translation file contains about 200 translatable strings (including static image URLs), and loading it per request has very little effect on performance. I guess one could cache the translation dictionaries in the Application object using some third-party thread-safe dictionary, yet IMHO it isn't worth the trouble.
An extra tip: use variable replacement in your strings to improve translatability.
For example use:
<%=replace(i18n("Buy $productname"), "$productname", product.name)%>
instead of
<%=i18n("Buy")%> <%=product.name%>
The first method is much more flexible for the translators. A lot of languages have different sentence structures.
<%= x %> simply translates into Response.Write(x). There's no way to make that deferred.
In fact, Classic ASP has no way to make anything deferred, as far as I can remember.
You've already done a lot in terms of writing the caching tool. Next step would be writing a tool to convert these ASP pages to ASP.NET.
Your cache is the best saving you can make here. You could do away with the complication of the .NET pre-cacher and just have each call to GetTranslation check your dictionary for its entry; if it's not there, fetch it and cache it in Application space. That would be lightning fast. Then there's just the problem of refreshing the cache every now and then, but that will happen for you roughly every 24 hours when the worker process gets recycled.
If you need it to be more up to date than that, you could pull out all your references as you do with your .NET cacher. Then you could create a new function that fetches all the entries for a given set of IDs, and add a call to it near the top of each ASP page; it would hit the DB with one SQL statement and stash the results in a local dictionary. Then modify GetTranslation to use the values from this dictionary. That list would still need updating, but that could be part of your build process, or just a job that runs hourly or nightly.
The approach I use in my own CMS is to load all the static language strings from the database into the Application object at start-up, using unique keys based on the variable name, language ID, and CMS instance. Then I use page-level caching into an array as the page gets created, so repeatedly used variables get cached, avoiding lots of trips to the Application object where possible. I use an array instead of a dictionary because of the way my CMS works: each page is created in sections, with each section isolated from the others, so I'd end up creating multiple dictionaries per page, which is undesirable.
Obviously the viability of this solution depends entirely on how many variables and translations you have, plus how many variables you need to retrieve on each page.