GSCRIPT - How to cache HTML page content? - caching

I'm a bit stuck with something here.
I have an HTML template that I'm loading, and within this page there's a lot of JavaScript going on.
I'm trying to speed the operation up by caching the template in the onOpen() of my Google Sheet, but I can't figure out how to cache my HTML page CalForm.html (a file inside my Google Sheet's script project).
Here's what I have for now:
Creating the cache
function CacheCreate() {
  CacheService.getScriptCache().put('CalCache', 'CalForm');
  Browser.msgBox("done");
}
Get the cache
var evalSheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Evaluation');
var row = evalSheet.getActiveCell().getRow();
var CalCache2 = CacheService.getScriptCache().get('CalCache');
Browser.msgBox(CacheService.getScriptCache().get('CalCache'));
initialize(row);
//var cache = CacheService.getScriptCache();
//var cache2 = cache.get('rss-feed-contents');
//Browser.msgBox(cache.get('rss-feed-contents'));
var html = HtmlService
    .createTemplateFromFile(CalCache2)
    .evaluate()
    .setWidth(1200)
    .setHeight(560)
    .setSandboxMode(HtmlService.SandboxMode.NATIVE);
SpreadsheetApp.getUi().showModalDialog(html, 'Calculatrice');
Thanks for your help!

First you'll want to get the HTML from your script project. The following line will get the HTML (as a string), given that you have a file in your script called "template.html":
var template = HtmlService.createHtmlOutputFromFile('template').getContent();
I run a check to see if the data is already in the cache...
function getObjectsFromCache(key, objects, flush) {
  Logger.log("Running: getObjectsFromCache(" + key + ", objects)");
  var cache = CacheService.getScriptCache();
  var cached = cache.get(key);
  flush = false; // 1st run: 45.33 s, 2nd run: 46.475 s
  //flush = true; // 65.818 s
  if (cached != null && flush != true) {
    Logger.log("\tEXISTING DATA -> ");
  } else {
    Logger.log("\tNEW DATA -> ");
    //Logger.log("\tJSON.stringify(objects): " + JSON.stringify(objects) + ", length: " + objects.length);
    // If you're working with spreadsheet objects or array data, you'll want to
    // put the data in the cache as a string, and then restore it to its
    // original format when it is returned:
    //cache.put(key, JSON.stringify(objects), 1500); // cache for 25 minutes
    // In your case, the HTML does not need to be stringified:
    cache.put(key, objects, 1500); // cache for 25 minutes
    cached = objects;
  }
  return cached;
}
I commented out the JSON.stringify(objects) call because in my original code I use another function, formatCachedData(cache, key), to restore the different types of data - a multi-dimensional array (spreadsheet data), or Google user data from AdminDirectory.Users.list({...}).
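To tie this back to the question, here is a minimal sketch for CalForm.html. CalForm, CalCache, the dialog size, and the title come from the question; the function name showCalForm and everything else is illustrative. Two caveats: CacheService values are limited to roughly 100 KB, and caching the raw HTML string skips template evaluation, so this only helps if CalForm.html has no scriptlets that must be re-evaluated on every open:
function onOpen() {
  // Warm the cache when the spreadsheet opens, as the question intended.
  var html = HtmlService.createHtmlOutputFromFile('CalForm').getContent();
  CacheService.getScriptCache().put('CalCache', html, 1500); // 25 minutes
}

function showCalForm() {
  var cache = CacheService.getScriptCache();
  var html = cache.get('CalCache');
  if (html == null) {
    // Cache miss (expired or never warmed): rebuild and re-cache.
    html = HtmlService.createHtmlOutputFromFile('CalForm').getContent();
    cache.put('CalCache', html, 1500);
  }
  var output = HtmlService.createHtmlOutput(html)
      .setWidth(1200)
      .setHeight(560);
  SpreadsheetApp.getUi().showModalDialog(output, 'Calculatrice');
}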

Related

Prevent Browser Caching of UI5 Application Resources

We have an SAPUI5 app deployed on SAP PO. The problem is that whenever we make changes and deploy a new version of the application, the changes are not reflected and we need to do a hard reload and clear the browser cache to fetch them.
This causes a lot of issues, as we cannot ask clients to clear their cache after every change.
Below are the methods we have tried so far, without success:
Enabling "resources/sap-ui-cachebuster/sap-ui-core.js" in the SAPUI5 bootstrap.
Using the 'Application Cache Buster' for application resources (via sap-ui-cachebuster-info.json).
Setting HTML meta tags to disable caching:
<meta http-equiv='cache-control' content='no-cache, no-store, must-revalidate'>
<meta http-equiv='Expires' content='-1'>
<meta http-equiv='Pragma' content='no-cache'>
Clearing cookies with the code below:
document.cookie.split(";").forEach(function(c) {
  document.cookie = c.replace(/^ +/, "").replace(/=.*/, "=;expires=" + new Date().toUTCString() + ";path=/");
});
None of the above solutions have worked so far. This is what we see in the Network tab of Chrome:
NOTE: The application is deployed on SAP PO 7.4 (Java stack).
We had the same issue as you on SAP MII, and I spent months with several OSS calls before SAP provided an acceptable solution.
They delivered it in SP3 of SAP MII (we haven't updated yet, but I hope their correction is right). That won't apply directly in your case as you're on SAP PO, but it's still a Java stack.
So I think you should open an OSS call, recommending that SAP consult these SAP Notes:
2463286 - Issues when reloading JavaScript files
2459768 - Force browsers to reload modified resource files
They will probably redirect you to the following Stack Overflow topic:
http://stackoverflow.com/questions/118884/how-to-force-browser-to-reload-cached-css-js-files
But this is only a workaround; the SAP web server on the Java stack doesn't seem to be working correctly, and they have to provide a correction.
Hope this helps.
EDIT
Here is an update: there is a workaround that we sometimes use.
We have a URL parameter which is used to decide whether a reload of the page is needed.
Below is a JS snippet that we embed in the index.html page of the SAPUI5 app.
<script>
  window.onload = function () {
    version = undefined;
    fCheckVersionMatch = false;
    onInit();
  };

  /***************************************************************************
   * Runs when the application starts. It tests:
   * - whether the reload parameter is set in the URL
   * - whether we are loading an old application with a stale reload value
   ***************************************************************************/
  var onInit = function () {
    checkParamReload();
  };

  /***************************************************************************
   * Check the URL for the reload value and whether it is recent (less than
   * one minute old) => avoids loading a previous version with a stale
   * timestamp when opening the app from a favorite/bookmarked link.
   ***************************************************************************/
  var checkParamReload = function () {
    var sUrlParameters = window.top.document.location.search;
    var regexReload = /(\?|&)reload=([^&]*)/;
    var aReload = sUrlParameters.match(regexReload);
    var nTime = aReload == null ? NaN : parseInt(aReload[2]);
    if (isNaN(nTime) || Math.abs(Date.now() - nTime) > 60000) {
      // If no reload tag is present, or its value is more than one minute
      // old, reload the page. True means force reload => reset retry count.
      reloadPage(true);
    }
  };

  /***************************************************************************
   * Reload the page and make sure the reload param is updated.
   * If force reload is used, the retry count is reset; otherwise it is
   * incremented up to a limit which - when reached - stops the reload
   * process and displays an error message instead.
   ***************************************************************************/
  var reloadPage = function (bForce) {
    var retries = 0;
    var oLocation = window.top.document.location;
    var sSearch = oLocation.search;
    sSearch = queryReplace(sSearch, "reload", _ => Date.now());
    if (bForce) {
      sSearch = queryReplace(sSearch, "retry", _ => 0);
    } else {
      sSearch = queryReplace(sSearch, "retry", function (n) {
        if (isNaN(parseInt(n))) {
          return 0;
        } else {
          retries = parseInt(n);
          return retries + 1;
        }
      });
    }
    if (retries < 10) {
      // Reload the page
      window.top.document.location.replace(oLocation.origin + oLocation.pathname + sSearch + oLocation.hash);
    } else {
      // Display an error
      document.getElementById('Error').style.display = "block";
    }
  };

  var queryReplace = function (sQuery, sAttribute, fnReplacement) {
    // Match the attribute with its value
    var matcher = new RegExp(`(\\?|&)${ sAttribute }=([^&]*)`);
    var sNewQuery = sQuery.length < 2 ? `?${ sAttribute }=` : sQuery;
    if (sNewQuery.search(matcher) < 0) {
      // If we could not match, add the attribute at the end
      sNewQuery += "&" + sAttribute + "=" + fnReplacement("");
    } else {
      sNewQuery = sNewQuery.replace(matcher, (_, delim, oldVal) => delim + sAttribute + "=" + fnReplacement(oldVal));
    }
    return sNewQuery;
  }
</script>
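For illustration, here is roughly what the queryReplace helper above does to a query string (the values are hypothetical):
// Existing attribute: its value is replaced in place.
queryReplace("?reload=123&retry=4", "reload", _ => 1700000000000);
// -> "?reload=1700000000000&retry=4"

// Missing attribute: it is appended to the query string.
queryReplace("?reload=123", "retry", _ => 0);
// -> "?reload=123&retry=0"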

Get Picture from Client - save on MongoDB, expressJS, nodeJS

I'm trying to implement a simple picture upload from the client to my MongoDB.
I've read many explanations but I can't find a way that works from start to finish.
My client side:
function profilePic(input) {
  if (input.files && input.files[0]) {
    var file = input.files[0];
    localStorage.setItem('picture', JSON.stringify(file));
  }
}
Later on I take this JSON from localStorage and send it to my server side like this:
var request = false;
var result = null;
request = new XMLHttpRequest();
if (request) {
  request.open("POST", "usersEditProf/");
  request.onreadystatechange = function() {
    if (request.readyState == 4 && request.status == 200) {
      // ...more code
    }
  };
  request.setRequestHeader('content-type', 'application/json');
  request.send(JSON.stringify(localStorage.getItem('picture')));
}
On my serverside:
app.post('/usersEditProf/', users.editProfile);

/** Edits the profile - sends the new one **/
exports.editProfile = function(req, res) {
  var toEdit = req.body;
  var newPic = toEdit.picture;
And that's where I get lost. Is newPic actually holding the picture? I doubt it...
Do I need to change the path? What is the new path I need to give the picture?
How do I put it in my DB? Do I need GridFS?
When I simply put that value in my collection, it looks like this (an example with an image called bar.jpg):
picture: "{\"webkitRelativePath\":\"\",\"lastModifiedDate\":\"2012-10-08T23:34:50.000Z\",\"name\":\"bar.jpg\",\"type\":\"image/jpeg\",\"size\":88929}",
If you want to upload a blob through XMLHttpRequest(), you need to use an HTML5 FormData object:
https://developer.mozilla.org/en-US/docs/Web/API/FormData
It allows you to specify a filename to push; you then handle the incoming file as you would a MIME form post. Note the limitations on browser support when you use the FormData object. Your alternative is a form POST to a hidden frame, which works OK but is not nearly as clean looking in code as FormData.
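A minimal client-side sketch, reusing the /usersEditProf/ route from the question (the field name 'picture' and the function name are illustrative assumptions):
function uploadProfilePic(input) {
  if (!(input.files && input.files[0])) return;
  var file = input.files[0];
  var formData = new FormData();
  // Append the raw File object; do not JSON.stringify it -
  // stringifying keeps only the metadata, not the image bytes.
  formData.append('picture', file, file.name);
  var request = new XMLHttpRequest();
  request.open("POST", "usersEditProf/");
  // Do not set a content-type header: the browser sets
  // multipart/form-data with the correct boundary itself.
  request.send(formData);
}
On the server side, req.body won't contain the file; you would read it from the multipart body with multipart-parsing middleware (multer is one common choice for Express - an assumption, not part of the question) and then store the resulting buffer in a document, or use GridFS if files may exceed MongoDB's 16 MB document limit.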

Caching an aggregate of data with ServiceStack's ToOptimizedResultUsingCache

I am currently using the ServiceStack ICacheClient to cache in memory.
Note: the code below is somewhat pseudocode, as I needed to remove customer-specific names.
Lets say I have the following aggregate:
BlogPost
=> Comments
I want to do the following:
// So I need to go get the blog post and cache it:
var blogPostExpiration = new TimeSpan(0, 0, 30);
var blogPostCacheKey = GenerateUniqueCacheKey<BlogPostRequest>(request);
blogPostResponse = base.RequestContext.ToOptimizedResultUsingCache<BlogPostResponse>(
    base.CacheClient, blogPostCacheKey, blogPostExpiration, () => _client.Execute(request));

// Then, annoyingly, I need to decompress it to JSON to get the response back
// into my domain entity structure, BlogPostResponse:
string blogJson = StreamExtensions.Decompress(((CompressedResult)blogPostResponse).Contents, CompressionTypes.Default);
response = ServiceStack.Text.StringExtensions.FromJson<BlogPostResponse>(blogJson);

// Then I do the same to get the comments:
var commentsExpiration = new TimeSpan(0, 0, 30);
var commentsCacheKey = GenerateUniqueCacheKey<CommentsRequest>(request);
var commentsResult = base.RequestContext.ToOptimizedResultUsingCache<CommentsResponse>(
    base.CacheClient, commentsCacheKey, commentsExpiration, () => _client.Execute(request));

// And decompress again, as above:
string commentsJson = StreamExtensions.Decompress(((CompressedResult)commentsResult).Contents, CompressionTypes.Default);
var commentsResponse = ServiceStack.Text.StringExtensions.FromJson<CommentsResponse>(commentsJson);

// The reason for the decompression becomes clear here: I need to attach my
// comments to my domain entity.
if (commentsResponse != null && commentsResponse.Comments != null)
{
    response.Comments = commentsResponse.Comments;
}
What I want to know is: is there a shorter way to do the following - get my data and cache it, and get it back into my domain entity format - without having to write all the lines of code above? I don't want to go through the following pain:
Domain entity => JSON => decompress => domain entity.
Seems like a lot of wasted energy.
Any sample code or pointers to a better explanation of ToOptimizedResultUsingCache would be much appreciated.
OK, so I'm going to answer my own question. It seems that methods (extension methods) like ToOptimizedResult and ToOptimizedResultUsingCache are there to give you things like compression and caching for free.
But if you want more control, you just use the cache as you would normally:
// Generate the cache key
var applesCacheKey = GenerateUniqueCacheKey<ApplesRequest>(request);
var applesExpiration = new TimeSpan(0, 0, 30);

// Attempt to get the details from the cache
applesResponse = CacheClient.Get<ApplesDetailResponse>(applesCacheKey);

// If there was nothing in the cache...
if (applesResponse == null)
{
    // Get the data from storage
    applesResponse = _client.Execute(request);

    // Add the data to the cache
    CacheClient.Add(applesCacheKey, applesResponse, applesExpiration);
}
After you build up your aggregate and put it into the cache, you can compress the whole thing:
return base.RequestContext.ToOptimizedResult(applesResponse);
If you want to compress globally you can follow this post:
Enable gzip/deflate compression
Hope this makes sense.
RuSs

Filtering a loaded kml file in OpenLayers

I'm trying to create an interactive search engine (for finding event tickets), one of whose features is a map that shows related venues using OpenLayers. I have a plethora of venues (3000+) in a KML file, and I would like to selectively show a filtered subset of them. Below is the code I have, but when I run it there is a JavaScript error. Firebug and the Chrome developer tools make me think the parameters I pass are not getting through, because they report that the variables are null. However, I cannot figure out why they are not getting passed. Any insight is greatly appreciated.
var map, drawControls, selectControl, selectedFeature, select;
$('#kml').load('venuesComplete.kml');
kml = $('#kml').html();

function showVenues(state, city, venue) {
  filterStrategy = new OpenLayers.Strategy.Filter({});
  var kmllayer = new OpenLayers.Layer.Vector("KML", {
    strategies: [filterStrategy, new OpenLayers.Strategy.Fixed()],
    protocol: new OpenLayers.Protocol.HTTP({
      url: "venuesComplete.kml",
      format: new OpenLayers.Format.KML({
        extractStyles: true,
        extractAttributes: true
      })
    })
  });
  select = new OpenLayers.Control.SelectFeature(kmllayer);
  kmllayer.events.on({
    "featureselected": onFeatureSelect,
    "featureunselected": onFeatureUnselect
  });
  map.addControl(select);
  select.activate();
  filter = new OpenLayers.Filter.Comparison({
    type: OpenLayers.Filter.Comparison.LIKE,
    property: "",
    value: ""
  });
  function clearFilter() {
    filterStrategy.setFilter(null);
  }
  function setFilter(property, value) {
    filter.value = value;
    filter.property = property;
    filterStrategy.setFilter(filter);
  }
  var vector_style = new OpenLayers.Style();
  if (venue != "") {
    setFilter('name', venue);
  } else if (city != "") {
    setFilter('description', city);
  } else if (state != "") {
    setFilter('description', state);
  }
  map.addLayer(kmllayer);
  function onPopupClose(evt) {
    select.unselectAll();
  }
  function onFeatureSelect(event) {
    var feature = event.feature;
    var selectedFeature = feature;
    var popup = new OpenLayers.Popup.FramedCloud("chicken",
      feature.geometry.getBounds().getCenterLonLat(),
      new OpenLayers.Size(100, 100),
      "<h2>" + feature.attributes.name + "</h2>" + feature.attributes.description + '<br>' + feature.attributes,
      null,
      true,
      onPopupClose
    );
    document.getElementById('venueName').value = feature.attributes.name;
    document.getElementById("output").innerHTML = event.feature.id;
    feature.popup = popup;
    map.addPopup(popup);
  }
  function onFeatureUnselect(event) {
    var feature = event.feature;
    if (feature.popup) {
      map.removePopup(feature.popup);
      feature.popup.destroy();
      delete feature.popup;
    }
  }
}

function init() {
  map = new OpenLayers.Map('map');
  var google_map_layer = new OpenLayers.Layer.Google(
    'Google Map Layer',
    { type: google.maps.MapTypeId.HYBRID }
  );
  map.addLayer(google_map_layer);
  state = "";
  state += document.getElementById('stateProvDesc').value;
  city = "";
  city += document.getElementById('cityZip').value;
  venue = "";
  venue += document.getElementById('venueName').value;
  showVenues(state, city, 'Michie Stadium');
  map.addControl(new OpenLayers.Control.LayerSwitcher({}));
  map.zoomToMaxExtent();
}
IF I UNDERSTAND CORRECTLY, your KML does not load properly. If that is not the case, please disregard my answer.
It is very important to check whether your KML layer was properly loaded. I have a map that loads multiple dynamic KML layers (generated from PHP), and it is not uncommon for a large layer simply not to load. When that happens the operation is aborted but, as far as OpenLayers is concerned, the layer was properly loaded.
So I do two things: I check whether the amount of loaded data meets the expected number of features from my original PHP KML parser (I use a jQuery AJAX call for that), and then, if there is a discrepancy, I try reloading (since this is a loop, I limit it to 5 attempts so as not to loop infinitely).
Check out some of my code here.
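A rough sketch of that idea against the question's layer (OpenLayers 2 assumed; the expected count and retry limit are placeholders):
// Hypothetical load check: compare the loaded feature count against an
// expected count obtained elsewhere, and re-request the KML on a mismatch.
var attempts = 0;
kmllayer.events.register("loadend", null, function () {
  var expected = 3000; // assumption: fetched separately, e.g. via AJAX
  if (kmllayer.features.length < expected && attempts < 5) {
    attempts++;
    kmllayer.refresh({ force: true }); // trigger the protocol again
  }
});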

Google Feed API: "The port specified in the feed URL is not supported."

I am trying to use the Google Feed API to access an ATOM feed from a server and then display the data in a mobile web app. I get the error "The port specified in the feed URL is not supported." Any thoughts or suggestions?
// Google Feed API
// Our callback function, for when a feed is loaded.
function feedLoaded(result) {
  if (!result.error) {
    console.log("no error in loading feed");
    // Grab the container we will put the results into
    var container = document.getElementById("page_contents");
    container.innerHTML = '';
    // Loop through the feeds, putting the titles onto the page.
    // Check out the result object for a list of properties returned in each entry.
    // http://code.google.com/apis/ajaxfeeds/documentation/reference.html#JSON
    for (var i = 0; i < result.feed.entries.length; i++) {
      var entry = result.feed.entries[i];
      var div = document.createElement("div");
      div.appendChild(document.createTextNode(entry.content));
      container.appendChild(div);
    }
    console.log(result.feed.entries.length);
  } else {
    var container = document.getElementById("page_contents");
    container.innerHTML = '';
    var div = document.createElement("div");
    div.appendChild(document.createTextNode(result.error.message));
    container.appendChild(div);
    alert(result.error.message);
  }
}

function OnLoad() {
  // Create a feed instance that will grab our feed.
  var feed = new google.feeds.Feed("http://localhost:8082/frevvo/web/tn/billy.com/api/apps");
  //var feed = new google.feeds.Feed("http://www.digg.com/rss/index.xml");
  if (!feed) {
    alert("feed object not created");
  }
  console.log("loading feed");
  // Calling load sends the request off. It requires a callback function.
  feed.load(feedLoaded);
}
The only place where you appear to be setting a port is where you specify the URL http://localhost:8082/frevvo/web/tn/billy.com/api/apps.
Change that URL to an RSS feed somewhere that doesn't use a non-standard port like that and this error will almost certainly go away.
For testing, I'll often go to a news web site and grab their RSS feed URL, e.g., http://rss.cnn.com/rss/cnn_topstories.rss.
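For instance, keeping the rest of the code above unchanged, only the URL needs to change (the CNN feed here is just an example):
// Same loading code as before, pointed at a feed on the standard port:
var feed = new google.feeds.Feed("http://rss.cnn.com/rss/cnn_topstories.rss");
feed.load(feedLoaded);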
