Is it possible in Alfresco to avoid the randomly generated resource URLs? - caching

I understand that in production it is good practice to use a randomly generated filename to force the browser to download the file instead of using a stale cached copy (cache busting).
For example, Alfresco generates URLs like this:
http://localhost:8080/share/res/js/surf/3f1b42d3f51529dabe2af76fecf9ebcd.js
And if I make a change in my libraries, the URL changes.
My question: is it possible to avoid this behavior in development?
I would like a path like:
http://localhost:8080/share/res/js/surf/surf.js
The advantage is that I could set breakpoints, make changes in the code, and then reload the page.
Right now, every time I reload the page I have to find the part of the code I am working on all over again, set the breakpoint again, and reload once more.

Related

Modify WordPress plugin?

I have a third-party WP plugin which, although no longer updated, still works fine - and for which I've not been able to find an alternative.
It's 'Zajax' - which ajax-loads internal pages... thus enabling a streaming-radio audio-player to be fixed to the viewport-base, with continuous play throughout page-changes.
However, it appears to require absolute URLs - on root-relative URLs it reloads the whole page (and thus stops continuous play).
This is a hindrance, because I normally use root-relative URLs - and hence sometimes forget to ensure that all internal URLs are absolute rather than root-relative.
I want to modify it so that it'll work with root-relative URLs - but don't know enough to do this.
Actually, using root-relative URLs is not a bad idea, but if it is comfortable for you, then this small jQuery snippet may help you with the problem.
jQuery("a").each(function(){
    var href = jQuery(this).attr("href");
    // Skip anchors without an href, and links that are already absolute
    if (href && href.indexOf("http") !== 0){
        // Strip any leading slash so we don't produce a double slash
        jQuery(this).attr("href", "https://yourwebsiteurl.com/" + href.replace(/^\//, ""));
    }
});
You can put this code in the footer area of your website; it will detect root-relative links and convert them to absolute links (without changing anything in your backend, of course).

Include source code from different directory

I have three different domains all on the same server and I want to run the code on all three domains from one source on the same server, but not sure the best way.
Here's what I have:
domain01.com
domain02.com
domain03.com
domain04.com/sourcecode
I want domain01-03 to run the code inside domain04.com/sourcecode so the user can go to their domain and not have to go to domain04.com to see their site. I want to keep all the code inside domain04.com because I don't want to have to put the code inside each domain every time I make a code change.
For whatever reason I can't get my head around the best way to do this -- and want to do it right.
Any advice?
Thanks!
All you need to do is create a mapping on the first three sites to the appropriate directory in the fourth site, eg map /domain04 to /full/path/to/domain04/sourcecode, then reference its CFML resources via /domain04 in CFC and include paths. The implication here is that the code does need to be accessible via the file system for all sites concerned.
Note that if you also want to serve non-CFML files over HTTP (eg: images, css, js), then you will also need a web server virtual directory along the same lines.
None of this requires a framework; it's standard CF / web server functionality.
Are you using a framework? One like ColdBox could make this trivial if your code is written modularly. (Disclaimer: I am affiliated with ColdBox.)
If not, it really depends on what the code is. CFCs can be mapped from anywhere via ColdFusion mappings. Even .cfm files can be included as long as the file systems are visible. If you basically want a complete copy of a site in another web root without duplication, I would first consider a shared source control repo and a build process that checks it out in the appropriate places, and secondly a good old symlink will also work.
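For comparison, the same idea works outside of CF. Here is a minimal PHP sketch of sharing one code tree across several document roots (the /var/www/domain04/sourcecode path and bootstrap.php file are hypothetical names, not anything from the answers above):
<?php
// domain01.com/index.php - all shared code lives under domain04's tree.
// (The path and file names here are hypothetical.)
$shared = '/var/www/domain04/sourcecode';
set_include_path($shared . PATH_SEPARATOR . get_include_path());
require $shared . '/bootstrap.php'; // one copy of the code serves all three sites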

How do I set caching headers for my CSS/JS but ensure visitors always have the latest versions?

I'd like to speed up my site's loading time, in part by ensuring all CSS/JS is cached by the browser, as recommended by Google's PageSpeed tool. But I'd like to ensure that visitors get the latest CSS/JS files if those are updated and the cache still contains old code.
From my research so far, appending something like "?459454" to the end of the CSS/JS url is popular. But wouldn't that force the visitor's browser to re-download the CSS/JS file every time?
Is there a way to set the files to be cached by the browser, but ensure the browser knows about updated versions of the cached files?
If you're using Apache, you can use mod_pagespeed (mentioned earlier by symcbean) to do this automatically.
It works best if you also use the ModPagespeedLoadFromFile directive, since that will create a new URL as soon as it detects that the resource has changed on disk; however, it will work fine without it (it will use the cache expiry time returned when it fetches the resource to decide when to rewrite it).
If you're using nginx, you could use ngx_pagespeed.
If you're using IIS, you could use IISpeed, which is not a Google product and whose full feature set I don't know.
Version numbers will work, but you can also append a hash of the file to the filename with your web framework or asset build script:
<script src="script-5054a101c8b164cbfa570d97fe23cc0d.js"></script>
That way, once your HTML changes to reflect this new version, browsers will just download and cache the updated version of your script.
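A minimal sketch of that approach in PHP, assuming the asset is readable on disk (hashedScript() and the /js/script.js path are made-up names for illustration, not part of any framework):
<?php
// Emit a script tag whose filename carries an MD5 of the file's contents,
// so the URL only changes when the content does.
function hashedScript($file) {
    $hash = md5_file($_SERVER['DOCUMENT_ROOT'] . $file);
    $dot  = strrpos($file, '.');
    $src  = substr($file, 0, $dot) . '-' . $hash . substr($file, $dot);
    return '<script src="' . htmlspecialchars($src) . '"></script>';
}
echo hashedScript('/js/script.js'); // -> /js/script-5054a101c8b164cbfa570d97fe23cc0d.js
You would pair this with a rewrite rule on the server (like the Apache and Nginx rules shown further down this page) so the hashed name maps back to the real file on disk.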
As you say, append a query string to the URL of the asset, but only change it if the content is different, or change it when you deploy a new version.
appending something like "?459454" to the end of the CSS/JS url is popular. But wouldn't that force the visitor's browser to re-download the CSS/JS file every time?
No, it won't force them to download each time. However, there are a lot of intermediate proxies out there which ignore query strings on cacheable content - hence many tools (including mod_pagespeed, which does automatic URL rewriting based on file contents, and content merging on the fly, along with lots of other cool tricks) move the version information into the path / filename.
If you've only got .htaccess-type access, then you can strip the version information out with a rewrite rule that maps straight to a file, or use a scripted 404 redirector (but the latter is probably only a good idea if you're behind a caching reverse proxy).
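For illustration, a rough PHP sketch of such a scripted 404 redirector, wired up with something like Apache's ErrorDocument 404 /cache404.php (the script name and the extension list are assumptions):
<?php
// cache404.php (hypothetical name): if a versioned URL such as
// /js/foo.123456.js 404s, strip the version and serve /js/foo.js instead.
$uri   = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
$clean = preg_replace('/\.\d+\.(js|css|png|jpg|gif)$/', '.$1', $uri);
$file  = $_SERVER['DOCUMENT_ROOT'] . $clean;
$types = array('js' => 'application/javascript', 'css' => 'text/css',
               'png' => 'image/png', 'jpg' => 'image/jpeg', 'gif' => 'image/gif');
if ($clean !== $uri && is_file($file)) {
    header('HTTP/1.1 200 OK');
    header('Content-Type: ' . $types[pathinfo($file, PATHINFO_EXTENSION)]);
    readfile($file);
    exit;
}
header('HTTP/1.1 404 Not Found');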

How can I set up custom ImageResizer urls?

I'm just getting started with ImageResizer and I'm stuck on what seem like totally basic questions:
I have an uploader that I use to put images into a directory that's not directly accessible over HTTP. (If I just put an image at, say, /images/myimage.jpg, then anyone could access it by just asking for it, whereas I want to limit access via thumbnails, watermarks, etc.) So I want to put it at /offlimits/myimage.jpg, but be able to serve it up at /public/images/myimage.jpg.
I don't really want to dump all the images in the same offlimits folder, because putting lots of files in one folder makes Windows unhappy. But I don't want to expose the details of that subdirectory structure either, so where do I put the mapping between the public facing url and the actual image location?
Most generally, I don't necessarily want an image extension at all, so I'd like to say /public/image_id?width=100... and have this map to /offlimits/sub1/sub2/sub3/image_id.jpg.
Can anyone advise about how to set this up?
Three part questions are generally frowned upon here at SO, but I'll bite anyhow :)
If you're allowing access to images based on authentication, then you need to use ASP.NET's URL Authorization feature. ImageResizer supports URL Authorization rules. If you just don't want the source files available, and want to force them resized or watermarked, read the docs on how to implement arbitrary rules like this.
You can rewrite image paths to your heart's content with Config.Current.Rewrite, which works just like the PostRewrite event mentioned earlier. Just remember you'll have to keep it all straight in your head later.
Image extensions are good things. Don't fight them. They let the server figure out the right mime-type to send and help errant browsers recover from related bugs. They prevent issues on several platforms and make the Save As dialog work. They significantly improve server efficiency as well, since handling logic doesn't have to wait as long. This is particularly relevant because of the design of the IIS/ASP.NET modules system.

client-side file caching

If I understand correctly, a browser caches images, JS files, etc. based on the file name. So there's a danger that if one such file is updated (on the server), the browser will use the cached copy instead.
A workaround for this problem is to rename all files (as part of the build), such that the file name includes an MD5 hash of its contents, e.g.
foo.js -> foo_AS577688BC87654.js
me.png -> me_32126A88BC3456BB.png
However, in addition to renaming the files themselves, all references to these files must be changed. For example, a tag such as <img src="me.png"/> should be changed to <img src="me_32126A88BC3456BB.png"/>.
Obviously this can get pretty complicated, particularly when you consider that references to these files may be dynamically created within server-side code.
Of course, one solution is to completely disable caching on the browser (and any caches between the server and the browser) using HTTP headers. However, having no caching will create its own set of problems.
Is there a better solution?
Thanks,
Don
The best solution seems to be to version filenames by appending the last-modified time.
You can do it this way: add a rewrite rule to your Apache configuration, like so:
RewriteRule ^(.+)\.(.+)\.(js|css|jpg|png|gif)$ $1.$3
This will rewrite any "versioned" URL to the "normal" one. The idea is to keep your filenames the same, but to benefit from caching. The alternative of appending a parameter to the URL is not optimal with some proxies that don't cache URLs with parameters.
Then, instead of writing:
<img src="image.png" />
Just call a PHP function:
<img src="<?php versionFile('image.png'); ?>" />
With versionFile() looking like this:
function versionFile($file){
    // Use the file's last-modified time as the version
    $ver  = filemtime($_SERVER['DOCUMENT_ROOT'] . $file);
    $path = pathinfo($file);
    // Insert the version just before the extension: image.png -> image.123456789.png
    // (the original str_replace('.', ...) would mangle names with more than one dot)
    echo $path['dirname'] . '/' . $path['filename'] . '.' . $ver . '.' . $path['extension'];
}
And that's it! The browser will ask for image.123456789.png, Apache will rewrite this to image.png, so you will benefit from caching in all cases and won't have any out-of-date issues, while not having to bother with filename versioning.
You can see a detailed explanation of this technique here: http://particletree.com/notebook/automatically-version-your-css-and-javascript-files/
Why not just add a querystring "version" number and update the version each time?
foo.js -> foo.js?version=5
There still is a bit of work during the build to update the version numbers but filenames don't need to change.
Renaming your resources is the way to go, although we use a build number and embed that into the file name instead of an MD5 hash:
foo.js -> foo.123.js
as it means that all your resources can be renamed in a deterministic fashion and resolved at runtime.
We then use custom controls to generate links to resources on page load, based upon the build number, which is stored in an app setting.
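The original answer used .NET-style custom controls; a minimal PHP sketch of the same idea looks like this, with BUILD_NUMBER standing in for the app setting (both names are assumptions):
<?php
// Build the deterministic foo.js -> foo.123.js name at runtime.
define('BUILD_NUMBER', 123); // assumption: injected by your build script
function versionedUrl($file) {
    $dot = strrpos($file, '.');
    return substr($file, 0, $dot) . '.' . BUILD_NUMBER . substr($file, $dot);
}
echo '<script src="' . versionedUrl('/js/foo.js') . '"></script>'; // /js/foo.123.js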
We followed a similar pattern to PJP, using Rails and Nginx.
We wanted user avatar images to be browser cached, but on an avatar's change we needed the cache to be invalidated ASAP.
We added a method to the avatar model to append a timestamp to the file name:
return "/images/#{sourcedir}/#{user.login}-#{self.updated_at.to_s(:flat_string)}.png"
In all places in the code where avatars were used, we referenced this method rather than an URL. In the Nginx configuration, we added this rewrite:
rewrite "^/images/avatars/(.+)-[\d]{12}\.png$" /images/avatars/$1.png;
rewrite "^/images/small-avatars/(.+)-[\d]{12}\.png$" /images/small-avatars/$1.png;
This meant if a file changed, its URL in the HTML changed, so the user's browser made a new request for the file. When the request reached Nginx, it got rewritten to the simple name of the file.
I would suggest caching by ETags in this situation; see http://en.wikipedia.org/wiki/HTTP_ETag. You can then use the hash as the ETag. A request will still be submitted for each resource, but the browser will only download items that have changed since the last download.
Read up on your web server / platform docs on how to use ETags properly; most decent platforms have built-in support.
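If you serve files through your own code rather than letting the web server handle them, a hand-rolled sketch of ETag handling in PHP looks like this (the asset path is a made-up example; for static files, the server's built-in support is preferable):
<?php
// Serve a file with an ETag derived from its contents, and honour the
// browser's If-None-Match header with a 304 when nothing has changed.
$file = $_SERVER['DOCUMENT_ROOT'] . '/js/foo.js'; // hypothetical asset
$etag = '"' . md5_file($file) . '"';
header('ETag: ' . $etag);
if (isset($_SERVER['HTTP_IF_NONE_MATCH']) &&
    trim($_SERVER['HTTP_IF_NONE_MATCH']) === $etag) {
    header('HTTP/1.1 304 Not Modified'); // browser keeps its cached copy
    exit;
}
header('Content-Type: application/javascript');
readfile($file);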
Most modern browsers send an If-Modified-Since header when re-requesting a cacheable resource, so the server can reply with 304 Not Modified instead of the full file. However, not all browsers support the If-Modified-Since header.
There are three ways to force the browser to fetch a fresh copy of a cached resource.
Option 1: Create a query string with a version number, src="script.js?ver=21". The downside is that many proxy servers won't cache a resource with a query string. It also requires site-wide updating for changes.
Option 2: Create a naming scheme for your files, src="script083010.js". The downside, as with option 1, is that this also requires site-wide updates whenever a file changes.
Option 3: Perhaps the most elegant solution: simply set up the caching headers Last-Modified and Expires on your server (see the sketch below). The main downside is that users may have to re-download resources that expired yet never changed. Additionally, the Last-Modified header does not work well when content is served from multiple servers.
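A minimal PHP sketch of option 3 (again hand-rolled for illustration, with a made-up asset path; for static files you would normally set these headers in the web server configuration instead):
<?php
// Send Last-Modified / Expires for a file, and answer conditional
// requests with 304 Not Modified so the browser reuses its cache.
$file  = $_SERVER['DOCUMENT_ROOT'] . '/css/style.css'; // hypothetical asset
$mtime = filemtime($file);
header('Last-Modified: ' . gmdate('D, d M Y H:i:s', $mtime) . ' GMT');
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + 86400) . ' GMT'); // one day
if (isset($_SERVER['HTTP_IF_MODIFIED_SINCE']) &&
    strtotime($_SERVER['HTTP_IF_MODIFIED_SINCE']) >= $mtime) {
    header('HTTP/1.1 304 Not Modified');
    exit;
}
header('Content-Type: text/css');
readfile($file);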
Here are a few resources to check out: Yahoo, Google, AskApache.com.
This is really only an issue if your web server sets a far-future "Expires" header (setting something like ExpiresDefault "access plus 10 years" in your Apache config). Otherwise, a browser will make a conditional GET, based on the modified time and/or the ETag. You can verify what is happening on your site by using a web proxy or an extension like Firebug (on the Net panel). Your question doesn't mention how your web server is configured, and what headers it is sending with static files.
If you're not setting a far-future Expires header, there's nothing special you need to do. Your web server will usually handle conditional GETs for static files based on last modified time just fine. If you are setting a far-future Expires header then yes, you need to add some sort of version to the file name like your question and the other answers have mentioned already.
I have also been thinking about this for a site I support where it would be a big job to change all references. I have two ideas:
1.
Set distant cache expiry headers and apply the changes you suggest for the most commonly downloaded files. For other files, set the headers so they expire after a very short time - eg. 10 minutes. Then if you have a 10-minute downtime when updating the application, caches will be refreshed by the time users return to the site. General site navigation should be improved, as the files will only need downloading every 10 minutes, not on every click.
2.
Deploy each new version of the application to a different context that contains the version number, eg. www.site.com/app_2_6_0/. I'm not really sure about this, as users' bookmarks would be broken on each update.
I believe that a combination of solutions works best:
Setting cache expiry dates for each type of resource (image, page, etc) appropriately for that resource, for example:
Your static "About", "Contact" etc pages probably aren't going to change more than a few times a year, so you could easily put a cache time of a month on these pages.
Images used in these pages could have eternal cache times, as you are more likely to replace an image than to change one.
Avatar images might have an expiry time of a day.
Some resources need modified dates in their names. For example avatars, generated images, and the like.
Some things should never be cached: new pages, user content, etc. In these cases you should cache on the server, but never on the client side.
In the end you need to carefully consider each type of resource to determine what cache time to instruct the browser to use, and always be conservative if you are unsure. You can increase the time later, but it's much more painful to un-cache something. One way to vary the cache time by resource type is sketched below.
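For instance, if assets are funnelled through a PHP front controller, varying the expiry per resource type might look like this rough sketch (the type-to-lifetime map is illustrative only, not a recommendation):
<?php
// Pick a Cache-Control lifetime per resource type, per the advice above.
$lifetimes = array(
    'png'  => 31536000, 'jpg' => 31536000, 'gif' => 31536000, // effectively eternal
    'css'  => 2592000,  'js'  => 2592000,                     // about a month
    'html' => 0,                                              // never client-cache
);
$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
$ext  = strtolower(pathinfo($path, PATHINFO_EXTENSION));
$ttl  = isset($lifetimes[$ext]) ? $lifetimes[$ext] : 0;
header($ttl > 0 ? 'Cache-Control: public, max-age=' . $ttl
                : 'Cache-Control: no-cache, must-revalidate');
// ...then serve the file as usual.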
You might want to check out the approach taken by the Grails "uiperformance" plugin, which you can find here. It does a lot of the things you mention, but automates them (setting the expiry time far in the future, then incrementing version numbers when files change).
So if you're using Grails, you get this stuff for free. If you are not, maybe you can borrow the techniques employed.
Also - borrowed from the ui-performance page - read the following 14 rules.
ETags seemingly provide a solution for this...
As per http://httpd.apache.org/docs/2.0/mod/core.html#fileetag, we can set the server to generate ETags based on file size alone (instead of time/inode/etc). This generation is then constant across multiple server deployments.
Just enable it in your Apache configuration (/etc/apache2/apache2.conf):
FileETag Size
and you should be good!
That way, you can simply reference your images as <img src='/path/to/foo.png' /> and still use all the goodness of HTTP caching.
