How to enable Gzip on http-kit for minified ClojureScript? - heroku

My Clojure application is deployed on Heroku along with a minified 600KB+ ClojureScript JS artifact that I'm serving as a static file at /js/main.js.
How do I enable gzip compression on http-kit to reduce the size of my JS artifact over the wire?

This isn't supported by http-kit; usually you'd put a reverse proxy like Nginx in front of it anyway. As you probably don't have that option on Heroku, you can do the compression via Ring middleware instead; see this issue for pointers.
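A minimal sketch of that middleware approach, assuming the bk/ring-gzip dependency (which provides ring.middleware.gzip/wrap-gzip) and assuming the compiled file sits under resources/public/js/main.js:

    ;; Sketch only: any gzip Ring middleware exposing a wrap-gzip wrapper works the
    ;; same way; the namespace and paths below are assumptions, not from the question.
    (ns myapp.server
      (:require [org.httpkit.server :refer [run-server]]
                [ring.middleware.resource :refer [wrap-resource]]
                [ring.middleware.content-type :refer [wrap-content-type]]
                [ring.middleware.gzip :refer [wrap-gzip]]))

    (defn handler [request]
      {:status  200
       :headers {"Content-Type" "text/html"}
       :body    "<html><body><script src=\"/js/main.js\"></script></body></html>"})

    (def app
      (-> handler
          (wrap-resource "public")   ; serve static files such as /js/main.js
          wrap-content-type          ; derive Content-Type from the file extension
          wrap-gzip))                ; gzip responses when Accept-Encoding allows it

    (defn -main [& args]
      (run-server app {:port (Integer/parseInt (or (System/getenv "PORT") "8080"))}))

The gzip middleware only compresses when the client's Accept-Encoding header allows it, so clients that can't handle gzip still get the uncompressed file.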

Related

Polymer CLI Build | Cache Busting Techniques

A little background:
My app is built on Polymer 2.x and is being hosted on a server that does not support HTTP/2 and does not have a certificate for HTTPS. We must support IE11, Chrome, Firefox, and Safari.
Problem:
I'm running into an issue with browser caching. I was hoping to use the service worker that the Polymer CLI generates to handle the cache busting, but since our client's server doesn't have HTTPS enabled, we are unable to use it. Even if we could, we have to support IE 11 with a single build, so service workers aren't an option anyway.
I cannot figure out a way to bust the cache on new deployments. There is an open issue on this topic on the Polymer CLI GitHub page, but it hasn't seen any traction since 2016. From watching the Polymer YouTube videos, it looks like there is (or used to be) a way to intercept the build using Gulp, but I can't seem to figure it out.
Any help is greatly appreciated!
You can use the polymer-build library if you want to build a Polymer project using gulp. You can read about it here:
https://github.com/Polymer/polymer-build
Briefly mentioned in the docs here:
https://www.polymer-project.org/2.0/toolbox/build-for-production#use-polymer-build
Then you can cache-bust using something like gulp-rev, but you have to make sure you're only rewriting the filenames and import paths, not the custom element names. (I tried this once with gulp-rev-all, and by default it was replacing <app-header> with <app-header-a9fe00> or something like that.)
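A rough gulpfile sketch of that combination (polymer-build plus gulp-rev); the gulp-filter and gulp-rev-replace packages and the build/bundled output path are my own additions for illustration:

    // Sketch only: wires polymer-build into gulp, fingerprints everything except
    // index.html, then rewrites references to the revved filenames.
    const gulp = require('gulp');
    const mergeStream = require('merge-stream');
    const filter = require('gulp-filter');
    const rev = require('gulp-rev');
    const revReplace = require('gulp-rev-replace');
    const { PolymerProject } = require('polymer-build');

    const project = new PolymerProject(require('./polymer.json'));

    gulp.task('build', () => {
      // Keep index.html out of rev() so the entry point URL stays stable.
      const indexFilter = filter(['**', '!**/index.html'], { restore: true });

      return mergeStream(project.sources(), project.dependencies())
        .pipe(project.bundler())       // same bundling the polymer CLI would do
        .pipe(indexFilter)
        .pipe(rev())                   // e.g. main.js -> main-d41d8cd9.js
        .pipe(indexFilter.restore)
        .pipe(revReplace())            // rewrite import paths/URLs to the revved names
        .pipe(gulp.dest('build/bundled'));
    });

Unlike gulp-rev-all's default behaviour, gulp-rev only renames files, so the custom element names themselves are left alone.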
While creating the build, run: polymer build sw-precache-config.js
I've created gulp-polymer-build to help with this very issue. It borrows from polymer-cli build code, and supports your build configs in polymer.json. It allows you to modify your source stream before building, and then modify the forked streams for all builds you've configured in polymer.json. This makes it easy to use gulp-rev and gulp-rev-replace to do asset versioning.

Using the JS SBT Libraries without running the SBTSDK on a Server

Can the JavaScript libraries from the SBTSDK be used standalone, without running the SBTSDK on a server?
That way I could just include the local JS files from the SBT and use the APIs.
Is this possible?
Thanks.
Because of the browser's cross-origin restrictions you won't be able to do that.
There are two minimal Ajax proxies included in the toolkit which don't require deploying the full SBTSDK to a server.
This is the Java-based proxy: https://github.com/OpenNTF/SocialSDK/tree/master/sdk/com.ibm.sbt.proxy.web
This is the PHP-based proxy: https://github.com/OpenNTF/SocialSDK/tree/master/php/php-core
Relevant documentation for the PHP deployment: https://github.com/OpenNTF/SocialSDK/wiki/Introducing-the-SBT-PHP-Core - the Java proxy is configured like the rest of the SDK, using the managed-beans.xml and sbt.properties files.
@Paul Bastide: I thought the SBT could be used as a simple JavaScript wrapper in my JS web application by including e.g. sbt-standalone.js (similar to e.g. jQuery)? I could write the whole logic to consume the IBM Connections API on my own in jQuery. Of course, I would need to enable CORS on the server side for this to work. Is it really true that the SBT can't be used for this use case?

grails serve index.html from CDN

I would like my grails app to be deployed in the root of my domain:
www.example.com
instead of
www.example.com/myapp
However, www.example.com/index.html is going to be static (static HTML, images, etc.). I'm concerned about the performance of having the application server serve up the homepage. Can I configure my Grails app / the CDN to serve index.html and its content, and have the application server handle the dynamic pages?
I am using Grails 2.2.4.
I will be using Amazon S3 + ElasticBeanstalk + CloudFront.
Or should I be worried about performance at all? I am new to Grails, but it's my understanding that static content should be served by the web server (Apache). Since there is no Apache, I'm looking for another option to keep that load off the application server. The CDN seems like a good idea.
You certainly can do that. My personal recommendation would be to keep your images on S3 and use CloudFront on top of that. Unless your static HTML itself is excessively large, I'd recommend letting Grails be Grails and taking advantage of Grails Resources for your JS and CSS, as typical Grails projects do, even if your index page won't be doing anything dynamic right now. The more you break from the Grails conventions, the more complex things like builds and continuous integration become. Look into the caching and minification plugins, and performance will be very good.
As for deploying to the root "/" context, you can either do this with "grails prod war ROOT.war" for your Tomcat (or wherever) deployment, or you can build it as "whateverapp.war" and handle the routing rules with Apache mod_jk for more complex situations.
I've done probably a dozen Grails projects and use a very similar architecture now.
The simplest thing to do is to serve your entire domain from CloudFront and then serve the home page from your Grails app. You can configure CloudFront to cache requests to the home page so you will minimize the number of requests to Grails. You can map CloudFront directly to the ELB running in your Elastic Beanstalk environment.
The default Elastic Beanstalk configuration doesn't give you any way of serving static files from Apache; all the requests to Elastic Beanstalk are proxied to Tomcat. You can use advanced configuration to set this up though (using the .ebextensions mechanism).
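As a rough illustration of that .ebextensions mechanism (the file name, paths, and Apache directives below are assumptions about the older Tomcat platform, not something Elastic Beanstalk ships by default), a config file could write an httpd fragment that serves a static path directly:

    # .ebextensions/static-files.config (hypothetical)
    files:
      "/etc/httpd/conf.d/static.conf":
        mode: "000644"
        owner: root
        group: root
        content: |
          # Keep /static out of the platform's catch-all proxy to Tomcat and map it
          # to a directory on disk; the exact directives, and whether this exclusion
          # takes precedence over the built-in ProxyPass, depend on the platform version.
          ProxyPass /static !
          Alias /static /var/app/static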
You can also set up the Cache plugin for full-page caching on the server side (I recommend using the Cache EhCache plugin as well). Combining server-side caching with CDN caching will get you a long way.
BTW, another good option for serving static content is to use S3 itself to serve pages. If you aren't doing anything too complicated it will save you the work of setting up and running a web server, although with Elastic Beanstalk there's not much to do.

Can Nexus OSS compress virtual OBR metadata?

I'm trying to provision a client-side Swing application using OBR and Felix. It works, but even after pruning many unused bundles, the obr.xml file is still about 1 MB.
That file will be downloaded many times, even though it's not that dynamic.
If I could gzip that file, it would compress by a factor of more than 20; less than 50 KB would remain.
Can Nexus do that for me? Could I use something like:
https://nexus.dexels.com/nexus/content/groups/obr/.meta/obr.gz
instead of:
https://nexus.dexels.com/nexus/content/groups/obr/.meta/obr.xml
I can't find anything about this, and it would make a lot of sense, I think.
I'm using nexus-obr-plugin-2.0.1-SNAPSHOT
You can enable gzip compression for responses by adding the following line to ${nexus_root}/conf/nexus.properties:
enable-restlet-encoder=true
If the Felix client sends an "Accept-Encoding: gzip, deflate" header, then the response will be compressed.
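One quick way to check that it's working (not Nexus-specific, just plain curl) is to compare the downloaded size with and without the header:

    # size without compression
    curl -s -o /dev/null -w '%{size_download}\n' \
      https://nexus.dexels.com/nexus/content/groups/obr/.meta/obr.xml

    # size when gzip is negotiated (should be roughly 20x smaller here)
    curl -s -H 'Accept-Encoding: gzip' -o /dev/null -w '%{size_download}\n' \
      https://nexus.dexels.com/nexus/content/groups/obr/.meta/obr.xml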

JSON GZIP Design choice

I am working on a web application with dynamic content generated by a servlet running in a JBoss container and static content/load balancing being handled by Apache + mod_jk.
The client end uses jQuery to make AJAX requests to the servlet, which processes them and in turn generates large JSON responses.
One thing I noticed is that the original author chose to manually compress the output stream in the servlet, using something like the line below.
gzout = new GZIPOutputStream(response.getOutputStream());
Could this have been handled using mod_deflate on the Apache end? If so, is it considered better practice, and can you explain why or why not?
I believe it makes more sense to apply HTTP compression within Apache in your case.
If the server is properly configured to compress responses of this type (application/json, if the server-side code is setting the correct content type), then the manually compressed output is just being wastefully re-compressed anyway.
Also, what happens here if a client that doesn't support gzip makes a request? If you compress the response at the server level, it will automatically respond appropriately based on the request's Accept-Encoding header.
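For reference, enabling that at the Apache level is typically just a matter of telling mod_deflate which content types to compress; an illustrative snippet (not from the original setup):

    # Illustrative mod_deflate configuration; list the content types your app serves.
    <IfModule mod_deflate.c>
        AddOutputFilterByType DEFLATE application/json text/html text/css application/javascript
    </IfModule>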
A little additional research shows a couple of good alternatives.
Issue:
There is a network channel between Apache and JBoss. Without compression of any kind on the JBoss end, you'd have the same latency and bandwidth issues on that hop that you have between Apache and your clients.
Solutions:
1. You can use mod_deflate on the Apache side, accept uncompressed responses from JBoss, and compress them before delivering them to your clients. I could see this making some sense in certain network topologies (proposed by Dave Ward).
2. You can apply a Java EE filter. This will intercept responses and compress them before they exit the JBoss container. It has the benefit of compression at the JBoss level without a bunch of nasty GZIP-related code in your business servlet (a rough filter sketch follows below).
3. JBoss by default uses Tomcat as its servlet engine. If you navigate to $JBOSS_HOME/deploy/jbossweb-tomcat55.sar you can edit the server.xml file and set the compression="on" attribute on the HTTP/1.1 connector. This will compress all responses outbound from the container.
The trade-off between 2 and 3 is compressing piecemeal for different servlets versus compressing all responses.
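To make option 2 concrete, here is a rough sketch of such a filter. Everything below is illustrative rather than taken from the original application; a production version would also have to wrap getWriter() and would typically be mapped only to the JSON-producing servlets in web.xml.

    import java.io.IOException;
    import java.util.zip.GZIPOutputStream;
    import javax.servlet.*;
    import javax.servlet.http.*;

    // Illustrative gzip filter: compresses the response only when the client
    // advertises gzip support in its Accept-Encoding header.
    public class GzipFilter implements Filter {

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            HttpServletResponse response = (HttpServletResponse) res;

            String acceptEncoding = request.getHeader("Accept-Encoding");
            if (acceptEncoding == null || !acceptEncoding.contains("gzip")) {
                chain.doFilter(req, res);        // client can't handle gzip
                return;
            }

            response.setHeader("Content-Encoding", "gzip");
            GzipResponseWrapper wrapped = new GzipResponseWrapper(response);
            try {
                chain.doFilter(req, wrapped);    // servlet writes into the gzip stream
            } finally {
                wrapped.finish();                // writes the trailing gzip block
            }
        }

        public void init(FilterConfig config) {}
        public void destroy() {}

        // Routes getOutputStream() through a GZIPOutputStream.
        static class GzipResponseWrapper extends HttpServletResponseWrapper {
            private GZIPOutputStream gzip;
            private ServletOutputStream stream;

            GzipResponseWrapper(HttpServletResponse response) { super(response); }

            public ServletOutputStream getOutputStream() throws IOException {
                if (stream == null) {
                    gzip = new GZIPOutputStream(super.getOutputStream());
                    stream = new ServletOutputStream() {
                        // Note: Servlet 3.1+ would also require isReady()/setWriteListener().
                        public void write(int b) throws IOException { gzip.write(b); }
                        public void write(byte[] b, int off, int len) throws IOException { gzip.write(b, off, len); }
                        public void flush() throws IOException { gzip.flush(); }
                    };
                }
                return stream;
            }

            // The uncompressed length no longer applies once the body is gzipped.
            public void setContentLength(int len) {}

            void finish() throws IOException {
                if (gzip != null) gzip.finish();
            }
        }
    }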
