Performance Engineering on Apostrophe CMS

So after three weeks of 12-hour shifts I am almost done building a knowledge base system using Apostrophe. Now comes the task of speeding things up on the front end. My questions are:
How do I add Expires headers? Express has built-in middleware called express.static; can I just set it up normally in the index.js file?
Minify JavaScript & CSS: http://apostrophecms.org/docs/modules/apostrophe-assets/ I saw that Apostrophe has something built in, but it's not clear how to enable it. Also, do I need to put the assets in a specific folder for it to work? Right now I have all the JS files under lib/modules/apostrophe-pages/public/js and the CSS under public/css.
Enable gzip on the server: nothing is mentioned in the docs, but Express does have gzip middleware; can I just set it up under lib/modules/apostrophe-express/index.js?
Any help is greatly appreciated.

I'm the lead developer of Apostrophe at P'unk Avenue.
Sounds like you found our deployment HOWTO with the recently added material about minification, and so you have sorted out that part yourself. That's good.
As for Expires headers and gzip on the server, while you could do that directly in Node, we wouldn't! In general, we never have Node talk directly to the end user. Instead, we use nginx as a reverse proxy, which gives us load balancing and lets us deliver static files directly. nginx, being written in C, is faster at that. Also, its implementations of gzip and TLS are very much battle-tested. No need to make JavaScript do what it's not best at.
We usually configure nginx using mechanic, which we created to manage nginx with a few commands rather than writing configuration files by hand. Our standard recipe for that includes both gzip and expires headers.
However, here's an annotated version of the nginx configuration file it creates. You'll see that it covers load balancing, gzip and longer expiration times for static files.
# load balance across 4 instances of apostrophe listening on different ports
upstream upstream-example {
  server localhost:3000;
  server localhost:3001;
  server localhost:3002;
  server localhost:3003;
}
server {
  # gzip transfer encoding
  gzip on;
  gzip_types text/css text/javascript image/svg+xml
    application/vnd.ms-fontobject application/x-font-ttf
    application/x-javascript application/javascript;
  listen *:80;
  server_name www.example.com example.com;
  client_max_body_size 32M;
  access_log /var/log/nginx/example.access.log;
  error_log /var/log/nginx/example.error.log;
  # reverse proxy: pass requests to nodejs backends
  location @proxy-example-80 {
    proxy_pass http://upstream-example;
    proxy_next_upstream error timeout invalid_header http_500 http_502
      http_503 http_504;
    proxy_redirect off;
    proxy_buffering off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
  }
  # Deliver static files directly if one exists matching the URL;
  # if not, proxy to node
  location / {
    root /opt/stagecoach/apps/example/current/public/;
    try_files $uri @proxy-example-80;
    # Expires header: 7-day lifetime for static files
    expires 7d;
  }
}

Related

Sinatra Session Not Persisting On Post

I am currently running a Sinatra app using Puma and an nginx reverse proxy. Sessions and cookies persist fine on any GET request, as seen by logging:
{"user_id"=>1, "session_id"=>"89bb966142230a06fb5103db746c3011a741d88c7759dc2bff00c6bdd597c946"}
The user_id is the important part that signifies the session maintained its info. However, as soon as I attempt to POST through a form, I lose this critical info:
"POST /price HTTP/1.0" 302 - 0.0045
{"session_id"=>"89bb966142230a06fb5103db746c3011a741d88c7759dc2bff00c6bdd597c946"}
My sinatra config for sessions is:
use Rack::Session::Cookie, :key => 'rack.session',
                           :path => '/',
                           :secret => ENV['secret']
Which seems to be working fine in all other scenarios of the app.
My Nginx reverse proxy settings for this app are:
server {
  root /var/www/app/public;
  access_log /var/www/app/var/log/nginx_access.log;
  error_log /var/www/app/var/log/nginx_error.log;
  location / {
    try_files $uri @app;
  }
  location @app {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_pass http://admin_server;
    proxy_pass_request_headers on;
    proxy_cookie_domain localhost $http_host;
  }
  # certbot ssl stuff
}
I'm at a relative loss now, as every other aspect of the authentication checks and session persistence seems to work fine, but the form POST falls apart. Any guidance would be immensely helpful!

Nginx using etags with gzipped content when proxying

I want to use nginx as a proxy between clients and a set of my apps. The problem I encountered is:
I have an app (on a different machine than nginx) that serves static content (images, CSS, etc.)
That static content is gzipped
I configured nginx as follows:
location ^~ /static/ {
  etag on;
  expires 1d;
  more_set_headers "Vary: Accept-Encoding";
  location ~* \.(css|js)$ {
    gzip_static on;
    expires -1;
    proxy_pass http://my_upstream;
  }
  proxy_pass http://my_upstream;
}
and I was expecting ETags to work for things like JS and CSS, but they do not. I suppose it's because those JS and CSS files are not on the same machine as nginx, which is the problem gzip_static on is supposed to fix.
So basically my question is: is it possible to have it work that way? And if it is, how do I do it? :)
According to this forum post, etag on is not compatible with a proxy_pass whose response may be modified by filters (as gzip_static does):
The ETag header used in its strong form means that it must be
changed whenever bits of an entity are changed. This basically
means that ETag headers have to be removed by filters which change
a response content.
As suggested in that same post, passing through Last-Modified will usually be enough for clients to make conditional requests.
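For instance, a sketch of the /static/ location above with the ETag machinery dropped, leaning on the upstream's Last-Modified instead (the directive choices here are mine, not from the forum post):

```nginx
location ^~ /static/ {
  expires 1d;
  add_header Vary Accept-Encoding;
  # No etag/gzip_static here: the upstream's Last-Modified and
  # Content-Encoding headers pass through untouched, so clients
  # can still revalidate with If-Modified-Since.
  proxy_pass http://my_upstream;
}
```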

Golang Goji: How to serve static content and api at the same time

I have been playing with Go for the last two weeks and finally have a real application working. It uses static HTML files served by NGINX, and the API uses the Goji web framework as the backend. I don't use any Go templating because everything is AngularJS, so static files are fine for my needs.
I would like the option to choose whether to use NGINX in production or let Go serve the static content at the root, on the same port the application uses (8000). That way development environments would not require NGINX to be installed.
So I tried adding a handler to the default mux like this:
goji.DefaultMux.Handle("/*", serveStatic)

func serveStatic(w http.ResponseWriter, r *http.Request) {
    //http.ServeFile(w, r, r.URL.Path[1:])
    //http.FileServer(http.Dir("static"))
    http.StripPrefix("/static/", http.FileServer(http.Dir("static")))
}
This handler is registered after all the API paths have been registered (otherwise the API would not work).
I have tried all sorts of combinations, and I either get an HTTP 404 or the HTML content is displayed as plain text. Neither is good. I wonder if anyone has been here and could give me a heads-up on what I am doing wrong.
Thanks.
Although this has nothing to do with my issue, here is the NGINX configuration I am using:
server {
  listen 80;
  # enable gzip compression
  gzip on;
  gzip_min_length 1100;
  gzip_buffers 4 32k;
  gzip_types text/plain application/x-javascript text/xml text/css;
  gzip_vary on;
  # end gzip configuration
  location / {
    root /home/mleyzaola/go/src/bitbucket.org/mauleyzaola/goerp/static;
    try_files $uri $uri/ /index.html =404;
  }
  location /api {
    proxy_pass http://localhost:8000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }
}
I have run into similar issues, so perhaps the following points will be helpful.
Remember to register the handler for serving static content as the final route. Otherwise, it might match everything.
Perhaps try using absolute paths instead of relative ones.
Here's a simplified version of how my routes are set up with Goji.
package main

import (
    "fmt"
    "net/http"

    "github.com/zenazn/goji"
    "github.com/zenazn/goji/web"
)

func apiExampleHandler(c web.C, resp http.ResponseWriter, req *http.Request) {
    fmt.Fprint(resp, "You've hit the API!")
}

func main() {
    goji.Handle("/api", apiExampleHandler)

    // The static file handler should generally be the last handler registered.
    // Otherwise, it'll match every path. Be sure to use an absolute path.
    staticFilesLocation := "some absolute path to the directory with your static content"
    goji.Handle("/*", http.FileServer(http.Dir(staticFilesLocation)))

    goji.Serve()
}
If you have full control over your URLs, a simple strategy is to divide them at the top level. I use /a at the start of all application URLs and /s at the start of all static URLs. This makes the routing very simple.
I was using Goji for a while, then switched to gocraft/web. But the principles are the same: the URLs will be unambiguous with either framework. gocraft/web can obviously do subrouting; I think Goji can too, but it's less obvious. Subrouting is helpful for several reasons:
it's an easy way to remove ambiguity
the router might well be faster if its search patterns are simpler
you can divide your code so it might be easier to understand
If you are serving static assets in production, you may like to measure and improve their performance. I find that pre-compressing (gzipping) my JS and CSS files helps. I keep both uncompressed and compressed versions in the same file system, and a bespoke static asset package spots the pre-compressed files and serves them to all clients that understand them (which is almost all browsers). Setting a far-future expiry date is also worth exploring. Both of these ideas are built into Nginx, and quite easy to code up yourself with a bit of effort.

How would one recreate Cloudflare's always online feature using nginx?

Cloudflare uses nginx.
They have this feature called Always Online: http://www.cloudflare.com/always-online
As their website states:
Always Online is a feature that caches a static version of your pages in case your server goes offline.
I would like to set up a caching nginx server on the other side of the globe, have it cache my website's static files, and point my secondary DNS to it. If my website's server goes down, the cached version will be shown.
Is this possible to do using nginx reverse-proxy feature?
Or, I could also save a copy of all my static files including .html files in the nginx server, and have it load these files when the main server is offline.
Can nginx do this?
In nginx it is possible to set up caching. The documentation even provides an example of an nginx reverse proxy with caching:
http {
  proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=STATIC:10m
                   inactive=24h max_size=1g;
  server {
    location / {
      proxy_pass http://1.2.3.4;
      proxy_set_header Host $host;
      proxy_cache STATIC;
      proxy_cache_valid 200 1d;
      proxy_cache_use_stale error timeout invalid_header updating
                            http_500 http_502 http_503 http_504;
    }
  }
}
Note the proxy_cache_use_stale directive: in the example, in the event of a range of errors (including timeouts, 500, 502, 503 and 504), the old cached item is served instead.
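The question's second idea, keeping a local copy of the static files and switching to it when the backend is down, can also be sketched with error_page and a named location (the paths here are placeholders):

```nginx
server {
  location / {
    proxy_pass http://1.2.3.4;
    # Treat upstream 5xx responses as errors we can intercept.
    proxy_intercept_errors on;
    # If the backend is unreachable or erroring, fall back locally.
    error_page 502 503 504 = @offline;
  }
  location @offline {
    root /var/www/mirror;        # local copy of the site's static files
    try_files $uri $uri/ /index.html =404;
  }
}
```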

showing added weird string when nginx fetches memcached rack result

I'm having a bit of a problem with memcaching the pages generated by my Rack app.
I'm storing the generated page in memcached with the following bit of (Ruby) code:
require 'dalli'
memcached = Dalli::Client.new("localhost:11211")
memcached.set(req.path_info, response[2][0])
(where response[2][0] is the generated html code)
In my nginx server config I have the following:
location @rack {
  proxy_set_header X-Real-IP $remote_addr;
  proxy_set_header Host $host;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_pass http://127.0.0.1:9292;
}
location @memcached {
  set $memcached_key $request_uri;
  memcached_pass localhost:11211;
  default_type text/html;
  error_page 404 = @rack;
}
location / { try_files $uri @memcached; }
This kinda works, but not completely: the content passed to my browser now starts with:
I"¯[<!DOCTYPE html><html ...
My question is:
What is this extra bit in front of the HTML code, and how do I prevent it from showing up in the browser?
Dalli uses Marshal.dump to serialize the value you set (so that you can cache arbitrary Ruby objects), so what nginx gets isn't just the string but data in Ruby's marshal format. The extra bytes you see are the marshal header (the version of the format, etc.) plus the bytes saying that what follows is a string.
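You can see those extra bytes for yourself in an irb session (the sample string here is just for illustration):

```ruby
html = "<!DOCTYPE html><html></html>"
raw  = Marshal.dump(html)

# The first two bytes are the marshal format version (4.8 on current Rubies).
puts raw[0, 2].unpack("C2").inspect

# Next comes "I" (an object with instance variables, here the string's
# encoding) followed by '"' (the string type marker) - the same junk
# that shows up in front of the HTML in the browser.
puts raw[2, 2]
```

Because nginx's memcached module passes the stored bytes through verbatim, this header reaches the browser unless you store the raw string, as shown below.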
You can tell dalli to store the raw value of the object instead:
memcached.set(req.path_info, response[2][0], nil, :raw => true)
