Enable page caching on Nginx

I have a CDN for my website that uses Nginx and Drupal.
In my nginx configuration, I am trying to enable page-level caching so requests like "website.com/page1" can be served from the CDN. Currently, I am only able to serve static files from the CDN (GET requests on "website.com/sites/default/files/abc.png").
All page-level requests always hit the back-end web server.
What nginx config should I add in order for "website.com/page1" requests to also be served from the CDN?
Thanks!

If I understand you correctly, you want to set up another Nginx instance that works as a basic CDN in front of your current web server (Nginx or Apache?) on which Drupal resides. Since what you wrote is not entirely clear to me, this is what I assumed.
If you want a setup like this, you need a reverse-proxy Nginx server with proxy caching enabled, so it can cache both static assets and pages; read up on how to set up an Nginx reverse proxy with caching.
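A minimal sketch of such a configuration, assuming a Drupal backend (the backend address, cache path, and TTLs are placeholders, and the cookie handling is an assumption you should adapt):

# Sketch only: Nginx reverse proxy with page-level caching in front of
# a Drupal backend. All names, paths, and TTLs are placeholders.
proxy_cache_path /var/cache/nginx/pages keys_zone=pagecache:10m max_size=1g inactive=60m;

# Drupal session cookie names start with SESS/SSESS; skip the cache for
# logged-in users so they never receive cached anonymous pages.
map $http_cookie $skip_cache {
    default 0;
    ~*SESS  1;
}

server {
    listen 80;
    server_name website.com;

    location / {
        proxy_cache pagecache;
        proxy_cache_valid 200 301 302 10m;
        proxy_cache_key $scheme$host$request_uri;
        proxy_cache_bypass $skip_cache;
        proxy_no_cache $skip_cache;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://10.0.0.2:8080;   # placeholder backend address
        add_header X-Cache-Status $upstream_cache_status;   # for debugging hits/misses
    }
}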

Related

Laravel + nginx + Subdomain + Load balancer

My wildcard subdomains are not working when I am using a load balancer. I have edited the nginx config so the domain is .xxx.com on both the load balancer and both of my app servers. The servers are set up using Forge.
When I visit a subdomain, the app interprets it as the main domain. For example, visiting subdomain.xxx.com shows me the homepage of xxx.com, and visiting subdomain.xxx.com/blog shows me xxx.com/blog (which is a 404). The URL also changes in the browser and doesn't include the subdomain.
The same code works on my staging server, which leads me to believe that the load balancer is causing the issue. I don't have a LB on the staging server.
I have restarted nginx, cleared the route and config cache.
Looking at the request in Telescope, I see that host is set to the domain (not subdomain).
Why is the subdomain not working when using a load balancer?
Turns out the DNS hadn't propagated yet. Weird result.
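For reference, the leading-dot server_name form mentioned in the question matches the bare domain and every subdomain. A minimal sketch with placeholder names (the Host header line matters behind a load balancer, since the backend otherwise may not see the requested subdomain):

# Sketch only: ".xxx.com" is shorthand for "xxx.com *.xxx.com".
server {
    listen 80;
    server_name .xxx.com;

    location / {
        proxy_set_header Host $host;       # preserve the requested subdomain
        proxy_pass http://127.0.0.1:8000;  # placeholder app server
    }
}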

Reverse proxy same naked domain to different hosts

I'm managing the DNS of my domain with Cloudflare.
The marketing pages are hosted with Netlify.
The main application is hosted with Heroku.
Is it possible with Cloudflare + a naked domain (my-example.com) to have some paths served by Netlify and other paths by Heroku?
Or am I forced to put one of the hosting services on a subdomain?
Disclaimer: I work for Netlify.
You can definitely do this without running your own server or paying anything extra.
Since Netlify already has a CDN, it's suboptimal to put Cloudflare's CDN (activated with the 'orange cloud' in their settings) in front of Netlify's. Besides being inefficient, doing so breaks Netlify's atomic deploys and rollbacks, and from our observations it also slows down page service. It may work, but it is not recommended. However, Cloudflare's DNS is quite performant and can be used without their CDN (turn off the 'orange cloud'); their DNS works well with content hosted on Netlify's CDN.
Here's how to set things up to accomplish this via Netlify.
Deploy your static assets to a Netlify site at your main custom domain; let's say it's my-example.com. For testing purposes you can use the built-in sitename at Netlify (something-something-1234.netlify.com) instead of my-example.com. The example redirects below are "host agnostic", so they will work with the Netlify hostname, Netlify deploy previews, AND the production hostname.
Find all the paths for your dynamic content. For this example, let's say /main/* and /app/* are dynamic and your backend is hosted on Heroku.
Create proxy redirect rules for those paths. They could be hosted via Cloudflare's CDN to protect your API if you wanted; Netlify proxying to Cloudflare-fronted sites on Heroku works fine. You could also choose to proxy straight to Heroku, which would be less complicated; Netlify has some DDoS protection built in and is still "in front of" your Heroku app. Up to you.
Deploy those proxy rules and test.
Netlify's proxying (technically reverse proxying) can connect to whatever backend you'd like and does NOT show the backend URL to the visitor: it looks to them (URL bar in the browser, HTTPS connection) as though they are connected to my-example.com the whole time, but the content is returned from your backend, including HTTP status codes. The response is cached on Netlify's CDN if indicated by the Cache-Control HTTP header directives the Heroku app sends. Note that Cloudflare WILL CHANGE your Cache-Control header if you set it on content they proxy; Netlify won't.
Here's a common setup:
/main/* https://yourapp.herokuapp.com/main/:splat 200!
/app/* https://yourapp.herokuapp.com/app/:splat 200!
Note that if you deploy ANY assets under /main or /app to Netlify, they will be ignored due to the trailing ! on those rules. See https://www.netlify.com/docs/redirects/#note-on-shadowing for some more details about how that works and the alternatives (TL;DR: deploying some things like /main/logo.png on Netlify but nothing that Heroku should serve vs deploying ALL needed content for /main/* on Heroku).
Note that I suggest using identical paths on Netlify and Heroku (/main/*) rather than proxying to /somethingelse/* since it is easier to debug asset loading when paths match up. This isn't a requirement, though.
As mentioned in the comment, it's possible using Cloudflare's enterprise service.
But you can also do it with a simple nginx reverse-proxy setup.
Have DNS resolve to the nginx reverse proxy and, based on the path, proxy to the appropriate upstream.
E.g. example.com, directing queries for /path1 to 100.100.100.100 and for /path2 to 200.200.200.200.
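A minimal sketch of that routing (the IPs are the placeholder upstreams from the example above):

# Sketch only: route different paths of one naked domain to
# different backend hosts.
server {
    listen 80;
    server_name example.com;

    location /path1 {
        proxy_set_header Host $host;
        proxy_pass http://100.100.100.100;
    }

    location /path2 {
        proxy_set_header Host $host;
        proxy_pass http://200.200.200.200;
    }
}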

Separate frontend and backend with Heroku

I have an application, let's call it derpshow, that consists of two repositories, one for the frontend and one for the backend.
I would like to deploy these using Heroku, preferably on the same domain. I would also like to use pipelines for both parts separately, with a staging and production environment for each.
Is it possible to get both apps running on the same domain, so that the frontend can call the backend on /api/*? Another option would be to serve the backend on api.derpshow.com and the frontend on app.derpshow.com but that complicates security somewhat.
What are the best practices for this? The frontend is simply static files, so it could even be served from S3 or similar, but I still need the staging and production environments, automatic testing, and so forth.
Any advice is greatly appreciated!
For what you are trying to do, you should use a web server to serve the static content and to provide access to the container (Gunicorn, Tomcat, etc.) holding your app. This is also best practice.
Assume you use nginx as the web server, because it's easier to set up. The nginx config file would look like this:
# Server definition for project A (the frontend app).
server {
    listen 80;
    server_name derpshow.com www.derpshow.com;

    location / {
        # Proxy to Gunicorn.
        proxy_pass http://127.0.0.1:<projectA port>;
        # etc...
    }
}

# Server definition for project B (the API).
server {
    listen 80;
    server_name api.derpshow.com www.api.derpshow.com;

    location / {
        # Proxy to Gunicorn on a different port.
        proxy_pass http://127.0.0.1:<projectB port>;
        # Restrict API access to this machine only.
        allow 127.0.0.1;
        deny all;
        # etc...
    }
}
And that's it.
OLD ANSWER: Try using nginx-buildpack; it allows you to run NGINX in front of your app server on Heroku. Then you run your apps on different ports, map one port to api.derpshow.com and the other to app.derpshow.com, and restrict calls to api.derpshow.com to localhost only.
Would just like to contribute what I recently did. I had a NodeJS w/ Express backend and a plain old Bootstrap/vanilla frontend (using just XMLHttpRequest to communicate). To connect the two, you can simply tell Express to serve static files (i.e. serve requests for /index.html, /img/pic1.png, etc.).
For example, to tell express to serve the assets in directory test_site1, simply do:
app.use(express.static('<any-directory>/test_site1'));
Many thanks to this post for the idea: https://www.fullstackreact.com/articles/deploying-a-react-app-with-a-server/
Note that all these answers appear to be variations on merging the code so everything is served by one monolithic server.
Jozef's answer appears to be adding an entire nginx server on top of everything (both the frontend and backend) to reverse proxy requests.
My answer is about letting your backend server serve frontend requests; I am sure there is also a way to let the frontend server serve backend requests.

Does the github web platform use a reverse proxy for caching?

The GitHub website is awfully fast, and considering it serves a ton of requests, I've been wondering whether GitHub uses a reverse proxy, something along the lines of Varnish or nginx?
I have a website that serves highly dynamic content, just like GitHub, and I've been wondering if it's possible to front it with a reverse proxy for better performance.
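Fronting dynamic content with a cache can pay off when responses are cacheable even briefly; one common pattern is short-TTL "micro-caching" in nginx. A minimal sketch, with placeholder names and TTLs:

# Micro-caching sketch: cache dynamic responses for one second so bursts
# of identical requests hit the cache instead of the application.
proxy_cache_path /var/cache/nginx/micro keys_zone=microcache:10m max_size=100m;

server {
    listen 80;
    server_name dynamic.example.com;         # placeholder

    location / {
        proxy_cache microcache;
        proxy_cache_valid 200 1s;            # very short TTL
        proxy_cache_use_stale updating;      # serve stale while revalidating
        proxy_cache_lock on;                 # collapse concurrent cache misses
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:8080;    # placeholder app server
    }
}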

How to use Redis as a cache backend for Nginx (uwsgi module)

I'm using Nginx with uWSGI and I want Nginx to perform caching.
I know that there is a uwsgi_cache directive which can be used to cache pages on the local file system, but I want to use Redis to cache pages in memory.
How is this possible?
UPDATE:
I don't want to proxy requests to Redis and serve content out of it. I want Nginx to proxy requests to uWSGI and perform the caching itself, which is possible using the uwsgi_cache directive; the problem is that it only caches to the file system, nothing else.
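For reference, the file-system-based setup described above looks roughly like this (zone name, socket, and TTLs are placeholders). Since uwsgi_cache has no Redis backend, one common workaround for the in-memory requirement is to mount the cache path on tmpfs so the cache is RAM-backed:

# Sketch of the uwsgi_cache setup described in the question; all names
# are placeholders. Mounting /var/cache/nginx/uwsgi on tmpfs makes the
# cache effectively memory-backed.
uwsgi_cache_path /var/cache/nginx/uwsgi keys_zone=uwsgicache:10m max_size=256m;

server {
    listen 80;

    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:3031;           # placeholder uWSGI socket
        uwsgi_cache uwsgicache;
        uwsgi_cache_valid 200 10m;
        uwsgi_cache_key $scheme$host$request_uri;
    }
}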
