Is it possible to use NGINX cache instead of RackCache? - ruby

I have configured my site on the server to use Rack::Cache, but I was wondering if it is possible to use NGINX for this instead, as I think the NGINX cache would be faster. I am using NGINX as the web server and Thin as the application server, but I am not sure how to make NGINX serve the cached files instead of going through Rack::Cache.
Right now, all I have done is add the following to my config.ru and configure the caching using the routing library's caching features.
require 'rack-cache'
use Rack::Cache
Any help would be appreciated!
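No accepted answer is shown here, but for reference, a minimal sketch of what NGINX proxy caching in front of Thin might look like. The upstream port 3000, the cache path, and the timings below are assumptions for illustration, not from the question:

```nginx
# Define a filesystem-backed cache zone (path and sizes are example values).
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                 max_size=1g inactive=60m;

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_cache app_cache;
        proxy_cache_key $scheme$host$request_uri;
        proxy_cache_valid 200 10m;                          # cache 200s for 10 minutes
        add_header X-Cache-Status $upstream_cache_status;   # HIT / MISS, for debugging
        proxy_pass http://127.0.0.1:3000;                   # assumed Thin port
    }
}
```

With a setup like this, cached responses are served by NGINX without the request ever reaching Thin or Rack::Cache.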

Related

How to configure proxy for springboot and mediawiki?

Currently I'm trying to run both a SpringBoot application and a mediawiki server (but I assume it may be any other server) on one machine simultaneously. They are both accessed via different ports, e.g. 8080 and 7734.
Now I want to be able to access my Spring app as usual on localhost:8080/homePage, and if I type something like localhost:8080/wiki/faqPage (with wiki included at the beginning of the URL), there should be some setting (or maybe another proxy server?) to redirect the request to the mediawiki instance. So a request to localhost:8080/wiki/faqPage would actually go to localhost:7734/faqPage. What is the best practice for achieving this?
If it helps, I'm using a Docker image and the docker-compose utility to run the mediawiki instance.
Okay. Apache's ProxyPass and ProxyPassReverse directives in httpd.conf did the magic.
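A sketch of what that httpd.conf fragment might look like, assuming mediawiki listens on port 7734 as in the question (mod_proxy and mod_proxy_http must be enabled; the exact paths are an assumption):

```apache
# Forward /wiki/... to the mediawiki instance, and rewrite Location
# headers in redirects on the way back so they stay under /wiki/.
ProxyPass        /wiki/ http://localhost:7734/
ProxyPassReverse /wiki/ http://localhost:7734/
```

ProxyPassReverse is what keeps redirects issued by mediawiki pointing at localhost:8080/wiki/... instead of leaking the internal port.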

Separate frontend and backend with Heroku

I have an application, let's call it derpshow, that consists of two repositories, one for the frontend and one for the backend.
I would like to deploy these using Heroku, preferably on the same domain. I would also like to use pipelines for both parts separately, with staging and production environments for each.
Is it possible to get both apps running on the same domain, so that the frontend can call the backend on /api/*? Another option would be to serve the backend on api.derpshow.com and the frontend on app.derpshow.com but that complicates security somewhat.
What are the best practices for this? The frontend is simply static files, so it could even be served from S3 or similar, but I still need the staging and production environments and automatic testing and so and so forth.
Any advice is greatly appreciated!
For what you are trying to do, you should use a web server to serve static content and to provide access to the container (Gunicorn, Tomcat, etc.) holding your app. This is also best practice.
Assuming you use nginx as the web server, because it's easier to set up, the config file would look like this:
# Server definition for project A
server {
    listen 80;
    server_name derpshow.com www.derpshow.com;

    location / {
        # Proxy to Gunicorn.
        proxy_pass http://127.0.0.1:<projectA port>;
        # etc...
    }
}

# Server definition for project B
server {
    listen 80;
    server_name api.derpshow.com www.api.derpshow.com;

    location / {
        # Proxy to Gunicorn on a different port.
        proxy_pass http://127.0.0.1:<projectB port>;
        allow 127.0.0.1;
        deny all;
        # etc...
    }
}
And that's it.
OLD ANSWER: Try using nginx-buildpack; it allows you to run NGINX in front of your app server on Heroku. Then you need to run your apps on different ports, map one to api.derpshow.com and the other to app.derpshow.com, and then you can restrict calls to api.derpshow.com to localhost only.
Would just like to contribute what I recently did. I had a NodeJS w/ Express backend and a plain old Bootstrap/vanilla frontend (using just XMLHttpRequest to communicate). To connect the two, you can simply tell Express to serve static files (i.e. requests to /index.html, /img/pic1.png, etc.).
For example, to tell express to serve the assets in directory test_site1, simply do:
app.use(express.static('<any-directory>/test_site1'));
Many thanks to this post for the idea: https://www.fullstackreact.com/articles/deploying-a-react-app-with-a-server/
Note that all these answers appear to be variations of merging the code to be served by one monolith server.
Jozef's answer appears to be adding an entire nginx server on top of everything (both the frontend and backend) to reverse proxy requests.
My answer is about letting your backend server serve frontend requests; I am sure there is also a way to let the frontend server serve backend requests.

Enable page caching on Nginx

I have a CDN for my website that uses Nginx and Drupal.
In my nginx configuration, I am trying to enable page-level caching so that requests like "website.com/page1" can be served from the CDN. Currently, I am only able to serve static files from the CDN (GET requests on 'website.com/sites/default/files/abc.png').
All page-level requests always hit the back-end web server.
What nginx config should I add in order for "website.com/page1" requests to also be served from the CDN?
Thanks!
If I understand you correctly, you want to set up another Nginx instance so that it works as a basic CDN in front of your current web server (Nginx or Apache??) on which Drupal resides. You need a reverse-proxy Nginx server to cache both static assets and pages. Since what you wrote isn't entirely clear to me, this is what I assumed.
If you want a setup like this, then you should read the following article on how to set up a reverse proxy
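As a rough illustration of that setup, a caching reverse proxy in front of the Drupal backend might look like the following. The backend address, cache path, and cookie handling here are assumptions for the sketch, not from the answer:

```nginx
proxy_cache_path /var/cache/nginx/drupal levels=1:2 keys_zone=drupal_cache:10m inactive=60m;

server {
    listen 80;
    server_name website.com;

    location / {
        proxy_cache drupal_cache;
        proxy_cache_key $scheme$host$request_uri;
        proxy_cache_valid 200 301 10m;
        # Crude but safe: skip the cache whenever the client sends any
        # cookie, so logged-in Drupal users never see cached pages.
        proxy_cache_bypass $http_cookie;
        proxy_no_cache $http_cookie;
        proxy_pass http://backend.website.com;   # assumed backend host
    }
}
```

With this in place, anonymous requests for "website.com/page1" are answered from the cache zone instead of hitting the Drupal backend.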

How to use Redis as a cache backend for Nginx (uwsgi module)

I'm using Nginx with UWSGI and I want Nginx to perform caching.
I know that there is uwsgi_cache, which can be used to cache pages on the local file system, but I want to use Redis to cache pages in memory.
How is this possible?
UPDATE:
I don't want to proxy requests to Redis and serve content out of it. I want Nginx to proxy requests to uWSGI and perform caching, which is possible using the uwsgi_cache directive, but the problem is that it only caches to the file system, not anywhere else.
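No answer is shown here, but for comparison, a commonly cited approach uses the third-party srcache-nginx-module together with redis2-nginx-module (both ship with OpenResty, and the store location also relies on set-misc and echo modules). The sketch below is an assumption about how that pairing is typically wired, not part of the original question:

```nginx
# Requires srcache-nginx-module, redis2-nginx-module, set-misc-nginx-module
# and echo-nginx-module (all bundled with OpenResty).
location / {
    set $key $uri;
    srcache_fetch GET /redis_get $key;                        # try Redis first
    srcache_store PUT /redis_put key=$escape_uri$key&exptime=300;
    include uwsgi_params;
    uwsgi_pass unix:/tmp/uwsgi.sock;                          # assumed socket
}

location = /redis_get {
    internal;
    set $redis_key $args;
    redis_pass 127.0.0.1:6379;
}

location = /redis_put {
    internal;
    set_unescape_uri $exptime $arg_exptime;
    set_unescape_uri $key $arg_key;
    redis2_query set $key $echo_request_body;                 # body = cached response
    redis2_query expire $key $exptime;
    redis2_pass 127.0.0.1:6379;
}
```

On a hit, srcache_fetch serves the page straight out of Redis; on a miss, the request falls through to uWSGI and srcache_store writes the response into Redis with a TTL.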

How to configure NGINX with Memcached to serve HTML

I'm trying to configure NGINX with Memcached to serve HTML
I found the following Memcached module for NGINX:
http://wiki.nginx.org/NginxHttpMemcachedModule
But I can't seem to get NGINX to serve my HTML (e.g. index.html) files from Memcached from reading the tutorial above.
Anyone know what the NGINX config should be to get it to serve HTML from Memcached?
To use memcached with nginx like this you will need to populate memcached with the right key/value pairs, and for that you will need a @fallback location to do some work for you.
When a matching request comes in, nginx will query memcached with whatever you set $memcached_key to. If the value is found, it is sent to the browser. If not, the fallback location invokes your backend system to do two things:
1. generate a response and send it back to the browser;
2. send the response to memcached, setting the appropriate key/value pair.
The next time a request comes in for the same key it will be in memcached and will be served directly from there.
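Putting that together, a minimal sketch of such a config (the backend port, key scheme, and memcached address are assumptions; note the named location uses nginx's @ syntax):

```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        set $memcached_key $uri;              # must match the key the backend writes
        memcached_pass 127.0.0.1:11211;
        default_type text/html;               # memcached stores no Content-Type
        error_page 404 502 504 = @fallback;   # miss or memcached down -> backend
    }

    location @fallback {
        # On a cache miss the backend renders the page, returns it to the
        # browser, and is responsible for writing it into memcached under
        # the same key ($uri) so the next request is a hit.
        proxy_pass http://127.0.0.1:8080;
    }
}
```

The crucial part is that nginx never writes to memcached itself; your application must store each rendered page under the exact key nginx will look up.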
