How to disable caching in nginx when uploading a file - caching

We run nginx as the front-end server and Jetty as the backend server. When we upload a file, nginx caches (buffers) the whole upload first and only then sends it to the Jetty server.
Can anyone tell me how to disable this caching in nginx? Thank you very much!

Indeed, we don't have to disable the nginx cache; we can use this link to handle file upload progress instead.
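For completeness, if the goal really is to stop nginx from buffering uploads before they reach the backend, newer nginx versions (1.7.11 and later) expose a directive for exactly this. A minimal sketch, assuming a Jetty backend on 127.0.0.1:8080 (the location and backend address are illustrative):

```nginx
location /upload {
    # Stream the request body to the backend as it arrives instead of
    # buffering the whole upload on disk first (requires nginx >= 1.7.11).
    proxy_request_buffering off;

    # Illustrative backend address; adjust to your Jetty instance.
    proxy_pass http://127.0.0.1:8080;
}
```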

Related

Is it possible to use NGINX cache instead of RackCache?

I have configured my site on the server to use Rack Cache, but I was wondering whether it is possible to use NGINX instead, since I think the NGINX cache would be faster. I am using NGINX as the web server and Thin as the application server, but I am not sure how to get NGINX to serve the cached files instead of going through Rack Cache.
Right now, all I have done is add the following to my config.ru and configure the caching using the routing library's feature.
require 'rack-cache'
use Rack::Cache
Any help would be appreciated!
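One common way to replace Rack::Cache with NGINX's own caching is the proxy_cache machinery, with NGINX proxying to Thin. A minimal sketch, assuming Thin listens on 127.0.0.1:3000 (the cache path, zone name, and timings are illustrative):

```nginx
# Storage for cached responses; "app_cache" is an illustrative zone name.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                 max_size=1g inactive=60m;

server {
    listen 80;

    location / {
        proxy_cache app_cache;
        # Cache successful responses for 10 minutes (illustrative value);
        # upstream Cache-Control headers take precedence.
        proxy_cache_valid 200 10m;
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:3000;  # Thin
    }
}
```

With this in place, cached pages never hit Thin (or Rack::Cache) at all; NGINX answers them directly from its cache directory.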

Bypassing akamai

I am completely new to Akamai. I have a .vbs file that is served by Akamai. Recently the operations team put a restriction on .vbs files, so now it is getting blocked. Is there a way to serve this file directly from the web server and bypass Akamai?
I am not sure that's possible. Even if you add a few rules to the Akamai configuration and bypass the Akamai cache for the .vbs extension, the request will still have to go through Akamai's servers. The best option is to try it in the Akamai staging environment or to contact Akamai support.
If this file is updated frequently and should not be cached at all, the best option is to set up the Akamai configuration to treat it as no-store; or, if only specific requesters need to reach your own infrastructure behind Akamai, bypass the Akamai cache. Either way, traffic will still go through Akamai.
If the issue is going through Akamai at all, your only options are to expose the file on another hostname that is not on Akamai, or to find out which server in your infrastructure hosts the file and modify your hosts file to request that server directly. The downside of the hosts-file approach is that the whole site will then be served from your infrastructure and will not benefit from Akamai.
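The hosts-file workaround mentioned above can be sketched as follows, assuming the origin server's IP is 203.0.113.10 and the site is www.example.com (both values are illustrative):

```
# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows)
# Illustrative entry: resolve the site's hostname straight to the origin
# server, skipping Akamai's edge servers for this machine only.
203.0.113.10    www.example.com
```

Note this only affects the machine whose hosts file you edit; every other client still goes through Akamai.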

Enable page caching on Nginx

I have a CDN for my website that uses Nginx and Drupal.
In my nginx configuration, I am trying to enable page-level caching so that requests like "website.com/page1" can be served from the CDN. Currently, I am only able to serve static files from the CDN (GET requests on 'website.com/sites/default/files/abc.png').
All page-level requests always hit the back-end web server.
What nginx config should I add in order for "website.com/page1" requests to also be served from the CDN?
Thanks!
If I understand you correctly, you want to set up another Nginx so that it works as a basic CDN in front of your current web server (Nginx or Apache?) on which Drupal resides. You need a reverse-proxy Nginx server to cache both static assets and pages. Since what you wrote is not entirely clear to me, this is what I assumed.
If you want a setup like this, you should read the following article on how to set up a reverse proxy.
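The reverse-proxy-with-page-cache setup described above can be sketched roughly as follows, assuming the Drupal backend listens on 127.0.0.1:8080 (the zone name, paths, and timings are illustrative):

```nginx
proxy_cache_path /var/cache/nginx/pages levels=1:2
                 keys_zone=drupal_pages:10m inactive=30m;

server {
    listen 80;
    server_name website.com;

    location / {
        proxy_cache drupal_pages;
        proxy_cache_key "$scheme$host$request_uri";
        # Cache anonymous page responses for 5 minutes (illustrative).
        proxy_cache_valid 200 5m;
        # Skip the cache for any request carrying cookies (e.g. logged-in
        # Drupal users), so personalized pages are never cached or served
        # from cache.
        proxy_cache_bypass $http_cookie;
        proxy_no_cache $http_cookie;
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:8080;
    }
}
```

With this, "website.com/page1" is answered from the nginx cache on a hit, and only cache misses (or cookie-bearing requests) reach the Drupal backend.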

How to use Redis as a cache backend for Nginx (uwsgi module)

I'm using Nginx with UWSGI and I want Nginx to perform caching.
I know there is uwsgi_cache, which can be used to cache pages on the local file system, but I want to use Redis to cache pages in memory.
How is this possible?
UPDATE:
I don't want to proxy requests to Redis and serve content out of it. I want Nginx to proxy requests to uWSGI and perform caching, which is possible using the uwsgi_cache directive; the problem is that it only caches on the file system, nothing else.
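One approach often used for this (not part of stock nginx) is the third-party srcache-nginx-module combined with the redis and redis2 modules: srcache_fetch/srcache_store wrap the uWSGI location and read/write the whole response body from Redis. A rough sketch, assuming those modules are compiled in, Redis runs on 127.0.0.1:6379, and uWSGI listens on 127.0.0.1:3031 (addresses, location names, and the 120-second expiry are illustrative):

```nginx
location = /redis_get {
    internal;
    set_md5 $redis_key $args;          # key = md5 of the page URI
    redis_pass 127.0.0.1:6379;         # HttpRedis module (read path)
}

location = /redis_put {
    internal;
    set_unescape_uri $exptime $arg_exptime;
    set_unescape_uri $key $arg_key;
    set_md5 $redis_key $key;
    redis2_query set $redis_key $echo_request_body;
    redis2_query expire $redis_key $exptime;
    redis2_pass 127.0.0.1:6379;        # redis2 module (write path)
}

location / {
    set $key $request_uri;
    set_escape_uri $escaped_key $key;
    # Try Redis first; on a miss, run the uWSGI request below and
    # store its response back into Redis for 120 seconds.
    srcache_fetch GET /redis_get $key;
    srcache_store PUT /redis_put key=$escaped_key&exptime=120;

    include uwsgi_params;
    uwsgi_pass 127.0.0.1:3031;
}
```

Unlike uwsgi_cache, the cached pages here live entirely in Redis memory, and can be inspected or purged with ordinary Redis commands.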

How to configure NGINX with Memcached to serve HTML

I'm trying to configure NGINX with Memcached to serve HTML
I found the following Memcached module for NGINX:
http://wiki.nginx.org/NginxHttpMemcachedModule
But I can't seem to get NGINX to serve my HTML (e.g. index.html) files from Memcached from reading the tutorial above.
Anyone know what the NGINX config should be to get it to serve HTML from Memcached?
To use memcached with nginx like this, you need to populate memcached with the right key/value pairs. To do that, you need a @fallback location to do some work for you.
When a matching request comes in, nginx queries memcached with whatever you set $memcached_key to. If the value is found, it is sent to the browser. If not, the fallback location invokes your backend system to do two things:
1. generate a response and send it back to the browser;
2. send the response to memcached, setting the appropriate key/value pair.
The next time a request comes in for the same key it will be in memcached and will be served directly from there.
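The flow above can be sketched in nginx configuration roughly as follows, assuming memcached runs on 127.0.0.1:11211 and a backend on 127.0.0.1:8080 that, after rendering a page, stores it in memcached under its URI (the backend address and key scheme are illustrative):

```nginx
location / {
    set $memcached_key $uri;            # e.g. "/index.html"
    memcached_pass 127.0.0.1:11211;
    default_type text/html;             # memcached stores raw bytes, no type
    # Cache miss (404) or memcached unreachable: fall through to the backend.
    error_page 404 502 504 = @fallback;
}

location @fallback {
    # The backend renders the page, returns it to the browser, and is
    # responsible for writing it into memcached under $uri so the next
    # request for the same key is served straight from memcached.
    proxy_pass http://127.0.0.1:8080;
}
```

Note that nginx itself never writes to memcached; populating the cache is entirely the backend's job.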
