Clear NGINX cache using FTP

I have an NGINX server set-up to handle caching for a website (running elsewhere).
Works like a charm; however, we want administrators to have an option to flush the cache from their backoffice. I was thinking of handling this via FTP, by simply removing all the files from the cache directory.
I have set up the caching like this:
proxy_cache_path /var/cache/nginx/my_site levels=1:2 keys_zone=MY_SITE:8m max_size=2048m inactive=720m;
However, the files are stored with permissions 700. How can I tell NGINX to also give permissions to the group (770)? I would then add the FTP user I created to that group.
Any other suggestion for handling the flushing would be OK (I have read the other thread). The backoffice is hosted at another location, so I would have to use some remote technology.

You could use the proxy_cache_purge directive; see http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_purge for details (this directive appeared in 1.5.7).
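For reference, the configuration pattern from the linked docs looks roughly like this (a sketch: the upstream name is a placeholder, MY_SITE comes from your question; note that in stock nginx this directive is part of the commercial subscription, while the third-party ngx_cache_purge module provides similar functionality for open-source builds):
map $request_method $purge_method {
    PURGE   1;
    default 0;
}
server {
    location / {
        proxy_pass http://my_backend;        # placeholder upstream
        proxy_cache MY_SITE;
        proxy_cache_purge $purge_method;     # an HTTP PURGE request evicts the matching entry
    }
}
The backoffice could then flush entries remotely with a plain HTTP PURGE request, with no FTP access needed.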

Related

HipChat Server login screen limit

Is it possible to restrict access to the HipChat Server login screen for some networks for security reason?
I need to restrict the site root only.
Unfortunately, there's no feature right now that allows you to do that directly.
One way you could work around it is to write a script that updates the nginx configuration to add IP filtering. This question proposes a method to achieve something similar to what you describe (you would need to customize the script to fit HipChat Server's nginx configuration, though):
cat /var/www-allow/client1-allow.conf
allow 192.168.1.1;
allow 10.0.0.1;
cat /etc/nginx/sites/client1.conf
...
server {
include /var/www-allow/client1-allow.conf;
deny all;
}
Try the script manually. Once it works, move the script to /home/admin/startup_scripts/ipfilter (keep the file without extension, and make it executable), so that your configuration stays after reboot and upgrade (/home/admin/startup_scripts contains a few examples of different scripts).
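A minimal sketch of what such a startup script could look like (the paths and the reload command are assumptions based on the answer above; adapt them to the appliance's actual layout):
#!/bin/sh
# /home/admin/startup_scripts/ipfilter -- hypothetical example
# Recreate the allow list on boot and reload nginx to apply it.
cat > /var/www-allow/client1-allow.conf <<'EOF'
allow 192.168.1.1;
allow 10.0.0.1;
EOF
nginx -s reload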

How to clear phpFastCache when path set to /tmp/

I'm using phpFastCache in a frontend application, setting the path to the server's "/tmp/" directory:
phpFastCache::setup('path',"/tmp/");
I do not want to use phpFastCache's automatically found cache directory, because it clutters my home directory with an extra directory for every domain through which users reach the application (several are connected).
In the backend I would like to display cache statistics and be able to clear the cache. This no longer works now that I have set /tmp/ as the cache path: statistics show up empty and the cache is not cleared. I did configure the cache directory to the same "/tmp/" in the backend application as well.
How can phpFastCache be configured to be able to achieve this?
After looking at the phpFastCache code, I'm able to answer my own question:
To achieve what I wanted (have only ONE cache directory, regardless of the domain used, and be able to list statistics and clear the cache from a separate application) I had to make two config settings:
phpFastCache::setup('path', '/path-to-my-home-dir');
phpFastCache::setup('securityKey', 'phpfastcache');
I'm setting these identically in both my frontend- and backend-applications.
This will make phpFastCache use /path-to-my-home-dir/phpfastcache as its only cache-directory.
Had I not set the 'securityKey', phpFastCache would have generated one from the current domain (in most cases), so my backend application would have only "seen" the part of the cache residing in the directory for the currently used domain.
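Put together, a minimal sketch of the shared bootstrap (the stats() and clean() calls are assumptions based on the classic phpFastCache 4.x file driver; check the method names against your version):
<?php
// Shared bootstrap for BOTH the frontend and backend applications.
phpFastCache::setup('path', '/path-to-my-home-dir');
phpFastCache::setup('securityKey', 'phpfastcache'); // fixed key instead of a per-domain one
$cache = phpFastCache('files');
// Backend only: inspect and flush the shared cache directory.
$stats = $cache->stats(); // assumed 4.x API call for cache statistics
$cache->clean();          // assumed 4.x API call; empties /path-to-my-home-dir/phpfastcache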

How do I disable Symfony from writing _sess files to my /tmp directory

I am new to Symfony and am responsible for a site that I didn't build. For some reason the site is on a live server but running in dev mode; I'm not sure why.
That aside: the website keeps writing _sess files to my /tmp directory. The contents of each _sess file are exactly the same. See below:
_symfony2|a:3:{s:10:"attributes";a:0:{}s:7:"flashes";a:0:{}s:6:"locale";s:2:"en";}
Do I really need all of these files? Can anyone suggest a way of disabling this feature?
Thanks in advance
The default session storage of Symfony2 writes the session information to files. The location these files are written to is determined by the config parameter framework.session.save_path. The default value for this is %kernel.cache_dir%/sessions, which means that in a default installation of Symfony the session files are written to the cache directory for the environment.
However, this can be a problem as the cache directory has to be cleared each time an app is deployed, thus logging all the users out. Therefore presumably your app has been configured (most likely in config.yml) to store the session files in /tmp.
As I understand it, sessions that have expired should be garbage-collected at some point. Symfony also has some config params that affect this - see the FrameworkBundle Configuration. I don't know how much traffic your website has but obviously you do need the session files for active sessions. If you think you have a lot of expired sessions you could try tweaking the gc config params.
Alternatively, if having the session files in /tmp is specifically the problem, you could relocate them (by changing the value of framework.session.save_path) or use PdoSessionHandler to store sessions in the database.
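For example, a minimal config.yml sketch (option names are from the Symfony2 FrameworkBundle reference; the values are only illustrative):
# config.yml
framework:
    session:
        save_path: "%kernel.cache_dir%/sessions"   # or any other writable path
        gc_probability: 1     # with gc_divisor below: ~1% of requests trigger GC
        gc_divisor: 100
        gc_maxlifetime: 1440  # seconds of inactivity before a session may be collected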
I have this problem with symfony 1.4.20 on a web site I inherited.
It is writing files to
/var/lib/php/sessions
every second, until the server runs out of inodes.
I've tried changing settings in settings.yml and app.yml, and PHP session variables.
Nothing seems to be working, though; the only way I can stop it is to change the ownership of /var/lib/php/sessions to root, which prevents any session files from being created.

How to list files in a directory on an https server

I have set up a connection with a rather large https server and I am able to download files if I know their name and location.
However, what I would like to do is search through the HTTPS file server and pull out only HTML files. I know how to do this in normal directories, but is there a way to list the files and directories on an HTTPS file server, kind of like you would with ls or dir?
I am unfamiliar with HTTP servers in general, so explanations are appreciated.
Thanks!
To list files, or to be able to access them over HTTP from such a list, you need a CGI, or some sort of plugin, that will give you a listing of the available directories.
That isn't allowed by default, as it can be a major security hole on a system. Imagine the problems someone could cause if they could browse through the /etc hierarchy on a *nix system and retrieve the password information, or through database files, and so on.
So, by default no browsing of the file system is allowed. You can enable it in many different ways, depending on the HTTP server and the modules supplied with it or added to it.
Writing such an interface isn't that hard either, but it's better to rely on pre-built wheels than to reinvent your own.
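With nginx, for example (used elsewhere on this page), a directory listing can be switched on per location; a minimal sketch:
location /files/ {
    autoindex on;               # emit an HTML listing of the directory
    autoindex_exact_size off;   # show human-readable file sizes
}
Note that it is off by default for exactly the security reasons described above.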

WindowsAzure: Is it possible to set directory permissions within the web.config?

A PHP script of mine wants to write into a log folder; the resulting error is:
Unable to open the log file "E:\approot\framework\log/dev.log" for writing.
When I set the write permissions for the WebRole user RD001... manually, it works fine.
Now I want to set the folder permissions automatically. Is there an easy way to get this done?
Please note that I'm very new to IIS and the stuff around it, so I would appreciate precise answers. Thx.
Short/Technical Response:
You could probably set permissions on a particular folder using full trust and a startup task. However, you'd need to account for a stateless OS and changing drive letters (possible, though not likely) in this script, which would make it difficult. Also, local storage is not persisted, so you'd have no way to ensure this data stayed in place across a reboot.
Recommendation: Don't write local, read below ...
EDIT: Got to thinking about this, and while I still recommend against it, there is a 3rd option: you can allocate local storage in the service config and then access it from PHP using a DLL reference, which gives you access to that folder. Please remember local storage is not persisted, so it's gone after a reboot.
Service Config for local:
http://blogs.mscommunity.net/blogs/dadamec/archive/2008/12/11/azure-reading-and-writing-with-localstorage.aspx
Accessing config from php:
http://phpazure.codeplex.com/discussions/64334?ProjectName=phpazure
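For completeness, the local-storage allocation described in the first link is declared in ServiceDefinition.csdef roughly like this (the role name, resource name, and size are placeholders):
<WebRole name="MyWebRole">
  <LocalResources>
    <LocalStorage name="LogStorage" cleanOnRoleRecycle="true" sizeInMB="128" />
  </LocalResources>
</WebRole>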
Long / Detailed Response:
In Azure, you really are encouraged to approach things as a platform and not as "software on a server". What I mean is that ideas such as "write something to a local log file" are somewhat incompatible with the cloud "idea". Depending on your usage, you could (and should) convert this script to output this data to some cloud-based or external storage, rather than just placing it on the disk.
I would suggest modifying this script to leverage the PHP Azure SDK and write these log entries out to table or blob storage in Azure. If this sounds good, please provide the PHP and I can give an exact example.
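In the meantime, a rough sketch using the phpazure SDK linked above (the account name, key, and container are placeholders, and the method names should be verified against your SDK version):
<?php
require_once 'Microsoft/WindowsAzure/Storage/Blob.php';
// Placeholders -- substitute your real storage account credentials.
$storageClient = new Microsoft_WindowsAzure_Storage_Blob(
    'blob.core.windows.net', 'youraccount', 'yourkey');
// One-time setup; skip (or wrap in try/catch) if the container already exists.
$storageClient->createContainer('logs');
// Instead of writing dev.log to the local disk, stage the entry in a temp
// file and push it to blob storage.
$tmp = tempnam(sys_get_temp_dir(), 'log');
file_put_contents($tmp, date('c') . " something happened\n");
$storageClient->putBlob('logs', 'dev-' . date('Ymd-His') . '.log', $tmp);
unlink($tmp);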
The main reason for that (besides pushing the cloud idea) is that in Azure, you cannot assume the host machine ("role instance") will maintain an OS state, so while you can set some things such as folder permissions, you can't rely on them sticking that way. You have no real way to guarantee those permissions won't be reset when the fabric has to update your role and react to some lower level problem. For example, a hard-drive cage on the rack where your current instance lives could fail. If the failure were bad enough, the Fabric controller would need to rebuild your instance. When that happens, your code is moved to an entirely different server, so the need would arise to re-set those permissions. Also, depending on the changes, the E:\ could all of a sudden need to be the F:\ or X:\ drive and you wouldn't know.
It's much better to pretend (at some level) that your application is running "in Azure" and not "on a server in Azure", so you make no assumptions about the hosting environment. Anything you need outside of your code (data, logs, audits, etc.) should be stored somewhere you can control (Azure Storage, an external call-out, etc.).
