I'm running Redis locally and have multiple machines communicating with Redis on the same port -- any suggestions for good ways to lock down access to Redis? The server runs on Mac OS X. Thank you.
Edit: This is assuming I do not want to use the built-in (non-backwards-compatible) Redis requirepass directive in the config.
On EC2 we lock down the machines that can make requests to the Redis port on our Redis box so that only our app box can reach it (we also only use it to store non-sensitive data).
Another option would be not to expose the Redis port externally at all, and instead require port forwarding through an SSH tunnel. Then you could only allow requests coming through the tunnel, and only allow SSH with a known key.
You'd pay the SSH overhead penalty, but maybe that's acceptable for your scenario.
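A minimal sketch of that setup, assuming Redis is bound to loopback on the server (the hostname and user below are placeholders):

```shell
# In redis.conf on the server, bind Redis to loopback only,
# so nothing can reach it except through the tunnel:
#   bind 127.0.0.1

# On each client machine, forward local port 6379 to the server's Redis:
ssh -N -L 6379:127.0.0.1:6379 deploy@redis-host.example.com

# The app then talks to Redis as if it were local:
redis-cli -h 127.0.0.1 -p 6379 ping
```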
There is a simple requirepass directive in the configuration file which allows access only to clients that authenticate via the AUTH command. I recommend reading the docs on this command, particularly the "note" section.
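For completeness, a sketch of what that looks like (the password is a placeholder, and note the caveat from the docs: AUTH sends the password in cleartext, so it should be combined with network-level protection):

```shell
# In redis.conf:
#   requirepass some-long-random-string

# Clients must then authenticate before issuing other commands:
redis-cli
#   127.0.0.1:6379> AUTH some-long-random-string
#   OK
#   127.0.0.1:6379> GET mykey
```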
From time to time I have to use a proxy server to access the web. Is there a way to tell the Redis client (redis-cli) not to use a direct connection but to go through a proxy?
Or are there any other clients that support a proxy?
You can create an SSH tunnel between your machine and the one hosting the Redis server:
ssh -L 6379:localhost:6379 user@remotehostname
(6379 is the default port for Redis)
You can also use Redis Desktop Manager or FastoRedis; they support SSH tunneling too.
Alternatively, if you do not have the possibility to open an SSH tunnel, you could install Webdis on the same host as Redis and drive Redis from your web browser.
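As a sketch of the Webdis route (assuming Webdis runs on its default port 7379), it maps URL paths to Redis commands and returns JSON, per the Webdis README:

```shell
# SET a key, then read it back over plain HTTP:
curl http://127.0.0.1:7379/SET/hello/world
# {"SET":[true,"OK"]}
curl http://127.0.0.1:7379/GET/hello
# {"GET":"world"}
```

Since it's just HTTP, any browser or proxy-aware HTTP client works with it.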
I am running Neo4j as an embedded service in Jetty / a webapp, but for support purposes I need shell access to it. I can enable the remote shell using the approach described here, but because I am using shared hosting this does not feel secure enough; I would prefer some additional protection, e.g. a username/password. Is that possible? The Neo4j docs on securing the server only seem to apply to the web admin interface.
There is no authentication in remote shell.
The way to secure access is to protect the remote shell port using iptables, and to access the shell from outside using SSH port forwarding or a VPN.
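A sketch of that combination, run as root on the host (1337 is the default Neo4j remote shell port; adjust if you changed it, and note the hostname is a placeholder):

```shell
# Accept remote-shell connections only from localhost, drop everything else:
iptables -A INPUT -p tcp --dport 1337 -s 127.0.0.1 -j ACCEPT
iptables -A INPUT -p tcp --dport 1337 -j DROP

# From your workstation, reach the shell through an SSH tunnel instead:
ssh -N -L 1337:127.0.0.1:1337 user@neo4j-host.example.com
```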
If running in a shared hosting environment you need to take care that the remote shell port is not accessible by others. This can be done e.g. by running Neo4j in an LXC container, e.g. using Docker.
And if you run the Neo4j server, you can use the REST-based endpoint for the Neo4j shell, which is also protected by the basic-auth user authentication that you can put in front of the server.
E.g. by something like this:
https://gist.github.com/jexp/8213614
I've got a Django project on Heroku and it uses a Postgres database on Heroku (EC2). It all works fine, but on one computer I don't have access to Postgres port 5432, so I need to set up a tunnel from my computer to there. Is that possible?
You will need to have some sort of access to an intermediate host to make it possible. Heroku does not support it out of the box.
Corkscrew tunnels SSH through an HTTP proxy. On top of that you can run a transparent proxy like tsocks, so applications don't necessarily have to know about the firewall at all.
All of this applies to Linux and possibly Mac. On Windows you can pipe your connection through PuTTY.
My situation is as follows: an application I am working on uses connections to multiple database servers (MySQL). I work locally, and these database servers do not allow connections from arbitrary hosts, so I have set up a local test server. How do I redirect all the outgoing traffic to these servers (port 3306) to this local server?
As far as I know - even though I have never used it myself - you can use the ipfw tool on OS X (or pf/pfctl on newer releases, where ipfw was removed) for similar tasks as you would use iptables for (filtering, address translation etc.). Here you find some more hints: OSX pfctl manpage
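Very much a sketch, and only for older OS X releases where ipfw still exists: a classic ipfw forward rule that redirects this machine's outgoing MySQL connections to the local test server (run as root; rule number 100 is arbitrary).

```shell
# Redirect all locally-originated traffic to port 3306
# to the MySQL server on 127.0.0.1 instead:
sudo ipfw add 100 fwd 127.0.0.1,3306 tcp from me to any dst-port 3306
```

On newer systems you would express the equivalent redirect with a pf `rdr` rule loaded via pfctl, per the manpage linked above.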
I will be building server/client software on Windows, where many machines need to communicate with a PostgreSQL database running on the server. This is C++ software, so I will use libpq to connect to the database.
If I do this, will there be issues with the firewall? I'd like to make configuration as easy as possible and not have users open up firewall ports or disable their firewall.
If I do need to open up firewall ports, can I use WCF to get around the issue? Basically, send a command to the server using WCF, run the PostgreSQL command locally, and get the result back (I have never used WCF but understand that it can communicate over HTTP port 80).
PostgreSQL typically listens on port 5432, which is not open by default in the Windows firewall. But the only machine where the firewall would need to be re-configured is the one where PostgreSQL is running. If you have many client machines, none of them should require firewall changes (unless they have restrictions on outbound traffic, which is rare).
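A sketch of the one-time server-side setup (the subnet below is an example; run the netsh command from an elevated prompt):

```shell
# On the PostgreSQL server only: open TCP 5432 in the Windows firewall.
netsh advfirewall firewall add rule name="PostgreSQL" dir=in action=allow protocol=TCP localport=5432

# Also make sure PostgreSQL accepts non-local connections.
# In postgresql.conf:
#   listen_addresses = '*'
# In pg_hba.conf, permit the client machines' subnet, e.g.:
#   host  all  all  192.168.1.0/24  md5
```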
Hope this helps.
You can also configure SSL connections to ensure better security.