How to secure access to neo4j remote shell? - shell

I am running neo4j as an embedded service in Jetty / webapp, but for support purposes I need shell access to it. I can enable the remote shell using the approach described here, but because I am using shared hosting this does not feel secure enough; I would prefer some additional protection, e.g. username/password. Is that possible? The Neo4j docs on securing the server only seem to apply to the web admin interface.

There is no authentication in the remote shell.
The way to secure access is to protect the remote shell port using iptables and access the shell from outside using SSH port forwarding or a VPN.
If running in a shared hosting environment you need to take care that the remote shell port is not accessible by others. This can be done e.g. by running Neo4j in an LXC container, e.g. using docker.io.

And if you run the server, you can use the REST-based endpoint for the Neo4j shell, which is also protected by the basic-auth user authentication that you can put in front of the server.
E.g. by something like this:
https://gist.github.com/jexp/8213614
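The two answers above both come down to the same check: is the shell port bound where other tenants can reach it, and if so, restrict it to loopback and reach it through an SSH tunnel. The script below is an added example, not code from either answerer or from the Neo4j project. It parses a constructed sample of `ss -ltnH` output to list listeners bound to a wildcard address, then prints the iptables rules and the `ssh -L` command that the first answer describes. It assumes the remote shell listens on TCP 1337 (its documented default; change SHELL_PORT if yours differs) and prints the iptables rules rather than applying them, so it runs without root.

```shell
#!/bin/sh
# Added example: audit which TCP listeners are reachable from other hosts and
# emit the iptables / ssh commands that confine a remote-shell port to loopback.
# Assumption: the Neo4j remote shell listens on TCP 1337 (documented default).
SHELL_PORT=1337

# Constructed sample of `ss -ltnH` output (local address is field 4).
# On a real host replace this with: SAMPLE_LISTENERS=$(ss -ltnH)
SAMPLE_LISTENERS='LISTEN 0 128 127.0.0.1:7474 0.0.0.0:*
LISTEN 0 50 *:1337 *:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 [::1]:5432 [::]:*'

# exposed_listeners LISTENER_TEXT: print the port of every listener bound to a
# wildcard address (0.0.0.0, *, [::], ::), i.e. reachable by other hosts.
exposed_listeners() {
    printf '%s\n' "$1" | awk '{
        addr = $4
        port = addr; sub(/.*:/, "", port)      # text after the last colon
        host = addr; sub(/:[^:]*$/, "", host)  # text before the last colon
        if (host == "0.0.0.0" || host == "*" || host == "[::]" || host == "::" || host == "")
            print port
    }'
}

# port_is_exposed PORT LISTENER_TEXT: exit 0 if PORT is bound to a wildcard address.
port_is_exposed() {
    exposed_listeners "$2" | grep -qx "$1"
}

# firewall_rules PORT: iptables rules that accept PORT only from the loopback
# interface and drop every other source (printed for review, not applied).
firewall_rules() {
    printf 'iptables -A INPUT -p tcp --dport %s -i lo -j ACCEPT\n' "$1"
    printf 'iptables -A INPUT -p tcp --dport %s -j DROP\n' "$1"
}

# tunnel_command PORT SSH_TARGET: the local port-forward that reaches the
# loopback-only shell port from a workstation over SSH.
tunnel_command() {
    printf 'ssh -N -L %s:127.0.0.1:%s %s\n' "$1" "$1" "$2"
}

if port_is_exposed "$SHELL_PORT" "$SAMPLE_LISTENERS"; then
    echo "port $SHELL_PORT is bound to a wildcard address; suggested rules:"
    firewall_rules "$SHELL_PORT"
    echo "then connect from your workstation with:"
    tunnel_command "$SHELL_PORT" "user@neo4j-host.example"   # hypothetical host
fi
```

On the constructed sample this reports ports 1337 and 22 as exposed and 7474 and 5432 as loopback-only. Once the DROP rule is in place, `ssh -N -L 1337:127.0.0.1:1337 user@host` makes the shell reachable on your own machine's port 1337 while the server accepts the connection from its loopback interface only.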

Related

How to connect a Laravel Sail instance with an SSH tunnel?

I have a Laravel app which needs to connect to a secure external API with very strict access requirements. There is a handler hosted on AWS which has a bunch of signed certificates etc. The only way to connect to that API is via that specific server due to those requirements.
Now, to test things on my local machine, I do the following:
1. SSH to the server using the -D flag to set up a SOCKS proxy.
2. Use this socks-to-http package to convert the proxy.
3. Set up Postman's proxy settings to use that HTTP proxy.
That all works fine and I can complete the requests as expected.
However, I'd like to be able to use the proxy in my local Laravel environment too, for which I use Sail.
The problem is that I'm unsure of how to get the container to interact with the proxy. Using the method above in my local machine, I can cURL the required endpoint just fine, but if I try to do it via the container itself, it refuses to connect.
Any help would be appreciated!

Bypass IP restriction SSH

I have a Laravel app with CI/CD setup at BuddyWorks which lets you create deployment pipelines.
I want to use SSH action to run some config scripts (artisan...) after uploading the source code.
Unfortunately, it turned out that SSH connectivity to the hosting server is restricted to my home country, ergo I can’t use BuddyWorks to do the job for me. The hosting company refused my request to whitelist BuddyWorks IPs.
So here I am, looking for a solution to bypass the restriction.
Currently, I’m investigating SSH reverse for , but not sure I’m on good path.
Any help would be appreciated!
I ended up writing a small HTTP-to-SSH proxy server with basic authentication which receives commands from the pipeline via POST requests, connects to the host server via SSH, executes the commands and logs to Slack.

jelastic Tomcat 8 access to storage

I am evaluating jelastic for use with Tomcat 8 and Postgres 9.5.
Does a user have ssh access to the instance that is running these services?
Does Tomcat have access to the local storage, or can you attach storage that Tomcat can create and read files?
Does a user have ssh access to the instance that is running these services?
Yes, a user has SSH access to any instance. The authentication procedure in the Jelastic SSH Gateway is divided into two independent parts:
connection from end user to Gateway (external authentication)
connection from Gateway to users’ container (internal authentication)
Both parts of the authentication procedure are based on a standard SSH protocol, using public/private keypairs.
With Jelastic SSH Gateway, you can easily access:
the whole account where you can navigate across your environments and containers using an interactive menu without extra authentication
separate containers directly while working with them remotely via additional tools (e.g. Capistrano) or using SFTP and FISH protocols.
While accessing containers via SSH, a user receives all required permissions and additionally can manage the main services with sudo commands of the following kind (and others):
sudo /etc/init.d/jetty start
sudo /etc/init.d/mysql stop
sudo /etc/init.d/tomcat restart
sudo /etc/init.d/memcached status
sudo /etc/init.d/mongod reload
sudo /etc/init.d/nginx upgrade
sudo /etc/init.d/httpd help
Using our documentation you’ll find out how to:
generate SSH key
add SSH key
access environments and containers
Does Tomcat have access to the local storage, or can you attach storage that Tomcat can create and read files?
Jelastic supports local storage and the dedicated storage container.
Jelastic Dedicated Storage Container is a special type of node, based on Docker centos7 image. Being developed specially for data storing, it provides a number of the appropriate benefits:
being delivered with the corresponding software (i.e. NFS & RPC) already pre-installed, so such a container can be used as a storage immediately after the creation without any additional configurations required
compared to other common-purpose Jelastic nodes, the Dedicated Storage Container provides an enlarged amount of disk space, which allows persisting comparatively bigger data volumes (the particular value depends on your service provider’s settings and can vary according to your account type).
Some tips on using this container type, and examples of where it can be leveraged best, are given in the corresponding use case description.
And below we'll consider how to set up such a storage server inside your cloud, and some tips on its management:
Storage container creation
Storage container management
If you don't have root permissions, please contact your hosting provider.
Whether applications you run on Tomcat have access to storage on the running system is based on several things. There are layers of security. Tomcat literally has access to whatever the user you run it under has access to. That's true in both Windows and Linux environments. A running service has operable services defined as soon as you decide to log in.

Set up an EC2 Server to run as Proxy usable through Ruby

I need to access a site from behind a proxy server. I can do it from within an EC2 instance, but it would be really nice if I could use my own EC2 server and, when using nokogiri or mechanize, be able to set the instance as my proxy. I have tried enabling HTTP requests and SSH requests from any source. When I try to connect to the server through Ruby running this code:
require 'open-uri'   # provides the proxy-aware open() used below
open('http://example.com/', :proxy => 'http://ec2-54-242-232-173.compute-1.amazonaws.com:80')
I get back either a connection error (2) or an error saying that the end of the file has been reached.
I have tried basic authentication with valid credentials as well.
Can someone try and walk me through the process of setting up an ec2 server and using it as a proxy server through mechanize?
For your case you need to do a few things:
1. Make sure your EC2 instance is running some sort of proxy server (Squid is good).
2. Make sure your instance and Squid (or whatever) are set to accept external connections.
3. Configure your Ruby script appropriately.
To set up the EC2 instance, use this guide: http://hackingonstuff.net/post/23929749838/setting-up-a-squid-proxy-on-aws
To set up the script, just make sure it uses the instance's public DNS name and the port your proxy service is listening on. The public DNS name/IP changes each time you launch the instance, so just be sure not to overlook that small but important detail. :)

What are some options for securing redis db?

I'm running Redis locally and have multiple machines communicating with redis on the same port -- any suggestions for good ways to lock down access to Redis? The database is run on Mac OS X. Thank you.
Edit: This is assuming I do not want to use the built-in (non-backwards-compatible) Redis requirepass directive in the config.
On EC2 we lock down the machines that can make requests to the redis port on our redis box to only be our app box (we also only use it to store non-sensitive data).
Another option could be to not open up the redis port externally, but require doing port forwarding through an ssh tunnel. Then you could only allow requests coming through the tunnel and only allow ssh with a known key.
You'd pay the ssh penalty, but maybe that's ok for your scenario.
There is a simple requirepass directive in the configuration file which allows access only to clients who authenticate through the AUTH command. I recommend reading the docs on this command, namely the "note" section.
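The answers above rely on two redis.conf settings: a `bind` limited to loopback (so only clients coming through the SSH tunnel reach the port) and, if you are willing to use it, a `requirepass` so clients must AUTH. The script below is an added example, not code from any answerer or from the Redis project: it audits a config for both settings and reports findings, using two constructed configs (one weak, one hardened) embedded inline. The hardened config's password is a placeholder, not a real secret.

```shell
#!/bin/sh
# Added example: audit a redis.conf for a loopback-only `bind` and a set
# `requirepass`. Both configs below are constructed samples.
WEAK_CONF='port 6379
bind 0.0.0.0
# requirepass is commented out, so anyone who reaches the port has full access
# requirepass foobared'

HARDENED_CONF='port 6379
bind 127.0.0.1
requirepass REPLACE_WITH_A_LONG_RANDOM_SECRET'

# conf_value CONF DIRECTIVE: print the value of the last active (uncommented)
# occurrence of DIRECTIVE, or nothing if it is unset. Commented lines start
# with "#", so their first field never equals the directive name.
conf_value() {
    printf '%s\n' "$1" | awk -v d="$2" '
        $1 == d { v = $0; sub(/^[^ \t]+[ \t]+/, "", v) }
        END { if (v != "") print v }'
}

# audit_redis_conf CONF: print one finding per line (nothing when the config
# passes). Returns 1 if any finding was produced, 0 otherwise.
audit_redis_conf() {
    findings=0
    bind=$(conf_value "$1" bind)
    case "$bind" in
        127.0.0.1|::1|"127.0.0.1 ::1"|"127.0.0.1 -::1") ;;   # loopback only
        "") echo "bind unset: Redis listens on every interface"; findings=1 ;;
        *)  echo "bind $bind: port reachable from other hosts"; findings=1 ;;
    esac
    if [ -z "$(conf_value "$1" requirepass)" ]; then
        echo "requirepass unset: clients connect without AUTH"; findings=1
    fi
    return $findings
}

echo "weak config:"
audit_redis_conf "$WEAK_CONF" || true
echo "hardened config:"
audit_redis_conf "$HARDENED_CONF" && echo "no findings"
```

With the weak sample the audit prints two findings; with the hardened one it prints none. After binding to 127.0.0.1, a client on another machine reaches Redis the way the SSH-tunnel answer describes, e.g. `ssh -N -L 6379:127.0.0.1:6379 user@redis-host` and then connecting to localhost:6379, which also leaves the key-based SSH login as the only thing exposed to the network.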