How can docker-machine create an EC2 instance in a private subnet? - amazon-ec2

I have a bastion host in the public subnet through which I usually access the hosts in the private subnet. When I create a docker machine in the private subnet with the command below, it does not complete.
export server_name=tomcat-5
docker-machine create \
--driver amazonec2 \
--amazonec2-region us-west-2 \
--amazonec2-vpc-id vpc-8e5488ea \
--amazonec2-ami ami-6f69a25f \
--amazonec2-instance-type m3.medium \
--amazonec2-zone b \
--amazonec2-subnet-id subnet-52f5dd54 \
--amazonec2-security-group tomcat-sg-SecurityGroup-JHHNDKKL4LO1 \
--amazonec2-tags Name,${server_name} \
--amazonec2-root-size 10 \
--amazonec2-ssh-user ec2-user \
--amazonec2-ssh-keypath ~/.ssh/id_rsa \
--amazonec2-private-address-only \
${server_name}
It says
Running pre-create checks...
Creating machine...
(tomcat-5) Launching instance...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
and after that it just hangs forever. Obviously it does not know how to reach the server through the bastion. And I cannot name the server in a way that lets docker-machine leverage my .ssh/config (if it even does that).
It is hard to imagine that others have not run into this. I ultimately plan to bring up these servers using docker-compose, so if I can do that without docker-machine, that is fine too.
What am I missing?

I was able to get more information about the problem by turning on debug. Essentially
docker-machine --debug....
This allowed me to see that docker-machine was trying to ssh into the instance as ec2-user@10.x.y.z with these parameters
{[-F /dev/null -o PasswordAuthentication=no -o StrictHostKeyChecking=no
-o UserKnownHostsFile=/dev/null -o LogLevel=quiet -o ConnectionAttempts=3
-o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none ec2-user@10.x.y.z
-o IdentitiesOnly=yes -i /Users/jvarg/.docker/machine/machines/tomcat-2/id_rsa
-p 22] /usr/bin/ssh <nil>}
Getting closer. I was able to ssh directly into the machine by adding a proxy entry for my subnet to my ~/.ssh/config. But docker-machine is not using that config file; as you can see, it passes "-F /dev/null".
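For reference, this is roughly the kind of proxy entry I mean; the host names and IPs below are placeholders, not my real values:
Host bastion
  HostName <bastion-public-ip>
  User ec2-user
  IdentityFile ~/.ssh/id_rsa
Host 10.*
  User ec2-user
  IdentityFile ~/.ssh/id_rsa
  # route every ssh connection into the private 10.x range through the bastion
  ProxyCommand ssh -W %h:%p bastion
With an entry like this, plain ssh 10.x.y.z works from the laptop, but docker-machine still ignores it because of the hardcoded -F /dev/null.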
I looked in the docker machine code and this seems to be hardcoded. https://github.com/docker/machine/blob/df2d3811ca8bc9ddf6896b4a4154b9277826b441/libmachine/ssh/client.go#L69
I have created an issue on GitHub to follow up: https://github.com/docker/machine/issues/3794
Update: While we wait for that PR to be accepted (it is now stuck with merge conflicts), here is a lame workaround: log in to the bastion and use it as the orchestrator instead of your laptop. Install docker and docker-machine on the bastion. Also make sure you create a new key pair there, so as not to compromise your own. I did not install an ssh agent, so if you go the same way, make sure the key pair has no passphrase. With this you will finally be able to ssh into the new machine, but it will then fail with
notifying bugsnag: [Error creating machine: Error running provisioning: exit status 1]
While it is not obvious from that error, a closer look at the debug output showed that provisioning was failing because the new machine could not reach the internet. This is usually not an issue for most people; in my company's case we normally go through an http_proxy. I resolved it by setting up a NAT gateway.
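If you go the NAT gateway route, the AWS CLI steps look roughly like this; the IDs below are placeholders and your subnet, route table and allocation IDs will differ:
# allocate an Elastic IP for the NAT gateway
aws ec2 allocate-address --domain vpc
# create the NAT gateway in a public subnet, using the allocation ID returned above
aws ec2 create-nat-gateway --subnet-id subnet-aaaaaaaa --allocation-id eipalloc-bbbbbbbb
# point the private subnet's default route at the NAT gateway
aws ec2 create-route --route-table-id rtb-cccccccc --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0123456789abcdef0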
The next error was because the bastion could not reach the new machine on port 2376. Normally docker-machine creates a security group with 2376 open to the world, but my company frowns on open-to-the-world ports, so I had updated my security group to allow access only from the bastion. It still needs some tweaking.
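For what it's worth, a sketch of the kind of rule I mean, opening 2376 only to the bastion's security group instead of to 0.0.0.0/0 (both group IDs are placeholders):
# allow the Docker API port only from instances in the bastion's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-11111111 \
  --protocol tcp --port 2376 \
  --source-group sg-22222222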

I installed an OpenVPN server on the bastion (actually a NAT gateway). Then my client connects to the OpenVPN server. The OpenVPN server pushes a route allowing the client to access the private subnet in the VPC.
This works transparently. I can use docker-machine to create nodes on the private network. I can even start docker on my client and join an existing swarm.
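A minimal sketch of the server-side piece that does the route push, assuming the VPC's private range is 10.0.0.0/16 (adjust to your CIDR):
# /etc/openvpn/server.conf on the bastion/NAT instance
# push a route for the VPC's private range to every connecting client
push "route 10.0.0.0 255.255.0.0"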

Related

How to access phpMyAdmin from laptop via SSH tunnel through AWS bastion/jump server to EC2 instance using .ssh/config

Need to reach phpMyAdmin on an EC2 instance behind a bastion/jump server from a local laptop.
Looking to reduce these steps by using .ssh/config. The question is about finding the right configuration.
When connecting to EC2 without a public bastion server to jump through, this is the documented approach; it does not work in my case because our deployment uses a public-facing bastion:
https://docs.bitnami.com/aws/faq/get-started/access-phpmyadmin/
When you need to jump through a public-facing bastion, e.g.:
Local/Laptop ------> bastion/jumpserver -----> ec2
the reference link above does not cover this workflow, and documentation is sparse.
Documentation on the inbound/outbound rules needed for this is also sparse.
The preference is to use .ssh/config, set up like this:
Host bastionHostTunnel
  Hostname <publicBastionIp>
  User <bastionusername>
  ForwardAgent yes
  IdentityFile <local path to .pem file>

Host ec2Host
  Hostname <privateEC2IP>
  User <ec2 username>
  ForwardAgent yes
  IdentityFile <local path to .pem file>
  # -A Enable forwarding of the Authentication agent connection
  # -W used on older machines instead of -J to bounce through
  # %h the remote hostname
  # On Windows 10 (only?) it seems you must call ssh.exe instead of plain ssh
  ProxyCommand ssh.exe -A -W %h:22 bastionHostTunnel
I obviously left out the values in <> above, but I have them, and I have verified that a similar configuration works for SFTP with FileZilla.
Then, in a shell, call this to bind localhost:8888 (http://127.0.0.1:8888):
ssh ec2Host -D 8888
Then I ought to be able to open a browser and go to the following to access phpMyAdmin:
http://127.0.0.1:8888/phpmyadmin
The current issue is that this process hangs and possibly refuses the connection. This points to either a bad configuration above or incorrect inbound/outbound rules on the bastion, the EC2 instance, or both.
If anyone here has had a similar issue and was able to solve it, sharing the solution would be much appreciated. Any extra clues on debugging the overall process would also help.
I'm most curious whether it works if you specify everything on the command line... once you determine that works, you can start refactoring to move pieces into .ssh/config. It's usually easier for me to find configuration errors when everything is on the command line, plus I'm not sure I see the correct forwarding options listed there.
Unless I'm very mistaken, you don't need any reference to the ec2 host in your SSH config file, because you're using the jump machine to redirect localhost traffic there; you wouldn't be able to reach the ec2 host directly from your local machine with an SSH tunnel anyway.
There are many ways to do a tunnel, but when I do this, I use a command like ssh -L 8080:destination:80 -i <keyfile> me@jumpbox. destination must be reachable from jumpbox, which I can verify by first running ssh -i <keyfile> jumpbox and then, once on that machine, ssh destination. If there's a problem along the way, it's easier to debug these little steps (for instance, if I can't ssh manually to jumpbox, I know the tunnel will never work).
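Applied to the setup in the question, a full command-line version of the tunnel might look like this (a sketch only; the placeholders are the same ones used in the .ssh/config above, and phpMyAdmin is assumed to listen on port 80 on the EC2 instance):
# forward local port 8888 to the private EC2 instance's port 80, via the bastion
ssh -i <local path to .pem file> -N -L 8888:<privateEC2IP>:80 <bastionusername>@<publicBastionIp>
# then browse to http://127.0.0.1:8888/phpmyadmin
Note that -L gives a plain port forward the browser can hit directly, whereas -D 8888 creates a SOCKS proxy, which only works if the browser is configured to use it as such.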

Rancher: creating hosts on AWS EC2

I'm trying to add an EC2 host to my Rancher setup. I have seen this tutorial; however, I wanted to use docker-machine instead.
To that end, I have done the following:
MAC:~ user1$ docker-machine create -d amazonec2 --amazonec2-vpc-id vpc-84fd6de0 --amazonec2-region eu-west-1 --amazonec2-ami ami-c5f1beb6 Rancher-node-aws-01
Running pre-create checks...
Creating machine...
(Rancher-node-aws-01) Launching instance...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Error creating machine: Error detecting OS: Too many retries waiting for SSH to be available. Last error: Maximum number of retries (60) exceeded
Note: the AMI ID corresponds to rancheros-v0.7.0-hvm-1.
As you can see, I cannot SSH into the RancherOS (SSH port is open on AWS). Any ideas why this is?
The trick is to use an SSH user called 'rancher'. So the full command will be:
docker-machine create -d amazonec2 --amazonec2-vpc-id vpc-84fd6de0 --amazonec2-region eu-west-1 --amazonec2-ami ami-c5f1beb6 --amazonec2-ssh-user rancher Rancher-node-aws-01

access host's ssh tunnel from docker container

Using Ubuntu Trusty: there is a service running on a remote machine that I can access via port forwarding through an ssh tunnel, at localhost:9999.
I have a docker container running. I need to access that remote service via the host's tunnel, from within the container.
I tried tunneling from the container to the host with -L 9000:host-ip:9999, but accessing the service through 127.0.0.1:9000 from within the container fails to connect. To check whether the port mapping was active, I tried
nc -luv -p 9999 # at host
nc -luv -p 9000 # at container
following this, paragraph 2, but there was no communication at all, even when doing
nc -luv host-ip -p 9000
at the container
I also tried mapping the ports via docker run -p 9999:9000, but this reports that the bind failed because the host port is already in use (taken by the host's tunnel to the remote machine, presumably).
So my questions are
1 - How will I achieve the connection? Do I need to set up an ssh tunnel to the host, or can this be achieved with docker port mapping alone?
2 - What's a quick way to test that the connection is up? Via bash, preferably.
Thanks.
Using your host's network as the network for your containers via --net=host (or network_mode: host in docker-compose) is one option, but it has unwanted side effects: (a) you now expose the container's ports on your host system, and (b) you can no longer connect to those containers that are not mapped to the host network.
In your case, a quick and cleaner solution would be to make your ssh tunnel "available" to your docker containers (e.g. by binding ssh to the docker0 bridge) instead of exposing your docker containers in your host environment (as suggested in the accepted answer).
Setting up the tunnel:
For this to work, retrieve the ip your docker0 bridge is using via:
ifconfig
you will see something like this:
docker0 Link encap:Ethernet HWaddr 03:41:4a:26:b7:31
inet addr:172.17.0.1 Bcast:172.17.255.255 Mask:255.255.0.0
Now you need to tell ssh to bind to this IP and listen for traffic directed at port 9000, e.g.:
ssh -L 172.17.0.1:9000:host-ip:9999 <user>@<remote-machine>
Without setting the bind_address, :9000 would only be available to your host's loopback interface and not per se to your docker containers.
Side note: You could also bind your tunnel to 0.0.0.0, which will make ssh listen to all interfaces.
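To answer the second question in the post, a quick bash-only check from inside a container that the tunnel endpoint is reachable (this assumes the image ships bash; the /dev/tcp trick avoids needing netcat):
# open a shell in any running container
docker exec -it <container> bash
# inside the container: test the TCP connection to the tunnel endpoint
timeout 3 bash -c '</dev/tcp/172.17.0.1/9000' && echo open || echo closed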
Setting up your application:
In your containerized application, use the same docker0 IP to connect to the server: 172.17.0.1:9000. Traffic routed through your docker0 bridge will now also reach your ssh tunnel :)
For example, if you have a .NET Core application that needs to connect to a remote db exposed at :9000, your "ConnectionString" would contain "server=172.17.0.1,9000;".
Forwarding multiple connections:
When dealing with multiple outgoing connections (e.g. a docker container needs to connect to multiple remote DBs via tunnels), several valid techniques exist, but an easy and straightforward way is to simply create multiple tunnels listening for traffic arriving at different docker0 bridge ports.
Within your ssh tunnel command (ssh -L [bind_address:]port:host:hostport [user@]hostname), the port part of the bind address does not have to match the hostport of the host and can therefore be freely chosen by you. So within your docker containers, just send the traffic to different ports of your docker0 bridge, and then create several ssh tunnel commands (one for each port you are listening to) that intercept data at these ports and forward it to the different hosts and hostports of your choice.
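For instance, a rough sketch with two tunnels to two hypothetical database hosts (the names and ports are made up):
# tunnel 1: docker0 port 9001 -> PostgreSQL on db1.example.com
ssh -N -L 172.17.0.1:9001:localhost:5432 user@db1.example.com
# tunnel 2: docker0 port 9002 -> MySQL on db2.example.com
ssh -N -L 172.17.0.1:9002:localhost:3306 user@db2.example.com
# containers then connect to 172.17.0.1:9001 and 172.17.0.1:9002 respectively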
on MacOS (tested in v19.03.2),
1) create a tunnel on host
ssh -i key.pem username@jump_server -L 3336:mysql_host:3306 -N
2) from container, you can use host.docker.internal or docker.for.mac.localhost or docker.for.mac.host.internal to reference host.
example,
mysql -h host.docker.internal -P 3336 -u admin -p
note from docker-for-mac official doc
I WANT TO CONNECT FROM A CONTAINER TO A SERVICE ON THE HOST
The host has a changing IP address (or none if you have no network access).
From 18.03 onwards our recommendation is to connect to the special DNS
name host.docker.internal, which resolves to the internal IP address
used by the host. This is for development purpose and will not work in
a production environment outside of Docker Desktop for Mac.
The gateway is also reachable as gateway.docker.internal.
I think you can do it by adding --net=host to your docker run. But see also this question: Forward host port to docker container
I'd like to share my solution to this. My case was as follows: I had a PostgreSQL SSH tunnel on my host and I needed one of my containers from the stack to connect to a database through it.
I spent hours trying to find a solution (Ubuntu + Docker 19.03) and failed. Instead of doing voodoo magic with iptables or modifying the settings of the Docker engine itself, I came up with a solution and was shocked I hadn't thought of it earlier. The most important thing was that I didn't want to use host mode: security first.
Instead of trying to allow a container to talk to the host, I simply added another service to the stack which creates the tunnel, so other containers can talk to it easily without any hacks.
After configuring a host inside my ~/.ssh/config:
Host project-postgres-tunnel
  HostName remote.server.host
  User sshuser
  Port 2200
  ForwardAgent yes
  TCPKeepAlive yes
  ConnectTimeout 5
  ServerAliveCountMax 10
  ServerAliveInterval 15
And adding a service to the stack:
postgres:
  image: cagataygurturk/docker-ssh-tunnel:0.0.1
  volumes:
    - $HOME/.ssh:/root/ssh:ro
  environment:
    TUNNEL_HOST: project-postgres-tunnel
    REMOTE_HOST: localhost
    LOCAL_PORT: 5432
    REMOTE_PORT: 5432
  # uncomment if you wish to access the tunnel on the host
  #ports:
  #  - 5432:5432
The PHP container started talking through the tunnel without any problems:
postgresql://user:password@postgres/db?serverVersion=11&charset=utf8
Just remember to put your public key inside that host if you haven't already:
ssh-copy-id project-postgres-tunnel
I'm pretty sure this will work regardless of the OS used (MacOS / Linux).
I agree with @hlobit that @B12Toaster's answer should be the accepted answer.
In case anyone hits this problem but with a slightly different SSH tunnel setup, here are my findings. In my case, instead of creating a tunnel from the Docker host machine to the remote machine with ssh -L, I was creating a remote-forward SSH tunnel from the remote machine to the Docker host machine (ssh -R).
In this setup, sshd does NOT allow gateway ports by default, i.e. in /etc/ssh/sshd_config on the Docker host, GatewayPorts no should be changed to GatewayPorts yes or GatewayPorts clientspecified. I configured GatewayPorts clientspecified and set up the remote-forward SSH tunnel with ssh -R 172.17.0.1:dockerHostPort:localhost:sshClientPort user@dockerHost. Remember to restart sshd after changing /etc/ssh/sshd_config (sudo systemctl restart sshd).
Your Docker container should be able to connect to the Docker host at 172.17.0.1:dockerHostPort, and this in turn gets tunnelled back to the SSH client's localhost:sshClientPort.
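A quick sanity check on the Docker host, to confirm the forwarded port is actually bound to the bridge address rather than to loopback (ss is part of iproute2; substitute your dockerHostPort):
# list listening TCP sockets and look for 172.17.0.1:<dockerHostPort>
ss -tln | grep 172.17.0.1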
References:
https://www.ssh.com/ssh/tunneling/example
https://docs.docker.com/network/network-tutorial-host/
https://docs.docker.com/network/host/
My 2 cents for Ubuntu 18.04 - a very simple answer, no need for extra tunnels, extra containers, extra docker options or exposing host.
Simply put: when creating a reverse tunnel, make sure ssh binds to all interfaces, since by default it binds the reverse-tunnel ports to localhost only. For example, in PuTTY make sure the option Connection->SSH->Tunnels "Remote ports do the same (SSH-2 only)" is ticked.
This is more or less equivalent to specifying the binding address 0.0.0.0 for the remote part of the tunnel (more details here):
-R [bind_address:]port:host:hostport
However, this did not work for me unless I allowed the GatewayPorts option in my sshd server configuration. Many thanks to Stefan Seidel for his great answer.
In short: (1) you bind the reverse tunnel to 0.0.0.0, and (2) you let the sshd server accept such tunnels.
Once this is done I can access my remote server from my docker containers via the docker gateway 172.17.0.1 and port bound to the host.
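Put together, a rough OpenSSH equivalent of the PuTTY setup above (the port numbers are placeholders):
# on the Docker host, in /etc/ssh/sshd_config (then restart sshd):
#   GatewayPorts yes
# on the machine that runs the service, create the reverse tunnel bound to all interfaces:
ssh -N -R 0.0.0.0:9999:localhost:9999 user@docker-host
# containers on the Docker host can now reach the service at 172.17.0.1:9999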
On my side, running Docker in Windows Subsystem for Linux (WSL v1), I couldn't use the docker0 connection approach. host.docker.internal also doesn't resolve (latest docker version).
However, I found out I could directly use the host IP inside my docker container.
Get your Host IP (Windows cmd: ipconfig), e.g. 192.168.0.5
Bash into your container and test whether you can ping your host IP:
- docker exec -it d6b4be5b20f7 /bin/bash
- apt-get update && apt-get install iputils-ping
- ping 192.168.0.5
PING 192.168.0.5 (192.168.0.5) 56(84) bytes of data.
64 bytes from 192.168.0.5 : icmp_seq=1 ttl=37 time=2.17 ms
64 bytes from 192.168.0.5 : icmp_seq=2 ttl=37 time=1.44 ms
64 bytes from 192.168.0.5 : icmp_seq=3 ttl=37 time=1.68 ms
Apparently, in Windows, you can directly connect from within containers to the host using the official host ip.
In case anyone needs it (like I did), the solution for Windows and WSL is the same as @prayagupd mentioned for Mac OS:
Create an SSH tunnel to your remote service with whatever tool you prefer to whatever port you prefer, for example 3300.
Then, from the Docker container, you can connect to, for example, a MySQL DB on tunnel port 3300 using the following command:
mysql -u user -p -h host.docker.internal -P 3300
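For step 1, a sketch of what creating that tunnel could look like with plain OpenSSH (the host names and key path are placeholders; any tool that forwards a local port will do):
# bind local port 3300 and forward it to MySQL (3306) on the remote host
ssh -i <path-to-key> -N -L 3300:<mysql-host>:3306 <user>@<jump-server>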
An easy example to reproduce the situation and ssh to the host:
Run a container. Use --network="host".
docker container run --network="host" --interactive --tty --rm ubuntu bash
Now you can access your host using localhost
Now suppose your host machine is a Linux machine with a public/private key pair used to ssh into it. Copy the contents of your private key file and recreate the key file inside the container. (This is just a demonstration; it is not a good way to handle key files.)
Now ssh into your host. Use localhost to access it.
ssh -i key_file.pem ec2-user@localhost

Docker running inside vagrant + remote python debugging in Pycharm

I'm running docker on top of vagrant and would like to debug an application remotely using PyCharm running on Windows (which runs vagrant). Of course the docker host is then inside vagrant - not the same machine PyCharm is running on.
I have to specify the certificates folder and the docker-machine executable as local files/directories. Does this mean I cannot debug applications using PyCharm in this setup?
Of course I could ssh directly into the docker container, but then I lose the features PyCharm gives me.
PyCharm cannot remote-debug because it cannot connect to the code running in docker inside vagrant.
You need to bridge a port from docker to vagrant first.
First, find the vagrant IP and the docker IP (by default the vagrant IP is 10.0.2.2; you can see it when you run vagrant ssh).
Second, pick a port for debugging (e.g. 21000).
Then run these commands in a terminal:
vagrant ssh
sudo iptables -t nat -A PREROUTING -p tcp --dport 21000 -j DNAT --to-destination 10.0.2.2:21000
sudo iptables -t nat -A POSTROUTING -j MASQUERADE
Add this code to your Python file, replacing 172.19.0.1 with your docker IP (in vagrant):
import pydevd
pydevd.settrace('172.19.0.1', port=21000, suspend=False)
Set a breakpoint in the code and try to debug.
It is possible, but not recommended: it has the potential to introduce a number of problem spots longer term and brings an increased security risk, as per the docker documentation ...
If you are okay with the security risk, and if Docker Toolbox with boot2docker is not an option for your situation, then you will need to ensure the following (a rough sketch follows the list):
Docker client/server versions are identical
Port forwarding on your local vagrant box is set up
The docker daemon has a TCP binding, either as a replacement for the default unix socket binding or in addition to it
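A rough sketch of those last two points, with assumed port numbers (2375 is the conventional unencrypted Docker API port; use 2376 with TLS given the security caveat above):
# inside the Vagrant VM: bind the Docker daemon to TCP in addition to the unix socket
sudo dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
# in the Vagrantfile: forward that port to the Windows host, e.g.
#   config.vm.network "forwarded_port", guest: 2375, host: 2375
# on Windows, point the docker client (and PyCharm) at the VM:
#   set DOCKER_HOST=tcp://127.0.0.1:2375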

Getting some sort of authentication issue when deploying EC2 instances with Knife

I'm having some kind of authentication issue when trying to launch server instances in EC2 with the knife command.
I'm using a command like:
knife ec2 server create --availability-zone us-east-1d --node-name ES-test --flavor t1.micro --image ami-fd20ad94 --identity-file something-dev.pem --ssh-user ubuntu -r 'recipe[something-elasticsearch::default]'
And there are 2 points of failure. The first comes relatively early on.
Waiting for instance...........................
Subnet ID: subnet-61dfa849
Private IP Address: 10.0.0.43
done
Bootstrapping Chef on 10.0.0.43
Failed to authenticate ubuntu - trying password auth
Enter your password:
I should be able to authenticate as Ubuntu with no password here. In fact, if I allow the provisioning to continue and try to ssh to the generated instance with something like:
ssh -i something-dev.pem ubuntu@10.0.0.43
...it will work. So why is the knife command itself failing to authenticate?
I had the same problem as above and tried ssh-add as Rico suggested above. Although I still got the password prompt, hitting Enter on a blank password allowed the process to continue.
Failing that, the -V verbose output option may give you more insight.
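For reference, a minimal sketch of loading the key into the agent before running knife (the key path is the one from the question):
# start an agent if one is not already running, then load the key
eval "$(ssh-agent -s)"
ssh-add something-dev.pem
ssh-add -l   # confirm the key is loaded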
I found this to work well for me.
bundle exec knife ec2 server create -r "role[websphere]" -I ami-cb94868e --flavor m1.small -G default --ssh-user ubuntu -N server01 -S whatever --identity-file .chef/whatever.pem
Also consider that when you download the .pem from AWS, you need to chmod 400 whatever.pem
