1) Context
I am running a build pipeline using GitLab's VirtualBox runner (GitLab version 10.6.3). When I create the base image (e.g. my-base-vm) manually, the build runs perfectly on the 1-n clones that GitLab CI creates.
2) Observed error
However, when I provision the base image using Vagrant (version 2.2.2) instead, the GitLab CI build output for my job shows the following:
Running with gitlab-runner 11.2.0 (35e8515d)
on myproject-build-machine 1c8ab769
Using VirtualBox version 5.2.18_Ubuntur123745 executor...
Creating new VM...
ERROR: Preparation failed: ssh: handshake failed: read tcp 127.0.0.1:35542->127.0.0.1:34963: read: connection reset by peer
Will be retried in 3s ...
Using VirtualBox version 5.2.18_Ubuntur123745 executor...
Creating new VM...
ERROR: Job failed: execution took longer than 1h0m0s seconds
The image is based on the base image ubuntu/bionic64.
3) Configuration
The runner (a clone of my-base-vm) does seem to have the right NAT rules, though (output of VBoxManage showvminfo my-base-vm-runner-1c8ab769-concurrent-0):
NIC 1 Rule(0): name = guestssh, protocol = tcp, host ip = 127.0.0.1, host port = 32805, guest ip = , guest port = 22
NIC 1 Rule(1): name = ssh, protocol = tcp, host ip = 127.0.0.1, host port = 2222, guest ip = , guest port = 22
The GitLab config.toml is configured with the correct username and password (vagrant:vagrant), and the Vagrantfile provisions the machine to accept password authentication (excerpt from the Vagrantfile):
config.vm.provision "shell", inline: <<-SHELL
  sed -i 's/ChallengeResponseAuthentication no/ChallengeResponseAuthentication yes/g' /etc/ssh/sshd_config
  sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config
  service ssh restart
SHELL
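As a sanity check, you can try the same SSH path the runner uses by hand. A minimal sketch, assuming the vagrant:vagrant credentials from config.toml and the guestssh NAT port shown above by showvminfo (the port differs per clone):
ssh -p 32805 vagrant@127.0.0.1    # password: vagrant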
Related
I have Ubuntu 16.04 on both my local and my virtual machine. I want to access the virtual machine from my local machine, and I have already changed the network adapter to a bridged connection (both IPs are in 192.168.10.x). But when I run ssh <virtual machine ip> from my local terminal I get the error ssh: connect to host 192.168.10.7 port 22: Connection refused.
P.S.: I want to configure a single-node Hadoop cluster.
The issue has been resolved: I changed my network adapter back to NAT and used port forwarding on port 2222. Now when I run ssh -p 2222 username@127.0.0.1, I am able to connect to my guest OS.
Side note: please check that OpenSSH is installed on your guest machine.
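If you prefer to create the forwarding rule from the command line instead of the VirtualBox GUI, a sketch (the VM name my-vm is a placeholder):
VBoxManage modifyvm "my-vm" --natpf1 "guestssh,tcp,127.0.0.1,2222,,22"
ssh -p 2222 username@127.0.0.1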
I have created an Ubuntu Docker container on my Mac:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5d993a622d23 ubuntu "/bin/bash" 42 minutes ago Up 42 minutes 0.0.0.0:123->123/tcp kickass_ptolemy
I set the port to 123.
My container's IP is 172.17.0.2:
docker inspect 5d993a622d23 | grep IP
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"IPAMConfig": null,
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
On my Mac I try to ping my container: ping 172.17.0.2 gives Request timeout for icmp_seq 0....
What should I do so my local machine can ping the container? Am I missing some package in the container, which is a plain Ubuntu image?
You can't ping or access a container interface directly with Docker for Mac.
The current best solution is to connect to your containers from
another container. At present there is no way we can provide routing
to these containers due to issues with OSX that Apple have not yet
resolved. We are tracking this requirement, but we cannot do anything
about it at present.
Docker Toolbox/VirtualBox
When running Docker Toolbox, Docker Machine via VirtualBox, or any VirtualBox VM (such as a Vagrant definition), you can set up a "Host-Only Network" and access the Docker VM's network via that.
If you are using the default boot2docker VM, don't change the existing interface, as that will break a whole lot of Docker utilities; add a new interface instead.
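A sketch of adding such an interface from the command line (boot2docker-vm is the Toolbox default VM name; adjust the name and the NIC slot to your setup, and note that the interface name vboxnet1 is only an example):
VBoxManage hostonlyif create
VBoxManage controlvm boot2docker-vm poweroff
VBoxManage modifyvm boot2docker-vm --nic3 hostonly --hostonlyadapter3 vboxnet1
VBoxManage startvm boot2docker-vm --type headless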
You will also need to set up routing from your Mac to the container networks via your VM's new IP address. In my case the Docker network range is 172.22.0.0/16 and the Host-Only adapter IP on the VM is 192.168.99.100.
sudo route add 172.22.0.0/16 192.168.99.100
Adding a permanent route on macOS is a bit more complex.
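One approach (an untested sketch, reusing the addresses above) is a LaunchDaemon that re-adds the route at boot:
sudo tee /Library/LaunchDaemons/com.user.docker-route.plist <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key><string>com.user.docker-route</string>
  <key>ProgramArguments</key>
  <array>
    <string>/sbin/route</string>
    <string>add</string>
    <string>172.22.0.0/16</string>
    <string>192.168.99.100</string>
  </array>
  <key>RunAtLoad</key><true/>
</dict>
</plist>
EOF
sudo launchctl load /Library/LaunchDaemons/com.user.docker-route.plist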
Then you can get to containers from your Mac:
machost:~ ping -c 1 172.22.0.2
PING 172.22.0.2 (172.22.0.2): 56 data bytes
64 bytes from 172.22.0.2: icmp_seq=0 ttl=63 time=0.364 ms
--- 172.22.0.2 ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.364/0.364/0.364/0.000 ms
Vagrant + Ansible setup
Here's my running config...
Vagrant.configure("2") do |config|
  config.vm.box = "debian/contrib-buster64"
  config.vm.hostname = "docker"
  config.vm.network "private_network", ip: "10.7.7.7", hostname: true
  config.vm.provider "virtualbox" do |vb|
    vb.gui = false
    vb.memory = "4000"
    vb.cpus = "4"
  end
  config.vm.provision "ansible" do |ansible|
    ansible.verbose = "v"
    ansible.playbook = "tasks.yaml"
  end
end
The Ansible tasks.yaml that configures a fixed network:
- hosts: all
  become: yes
  vars:
    ansible_python_interpreter: auto_silent
    docker_config:
      bip: 10.7.2.1/23
      host: ["tcp://10.7.7.7:2375"]
      userland-proxy: false
  tasks:
    - ansible.builtin.apt:
        update_cache: yes
        force_apt_get: yes
        pkg:
          - bridge-utils
          - docker.io
          - python3-docker
          - python-docker
          - iptables-persistent
    - ansible.builtin.hostname:
        name: docker
    - ansible.builtin.copy:
        content: "{{ docker_config | to_json }}"
        dest: /etc/docker/daemon.json
    - ansible.builtin.lineinfile:
        line: 'DOCKER_OPTS="{% for host in docker_config.host %} -H {{ host }} {% endfor %}"'
        regexp: '^DOCKER_OPTS='
        path: /etc/default/docker
    - ansible.builtin.systemd:
        name: docker.service
        state: restarted
    - ansible.builtin.iptables:
        action: insert
        chain: DOCKER-USER
        destination: 10.7.2.0/23
        in_interface: eth1
        out_interface: docker0
        jump: ACCEPT
    - ansible.builtin.shell: iptables-save > /etc/iptables/rules.v4
Add the route for the Docker bridge network, via the VM, on the Mac:
$ sudo /sbin/route -n -v add -net 10.7.2.0/23 10.7.7.7
Then set DOCKER_HOST=10.7.7.7 in the environment to use the new VM.
$ export DOCKER_HOST=10.7.7.7
$ docker run --name route_test --rm -d node:14-slim node -e "require('http').createServer((req, res) => {
res.writeHead(200, {'Content-Type':'text/plain'})
res.end('hello')
}).listen(3000)"
$ docker container inspect route_test -f '{{ .NetworkSettings.Networks.bridge.IPAddress }}'
$ curl http://10.7.2.3:3000
hello
$ docker rm -f route_test
You don't get volumes mapped from the host to the VM, but as a bonus it uses a lot less CPU than the Docker 2.5.x release.
As an alternative, if your container includes a bash shell, you can get inside it with
docker exec -it <CONTAINER ID> bash
and then ping the virtual IP from inside the container.
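A plain Ubuntu image usually does not ship ping, so a sketch of the whole check (the container ID is the one from the question above):
docker exec -it 5d993a622d23 bash
apt-get update && apt-get install -y iputils-ping
ping -c 1 172.17.0.2    # the container's own IP, or another container on the bridge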
It works in this scenario:
Windows host
Linux VM installed on the Windows host
Docker container running on the Linux VM
Note that containers are on an isolated network but are connected to the internet through your Docker host's adapter. So you have to tell the Linux kernel to forward this traffic; on your Linux VM run:
# sysctl net.ipv4.conf.all.forwarding=1
# sudo iptables -P FORWARD ACCEPT
Now on your Windows host you have to add a route for the container network:
route add <Docker container network> <Linux VM IP>, for example:
# route add 172.17.0.0/16 192.168.1.20
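If your version of the Windows route command rejects the CIDR form, the classic MASK syntax is equivalent (same example addresses; -p makes the route persistent):
route -p add 172.17.0.0 MASK 255.255.0.0 192.168.1.20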
Setup: PC-A is the Docker host, PC-B is another PC on the network. To ping/access Docker containers from PC-B, run the iptables rules below on the host.
iptables -A FORWARD -i docker0 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o docker0 -j ACCEPT
Note: eth0 is the host's interface and docker0 is Docker's default virtual bridge.
Now add a route on PC-B:
route add -net <dockerip> netmask <net mask> gw <docker's host>
Now you can ping/access the container services directly.
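For example, with a hypothetical Docker bridge of 172.17.0.0/16 and the Docker host (PC-A) at 192.168.1.10, the route on PC-B would be:
route add -net 172.17.0.0 netmask 255.255.0.0 gw 192.168.1.10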
Let's say you have W (a Windows machine), L (a Linux VirtualBox VM with eth0 and eth1), and a Docker app using port 8989 running on that Linux VM. The provider does not have to be VirtualBox, and W does not have to be Windows. You want to type http://app:8989 in your browser. There are two methods, as far as I know: the easy way is to let Vagrant set up the forwarding automatically, or you can manually configure the VirtualBox VM with port forwarding or through a "Host-only Adapter" (which is typically eth1; eth0 is normally VirtualBox's NAT interface with its reserved 10.0.2.15 address). You can also set up networks from the command prompt on Windows/Linux/macOS with the VBoxManage command, or automate it through scripts. With Vagrant:
webtier.vm.network "forwarded_port", guest: 8989, host: 8989
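The same forwarding can also be created manually with VBoxManage instead of Vagrant. A sketch, where "L" is a placeholder for the VM name and the rule goes on the NAT adapter:
VBoxManage controlvm "L" natpf1 "app,tcp,,8989,,8989"        # while the VM is running
VBoxManage modifyvm "L" --natpf1 "app,tcp,,8989,,8989"       # with the VM powered off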
Run the Docker app:
sudo docker run -p 8989:8989 ...
Then in a browser on the Windows machine (W), open your app:
http://app:8989
You still cannot ping 172.17.0.2 (the Docker container's IP) from the Windows machine in this setup. This approach works cross-platform on Windows/Linux/macOS. You might want to look into the VirtualBox manual and the Vagrant manual, particularly the networking sections.
It is possible to run the containers of interest on one and the same network as an additional container running an OpenVPN server, so that you can reach the containers over a VPN connection from the host:
Use docker network create --subnet=172.19.0.0/24 my-net to create a network where the containers will see each other.
Attach containers to it using the --net my-net parameter of docker run.
Run an additional container with OpenVPN on the same network. This time you need a port mapping for the VPN connection: -p 1194:1194/udp.
Use an OpenVPN client on the host to connect to this network and ping the containers.
Also, you may need to comment out the redirect-gateway instruction in the OpenVPN client config file, and add push "route 172.19.0.0 255.255.255.0" to (and remove other pushes from) the server config file.
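A minimal sketch of the container-side steps above; app-image and openvpn-image are placeholders, and a real OpenVPN image needs its own key/config setup, which is omitted here:
docker network create --subnet=172.19.0.0/24 my-net
docker run -d --net my-net --name app app-image
docker run -d --net my-net --name vpn --cap-add=NET_ADMIN -p 1194:1194/udp openvpn-image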
How can one copy files from guest to host in a Vagrant environment?
I know there are synced_folders, but this doesn't work with a Darwin guest.
I need to copy the build result from guest to host, ideally using Vagrant provisioning scripts, as all my other stuff happens there.
The VM provider is VirtualBox.
From your host, use this command:
scp -P <vagrant guest ssh port number> vagrant@localhost:/path/to/source/file /path/to/destination/file
The "vagrant guest ssh port number" is the port forwarded from host to guest for SSH; you can find it in the output of your vagrant up command.
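If the port has already scrolled out of view, you can also ask Vagrant for it directly (a quick check; vagrant port needs a provider that supports it, which VirtualBox does):
vagrant ssh-config | grep Port
vagrant port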
I am running a Vagrant VM under Windows 7. The Vagrant VM is running a Docker container. So the configuration is:
Windows7[Vagrant[Docker]]
I want to ssh from Windows into the Docker container.
The Docker container is running sshd and I can successfully SSH from the Vagrant VM into the Docker container.
sudo docker ps
gives:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
64b13daab5f2 ubuntu:12.04 "/bin/bash" 14 minutes ago Up 14 minutes 0.0.0.0:49153->22/tcp thirsty_morse
From the Vagrant VM:
ssh root@localhost -p 49153
works just fine. So the Vagrant VM's port 49153 is forwarded to the Docker container's port 22.
I've added
config.vm.network "forwarded_port", guest:49153, host:49155
to my Vagrantfile so that localhost:49155 on Windows is forwarded to port 49153 on the Vagrant VM.
This is where things break down. When I try to ssh from Windows to localhost:49155, I get:
ssh: connect to host localhost port 49155: Connection refused
So Windows:49155 -> Vagrant:49153 is not working. I thought it might be a problem with listening on the Vagrant VM's external IP, so I installed rinetd in the Vagrant VM and configured:
bindaddress bindport connectaddress connectport
0.0.0.0 49153 127.0.0.1 49153
Still no luck. What am I missing here?
OK, answering my own question. It works now. I think the most likely reason for the problem was that ports 49153/49155 and their neighbours are actually used by some Windows services by default. I changed the port mapping in the Vagrantfile to use 9090 on Windows and everything worked. No need for rinetd either. I've also done:
sudo docker run -v /vagrant:/opt/data -p 0.0.0.0:49153:22 -i -t ubuntu:12.04
Notice the 0.0.0.0: it may or may not be relevant, but this configuration is working for me.
Docker (www.docker.io) looks terrific. However, after installing VirtualBox, Vagrant and finally Docker on a Mac, I'm finding it's not possible to access a service running in a Docker container from another computer (or from a terminal session on the Mac). The service I'm trying to access is Redis.
The problem appears to be that there's no route to the IP address assigned to the Docker container. In this case the container's IP is 172.16.42.2 while the Mac's IP is 192.168.0.3.
A couple notes:
It IS possible to access it - but only from within the VirtualBox session. This can be done using redis-cli -h 172.16.42.2 -p 6379.
I have added config.vm.network :bridged to the Vagrantfile in an attempt to fix this, but that didn't solve the problem.
The VM generated by Vagrant is indeed isolated; in order to access it from your host, you can allocate a private network to it.
Instead of config.vm.network :bridged, try config.vm.network :private_network, ip: "192.168.50.4". That should do the trick.
However, this will only allow you to access the VM itself, not the containers.
In order to do so, when running the container, you can add the -p option. For example:
docker run -d -p 8989 base nc -lkp 8989
This runs a netcat listening on port 8989 within a container and exposes the port publicly. As it is also run with -d, the container runs in detached mode and the only output is the container's ID.
In order to expose the port, Docker does a simple NAT. To find out the real port, you can run
docker port <ID of the container> 8989
Netcat will then be reachable from the Mac at 192.168.50.4:<result>.
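For example (the mapped port 49160 below is made up; use whatever docker port reports on your machine):
$ docker port <ID of the container> 8989
0.0.0.0:49160
$ nc 192.168.50.4 49160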
I just wrote a tutorial on how to use a host-only network and TCP routing to make this pretty easy. This way you don't have to map every specific port.
http://ispyker.blogspot.com/2014/04/accessing-docker-container-private.html
Important points ...
1) Add a host-only network to VirtualBox
2) Tell the boot2docker VM to have an adapter on the host-only network
3) Add an IP for the new boot2docker host-only networking adapter
4) Route all Mac OS X traffic for the Docker container subnet to that boot2docker host-only networking IP
Actual steps are on the blog with output so you can compare to what you see as you follow them.
I installed Tomcat from my Dockerfile and forwarded it to 6060 using Vagrant's port forwarding. These are the steps that worked for me:
vagrant provision
vagrant up
vagrant ssh
box_name$ docker run -i -t -p 8080:8080 bsb_tomcat6 /bin/bash
Tomcat was up and running on localhost:6060, as I had set up port forwarding to 6060 in my Vagrantfile.
You can also define the PRIVATE_NETWORK and FORWARD_DOCKER_PORTS environment variables to access your services that are running in Docker containers:
$ vagrant halt
$ export PRIVATE_NETWORK=192.168.50.4
$ export FORWARD_DOCKER_PORTS=1
$ vagrant up
In my case I can access Postgres from the Mac using
$ telnet 192.168.50.4 49154
To find out the actual application port you can use
$ sudo docker port 1854499c6547 5432
0.0.0.0:49154