Packer.io - dial tcp 172.X.X.X:22: connect: no route to host - ansible

I'm using vsphere-clone as the builder and ansible-playbook as the provisioner to build my machine.
In one of my ansible tasks, I'm rebooting the machine (after installing some packages and changing network interfaces names), but sometimes my VM is getting a different IP address from DHCP and the ansible playbook cannot continue to the rest of the tasks. I tried the ansible.builtin.setup:
- name: do facts module to get latest information
  setup:
But it's not refreshing the IP. I also tried rebooting with a shell provisioner instead:
{
  "type": "shell",
  "inline": ["echo {{user `ssh_password`}} | sudo -S reboot"],
  "expect_disconnect": true,
  "inline_shebang": "/bin/bash -x"
}
But the next provisioners also use the old IP. Is there a way, with Packer, to refresh the IP?

Related

Can we create a playbook to install a package in our own system?

I'm using Ubuntu Linux
I have created an inventory file and I have put my own system IP address there.
I have written a playbook to install the nginx package.
I'm getting the following error:
false, msg" : Failed to connect to the host via ssh: connect to host myip : Connection refused, unreachable=true
How can I solve this?
You could use the hosts keyword with the value localhost
- name: Install nginx package
  hosts: localhost
  become: yes
  tasks:
    - name: Install nginx package
      apt:
        name: nginx
        state: latest
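You can then run it directly; no inventory is needed, since the implicit localhost is always available. A minimal invocation (the filename install-nginx.yml is just an example; --ask-become-pass prompts for your sudo password):
ansible-playbook install-nginx.yml --ask-become-pass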
Putting your host IP directly in your inventory treats your local machine like any other remote target. Although this can work, Ansible will use the ssh connection plugin by default to reach that IP. If an SSH server is not installed, configured, and running on your host, it will fail (as you have experienced); it will also fail if you did not configure the needed credentials (SSH keys, etc.).
You don't need to (and in most common situations you don't want to) declare localhost in your inventory to use it as it is implicit by default. The implicit localhost uses the local connection plugin which does not need ssh at all and will use the same user to run the tasks as the one running the playbook.
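If you do want to keep your machine's IP in the inventory anyway, you can still force the local connection plugin for that entry; ansible_connection is a standard inventory variable (the IP below is only an example):
myhost ansible_host=192.168.1.10 ansible_connection=local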
For more information on connection plugins, see the current list.
See Gary Lopez's answer above for an example playbook using localhost as the target.

/etc/ansible folder is not available on macOS

I used pip to install Ansible on macOS, but I cannot find the /etc/ansible folder, nor the inventory file.
I want to run my playbook against a minikube environment, but the run returns:
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Could not match supplied host pattern, ignoring: 192.168.99.105
How to solve this issue?
I looked into this matter, and using Ansible to manage minikube is not an easy topic. Let me elaborate on that:
The main issue is cited below:
Most Ansible modules that execute under a POSIX environment require a Python interpreter on the target host. Unless configured otherwise, Ansible will attempt to discover a suitable Python interpreter on each target host the first time a Python module is executed for that host.
-- Ansible Docs
What that means is that most of the modules will be unusable, even ping.
Steps to reproduce:
Install Ansible
Install Virtualbox
Install minikube
Start minikube
SSH into minikube
Configure Ansible
Test
Install Ansible
As the original poster said, it can be installed through pip.
For example:
$ pip3 install ansible
Install VirtualBox
Please download and install the appropriate version for your system.
Install minikube
Please follow this site: Kubernetes.io
Start minikube
You can start minikube by invoking command:
$ minikube start --vm-driver=virtualbox
The --vm-driver=virtualbox parameter is important, because it will matter later when connecting to minikube.
Please wait for minikube to successfully deploy on VirtualBox.
SSH into minikube
It is necessary to know the IP address of the minikube VM inside VirtualBox.
One way of getting this IP is:
Open Virtualbox
Click on the minikube virtual machine to open its console
Enter root as the account name; it should not ask for a password
Execute $ ip a | less and find the address of the network interface. It should be in the format 192.168.99.XX
From the terminal that was used to start minikube, run the command below:
$ minikube ssh
The command above will SSH into the newly created minikube environment, and the private key it uses is stored at:
$HOME/.minikube/machines/minikube/id_rsa
This id_rsa file will be needed to connect to minikube.
Try to log in to minikube by invoking:
ssh -i PATH_TO/id_rsa docker@IP_ADDRESS
If the login succeeds, there should be no issues with Ansible.
Configure Ansible
For using ansible-playbook, 2 files will be needed:
A hosts file with information about the hosts
A playbook file with the tasks you require Ansible to perform
Example hosts file:
[minikube_env]
minikube ansible_host=IP_ADDRESS ansible_ssh_private_key_file=./id_rsa
[minikube_env:vars]
ansible_user=docker
ansible_port=22
The ansible_ssh_private_key_file=./id_rsa tells Ansible to use the SSH key matching this minikube instance.
Note that this declaration expects the id_rsa file to be in the same directory as the inventory file.
Example playbook:
- name: Playbook for checking connection between hosts
  hosts: all
  gather_facts: no
  tasks:
    - name: Task to check the connection
      ping:
You can test the connection by invoking:
$ ansible-playbook -i hosts_file ping.yaml
The command above should fail, because there is no Python interpreter installed on minikube.
fatal: [minikube]: FAILED! => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "module_stderr": "Shared connection to 192.168.99.101 closed.\r\n", "module_stdout": "/bin/sh: /usr/bin/python: No such file or directory\r\n", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127}
There is a successful connection between Ansible and minikube but there is no Python interpreter to back it up.
There is, however, a way to use Ansible without a Python interpreter.
This Ansible documentation explains the use of the raw module.
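As a minimal sketch, a playbook relying only on raw needs no Python on the target (the uname command is just an illustration):
- name: Playbook that works without a Python interpreter on the target
  hosts: all
  gather_facts: no
  tasks:
    - name: Run a plain command over SSH
      raw: uname -a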

Ansible can't ping my vagrant box with the vagrant insecure public key

I'm using Ansible 2.4.1.0 and Vagrant 2.0.1 with VirtualBox on macOS, and although provisioning of my Vagrant box works fine with Ansible, I get an unreachable error when I try to ping with:
➜ ansible all -m ping
vagrant_django | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: Permission denied (publickey,password).\r\n",
"unreachable": true
}
The solutions offered on similar questions didn't work for me (like adding the vagrant insecure pub key to my ansible config). I just can't get it to work with the vagrant insecure public key.
Fwiw, here's my ansible.cfg file:
[defaults]
host_key_checking = False
inventory = ./ansible/hosts
roles_path = ./ansible/roles
private_key_file = ~/.vagrant.d/insecure_private_key
And here's my ansible/hosts file (ansible inventory):
[vagrantboxes]
vagrant_vm ansible_ssh_user=vagrant ansible_ssh_host=192.168.10.100 ansible_ssh_port=22 ansible_ssh_private_key_file=~/.vagrant.d/insecure_private_key
What did work was using my own SSH public key. When I add this to the authorized_keys on my vagrant box, I can ansible ping:
➜ ansible all -m ping
vagrant_django | SUCCESS => {
"changed": false,
"failed": false,
"ping": "pong"
}
I can't connect via ssh either, so that seems to be the underlying problem, which is fixed by adding my own pub key to authorized_keys on the vagrant box.
I'd love to know why it doesn't work with the vagrant insecure key. Does anyone know?
PS: To clarify, although the root cause is similar to this other question, the symptoms and context are different: I could provision my box with Ansible, but couldn't ansible ping it. This justifies another question imho.
I'd love to know why it doesn't work with the vagrant insecure key. Does anyone know?
Because the Vagrant insecure key is used for the initial connection to the box only. By default Vagrant replaces it with a freshly generated key, which you’ll find in .vagrant/machines/<machine_name>/virtualbox/private_key under the project directory.
You’ll also find an automatically generated Ansible inventory in .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory if you use the Ansible provisioner in your Vagrantfile, so you don't need to create your own.
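So, for the hand-written inventory from the question, pointing at the generated key should work. A sketch, assuming the default machine name default and that Ansible is run from the project directory:
[vagrantboxes]
vagrant_vm ansible_ssh_user=vagrant ansible_ssh_host=192.168.10.100 ansible_ssh_port=22 ansible_ssh_private_key_file=.vagrant/machines/default/virtualbox/private_key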

Running Ansible playbooks on remote Vagrant box

I have one machine (A) from which I run Ansible playbooks on a variety of hosts. Vagrant is not installed here.
I have another machine (B) with double the RAM that hosts my Vagrant boxes. Ansible is not installed here.
I want to use Ansible to act on Vagrant boxes the same way I do all other hosts; that is, running ansible-playbook on machineA while targeting a virtualized Vagrant box on machineB. SSH keys are already set up between the two.
This seems like a simple use case but I can't find it clearly explained anywhere given the encouraged use of Vagrant's built-in Ansible provisioner. Is it possible?
Perhaps some combination of SSH tunnels and port forwarding trickery?
Turns out this was surprisingly simple. Vagrant in fact does not need to know about Ansible at all.
Ansible inventory on machineA:
default ansible_host=machineB ansible_port=2222
Vagrantfile on machineB:
Vagrant.configure("2") do |config|
...
config.vm.network "forwarded_port", id: "ssh", guest: 22, host: 2222
...
end
The id: "ssh" is the important bit, as this overrides the default SSH behavior of restricting SSH to the guest from localhost only.
$ ansible --private-key=~/.ssh/vagrant-default -u vagrant -m ping default
default | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
(Note that the Vagrant private key must be copied over to the Ansible host and specified at the command line).
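A hypothetical copy step, assuming the Vagrant project lives at ~/project on machineB and the machine is named default:
$ scp machineB:~/project/.vagrant/machines/default/virtualbox/private_key ~/.ssh/vagrant-default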

How could I ping my docker container from my host

I have created an Ubuntu Docker container on my Mac:
CONTAINER ID   IMAGE    COMMAND       CREATED          STATUS          PORTS                  NAMES
5d993a622d23   ubuntu   "/bin/bash"   42 minutes ago   Up 42 minutes   0.0.0.0:123->123/tcp   kickass_ptolemy
I set the port to 123.
My container IP is 172.17.0.2
docker inspect 5d993a622d23 | grep IP
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"IPAMConfig": null,
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
On my Mac I try to ping my container:
ping 172.17.0.2 returns Request timeout for icmp_seq 0....
What should I do so my local machine can ping the container? Am I missing some installation in my container, which is a plain Ubuntu system?
You can't ping or access a container interface directly with Docker for Mac.
The current best solution is to connect to your containers from another container. At present there is no way we can provide routing to these containers due to issues with OSX that Apple have not yet resolved. We are tracking this requirement, but we cannot do anything about it at present.
Docker Toolbox/VirtualBox
When running Docker Toolbox, Docker Machine via VirtualBox, or any VirtualBox VM (like a Vagrant definition), you can set up a "Host-Only Network" and access the Docker VM's network via that.
If you are using the default boot2docker VM, don't change the existing interface, as you will stop a whole lot of Docker utilities from working; add a new interface instead.
You will also need to set up routing from your Mac to the container networks via your VM's new IP address. In my case the Docker network range is 172.22.0.0/16 and the Host-Only adapter IP on the VM is 192.168.99.100.
sudo route add 172.22.0.0/16 192.168.99.100
Adding a permanent route on macOS is a bit more complex.
Then you can get to containers from your Mac
machost:~ ping -c 1 172.22.0.2
PING 172.22.0.2 (172.22.0.2): 56 data bytes
64 bytes from 172.22.0.2: icmp_seq=0 ttl=63 time=0.364 ms
--- 172.22.0.2 ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.364/0.364/0.364/0.000 ms
Vagrant + Ansible setup
Here's my running config...
Vagrant.configure("2") do |config|
config.vm.box = "debian/contrib-buster64"
config.vm.hostname = "docker"
config.vm.network "private_network", ip: "10.7.7.7", hostname: true
config.vm.provider "virtualbox" do |vb|
vb.gui = false
vb.memory = "4000"
vb.cpus = "4"
end
config.vm.provision "ansible" do |ansible|
ansible.verbose = "v"
ansible.playbook = "tasks.yaml"
end
end
The Ansible tasks.yaml to configure a fixed network:
- hosts: all
  become: yes
  vars:
    ansible_python_interpreter: auto_silent
    docker_config:
      bip: 10.7.2.1/23
      host: ["tcp://10.7.7.7:2375"]
      userland-proxy: false
  tasks:
    - ansible.builtin.apt:
        update_cache: yes
        force_apt_get: yes
        pkg:
          - bridge-utils
          - docker.io
          - python3-docker
          - python-docker
          - iptables-persistent
    - ansible.builtin.hostname:
        name: docker
    - ansible.builtin.copy:
        content: "{{ docker_config | to_json }}"
        dest: /etc/docker/daemon.json
    - ansible.builtin.lineinfile:
        line: 'DOCKER_OPTS="{% for host in docker_config.host %} -H {{ host }} {% endfor %}"'
        regexp: '^DOCKER_OPTS='
        path: /etc/default/docker
    - ansible.builtin.systemd:
        name: docker.service
        state: restarted
    - ansible.builtin.iptables:
        action: insert
        chain: DOCKER-USER
        destination: 10.7.2.0/23
        in_interface: eth1
        out_interface: docker0
        jump: ACCEPT
    - ansible.builtin.shell: iptables-save > /etc/iptables/rules.v4
Add the route for the Docker bridge network via the VM on the Mac:
$ sudo /sbin/route -n -v add -net 10.7.2.0/23 10.7.7.7
Then set DOCKER_HOST=10.7.7.7 in the environment to use the new VM.
$ export DOCKER_HOST=10.7.7.7
$ docker run --name route_test --rm -d node:14-slim node -e "require('http').createServer((req, res) => {
    res.writeHead(200, {'Content-Type':'text/plain'})
    res.end('hello')
  }).listen(3000)"
$ docker container inspect route_test -f '{{ .NetworkSettings.Networks.bridge.IPAddress }}'
$ curl http://10.7.2.3:3000
hello
$ docker rm -f route_test
You don't get volumes mapped from the host to the VM, but as a bonus it uses a lot less CPU than the Docker 2.5.x release.
As an alternative, if your container includes a bash shell, you can enter it with
docker exec -it <CONTAINER ID> bash
and then ping your virtual IP from inside.
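For example, with the container from the question (note that the stock ubuntu image does not ship with ping, so it may need to be installed first):
$ docker exec -it 5d993a622d23 bash
root@5d993a622d23:/# apt-get update && apt-get install -y iputils-ping
root@5d993a622d23:/# ping -c 1 172.17.0.1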
It works in this scenario:
Windows host
Linux VM installed on Windows host
Docker container installed on Linux VM host
Note that containers live in an isolated network but are connected to the internet through the Docker host's adapter. So you have to tell the Linux kernel to forward traffic to make them reachable from your network; on your Linux VM run:
# sysctl net.ipv4.conf.all.forwarding=1
# sudo iptables -P FORWARD ACCEPT
Now on your Windows host you have to add a route for the container network: route add <Docker container network> <Linux VM IP>, for example:
# route add 172.17.0.0/16 192.168.1.20
Setup: PC-A is the Docker host, PC-B is another PC on the network. To ping/access Docker's containers from PC-B, run the iptables rules below on the host.
iptables -A FORWARD -i docker0 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o docker0 -j ACCEPT
Note: eth0 is the host's interface and docker0 is Docker's default virtual bridge.
Now add a route on PC-B:
route add -net <dockerip> netmask <net mask> gw <docker's host>
Then you can ping/access container services directly.
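For example, with hypothetical values (Docker bridge network 172.17.0.0/16, Docker host at 192.168.1.10):
route add -net 172.17.0.0 netmask 255.255.0.0 gw 192.168.1.10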
Let's say you have W (a Windows machine), L (a Linux VirtualBox VM with eth0 and eth1), and a Docker app (using port 8989) running on L. The provider doesn't have to be VirtualBox, nor does W have to be Windows. You want to type http://app:8989 in your browser. There are two methods, AFAIK: the easy way is to let Vagrant set it up automatically, or you can manually configure the VirtualBox VM with port forwarding through a "Host-only Adapter", which is eth1 here (eth0 is normally VirtualBox's default NAT interface with the reserved 10.0.2.15 IP assignment). You can also set up networks from the command prompt on Windows/Linux/macOS through the VBoxManage command, or automate it with scripts.
webtier.vm.network "forwarded_port", guest: 8989, host: 8989
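For the manual route, a hypothetical VBoxManage equivalent (assuming the VM is named L and is powered off; natpf1 adds a NAT port-forwarding rule on the first adapter):
VBoxManage modifyvm "L" --natpf1 "app,tcp,,8989,,8989"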
Run the Docker app:
sudo docker run -p 8989:8989 ...
In a browser on the Windows machine (W), open your app:
http://app:8989
You still cannot ping 172.17.0.2 (the Docker container's IP) from the Windows machine (W) in this situation. This approach works cross-platform on Windows/Linux/macOS. You might want to look into the VirtualBox manual and the Vagrant manual, particularly the networking chapters.
It is possible to run the containers of interest in one and the same network as an additional container running an OpenVPN server, so that you can see the containers over a VPN connection from the host:
Use docker network create --subnet=172.19.0.0/24 my-net to create a network where containers will see each other.
Attach containers to it using --net my-net parameter for docker run.
Run an additional container with OpenVPN in the same network. This time you need a port mapping for VPN connection -p 1194:1194/udp.
Use OpenVPN client on the host to connect to this network with containers to ping them.
Also, you may need to comment out the redirect-gateway directive in the OpenVPN client config file, and add push "route 172.19.0.0 255.255.255.0" to the server config file (removing other push directives from it).
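A sketch of those steps, with hypothetical image names (my-app stands for your service, openvpn-image for an OpenVPN server image, which needs its own configuration not shown here):
docker network create --subnet=172.19.0.0/24 my-net
docker run -d --net my-net --name app my-app
docker run -d --net my-net --cap-add=NET_ADMIN -p 1194:1194/udp openvpn-image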
