I was wondering if it were possible to tell Ansible to set up a VPN connection before executing the rest of the playbook. I've googled around, but haven't seen much on this.
You could combine a local playbook that sets up the VPN with a playbook that runs your tasks against the server.
Depending on the job, you can use Ansible or a shell script to connect to the VPN. You will probably also want another playbook to disconnect afterwards.
As a result you will have three playbooks and one that combines them via include:
- include: connect_vpn.yml
- include: do_stuff.yml
- include: disconnect_vpn.yml
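For example, connect_vpn.yml could be a small play that runs on the control machine. A minimal sketch, assuming an OpenVPN client with a config at /etc/openvpn/client.conf and a host behind the VPN reachable at 10.0.0.1 (all of these are assumptions, adjust to your VPN setup):

# connect_vpn.yml -- hypothetical sketch, not the only way to do this
- hosts: localhost
  connection: local
  become: true
  tasks:
    - name: Bring up the VPN connection
      ansible.builtin.command: openvpn --config /etc/openvpn/client.conf --daemon

    - name: Wait until a host behind the VPN answers on SSH
      ansible.builtin.wait_for:
        host: 10.0.0.1        # placeholder address behind the VPN
        port: 22
        timeout: 30

disconnect_vpn.yml would do the reverse, for example stopping the openvpn process again.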
Check How To Use Ansible and Tinc VPN to Secure Your Server Infrastructure.
Basically, you need to install the thisismitch/ansible-tinc playbook and create a hosts inventory file with the nodes that you want to include in the VPN, for example:
[vpn]
prod01 vpn_ip=10.0.0.1 ansible_host=162.243.125.98
prod02 vpn_ip=10.0.0.2 ansible_host=162.243.243.235
prod03 vpn_ip=10.0.0.3 ansible_host=162.243.249.86
prod04 vpn_ip=10.0.0.4 ansible_host=162.243.252.151
[removevpn]
Then you should review the contents of the /group_vars/all file, for example:
---
netname: nyc3
physical_ip: "{{ ansible_eth1.ipv4.address }}"
vpn_interface: tun0
vpn_netmask: 255.255.255.0
vpn_subnet_cidr_netmask: 32
where:
physical_ip is the IP address that you want tinc to bind to;
vpn_netmask is the netmask that will be applied to the VPN interface.
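With the inventory and group variables in place, you run the playbook from that repository as usual. A minimal usage sketch, assuming site.yml is the entry point (as in the tutorial) and hosts is your inventory file:

ansible-playbook -i hosts site.yml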
If you're using Amazon Web Services, check out the ec2_vpc_vpn module, which can create, modify, and delete VPN connections. It uses the boto3/botocore libraries.
For example:
- name: create a VPN connection
  ec2_vpc_vpn:
    state: present
    vpn_gateway_id: vgw-XXXXXXXX
    customer_gateway_id: cgw-XXXXXXXX

- name: delete a connection
  ec2_vpc_vpn:
    vpn_connection_id: vpn-XXXXXXXX
    state: absent
For other cloud services, check the list of Ansible Cloud Modules.
I am unable to find an Ansible module that adds IP routes.
Basically the command I'm looking for should allow something like:
sudo ip route add 12.3.4.0/24 via 123.456.78.90
I found the net_static_route module, which seems related, but it has been deprecated since 2022-06-01.
A simple solution would be:
- name: Add a route
  ansible.builtin.shell:
    cmd: sudo ip route add 12.3.4.0/24 via 123.456.78.90
Of course, but built-in modules are usually preferable.
Additional information
This task will be executed on a subset of all nodes (Ubuntu 22.04 machines that are all part of an OpenStack cloud cluster), specified by groups. Concrete subnets and IPs will be defined using variables.
An example hostfile could look like this:
master:
  hosts:
    localhost:
      ansible_connection: local
      ansible_python_interpreter: /usr/bin/python3
      ansible_user: ubuntu
      ip: localhost
workers:
  children:
    ephemeral:
      hosts: {}
  hosts:
    someworker:
      ansible_connection: ssh
      ansible_python_interpreter: /usr/bin/python3
      ansible_user: ubuntu
This playbook will be used to set up WireGuard. Every VPN node (these nodes will be connected with each other via WireGuard) is connected in its own subnet with multiple worker nodes. Those workers need to add the IP route to get back to the master node, which is in a different subnet but in the same virtual subnet created by WireGuard and the VPN nodes. I do not think this is related to my question, but I might be overlooking how it could affect the right answer.
From your description I understand that you would like to configure the network route settings on a Linux based operating system (Debian GNU/Linux, Ubuntu 22.04) and not on a dedicated router or switch.
The mentioned module net_static_route was for providing
... declarative management of static IP routes on network appliances (routers, switches et. al.).
and became deprecated
Removed in: major release after 2022-06-01
Why: Updated modules released with more functionality
Alternative: Use platform-specific [netos]_static_route module
It further recommends using the vendor- and platform-specific modules instead.
Certain Linux distributions tend to integrate and use NetworkManager for creating and managing network connections.
Documentation
RHEL7 - Getting Started with NetworkManager
ArchLinux - NetworkManager
Ubuntu - NetworkManager
...
So the respective Ansible module for this is the nmcli module – Manage Networking. You may have a look into the Parameters and Examples, especially into the parameters for route*.
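A minimal sketch of such a route via the nmcli module, reusing the addresses from your question and assuming the NetworkManager connection to modify is named ens3 (both the connection name and the interface name are assumptions):

- name: Add a static route via NetworkManager
  community.general.nmcli:
    conn_name: ens3            # assumed existing connection name
    ifname: ens3               # assumed interface
    type: ethernet
    routes4:
      - "12.3.4.0/24 123.456.78.90"
    state: present

One advantage over a plain ip route add is that NetworkManager persists the route across reboots.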
Further Readings
For more examples have a look at
How to configure network settings with Ansible system roles
Linux System Roles
I would like to write an Ansible playbook that will tell me if my EC2 instances have a security group that contains a rule allowing ingress on a specified port. I have seen answers like Test if a server is reachable from host and has port open with Ansible, where one would have this in the playbook:
- hosts: target.host
  tasks:
    - wait_for: host=remote.host port=8080 timeout=1
    - debug: msg=ok
But that tells me if something is listening on port 8080 on the remote host. In my circumstance there will be no process listening because the service has not yet been installed.
You could try using the AWS EC2 modules to get the Network ACLs and apply a filter to get the ones you're after. Using tags could also be an easy way to filter the resources for your playbook. This code is untested, and you'd need to parse the response in your playbook to get the information you're after. It's basically a starting point. Check this link for further info.
# Retrieve Port 8080 Network ACLs
- name: Get Port 8080 NACLs
  community.aws.ec2_vpc_nacl_info:
    region: us-west-2
    filters:
      'entry.port-range.from': 8080
      'entry.port-range.to': 8080
  register: port_8080_nacls
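As a starting point for the parsing step, a hedged sketch that simply fails the play if the query returned nothing (the nacls return key comes from the module documentation; the actual condition you check is up to you):

- name: Verify that at least one NACL matches port 8080
  ansible.builtin.assert:
    that:
      - port_8080_nacls.nacls | length > 0
    fail_msg: "No network ACL entry found for port 8080"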
I'm using Ubuntu Linux
I have created an inventory file and I have put my own system IP address there.
I have written a playbook to install the nginx package.
I'm getting the following error:
false, msg" : Failed to connect to the host via ssh: connect to host myip : Connection refused, unreachable=true
How can I solve this?
You could use the hosts keyword with the value localhost:
- name: Install nginx package
  hosts: localhost
  become: true        # installing packages requires root privileges
  tasks:
    - name: Install nginx package
      apt:
        name: nginx
        state: latest
Putting your host IP directly in your inventory treats your local machine like any other remote target. Although this can work, Ansible will use the ssh connection plugin by default to reach your IP. If an ssh server is not installed/configured/running on your host it will fail (as you have experienced), as it also will if you did not configure the needed credentials (ssh keys, etc.).
You don't need to (and in most common situations you don't want to) declare localhost in your inventory to use it, as it is implicit by default. The implicit localhost uses the local connection plugin, which does not need ssh at all and runs the tasks as the same user that runs the playbook.
For more information on connection plugins, see the current list
See Gary Lopez's answer for an example playbook that uses localhost as the target.
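As a minimal usage sketch, assuming that playbook is saved as install_nginx.yml (the file name is hypothetical), it can be run without any inventory at all because the implicit localhost is used:

ansible-playbook install_nginx.yml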
I have a question; I will try to describe my problem, please check and tell me whether this is possible to do.
I'm using Molecule and DroneCI for automatically testing my playbooks. Because Molecule doesn't support Proxmox natively, but Ansible has a Proxmox module, I wrote a prepare playbook which creates an LXC container on the Proxmox server. That works well, but the problem is that my LXC container gets its IP from a DHCP server, and I don't have a way to run my playbook on that newly created LXC because I don't have that IP in my inventory.
Does a solution exist for this problem, or does anyone have an idea how I can do this?
Thank you.
Both prepare.yml and playbook.yaml are Ansible playbooks, so it is fully up to you what you put in hosts: .... Feel free to use whatever host or group you want instead of all.
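For illustration, a minimal sketch of a prepare.yml that targets localhost to create the container; every value below (API host, credentials variable, node, hostname, template) is a placeholder you would replace with your own:

# prepare.yml -- hypothetical sketch
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Create the test LXC container on Proxmox
      community.general.proxmox:
        api_host: proxmox.example.com
        api_user: root@pam
        api_password: "{{ proxmox_api_password }}"   # hypothetical variable
        node: pve01
        hostname: molecule-test
        ostemplate: "local:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.zst"
        state: present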
The scenario:
We have 2 servers and use private key to communicate via ssh
Server 1 (stage)
Server 2 (production)
Inventory file:
[server]
server1.com
server2.com
Both servers have the default username "ubuntu".
For now we need to put the private key on the local machine, Server 1 and Server 2, at the same path. For example:
On Local Machine : /tmp/key/privatekey (private key to remote server 1)
On Server 1 : /tmp/key/privatekey (private key to remote server 2)
On Server 2 : /tmp/key/privatekey (private key to remote server 1)
- name: copy archived file to another remote server using rsync.
  synchronize: mode=pull src=/home/{{ user }}/packages/{{ package_name }}.tar.gz dest=/tmp/{{ package_name }}.tar.gz archive=yes
  delegate_to: "{{ production_server }}"
  tags: release
Ansible playbook:
ansible-playbook -i inventory --extra-vars "host=server[0] user=ubuntu" --private-key=/tmp/key/privatekey playbook.yml --tags "release"
Executing the Ansible playbook on the local machine connects to Server 1 and copies the archived file from Server 1 to Server 2.
That worked as we expected, but it's awkward: we have to place the private key at the same path on 3 machines (local, Server 1, Server 2). What if we want to place the private key at a different path? Or what if the user wants to use a different account/username for Server 1 and Server 2?
I've tried modifying the inventory, providing the user and auth key for Server 2. I placed the private key under /home/ubuntu/key/ on Server 1:
[server]
server1.com
server2.com ansible_ssh_private_key_file=/home/ubuntu/key/privatekey ansible_ssh_user=ubuntu
Then I ran the same ansible-playbook command as above. The result: Server 1 could not reach Server 2.
What if we want to place private key on different path? or the user want to use different account/username for server 1 and server 2?
I think you're making your life much harder than it needs to be. I'll assume you're using the user 'ubuntu' and standard RSA keys.
Take your key pairs and put them in /home/ubuntu/.ssh/id_rsa and /home/ubuntu/.ssh/id_rsa.pub on each machine. These are the default locations where SSH looks for key files.
Write a playbook to deploy the public key (using the authorized_key module) from your control machine to your target machines. Since you're sharing keys they are all the same.
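A minimal sketch of that deployment task, assuming the shared public key sits next to your playbook as files/id_rsa.pub (the path is an assumption):

- name: Deploy the shared public key to the ubuntu user
  ansible.posix.authorized_key:
    user: ubuntu
    key: "{{ lookup('file', 'files/id_rsa.pub') }}"
    state: present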
If you want to do communication between server1 and server2 via delegate_to you'll have to update the ssh_config for the ubuntu user. Your options here are to turn off host key/known host checking OR add the appropriate entries to known_hosts. The former (less secure) can be done by adding a /home/ubuntu/.ssh/config file and the latter (more secure) can be done via ssh-keyscan (see https://serverfault.com/questions/132970/can-i-automatically-add-a-new-host-to-known-hosts)
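For the more secure option, a sketch that pre-populates known_hosts on Server 1 using the ssh-keyscan approach from that link (the host name server2.com is taken from your inventory; the key type is an assumption):

- name: Add Server 2's host key to known_hosts on Server 1
  ansible.builtin.known_hosts:
    path: /home/ubuntu/.ssh/known_hosts
    name: server2.com
    key: "{{ lookup('pipe', 'ssh-keyscan -t rsa server2.com') }}"   # lookup runs on the control machine
    state: present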
Part of the issue here is that when you use delegate_to, I don't think the whole ssh connection logic that would normally be constructed on the control machine gets propagated to the delegate machine.
Also, I would not put your private keys in the /tmp directory.
Hope this helps