Can we create a playbook to install a package in our own system? - ansible

I'm using Ubuntu Linux
I have created an inventory file and I have put my own system IP address there.
I have written a playbook to install the nginx package.
I'm getting the following error:
false, msg" : Failed to connect to the host via ssh: connect to host myip : Connection refused, unreachable=true
How can I solve this?

You could use the hosts keyword with the value localhost:
- name: Install nginx package
  hosts: localhost
  become: yes  # installing packages with apt requires root privileges
  tasks:
    - name: Install nginx package
      apt:
        name: nginx
        state: latest

Putting your host's IP directly in your inventory makes Ansible treat your local machine like any other remote target. Although this can work, Ansible will use the ssh connection plugin by default to reach that IP. If an ssh server is not installed/configured/running on your host, it will fail (as you have experienced), as it also will if you did not configure the needed credentials (ssh keys, etc.).
You don't need to (and in most common situations you don't want to) declare localhost in your inventory to use it: it is implicit by default. The implicit localhost uses the local connection plugin, which does not need ssh at all and runs the tasks as the same user that is running the playbook.
For more information on connection plugins, see the current list in the documentation.
See Gary Lopez's answer above for an example playbook using localhost as the target.
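If you do want to keep an explicit inventory entry for your own machine, you can bind it to the local connection plugin there instead, so ssh is never involved. A minimal sketch (the group and alias names are assumptions):
[local]
mybox ansible_connection=local
With such an entry, hosts: mybox in a playbook behaves just like the implicit localhost.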

Related

What's the "best" way to add an ip route using ansible?

I am unable to find an ansible module that adds ip routes.
Basically the command I'm looking for should allow something like:
sudo ip route add 12.3.4.0/24 via 123.456.78.90
I found the net_static_route module, which seems related, but it has been deprecated since 2022-06-01.
A simple solution would be:
- name: Add a route
  ansible.builtin.shell:
    cmd: sudo ip route add 12.3.4.0/24 via 123.456.78.90
of course, but built-in modules are usually better.
Additional information
This task will be executed on a subset of all nodes (Ubuntu 22.04 machines that are part of an OpenStack cloud cluster), specified by groups. Concrete subnets and IPs will be defined using variables.
An example hostfile could look like this:
master:
  hosts:
    localhost:
      ansible_connection: local
      ansible_python_interpreter: /usr/bin/python3
      ansible_user: ubuntu
      ip: localhost
workers:
  children:
    ephemeral:
      hosts: {}
  hosts:
    someworker:
      ansible_connection: ssh
      ansible_python_interpreter: /usr/bin/python3
      ansible_user: ubuntu
This playbook will be used to set up WireGuard. Every vpn node (these nodes will be connected to each other via WireGuard) sits in its own subnet with multiple worker nodes. Those workers need to add the ip route to get back to the master node, which is in a different physical subnet but in the same virtual subnet created by WireGuard and the vpn nodes. I do not think this is related to my question, but I might be overlooking how it affects the right answer.
From your description I understand that you would like to configure the network route settings on a Linux based operating system (Debian GNU/Linux, Ubuntu 22.04) and not on a dedicated router or switch.
The mentioned module net_static_route, however, was for providing
... declarative management of static IP routes on network appliances (routers, switches et. al.).
and became deprecated
Removed in: major release after 2022-06-01
Why: Updated modules released with more functionality
Alternative: Use platform-specific [netos]_static_route module
It further recommends using the vendor- and platform-specific modules instead.
Certain Linux distributions tend to integrate and use NetworkManager for creating and managing network connections.
Documentation
RHEL7 - Getting Started with NetworkManager
ArchLinux - NetworkManager
Ubuntu - NetworkManager
...
So the respective Ansible module for this is the nmcli module – Manage Networking. You may have a look into the Parameters and Examples, especially into the route* parameters.
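For illustration, a task along these lines could add the route from the question through NetworkManager. This is a sketch, not a tested configuration: the connection profile name cloud0 is an assumption, and the routes4 option expects list entries in the form 'network[/prefix] [gateway]':
- name: Add a static route to an existing NetworkManager connection
  community.general.nmcli:
    conn_name: cloud0                  # assumed profile name on the target
    type: ethernet
    routes4:
      - 12.3.4.0/24 123.456.78.90      # network and gateway taken from the question
    state: present
  become: true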
Further Readings
For more examples have a look at
How to configure network settings with Ansible system roles
Linux System Roles

/etc/ansible folder is not available on macOS

I used pip to install Ansible on macOS, but I cannot find the /etc/ansible folder, nor the inventory file.
I want to run my playbook in minikube environment. But the playbook returns,
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Could not match supplied host pattern, ignoring: 192.168.99.105
How to solve this issue?
I looked into this matter, and using Ansible to manage minikube is not an easy topic. Let me elaborate on that:
The main issue is cited below:
Most Ansible modules that execute under a POSIX environment require a Python interpreter on the target host. Unless configured otherwise, Ansible will attempt to discover a suitable Python interpreter on each target host the first time a Python module is executed for that host.
-- Ansible Docs
What that means is that most of the modules will be unusable, even ping.
Steps to reproduce:
Install Ansible
Install VirtualBox
Install minikube
Start minikube
SSH into minikube
Configure Ansible
Test
Install Ansible
As the original poster said, it can be installed through pip.
For example:
$ pip3 install ansible
Install VirtualBox
Please download and install the appropriate version for your system.
Install minikube
Please follow this site: Kubernetes.io
Start minikube
You can start minikube by invoking command:
$ minikube start --vm-driver=virtualbox
Parameter --vm-driver=virtualbox is important because it will be useful later for connecting to the minikube.
Please wait for minikube to successfully deploy in VirtualBox.
SSH into minikube
It is necessary to know the IP address of the minikube VM inside VirtualBox.
One way of getting this IP is:
Open VirtualBox
Click on the minikube virtual machine so its console window shows
Enter root for the account name. It should not ask for a password
Execute the command $ ip a | less and find the address of the network interface. It should be in the format 192.168.99.XX
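Alternatively, the same address can be printed from the host terminal with:
$ minikube ip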
From the terminal that was used to start minikube, please run the command below:
$ minikube ssh
The command above will ssh into the newly created minikube environment, and it will store a private key at:
$HOME/.minikube/machines/minikube/id_rsa
This id_rsa file will be needed to connect to the minikube.
Try to login to minikube by invoking command:
ssh -i PATH_TO/id_rsa docker@IP_ADDRESS
If the login works correctly, there should be no issues with Ansible either.
Configure Ansible
For using ansible-playbook, 2 files will be needed:
A hosts file with information about the hosts
A playbook file stating what you require Ansible to do
Example hosts file:
[minikube_env]
minikube ansible_host=IP_ADDRESS ansible_ssh_private_key_file=./id_rsa
[minikube_env:vars]
ansible_user=docker
ansible_port=22
The ansible_ssh_private_key_file=./id_rsa setting tells Ansible to use the ssh key file that matches this minikube instance.
Note that this declaration requires the id_rsa file to be in the same location as the rest of the files.
Example playbook:
- name: Playbook for checking connection between hosts
  hosts: all
  gather_facts: no
  tasks:
    - name: Task to check the connection
      ping:
You can test the connection by invoking command:
$ ansible-playbook -i hosts_file ping.yaml
The above command should fail because there is no Python interpreter installed on the target.
fatal: [minikube]: FAILED! => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "module_stderr": "Shared connection to 192.168.99.101 closed.\r\n", "module_stdout": "/bin/sh: /usr/bin/python: No such file or directory\r\n", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127}
There is a successful connection between Ansible and minikube, but there is no Python interpreter to back it up.
There is, however, a way to use Ansible without a Python interpreter:
this Ansible documentation explains the use of the raw module.
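For instance, a play like the following sketch runs a command on the minikube host over ssh without needing any Python on the target (gather_facts must stay off, since fact gathering itself requires Python):
- name: Playbook that works without a Python interpreter on the target
  hosts: all
  gather_facts: no
  tasks:
    - name: Run a command through the raw module
      raw: uname -a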

How to configure Ansible with Cygwin on windows hosts

Since my company needs time to consider the security issues with WinRM, which Ansible uses to manage Windows hosts, I was thinking about doing it via the Cygwin ssh connection which we already have installed.
Is this even possible?
I tried to set up the connection variables like this:
ansible_connection: ssh
ansible_shell_type: cmd
And I'm trying to create a folder with the following playbook:
- name: Ensure C:\Temp exists
  win_file:
    path: C:\Temp
    state: directory
Gathering Facts is successful, but I'm getting: FAILED! => {"changed": false, "msg": "Unhandled exception while executing module: The system cannot find the path specified"}
In theory, Ansible supports connecting to Windows hosts through SSH since v2.8; newer Windows versions even come with a Microsoft fork of OpenSSH.
I am having trouble making it work myself (that's how I ended up here), but I recommend you take a look at the following links:
https://docs.ansible.com/ansible/latest/user_guide/windows_setup.html#windows-ssh-setup
If you can do SSH using the PK, but you get an unreachable from Ansible, you may need to check also this:
How to fix "Unreachable" when ping windows with ansible over ssh?
For Windows Server 2019/10's OpenSSH configuration:
https://www.youtube.com/watch?v=Cs3wBl_mMH0
Setting up OpenSSH for Windows using public key authentication
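As a starting point, the inventory variables for a Windows host over SSH usually look like this sketch (the host name and user are assumptions; per the linked setup docs, ansible_shell_type has to be cmd or powershell):
[win]
winhost.example.com ansible_connection=ssh ansible_shell_type=powershell ansible_user=Administrator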

Ansible connect to jump machine through VPN?

I was wondering if it were possible to tell Ansible to set up a VPN connection before executing the rest of the playbook. I've googled around, but haven't seen much on this.
You could combine a local playbook to set up a VPN and a playbook to run your tasks against a server.
Depending on what the job is, you can use Ansible or a shell script to connect the VPN. There should probably be another playbook to disconnect afterwards.
As a result you will have three playbooks and one to combine them via include:
- include: connect_vpn.yml
- include: do_stuff.yml
- include: disconnect_vpn.yml
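For example, connect_vpn.yml could be a play against localhost that brings the tunnel up before the rest runs. A sketch assuming an OpenVPN client with its config at /etc/openvpn/client.conf (both the client and the path are assumptions, substitute whatever your VPN uses):
- name: Connect the VPN before the real work starts
  hosts: localhost
  connection: local
  tasks:
    - name: Bring up the tunnel (assumed OpenVPN client and config path)
      command: openvpn --config /etc/openvpn/client.conf --daemon
      become: yes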
Check How To Use Ansible and Tinc VPN to Secure Your Server Infrastructure.
Basically, you need to install the thisismitch/ansible-tinc playbook and create a hosts inventory file with the nodes that you want to include in the VPN, for example:
[vpn]
prod01 vpn_ip=10.0.0.1 ansible_host=162.243.125.98
prod02 vpn_ip=10.0.0.2 ansible_host=162.243.243.235
prod03 vpn_ip=10.0.0.3 ansible_host=162.243.249.86
prod04 vpn_ip=10.0.0.4 ansible_host=162.243.252.151
[removevpn]
Then you should review the contents of the /group_vars/all file such as:
---
netname: nyc3
physical_ip: "{{ ansible_eth1.ipv4.address }}"
vpn_interface: tun0
vpn_netmask: 255.255.255.0
vpn_subnet_cidr_netmask: 32
where:
physical_ip is the IP address which you want tinc to bind to;
vpn_netmask is the netmask that will be applied to the VPN interface.
If you're using Amazon Web Services, check out the ec2_vpc_vpn module, which can create, modify, and delete VPN connections. It uses the boto3/botocore libraries.
For example:
- name: create a VPN connection
  ec2_vpc_vpn:
    state: present
    vpn_gateway_id: vgw-XXXXXXXX
    customer_gateway_id: cgw-XXXXXXXX

- name: delete a connection
  ec2_vpc_vpn:
    vpn_connection_id: vpn-XXXXXXXX
    state: absent
For other cloud services, check the list of Ansible Cloud Modules.

Ansible: where should the private key be placed?

The scenario:
We have 2 servers and use a private key to communicate via ssh:
Server 1 (stage)
Server 2 (production)
Inventory file:
[server]
server1.com
server2.com
Both servers have the default username "ubuntu".
For now we need to put the private key on the Local Machine, Server 1, and Server 2, at the same path. For example:
On Local Machine : /tmp/key/privatekey (private key to remote server 1)
On Server 1 : /tmp/key/privatekey (private key to remote server 2)
On Server 2 : /tmp/key/privatekey (private key to remote server 1)
- name: copy archived file to another remote server using rsync.
  synchronize: mode=pull src=/home/{{ user }}/packages/{{ package_name }}.tar.gz dest=/tmp/{{ package_name }}.tar.gz archive=yes
  delegate_to: "{{ production_server }}"
  tags: release
The ansible-playbook command:
ansible-playbook -i inventory --extra-vars "host=server[0] user=ubuntu" --private-key=/tmp/key/privatekey playbook.yml --tags "release"
Executing the ansible playbook on the local machine makes Server 1 copy the archived file from Server 1 to Server 2.
That worked as we expected, but it's ridiculous that we have to place the private key at the same path on 3 machines (local, server 1, server 2). What if we want to place the private key at a different path? Or the user wants to use a different account/username for server 1 and server 2?
I've tried modifying the inventory, providing the user and auth key for server 2. I placed the private key under /home/ubuntu/key/ on server 1:
[server]
server1.com
server2.com ansible_ssh_private_key_file=/home/ubuntu/key/privatekey ansible_ssh_user=ubuntu
then ran the same ansible-playbook command above. The result: server 1 could not reach server 2.
What if we want to place private key on different path? or the user want to use different account/username for server 1 and server 2?
I think you're making your life much harder than it needs to be. I'll assume you're using the user 'ubuntu' and standard RSA keys:
Take your key pairs and put them in /home/ubuntu/.ssh/id_rsa and /home/ubuntu/.ssh/id_rsa.pub on each machine. These are the default locations where ssh looks for key files.
Write a playbook to deploy the public key (using the authorized_key module) from your control machine to your target machines; a sketch follows after these steps. Since you're sharing keys, they are all the same.
If you want to do communication between server1 and server2 via delegate_to, you'll have to update the ssh configuration for the ubuntu user. Your options here are to turn off host key/known-hosts checking OR add the appropriate entries to known_hosts. The former (less secure) can be done by adding a /home/ubuntu/.ssh/config file, and the latter (more secure) via ssh-keyscan (see https://serverfault.com/questions/132970/can-i-automatically-add-a-new-host-to-known-hosts).
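A minimal sketch of that key-distribution play (the public key path on the control machine is an assumption):
- name: Deploy the shared public key to both servers
  hosts: server
  tasks:
    - name: Install the public key for the ubuntu user
      authorized_key:
        user: ubuntu
        state: present
        key: "{{ lookup('file', '/home/ubuntu/.ssh/id_rsa.pub') }}"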
Part of the issue here is that when you use delegate_to, the ssh connection logic that would normally be constructed on the control machine does not get propagated to the delegate machine.
Also, I would not put your private keys in the /tmp directory.
Hope this helps
