Is there a way to allow downgrades with apt Ansible module? - ansible

I have an Ansible playbook to deploy a specific version of Docker. I want the apt module to allow downgrades when the target machine has a higher version installed. I browsed the documentation but couldn't find a suitable way to do it. The YAML file has lines like:
- name: "Install specific docker ce"
  become: true
  apt:
    name: docker-ce=5:18.09.1~3-0~ubuntu-bionic
    state: present

For Docker CE on Ubuntu there are two packages, docker-ce and docker-ce-cli. You can see which versions are currently installed with:
$ apt list --installed | grep docker
docker-ce/xenial,now 5:18.09.7~3-0~ubuntu-xenial amd64 [installed,upgradable to: 5:19.03.1~3-0~ubuntu-xenial]
docker-ce-cli/xenial,now 5:18.09.7~3-0~ubuntu-xenial amd64 [installed,upgradable to: 5:19.03.1~3-0~ubuntu-xenial]
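As an aside, the 5: prefix in these version strings is a Debian epoch: a pin like docker-ce=5:18.09.7~3-0~ubuntu-xenial follows the package=[epoch:]upstream_version[-debian_revision] pattern. A simplified sketch of how such a string splits (an illustration only; dpkg's real comparison algorithm handles more cases):

```python
def split_pin(pin):
    """Split an apt version pin into (name, epoch, upstream, revision).

    Simplified illustration of dpkg's parsing rules: the epoch is
    everything before the first ':' (0 if absent), and the Debian
    revision is everything after the *last* '-' (empty if absent).
    """
    name, _, version = pin.partition("=")
    epoch, sep, rest = version.partition(":")
    if not sep:  # no epoch present
        epoch, rest = "0", version
    upstream, sep, revision = rest.rpartition("-")
    if not sep:  # no Debian revision present
        upstream, revision = rest, ""
    return name, int(epoch), upstream, revision
```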
You need to force the same version for both packages, e.g. on Ubuntu Xenial:
main.yml
- name: Install docker-ce
  apt:
    state: present
    force: True
    name:
      - "docker-ce=5:18.09.7~3-0~ubuntu-xenial"
      - "docker-ce-cli=5:18.09.7~3-0~ubuntu-xenial"
  notify:
    - Restart docker
  when: ansible_os_family == "Debian" and ansible_distribution_version == "16.04"
handler.yml
# Equivalent to `systemctl daemon-reload && systemctl restart docker`
- name: Restart docker
  systemd:
    name: docker
    daemon_reload: True
    state: restarted
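As a side note, newer versions of the apt module (ansible-core 2.12 and later, if I recall the version correctly) expose an allow_downgrade option, which is narrower than force (force passes --force-yes to apt, disabling several safety checks). A sketch using it:

```yaml
- name: Install specific docker ce (allowing downgrade)
  become: true
  apt:
    name:
      - docker-ce=5:18.09.7~3-0~ubuntu-xenial
      - docker-ce-cli=5:18.09.7~3-0~ubuntu-xenial
    state: present
    allow_downgrade: true
```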


python module firewall not found

Machine running ansible-playbook: MacBook, with Python 3.9
Target machine: Debian 10 with Python 2.7.16 and Python 3.7.3
When I tried to open a port in the firewall:
- name: Open port 80 for http access
  firewalld:
    service: http
    permanent: true
    state: enabled
I got this error:
fatal: [virtual_server]: FAILED! => {"changed": false, "msg": "Python
Module not found: firewalld and its python module are required for
this module, version 0.2.11 or newer required
(0.3.9 or newer for offline operations)"}
I also tried ansible.posix.firewalld, after running ansible-galaxy collection install ansible.posix on the MacBook, and still got this error.
Can anybody tell me what is wrong?
If you have your playbook vars like this:
---
- hosts: testbench
  vars:
    ansible_python_interpreter: /usr/bin/python3
then your firewall task should look like this:
- name: open ports
  ansible.posix.firewalld:
    permanent: true
    immediate: true
    port: "{{ item }}/tcp"
    state: enabled
  become: true
  vars:
    ansible_python_interpreter: /usr/bin/python
  with_items:
    - tcp-port-1
    - tcp-port-2
    - tcp-port-3
ansible.posix.firewalld depends on the python firewalld bindings which are missing for the python version ansible is running under.
See https://bugzilla.redhat.com/show_bug.cgi?id=2091931 for a similar problem on systems using the EPEL8 ansible package, where the python3-firewall package is built against python 3.6 but ansible is using python 3.8.
ansible --version or head -1 $(which ansible) will tell you what version of Python ansible uses.
On Red Hat systems, dnf repoquery -l python3-firewall will tell you what version of Python python3-firewall is built against.
The solution is to install the appropriate python-firewalld package for your OS that matches the version of python ansible is using, if one exists.
If a compatible python-firewalld package does not exist, you can configure ansible to use a different version of python by setting the ansible_python_interpreter variable or the interpreter_python ansible.cfg setting (see https://docs.ansible.com/ansible/latest/reference_appendices/interpreter_discovery.html).
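For example, a minimal sketch of both options (the interpreter path is a placeholder; point it at whichever Python actually has the firewalld bindings on your hosts):

```ini
# ansible.cfg
[defaults]
interpreter_python = /usr/bin/python3.6

# or per host/group in the inventory
[testbench:vars]
ansible_python_interpreter=/usr/bin/python3.6
```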
The problem is probably that you have AWX installed in Docker, and it doesn't have that Galaxy collection. Do this:
1. Go to the main server and list the images:
> docker images
You will find something like this:
ansible/awx 17.1.0 {here_id_of_image} 16 months ago 1.41GB
2. Connect to that Docker image:
> docker run -it {here_id_of_image} bash
3. Run the command to install the collection:
> ansible-galaxy collection install ansible.posix
Done, now run your playbook.
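Alternatively, AWX can install collections itself: as far as I know, recent AWX versions pull collections listed in a collections/requirements.yml in the project repository during project sync, so you don't have to modify the container by hand:

```yaml
# collections/requirements.yml in the project repository
collections:
  - name: ansible.posix
```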
I fixed this problem by switching ansible_connection from paramiko to ssh, on Ansible 5.10.0 and Ubuntu 22.04.
My changes:
[ chusiang@ubuntu-22.04 ~ ]
$ vim ansible-pipeline.cfg
[defaults]
- ansible_connection = paramiko
- transport = paramiko
+ ansible_connection = ssh
+ transport = ssh
Ansible version.
[ chusiang@ubuntu-22.04 ~ ]
$ ansible --version
ansible [core 2.12.10]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/chusiang/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/chusiang/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/chusiang/.ansible/collections:/usr/share/ansible/collections
executable location = /home/chusiang/.local/bin/ansible
python version = 3.10.6 (main, Nov 2 2022, 18:53:38) [GCC 11.3.0]
jinja version = 3.0.3
libyaml = True
Pip versions of ansible.
[ chusiang@ubuntu-22.04 ~ ]
$ pip list | grep -i ansible
ansible 5.10.0
ansible-core 2.12.10
ansible-inventory-to-ssh-config 1.0.1
ansible-lint 3.5.1
Enjoy it.

How to execute a command as a root user in ansible hosts

I have 3 VM's for ansible as follows:
master
host1
host2
ansible is installed and configured correctly. no issue with that part.
So I have this command I need to run. Before executing it with Ansible, I will show how I do it without Ansible:
I want to update the packages first, so I log into the VM as a normal user (ubuntu) and execute:
sudo apt-get update -y
As you can see, I executed a sudo command while logged in as the ubuntu user, not as root.
So the command above runs with sudo, but the user is ubuntu.
I want to do the same with Ansible.
My question: how do I execute a command that needs sudo with Ansible while logged in as a different user?
So my playbook will look like this:
- hosts: all
  tasks:
    - name: Update and upgrade apt packages
      become: ubuntu
      apt:
        upgrade: yes
        update_cache: yes
But where do I add the sudo part in the above playbook? Is it added automatically?
What if I want to execute a command like this, while logged in as ubuntu:
sudo apt-get install docker
How can I do the sudo part in Ansible while using become: ubuntu?
To invoke a command as sudo, you can do something like this:
- hosts: all
  tasks:
    - name: Update and upgrade apt packages
      become: ubuntu
      # this will fail because /opt needs permissions
      # shell: echo hello > /opt/testfile
      # this will work
      shell: "sudo sh -c 'echo hello > /opt/testfile'"
      # use this to verify output
      register: out
    - debug:
        msg: "{{ out }}"
NOTE: you will get this warning:
"Consider using 'become', 'become_method', and 'become_user' rather
than running sudo"
The following will work; you can find a detailed explanation and examples in the Ansible documentation on privilege escalation.
[1] https://docs.ansible.com/ansible/latest/user_guide/become.html
- hosts: all
  tasks:
    - name: Update and upgrade apt packages
      become: true   # escalates to root (the default become_user), so writing to /opt works
      shell: echo hello > /opt/testfile
      # use this to verify output
      register: out
    - debug:
        msg: "{{ out }}"
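Applied to the original question, the same pattern covers sudo apt-get install docker: connect as ubuntu via remote_user and let become escalate the task to root. A sketch (the package name docker is taken from the question; on Ubuntu the real package is usually docker.io or docker-ce):

```yaml
- hosts: all
  remote_user: ubuntu   # log in as the normal user
  tasks:
    - name: Install docker
      become: true      # run this task via sudo as root
      apt:
        name: docker
        state: present
        update_cache: yes
```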

How to make Ansible overwrite existing files during remote software installation?

I'm using Ansible to clone a repository to a remote host and install the software there:
---
- hosts: ubuntu-16.04
  become: yes
  tasks:
    - name: Install aptitude
      apt:
        name: aptitude
        state: latest
        update_cache: yes
    - name: Update apt cache and upgrade packages
      apt:
        update_cache: yes
        upgrade: full
        autoclean: yes
        autoremove: yes
    - name: Clone <name of repository> repository
      git:
        repo: <link to repository>
        dest: /home/test/Downloads/<new directory>
        clone: yes
    - name: Install <name of repository>
      command: chdir=/home/test/Downloads/<new directory> {{ item }}
      with_items:
        - make
        - make install
      become_user: root
      become_method: sudo
    - name: Remove <name of repository> directory
      file:
        path: /home/test/Downloads/<new directory>
        state: absent
This works fine if the software isn't already installed on the system. However, if it is and I run the playbook again, then it gets stuck at the "Install ..." task, saying that it failed to create symlinks as files already exist.
The first problem is that several tasks after this one (not shown here) are not executed due to the play exiting there. I could solve this by simply reordering my tasks, but what I really would like to know, please:
How do I make Ansible overwrite existing files?
Would you even solve this that way, or would you say that this behavior is fine as the software is already installed? But what if the repository contains newer code the second time it is cloned?
I don't think the problem is related to Ansible, but rather to the Makefile. Nevertheless, you can simply skip the installation step when the software is already installed.
First check whether the binary already exists, then skip the installation based on the result:
- name: Check if software exists
  stat:
    path: <path to binary>
  register: stat_result

- name: Install <name of repository>
  command: chdir=/home/test/Downloads/<new directory> {{ item }}
  with_items:
    - make
    - make install
  become_user: root
  become_method: sudo
  when: not stat_result.stat.exists
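Another option, if make install produces a known binary, is to let the command module skip the step itself via its creates argument, which makes the task idempotent without a separate stat task (the binary path is a placeholder):

```yaml
- name: Install <name of repository>
  command: "{{ item }}"
  args:
    chdir: /home/test/Downloads/<new directory>
    creates: <path to binary>
  with_items:
    - make
    - make install
  become_user: root
  become_method: sudo
```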

Ansible Error : The lxc module is not importable. Check the requirements

My task looks something like this
- name: Create a started container
  lxc_container:
    name: test-container-started
    container_log: true
    template: ubuntu
    state: started
    template_options: --release trusty
I get the following errors:
TASK: [lxc | Create a started container]
**************************************
failed: [localhost] => {"failed": true, "parsed": false}
msg='The lxc module is not importable. Check the requirements.'

FATAL: all hosts have already failed -- aborting
If you use Debian or Ubuntu as the host system, there is already a python-lxc package available. The best way to install it is with Ansible, of course, just before the lxc_container tasks:
tasks:
  - name: Install provisioning dependencies
    apt: name=python-lxc
  - name: Create docker1.lxc container
    lxc_container:
      name: docker1
      ...
I just ran into the same issue and solved it by making sure to pip install lxc-python2 system-wide on the target machine.
In my case the target was localhost via an ansible_connection=local host, and I thought I could get away with pip: name=lxc-python2, but that installed into my current virtualenv.
lxc_container is relatively new.
To run the lxc_container module you need to make sure the pip package is installed on the target host, as well as lxc itself, e.g.:
pip install lxc-python2
apt-get install lxc
The correct answer to the issue is to add:
become: yes
just before the lxc_container module section, to get root privileges to run lxc commands on the target host.
It is definitely not required to install any python2-lxc support libraries on the target hosts to use the lxc_container module; that support must be available on the management host only. Only Python itself must be available on the target host with current Ansible versions.
See the Ansible documentation for more info about the requirements of the lxc_container module (on the host that executes the module):
lxc >= 1.0 # OS package
python >= 2.6 # OS Package
lxc-python2 >= 0.1 # PIP Package from https://github.com/lxc/python2-lxc

Remove package ansible playbook

I have an EC2 instance created using Vagrant and provisioned with Ansible.
I have this task that installs two packages using apt:
---
- name: Install GIT & TIG
  action: apt pkg={{ item }} state=installed
  with_items:
    - git
    - tig
I now want to remove tig from my instance. I've removed it from my playbook and run vagrant provision, but the package is still there.
How can I do this?
Ansible cannot automatically remove packages when you remove them from your playbook. Ansible is stateless: it does not keep track of what it does, and therefore does not know what it did in previous runs or whether your playbook/role has been modified. Ansible will only do what you explicitly describe in the playbook/role, so you have to write a task to remove the package.
You can easily do this with the apt module.
- name: Remove TIG
  apt:
    pkg: tig
    state: absent
  become: true
Removing the package from the playbook ensures it will not be installed on a newly provisioned machine, but it will not be removed from a machine that is already provisioned. If possible, you can destroy the machine using vagrant destroy and create/provision a new one.
If it is not possible to destroy the machine and provision a new one then you can add an ansible task to remove the installed package. Using absent as the state will remove the package.
- name: Remove TIG
  action: apt pkg=tig state=absent
The sudo keyword is now deprecated and no longer recommended. Use become instead.
- name: Remove TIG
  apt:
    pkg: tig
    state: absent
  become: yes
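If you also want to drop the package's configuration files and any dependencies that are no longer needed, the apt module has purge and autoremove options:

```yaml
- name: Remove TIG completely
  apt:
    pkg: tig
    state: absent
    purge: true
    autoremove: true
  become: yes
```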
