Remove package ansible playbook - amazon-ec2

I have an EC2 instance created using Vagrant and provisioned with Ansible.
I have this task that installs two packages using apt:
---
- name: Install GIT & TIG
  action: apt pkg={{ item }} state=installed
  with_items:
    - git
    - tig
I now want to delete/remove tig from my instance. I removed it from my playbook and ran vagrant provision, but the package is still there.
How can I do this?

Ansible cannot automatically remove packages when you remove them from your playbook. Ansible is stateless: it does not keep track of what it does, so it does not know what it did in previous runs or whether your playbook/role has been modified. Ansible only does what you explicitly describe in the playbook/role, so you have to write a task to remove the package.
You can easily do this with the apt module.
- name: Remove TIG
  apt:
    pkg: tig
    state: absent
  become: true

Removing the package from the playbook ensures that it will not be installed on a newly provisioned machine, but it will not be removed from a machine where it is already installed. If possible, you can destroy the machine using vagrant destroy and create/provision a new one.
If destroying the machine and provisioning a new one is not possible, you can add an Ansible task to remove the installed package. Setting the state to absent removes the package.
- name: Remove TIG
  action: apt pkg=tig state=absent

The sudo directive is deprecated and no longer recommended. Use become instead.
- name: Remove TIG
  apt:
    pkg: tig
    state: absent
  become: yes
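
If you also want to delete the package's configuration files (the equivalent of apt-get purge), the apt module supports a purge parameter alongside state: absent. A minimal sketch, assuming become is appropriate for your setup:

```yaml
- name: Remove TIG and purge its configuration files
  apt:
    pkg: tig
    state: absent
    purge: true
  become: true
```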

Related

``apt-mark hold`` and ``apt-mark unhold`` with ansible modules

I'm writing my k8s upgrade Ansible playbook, and within it I need to run apt-mark unhold kubeadm. I am trying to avoid calling apt via the Ansible command or shell module if possible, but the apt hold/unhold command does not seem to be supported by either the package or the apt module.
Is it possible to do apt-mark hold in Ansible without command or shell?
You can use the ansible.builtin.dpkg_selections module for this.
- name: Hold kubeadm
  ansible.builtin.dpkg_selections:
    name: kubeadm
    selection: hold

- name: Unhold kubeadm
  ansible.builtin.dpkg_selections:
    name: kubeadm
    selection: install
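
In an upgrade playbook, these two tasks can bracket the actual upgrade step. A sketch of that flow, where the target version string (1.28.2-00) is a hypothetical placeholder:

```yaml
- name: Unhold kubeadm before upgrading
  ansible.builtin.dpkg_selections:
    name: kubeadm
    selection: install

- name: Upgrade kubeadm to the target version (placeholder version string)
  ansible.builtin.apt:
    name: kubeadm=1.28.2-00
    state: present
    update_cache: true

- name: Hold kubeadm again after the upgrade
  ansible.builtin.dpkg_selections:
    name: kubeadm
    selection: hold
```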

Ansible - Activating virtual env with become_user

I want to create a user for my CKAN installation, then activate a virtual environment as that user and install something into it.
- name: Add a CKAN user
  user:
    name: ckan
    comment: "CKAN User"
    shell: /sbin/nologin
    create_home: yes
    home: /usr/lib/ckan
    state: present

- name: chmod 755 /usr/lib/ckan
  file:
    path: /usr/lib/ckan
    mode: u=rwX,g=rX,o=rX
    recurse: yes

- name: Create Python virtual env
  command: virtualenv --no-site-packages default
  become: yes
  become_user: ckan

- name: Activate env
  command: . default/bin/activate

- name: Activate env
  command: pip install setuptools==36.1
I know it's generally not the most 'Ansible' implementation but I'm just trying to get something to work.
The error is in 'Create Python virtual env'; the task fails on that line.
On the command line I would just run:
su -s /bin/bash - ckan
But how do I achieve this here? I thought become_user would do it?
If you already have the path to the user's home folder and have set the appropriate permissions, you can use the Ansible pip module directly to create a virtual environment in that folder and install packages. So, if I understand correctly, you do not need the following tasks:
Create Python virtual env: instead of this task, add the virtualenv_command parameter to the pip module to create the virtual environment (if it does not already exist).
Activate env (x2): if you install packages into the virtual environment using the Ansible pip module, these two tasks are not required.
You can also use the virtualenv_site_packages parameter to exclude the global packages from your virtual environment; you do not need the extra_args parameter for this.
If you want to install a single package into a virtual environment, then you can replace your last 3 tasks with the following task
tasks:
  - name: Create Python virtual env and install one package inside the virtual env
    pip:
      name: setuptools==36.1
      virtualenv: /path/to/ckan/user/home/folder # <--- path to user's home folder*
      virtualenv_command: virtualenv
      virtualenv_site_packages: no # <--- exclude site packages
      virtualenv_python: python3.7
If you want to install many packages from requirements-docs.txt, then you could use this approach
tasks:
  - name: Create Python virtual env and install multiple packages inside the virtual env
    pip:
      requirements: /path/to/ckan/user/home/folder/requirements-docs.txt
      virtualenv: /path/to/ckan/user/home/folder # <--- path to user's home folder*
      virtualenv_command: virtualenv
      virtualenv_site_packages: no # <--- exclude site packages
      virtualenv_python: python3.7
* the user's home folder must exist before executing this task
The following worked:
- name: Install setuptools into venv
  pip:
    name: Setuptools==36.1
    virtualenv: '{{ path_to_virtualenv }}'
become_user was not needed.
Another example:
- name: Install ckan python modules
  pip: name="requirements-docs.txt" virtualenv={{ ckan_virtualenv }} state=present extra_args="--ignore-installed -r"

Is there a way to allow downgrades with apt Ansible module?

I have an Ansible playbook to deploy a specific version of Docker. I want the apt module to allow downgrades when the target machine has a higher version installed. I browsed the documentation but couldn't find a suitable way to do it. The YAML file has lines like:
- name: "Install specific docker ce"
  become: true
  apt:
    name: docker-ce=5:18.09.1~3-0~ubuntu-bionic
    state: present
For Docker CE on Ubuntu there are two packages, docker-ce and docker-ce-cli. You can see which versions are currently installed with:
$ apt list --installed | grep docker
docker-ce/xenial,now 5:18.09.7~3-0~ubuntu-xenial amd64 [installed,upgradable to: 5:19.03.1~3-0~ubuntu-xenial]
docker-ce-cli/xenial,now 5:18.09.7~3-0~ubuntu-xenial amd64 [installed,upgradable to: 5:19.03.1~3-0~ubuntu-xenial]
You need to force the same version for both packages, e.g. on Ubuntu Xenial:
main.yml
- name: Install docker-ce
  apt:
    state: present
    force: True
    name:
      - "docker-ce=5:18.09.7~3-0~ubuntu-xenial"
      - "docker-ce-cli=5:18.09.7~3-0~ubuntu-xenial"
  notify:
    - Restart docker
  when: ansible_os_family == "Debian" and ansible_distribution_version == "16.04"
handler.yml
# Equivalent to `systemctl daemon-reload && systemctl restart docker`
- name: Restart docker
  systemd:
    name: docker
    daemon_reload: True
    state: restarted
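
Note that force: True disables several of apt's safety checks. Newer Ansible releases (ansible-core 2.12 and later, if I recall correctly) add a narrower allow_downgrade parameter to the apt module, which permits downgrades without disabling the other checks. A sketch of the same task using it:

```yaml
- name: Install a specific (older) docker-ce version, allowing downgrade
  ansible.builtin.apt:
    name:
      - "docker-ce=5:18.09.7~3-0~ubuntu-xenial"
      - "docker-ce-cli=5:18.09.7~3-0~ubuntu-xenial"
    state: present
    allow_downgrade: true
```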

How to make Ansible overwrite existing files during remote software installation?

I'm using Ansible to clone a repository to a remote host and install the software there:
---
- hosts: ubuntu-16.04
  become: yes
  tasks:
    - name: Install aptitude
      apt:
        name: aptitude
        state: latest
        update_cache: yes
    - name: Update apt cache and upgrade packages
      apt:
        update_cache: yes
        upgrade: full
        autoclean: yes
        autoremove: yes
    - name: Clone <name of repository> repository
      git:
        repo: <link to repository>
        dest: /home/test/Downloads/<new directory>
        clone: yes
    - name: Install <name of repository>
      command: chdir=/home/test/Downloads/<new directory> {{ item }}
      with_items:
        - make
        - make install
      become_user: root
      become_method: sudo
    - name: Remove <name of repository> directory
      file:
        path: /home/test/Downloads/<new directory>
        state: absent
This works fine if the software isn't already installed on the system. However, if it is and I run the playbook again, then it gets stuck at the "Install ..." task, saying that it failed to create symlinks as files already exist.
The first problem is that several tasks after this one (not shown here) are not executed due to the play exiting there. I could solve this by simply reordering my tasks, but what I really would like to know, please:
How do I make Ansible overwrite existing files?
Would you even solve this that way, or would you say that this behavior is fine as the software is already installed? But what if the repository contains newer code the second time it is cloned?
I don't think the problem is related to Ansible, but rather to the Makefile. Nevertheless, you could simply skip the installation step when the software is already installed.
First check whether the binary already exists, and based on the result skip the installation:
- name: Check if software exists
  stat:
    path: <path to binary>
  register: stat_result

- name: Install <name of repository>
  command: chdir=/home/test/Downloads/<new directory> {{ item }}
  with_items:
    - make
    - make install
  become_user: root
  become_method: sudo
  when: not stat_result.stat.exists
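
Alternatively, the command module's creates argument skips the task when a given path already exists, which folds the check into the install task itself. A sketch, where <path to binary> is the same placeholder as above:

```yaml
- name: Install <name of repository>
  command: chdir=/home/test/Downloads/<new directory> {{ item }}
  args:
    creates: <path to binary>
  with_items:
    - make
    - make install
  become_user: root
  become_method: sudo
```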

ERROR: yum is not a legal parameter in an Ansible task or handler

Long story short, my PC died while I was running Vagrant, and when I got power back my subsequent vagrant up attempted to rebuild the box. In doing so I got the error in the subject line:
ERROR: yum is not a legal parameter in an Ansible task or handler
I've tried double-checking the Ansible documentation, checking my structure and indentation, saving in different text editors, etc., but the error persists. I'm stuck, as there was no issue previously, so I'm a bit stumped as to why it's no longer working.
My playbook is as follows, though I've cut a lot of it out for now while I resolve the issue:
---
- hosts: all
  sudo: yes
  tasks:
    - name: Update yum packages
      yum: name=* state=latest
Many thanks!
It turns out that I was missing a return after tasks.
---
- hosts: all
  sudo: yes
  tasks:
    - name: Update yum packages
      yum: name=* state=latest
However, the Ansible playbook docs (http://docs.ansible.com/ansible/playbooks_intro.html) show examples where no extra newline is required after the tasks declaration.
Perhaps it's a nuance of my particular version (1.7.2).
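
For reference, on modern Ansible versions the sudo play keyword has been removed in favour of become, and the wildcard package name is better quoted. A sketch of the same playbook in current syntax:

```yaml
---
- hosts: all
  become: yes
  tasks:
    - name: Update yum packages
      yum:
        name: '*'
        state: latest
```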
