Ansible - Activating virtual env with become_user - ansible

I want to create a user for my CKAN installation, then activate a virtual environment as that user and install something into it.
- name: Add a CKAN user
  user:
    name: ckan
    comment: "CKAN User"
    shell: /sbin/nologin
    create_home: yes
    home: /usr/lib/ckan
    state: present

- name: chmod 755 /usr/lib/ckan
  file:
    path: /usr/lib/ckan
    mode: u=rwX,g=rX,o=rX
    recurse: yes

- name: Create Python virtual env
  command: virtualenv --no-site-packages default
  become: yes
  become_user: ckan

- name: Activate env
  command: . default/bin/activate

- name: Activate env
  command: pip install setuptools==36.1
I know it's generally not the most 'Ansible' implementation but I'm just trying to get something to work.
The playbook fails at the 'Create Python virtual env' task.
On the command line I would just run:
su -s /bin/bash - ckan
But how do I achieve this here? I thought become_user would do it?

If you already have the path to the user's folder and have set the appropriate permissions, you can directly use the Ansible pip module to create a virtual environment in that folder and install packages. So, IIUC, you do not need the following tasks:

Create Python virtual env
Instead of this task, you can add the virtualenv_command parameter to the pip module so that it creates the virtual environment (if it does not already exist).

Activate env (x2)
If you install packages through the Ansible pip module, these two tasks are not required.

Also, you can use the virtualenv_site_packages parameter to exclude the global site packages from your virtual environment; you do not need extra_args for this.
If you want to install a single package into a virtual environment, then you can replace your last 3 tasks with the following task
tasks:
  - name: Create Python virtual env and install one package inside the virtual env
    pip:
      name: setuptools==36.1
      virtualenv: /path/to/ckan/user/home/folder   # <--- path to user's home folder*
      virtualenv_command: virtualenv
      virtualenv_site_packages: no                 # <--- exclude site packages
      virtualenv_python: python3.7
If you want to install many packages from requirements-docs.txt, then you could use this approach
tasks:
  - name: Create Python virtual env and install multiple packages inside the virtual env
    pip:
      requirements: /path/to/ckan/user/home/folder/requirements-docs.txt
      virtualenv: /path/to/ckan/user/home/folder   # <--- path to user's home folder*
      virtualenv_command: virtualenv
      virtualenv_site_packages: no                 # <--- exclude site packages
      virtualenv_python: python3.7
* the user's home folder must exist before executing this task
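The home-folder precondition can itself be expressed as a task; a sketch, reusing the placeholder path and the ckan user from the question (ownership and mode are assumptions):

```yaml
- name: Ensure the home folder exists before the pip task creates the virtualenv
  file:
    path: /path/to/ckan/user/home/folder
    state: directory
    owner: ckan
    group: ckan
    mode: "0755"
```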

The following worked:
- name: Install setuptools into venv
  pip:
    name: Setuptools==36.1
    virtualenv: '{{ path_to_virtualenv }}'
become_user was not needed.
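If the virtualenv directory does turn out to be owned by the ckan user while Ansible connects as someone else, the same task can be run with privilege escalation; a sketch assuming the names from the question:

```yaml
- name: Install setuptools into venv as the ckan user
  pip:
    name: setuptools==36.1
    virtualenv: '{{ path_to_virtualenv }}'
  become: yes
  become_user: ckan
```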
Another example:
- name: Install ckan python modules
  pip:
    name: requirements-docs.txt
    virtualenv: "{{ ckan_virtualenv }}"
    state: present
    extra_args: "--ignore-installed -r"

Related

How can I avoid getting a .travis.yml file automatically created inside my ansible role?

I have created several roles with ansible-galaxy init my_role, and after a while I realized a .travis.yml file was created automatically for basic syntax testing. I am not planning to use Travis CI, so this .travis.yml file is useless to me.
I am sure there is a good reason why this happens, so an explanation would also be appreciated.
This is the file:
---
language: python
python: "2.7"

# Use the new container infrastructure
sudo: false

# Install ansible
addons:
  apt:
    packages:
      - python-pip

install:
  # Install ansible
  - pip install ansible

  # Check ansible version
  - ansible --version

  # Create ansible.cfg with correct roles_path
  - printf '[defaults]\nroles_path=../' > ansible.cfg

script:
  # Basic role syntax check
  - ansible-playbook tests/test.yml -i tests/inventory --syntax-check

notifications:
  webhooks: https://galaxy.ansible.com/api/v1/notifications/
It's part of the default role skeleton. If you want to use a different skeleton, pass it to ansible-galaxy using the --role-skeleton flag:
ansible-galaxy init --role-skeleton ~/ansible-skeletons/role roles/foo
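A skeleton is just a directory tree that ansible-galaxy copies, so the simplest fix is a skeleton with no .travis.yml in it. A minimal layout might look like this (the exact set of directories is an assumption; include whichever your roles need):

```
~/ansible-skeletons/role/
├── defaults/
│   └── main.yml
├── tasks/
│   └── main.yml
└── meta/
    └── main.yml
```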

``apt-mark hold`` and ``apt-mark unhold`` with ansible modules

I'm writing my k8s upgrade Ansible playbook, and within it I need to do apt-mark unhold kubeadm. I am trying to avoid using the Ansible command or shell module to call apt if possible, but apt-mark hold/unhold does not seem to be supported by either the package or the apt module.
Is it possible to do apt-mark hold in ansible without command or shell?
You can use the ansible.builtin.dpkg_selections module for this.
- name: Hold kubeadm
  ansible.builtin.dpkg_selections:
    name: kubeadm
    selection: hold

- name: Unhold kubeadm
  ansible.builtin.dpkg_selections:
    name: kubeadm
    selection: install
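For the k8s-upgrade flow in the question, the usual pattern is unhold, upgrade, re-hold; a sketch (the apt step is illustrative and assumes you simply want the latest available kubeadm):

```yaml
- name: Unhold kubeadm so it can be upgraded
  ansible.builtin.dpkg_selections:
    name: kubeadm
    selection: install

- name: Upgrade kubeadm
  ansible.builtin.apt:
    name: kubeadm
    state: latest
    update_cache: yes

- name: Hold kubeadm again after the upgrade
  ansible.builtin.dpkg_selections:
    name: kubeadm
    selection: hold
```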

How to execute a command as a root user in ansible hosts

I have 3 VM's for ansible as follows:
master
host1
host2
Ansible is installed and configured correctly; no issue with that part.
I have a command I need to run. Before executing it with Ansible, here is how I do it without Ansible: I want to update the packages first, so I log into the VM as a normal user (ubuntu) and run:
sudo apt-get update -y
As you can see, I execute a sudo command while logged in as the ubuntu user, not as root.
So the command above runs with sudo, but the user is ubuntu.
I want this scenario to be done using ansible.
My Question: How to execute a command that contains sudo using ansible but when logged in as a different user
So my playbook will look like this:
- hosts: all
  tasks:
    - name: Update and upgrade apt packages
      become: ubuntu
      apt:
        upgrade: yes
        update_cache: yes
But where do I add the sudo part in the above playbook? Or is it added automatically?
What if I want to execute a command like this, logged in user as ubuntu:
sudo apt-get install docker
How can I do the sudo part in Ansible while using become: ubuntu?
To invoke a command as sudo, you can do something like this:
- hosts: all
  tasks:
    - name: Update and upgrade apt packages
      become: true
      become_user: ubuntu
      # this will fail because /opt needs permissions:
      # shell: echo hello > /opt/testfile
      # this will work:
      shell: "sudo sh -c 'echo hello > /opt/testfile'"
      register: out   # use this to verify output

    - debug:
        msg: "{{ out }}"
NOTE: you will get this warning:
"Consider using 'become', 'become_method', and 'become_user' rather than running sudo"
The following will also work; you can find a detailed explanation and examples in the Ansible documentation: https://docs.ansible.com/ansible/latest/user_guide/become.html
- hosts: all
  tasks:
    - name: Update and upgrade apt packages
      become: true
      become_user: ubuntu
      shell: echo hello > /opt/testfile
      register: out   # use this to verify output

    - debug:
        msg: "{{ out }}"
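For the original apt use-case, what is actually wanted is escalation to root, so become: true on its own is enough (become_user defaults to root); a sketch, where the docker package name is an assumption (on Ubuntu the apt package is typically docker.io):

```yaml
- hosts: all
  become: true   # run the tasks below as root via sudo
  tasks:
    - name: Update apt cache and upgrade packages
      apt:
        update_cache: yes
        upgrade: yes

    - name: Install docker
      apt:
        name: docker.io
        state: present
```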

Ansible Error : The lxc module is not importable. Check the requirements

My task looks something like this
- name: Create a started container
  lxc_container:
    name: test-container-started
    container_log: true
    template: ubuntu
    state: started
    template_options: --release trusty
I get the following errors:
TASK: [lxc | Create a started container] **************************************
failed: [localhost] => {"failed": true, "parsed": false}
BECOME-SUCCESS-azsyqtknxvlowmknianyqnmdhuggnanw
failed=True msg='The lxc module is not importable. Check the requirements.'
The lxc module is not importable. Check the requirements.

FATAL: all hosts have already failed -- aborting
If you use Debian or Ubuntu as the host system, there is already a python-lxc package available. The best way to install it is with Ansible, of course, just before the lxc_container tasks:
tasks:
  - name: Install provisioning dependencies
    apt:
      name: python-lxc

  - name: Create docker1.lxc container
    lxc_container:
      name: docker1
      ...
I just ran into the same issue and solved it by making sure to pip install lxc-python2 system-wide on the target machine.
In my case the target was localhost via an ansible_connection=local host and I thought I could get away with pip: name=lxc-python2, but that installed into my current virtualenv.
lxc_container is relatively new. To run the lxc_container module you need to make sure the pip package lxc-python2 is installed on the target host, as well as lxc itself, e.g.:
pip install lxc-python2
apt-get install lxc
The correct answer to the issue is to add:
become: yes
just before the lxc_container module section, so as to get root privileges to run the lxc commands on the target host.
It is not required to install any python2-lxc support libraries on the target hosts to use the lxc_container module; that support only needs to be available on the management host. Only Python 2 itself must be available on the target host for it to be usable with current Ansible versions.
See the Ansible documentation for more info about the requirements of the lxc_container module (on the host that executes the module):
lxc >= 1.0          # OS package
python >= 2.6       # OS package
lxc-python2 >= 0.1  # pip package from https://github.com/lxc/python2-lxc
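Putting the answers above together, one way to sketch the whole flow is to install the OS-level requirements first and then create the container, with root privileges throughout (package names assume a Debian/Ubuntu host):

```yaml
- hosts: lxc_hosts
  become: yes   # lxc needs root on the target host
  tasks:
    - name: Install lxc and the python bindings
      apt:
        name:
          - lxc
          - python-lxc
        state: present

    - name: Create a started container
      lxc_container:
        name: test-container-started
        template: ubuntu
        state: started
        template_options: --release trusty
```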

Remove package ansible playbook

I have an EC2 instance created with Vagrant and provisioned with Ansible.
I have this task that installs two packages using apt:
---
- name: Install GIT & TIG
  action: apt pkg={{ item }} state=installed
  with_items:
    - git
    - tig
I now want to delete/remove tig from my instance. I removed it from my playbook and ran vagrant provision, but the package is still there.
How can I do this?
Ansible cannot automatically remove packages when you remove them from your playbook. Ansible is stateless: it does not keep track of what it does, so it does not know what it did in previous runs or whether your playbook/role has been modified. Ansible only does what you explicitly describe in the playbook/role, so you have to write a task to remove the package.
You can easily do this with the apt module.
- name: Remove TIG
  apt:
    pkg: tig
    state: absent
  become: true
Removing the package from the playbook ensures it will not be installed on a newly provisioned machine, but it will not be removed from a machine that is already provisioned. If possible, destroy the machine with vagrant destroy and create/provision a new one.
If destroying and reprovisioning is not possible, add an Ansible task to remove the installed package: using absent as the state removes the package.
- name: Remove TIG
  action: apt pkg=tig state=absent
The sudo directive is now deprecated and no longer recommended; use become instead.
- name: Remove TIG
  apt:
    pkg: tig
    state: absent
  become: yes
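If the goal is a really clean removal, the apt module also supports purging configuration files and autoremoving orphaned dependencies; a sketch:

```yaml
- name: Remove TIG, its config files, and unused dependencies
  apt:
    pkg: tig
    state: absent
    purge: yes        # also delete package configuration files
    autoremove: yes   # drop dependencies that are no longer needed
  become: true
```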
