Using pyenv in Ansible - vagrant

I'm using Ansible to set up Ubuntu-based Vagrant and DigitalOcean boxes, and want to use pyenv to manage environments for a few different websites.
I'm having a problem with permissions when trying to install a version of python using the pyenv I've installed, which is probably down to my lack of basic *nix knowledge.
I have a deploy user and group, for whom I've installed pyenv, but obviously something's up with which user is doing things, as the final task below fails (all variables replaced with strings for clarity):
- name: Install pyenv
  git:
    repo: https://github.com/yyuu/pyenv.git
    dest: "/home/deploy/.pyenv"
- name: Install pyenv-virtualenv plugin
  git:
    repo: https://github.com/yyuu/pyenv-virtualenv.git
    dest: "/home/deploy/.pyenv/plugins/pyenv-virtualenv"
- name: Add path etc to .bashrc.
  lineinfile:
    dest: "/home/deploy/.bashrc"
    state: present
    create: yes
    line: "{{ item }}"
  with_items:
    - 'export PYENV_ROOT="$HOME/.pyenv"'
    - 'export PATH="$PYENV_ROOT/bin:$PATH"'
    - 'eval "$(pyenv init -)"'
    - 'eval "$(pyenv virtualenv-init -)"'
- name: Ensure .pyenv permissions are set properly
  file: path=/home/deploy/.pyenv
        recurse=yes
        owner=deploy
        group=deploy
        state=directory
- name: Install default python version
  become: yes
  become_user: 'deploy'
  shell: . /home/deploy/.bashrc && pyenv install 3.5.1
         creates="/home/deploy/.pyenv/versions/3.5.1"
When doing vagrant up it goes fine until:
TASK [python : Install default python version] *********************************
fatal: [192.168.33.15]: FAILED! => {"changed": true, "cmd": ". /home/deploy/.bashrc && pyenv install 3.5.1", "delta": "0:00:00.002111", "end": "2016-02-16 11:48:26.930971", "failed": true, "rc": 127, "start": "2016-02-16 11:48:26.928860", "stderr": "/bin/sh: 1: pyenv: not found", "stdout": "", "stdout_lines": [], "warnings": []}
UPDATE: In case it's important, in this instance (the Vagrant box) my vagrant.yml playbook is setting remote_user to vagrant:
- name: Create a virtual machine via vagrant
  hosts: all
  become: yes
  become_method: sudo
  remote_user: vagrant
  ...
UPDATE 2: If I ssh into the Vagrant VM as the deploy user then I can use pyenv OK. If I ssh in as vagrant and then sudo -u deploy bash -i I get pyenv: command not found...
UPDATE 3: The root of the problem might be that neither /home/deploy/.bashrc nor /home/deploy/.profile is sourced when switching to the deploy user with sudo (tested by echoing from each file), but both are when logging in as deploy. However, I think /home/deploy/.bashrc is being sourced by the failing task - the echoed text appears in stdout.

As your error is:
pyenv: not found
simply try using the absolute path to pyenv in your task. This is the recommended way to handle shell tasks anyway.
Log in to your machine and find out the path to pyenv:
which pyenv
then change your task to
# /path/to/pyenv is the result of the previous command
...
shell: . /home/deploy/.bashrc && /path/to/pyenv install 3.5.1
...
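For completeness, here is a minimal sketch of the corrected task, assuming pyenv was cloned to /home/deploy/.pyenv as in the playbook above (so the binary would sit at /home/deploy/.pyenv/bin/pyenv):
- name: Install default python version
  become: yes
  become_user: deploy
  # absolute path taken from the git task's dest above
  shell: /home/deploy/.pyenv/bin/pyenv install 3.5.1
  args:
    creates: /home/deploy/.pyenv/versions/3.5.1
  environment:
    # set explicitly so the build does not depend on .bashrc being sourced
    PYENV_ROOT: /home/deploy/.pyenv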

Related

python module firewall not found

Computer running ansible-playbook: MacBook, with Python 3.9
Target machine: Debian 10 with Python 2.7.16 and Python 3.7.3
When I tried to open a port in the firewall:
- name: Open port 80 for http access
  firewalld:
    service: http
    permanent: true
    state: enabled
I got error:
fatal: [virtual_server]: FAILED! => {"changed": false, "msg": "Python
Module not found: firewalld and its python module are required for
this module, version 0.2.11 or newer required
(0.3.9 or newer for offline operations)"}
I also tried ansible.posix.firewalld (after running ansible-galaxy collection install ansible.posix on the MacBook), and still got this error.
Can anybody tell me what is wrong?
If you have your playbook vars like this:
---
- hosts: testbench
  vars:
    ansible_python_interpreter: /usr/bin/python3
then your firewall task should be like this:
- name: open ports
  ansible.posix.firewalld:
    permanent: true
    immediate: true
    port: "{{item}}/tcp"
    state: enabled
  become: true
  vars:
    ansible_python_interpreter: /usr/bin/python
  with_items:
    - tcp-port-1
    - tcp-port-2
    - tcp-port-3
ansible.posix.firewalld depends on the python firewalld bindings which are missing for the python version ansible is running under.
See https://bugzilla.redhat.com/show_bug.cgi?id=2091931 for a similar problem on systems using the EPEL8 ansible package, where the python3-firewall package is built against python 3.6 but ansible is using python 3.8.
ansible --version or head -1 $(which ansible) will tell you what version of Python ansible uses.
On Red Hat systems, dnf repoquery -l python3-firewall will tell you what version of Python python3-firewall is built against.
The solution is to install the appropriate python-firewalld package for your OS that matches the version of python ansible is using, if one exists.
If a compatible python-firewalld package does not exist, you can configure ansible to use a different version of python by setting the ansible_python_interpreter variable or the interpreter_python ansible.cfg setting (see https://docs.ansible.com/ansible/latest/reference_appendices/interpreter_discovery.html).
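For example, a minimal sketch of the per-host override (the file name and interpreter path are assumptions; point it at whichever interpreter on the target actually has the firewalld bindings installed):
# host_vars/virtual_server.yml  (hypothetical inventory vars file)
ansible_python_interpreter: /usr/bin/python3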
The problem is that you probably have AWX installed in Docker, and it doesn't have that Galaxy collection. Do this:
1. Go to the main server:
> docker images
and find something like this:
ansible/awx 17.1.0 {here_id_of_image} 16 months ago 1.41GB
2. Connect to that Docker image:
> docker run -it {here_id_of_image} bash
3. Run the command to install the collection:
> ansible-galaxy collection install ansible.posix
Done. Now run your playbook.
I fixed this problem by switching the ansible_connection mode from paramiko to ssh, on Ansible 5.10.0 with Ubuntu 22.04.
My changes:
[ chusiang#ubuntu-22.04 ~ ]
$ vim ansible-pipeline.cfg
[defaults]
- ansible_connection = paramiko
- transport = paramiko
+ ansible_connection = ssh
+ transport = ssh
Ansible version.
[ chusiang#ubuntu-22.04 ~ ]
$ ansible --version
ansible [core 2.12.10]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/chusiang/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/chusiang/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/chusiang/.ansible/collections:/usr/share/ansible/collections
executable location = /home/chusiang/.local/bin/ansible
python version = 3.10.6 (main, Nov 2 2022, 18:53:38) [GCC 11.3.0]
jinja version = 3.0.3
libyaml = True
Pip versions of ansible.
[ chusiang#ubuntu-22.04 ~ ]
$ pip list | grep -i ansible
ansible 5.10.0
ansible-core 2.12.10
ansible-inventory-to-ssh-config 1.0.1
ansible-lint 3.5.1
Enjoy it.

How to execute a command as a root user in ansible hosts

I have 3 VMs for Ansible, as follows:
master
host1
host2
Ansible is installed and configured correctly; no issue with that part.
So I have this command I need to run. Before executing this command using Ansible, I will show how I do it without Ansible:
I want to update the packages first, so I am logged into the VM as a normal user (ubuntu) and execute this:
sudo apt-get update -y
As you can see, I have executed a sudo command while logged in as the ubuntu user, not as the root user.
So the above command is executed with sudo, but the user is ubuntu.
I want this scenario to be done using Ansible.
My question: how do I execute a command that contains sudo using Ansible while logged in as a different user?
So my playbook will look like this:
- hosts: all
  tasks:
    - name: Update and upgrade apt packages
      become: ubuntu
      apt:
        upgrade: yes
        update_cache: yes
But where can I add the sudo part in the above playbook? Is it added automatically?
What if I want to execute a command like this, while logged in as ubuntu:
sudo apt-get install docker
How can I do this sudo part in Ansible while using become: ubuntu?
To invoke a command with sudo, you can do something like this:
- hosts: all
  tasks:
    - name: Update and upgrade apt packages
      become: ubuntu
      # this will fail because /opt needs permissions
      # shell: echo hello > /opt/testfile
      # this will work
      shell: "sudo sh -c 'echo hello > /opt/testfile'"
      # use this to verify output
      register: out
    - debug:
        msg: "{{ out }}"
NOTE: you will get this warning:
"Consider using 'become', 'become_method', and 'become_user' rather
than running sudo"
The following will work; you can find a detailed explanation and examples in the Ansible documentation on privilege escalation: https://docs.ansible.com/ansible/latest/user_guide/become.html
- hosts: all
  tasks:
    - name: Update and upgrade apt packages
      become: true
      become_user: ubuntu
      shell: echo hello > /opt/testfile
      # use this to verify output
      register: out
    - debug:
        msg: "{{ out }}"
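As a usage sketch (assumptions: the connecting user is ubuntu and the Docker package is named docker.io), the idiomatic way to express "run apt with sudo while logged in as ubuntu" is to connect as ubuntu and escalate to root, rather than becoming ubuntu:
- hosts: all
  remote_user: ubuntu       # log in as the ubuntu user
  tasks:
    - name: Update and upgrade apt packages
      become: true          # escalate via sudo; become_user defaults to root
      apt:
        upgrade: yes
        update_cache: yes
    - name: Install docker
      become: true
      apt:
        name: docker.io     # package name is an assumption; adjust for your distro
        state: present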

Ansible - Activating virtual env with become_user

I want to create a user for my CKAN installation and then, as that user, activate a virtual environment and install something.
- name: Add a CKAN user
  user:
    name: ckan
    comment: "CKAN User"
    shell: /sbin/nologin
    create_home: yes
    home: /usr/lib/ckan
    state: present
- name: chmod 755 /usr/lib/ckan
  file:
    path: /usr/lib/ckan
    mode: u=rwX,g=rX,o=rX
    recurse: yes
- name: Create Python virtual env
  command: virtualenv --no-site-packages default
  become: yes
  become_user: ckan
- name: Activate env
  command: . default/bin/activate
- name: Activate env
  command: pip install setuptools==36.1
I know it's generally not the most 'Ansible' implementation but I'm just trying to get something to work.
The error is in 'Create Python virtual env'. I am getting an error on that line.
On the command line I would just run:
su -s /bin/bash - ckan
But how do I achieve this here? I thought become_user would do it?
If you already have the path to the user's folder and have set the appropriate permissions, then you can directly use the Ansible pip module to create a virtual environment in that folder and install packages. So, IIUC, you do not need the following tasks:
Create Python virtual env - instead of this task, you can just add the parameter virtualenv_command to the pip module in order to create the virtual environment (if it does not already exist).
Activate env (x2) - if you want to install packages into the virtual environment using the Ansible pip module, these two tasks are not required.
Also, you can use the parameter virtualenv_site_packages in order to exclude the global packages in your virtual environment. You do not need to use the parameter extra_args to do this.
If you want to install a single package into a virtual environment, then you can replace your last 3 tasks with the following task
tasks:
  - name: Create Python virtual env and install one package inside the virtual env
    pip:
      name: setuptools==36.1
      virtualenv: /path/to/ckan/user/home/folder    # <--- path to user's home folder*
      virtualenv_command: virtualenv
      virtualenv_site_packages: no    # <---- added this parameter to exclude site packages
      virtualenv_python: python3.7
If you want to install many packages from requirements-docs.txt, then you could use this approach
tasks:
  - name: Create Python virtual env and install multiple packages inside the virtual env
    pip:
      requirements: /path/to/ckan/user/home/folder/requirements-docs.txt
      virtualenv: /path/to/ckan/user/home/folder    # <--- path to user's home folder*
      virtualenv_command: virtualenv
      virtualenv_site_packages: no    # <---- added this parameter to exclude site packages
      virtualenv_python: python3.7
* the user's home folder must exist before executing this task
The following worked:
- name: Install setuptools into venv
  pip:
    name: Setuptools==36.1
    virtualenv: '{{ path_to_virtualenv }}'
Become user was not needed.
Another example:
- name: Install ckan python modules
  pip: name="requirements-docs.txt" virtualenv={{ ckan_virtualenv }} state=present extra_args="--ignore-installed -r"

How to make Ansible overwrite existing files during remote software installation?

I'm using Ansible to clone a repository to a remote host and install the software there:
---
- hosts: ubuntu-16.04
  become: yes
  tasks:
    - name: Install aptitude
      apt:
        name: aptitude
        state: latest
        update_cache: yes
    - name: Update apt cache and upgrade packages
      apt:
        update_cache: yes
        upgrade: full
        autoclean: yes
        autoremove: yes
    - name: Clone <name of repository> repository
      git:
        repo: <link to repository>
        dest: /home/test/Downloads/<new directory>
        clone: yes
    - name: Install <name of repository>
      command: chdir=/home/test/Downloads/<new directory> {{ item }}
      with_items:
        - make
        - make install
      become_user: root
      become_method: sudo
    - name: Remove <name of repository> directory
      file:
        path: /home/test/Downloads/<new directory>
        state: absent
This works fine if the software isn't already installed on the system. However, if it is and I run the playbook again, then it gets stuck at the "Install ..." task, saying that it failed to create symlinks as files already exist.
The first problem is that several tasks after this one (not shown here) are not executed because the play exits there. I could solve this by simply reordering my tasks, but what I would really like to know is:
How do I make Ansible overwrite existing files?
Would you even solve this that way, or would you say that this behavior is fine as the software is already installed? But what if the repository contains newer code the second time it is cloned?
I don't think the problem is related to Ansible but rather to the Makefile. Nevertheless, you could simply skip the installation step when the software is already installed.
First, check whether the binary already exists, and based on the result skip the installation:
- name: Check if software exists
  stat:
    path: <path to binary>
  register: stat_result
- name: Install <name of repository>
  command: chdir=/home/test/Downloads/<new directory> {{ item }}
  with_items:
    - make
    - make install
  become_user: root
  become_method: sudo
  when: stat_result.stat.exists == False

Ansible: Cannot configure sudo command even become_user is a sudo user

testuser is a sudo user,
sudo cat /etc/sudoers.d/90-cloud-init-testuser
testuser ALL=(ALL) NOPASSWD:ALL
I can log in as testuser manually and run the following without a password:
sudo -H apt-get update
sudo -H apt-get upgrade
But if I run the following Ansible code, although I see the whoami command return testuser, the play stops with a fatal error (see code and error below).
Must I set become_user to root in order to run this (see the lines I commented out)? Note that I CAN log in as testuser manually and run sudo commands, so can't I use become_user=testuser to install with apt? I think remote_user does not matter, because the whoami command only depends on become_user. In fact, I feel remote_user is useless; it just logs me in. If become_user is unset, whoami returns root; if become_user is set to testuser, whoami returns testuser.
- hosts: all
  remote_user: ubuntu
  become: yes
  become_user: testuser
  gather_facts: yes
  become_method: sudo
  tasks:
    - name: test which user I am
      shell: whoami
      register: hello
    - debug: msg="{{ hello.stdout }}"
    - name: Update and upgrade apt.
      # become_user: root
      # become: yes
      apt: update_cache=yes upgrade=dist cache_valid_time=3600
TASK [Update and upgrade apt.]
********************************
fatal: [XX.XX.XX.XX]: FAILED! => {"changed": false, "msg":
"'/usr/bin/apt-get dist-upgrade' failed: E: Could not open lock file
/var/lib/dpkg/lock - open (13: Permission denied)\nE: Unable to lock
the administration directory (/var/lib/dpkg/), are you root?\n", "rc":
100, "stdout": "", "stdout_lines": []}
You need to connect with an account which has sudo permissions (in your case testuser) and then run the play/task with elevated permissions (become: true, with become_user: root, which is the default), so:
either add sudo permissions to ubuntu,
or connect with testuser.
sudo does not work the way you imply in the question.
Any command runs in the context of a specific user: testuser, ubuntu, or root. There is no such thing as running a command as a "sudo testuser".
sudo executes a command as a different user (root by default). The user executing sudo must have the appropriate permissions.
If you log in as testuser and execute sudo -H apt-get update, it is (almost*) the same as if you logged in as root and ran apt-get update.
If you log in as ubuntu and run sudo -u testuser apt-get update (which is the shell counterpart to the Ansible tasks in the question), it is (almost*) the same as if you logged in as testuser and ran apt-get update.
testuser running apt-get update will get an error, and this is what you get.
* "almost", because it depends on settings regarding environment variables, which are not relevant to the problem here.
