The issue: when I try to install a package, Ansible does not proceed. The CLI just sits there idle.
SSH is configured to connect without prompting for a password. I have created a user called "test" and my sudoers file has the following configuration:
test ALL=(ALL) NOPASSWD:ALL
Also, /etc/ansible/ansible.cfg contains:
inventory = /etc/ansible/hosts
#library = /usr/share/my_modules/
remote_tmp = $HOME/.ansible/tmp
pattern = *
forks = 5
poll_interval = 15
sudo_user = root
#ask_sudo_pass = True
#ask_pass = True
transport = smart
#remote_port = 22
module_lang = C
When, as user "test", I run
yum install lynx
the specified package gets installed.
But if I do
ansible local -s -m shell -a 'yum install lynx'
Nothing happens.
I am not sure what is going on :(
Try using the yum module instead:
ansible local -s -m yum -a 'name=lynx state=present'
You have to say "yes" to yum:
Try this instead:
ansible local -s -m shell -a 'yum install lynx -y'
You should be using either the yum module or, even better, the package module, which is OS-generic.
Alternatively, your other option is the raw module, which runs a raw command over SSH against your nodes!
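For instance, a rough ad-hoc sketch with the raw module (the -s sudo flag matches the style used in the question):
ansible local -s -m raw -a 'yum install -y lynx'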
This is an example of using the package module in a task:
---
- hosts: <hosts_names>
  sudo: yes
  tasks:
    - name: install lynx
      register: result
      package: name=lynx state=latest
      become: yes
      become_user: root
      become_method: sudo
      ignore_errors: True
Using the above playbook hopefully helps you install your desired package, and even if it does not, you'll be able to easily go through the result variable and see what's going wrong.
And of course, before all of this, you should make sure that your YUM repository/ies work properly!
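For a quick check of the repositories from the control machine, an ad-hoc command along these lines should do (a sketch, using the same -s style as above):
ansible local -s -m command -a 'yum repolist enabled'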
Computer running ansible-playbook: MacBook, with Python 3.9
Target machine: Debian 10, with Python 2.7.16 and Python 3.7.3
When I tried to open a port in the firewall:
- name: Open port 80 for http access
  firewalld:
    service: http
    permanent: true
    state: enabled
I got this error:
fatal: [virtual_server]: FAILED! => {"changed": false, "msg": "Python
Module not found: firewalld and its python module are required for
this module, version 0.2.11 or newer required
(0.3.9 or newer for offline operations)"}
I also tried ansible.posix.firewalld, after running ansible-galaxy collection install ansible.posix on the MacBook, and still got this error.
Can anybody tell me what is wrong?
If you have your playbook vars like this:
---
- hosts: testbench
  vars:
    ansible_python_interpreter: /usr/bin/python3
then your firewalld task should look like this:
- name: open ports
  ansible.posix.firewalld:
    permanent: true
    immediate: true
    port: "{{ item }}/tcp"
    state: enabled
  become: true
  vars:
    ansible_python_interpreter: /usr/bin/python
  with_items:
    - tcp-port-1
    - tcp-port-2
    - tcp-port-3
ansible.posix.firewalld depends on the python firewalld bindings which are missing for the python version ansible is running under.
See https://bugzilla.redhat.com/show_bug.cgi?id=2091931 for a similar problem on systems using the EPEL8 ansible package, where the python3-firewall package is built against python 3.6 but ansible is using python 3.8.
ansible --version or head -1 $(which ansible) will tell you what version of Python ansible uses.
On Red Hat systems, dnf repoquery -l python3-firewall will tell you what version of Python python3-firewall is built against.
The solution is to install the appropriate python-firewalld package for your OS that matches the version of python ansible is using, if one exists.
If a compatible python-firewalld package does not exist, you can configure ansible to use a different version of python by setting the ansible_python_interpreter variable or the interpreter_python ansible.cfg setting (see https://docs.ansible.com/ansible/latest/reference_appendices/interpreter_discovery.html).
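For example, assuming the interpreter that actually has the firewalld bindings is /usr/bin/python3 (check which one it is on your target), either of these would point Ansible at it (virtual_server being the host from the error output above):
# ansible.cfg
[defaults]
interpreter_python = /usr/bin/python3
# or as an inventory/host variable
virtual_server ansible_python_interpreter=/usr/bin/python3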
The problem is probably that you have AWX installed in Docker and the container does not have that Galaxy collection. Do this:
1. Go to the main server:
> docker images
and find something like this:
ansible/awx 17.1.0 {here_id_of_image} 16 months ago 1.41GB
2. Connect to that Docker image:
> docker run -it {here_id_of_image} bash
3. Run the command to install the collection:
> ansible-galaxy collection install ansible.posix
Done, now run your playbook.
I fixed this problem by switching ansible_connection from paramiko to ssh, on Ansible 5.10.0 with Ubuntu 22.04.
My changes:
[ chusiang@ubuntu-22.04 ~ ]
$ vim ansible-pipeline.cfg
[defaults]
- ansible_connection = paramiko
- transport = paramiko
+ ansible_connection = ssh
+ transport = ssh
Ansible version.
[ chusiang@ubuntu-22.04 ~ ]
$ ansible --version
ansible [core 2.12.10]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/chusiang/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/chusiang/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/chusiang/.ansible/collections:/usr/share/ansible/collections
executable location = /home/chusiang/.local/bin/ansible
python version = 3.10.6 (main, Nov 2 2022, 18:53:38) [GCC 11.3.0]
jinja version = 3.0.3
libyaml = True
Pip versions of ansible.
[ chusiang@ubuntu-22.04 ~ ]
$ pip list | grep -i ansible
ansible 5.10.0
ansible-core 2.12.10
ansible-inventory-to-ssh-config 1.0.1
ansible-lint 3.5.1
Enjoy it.
I have 3 VMs for Ansible, as follows:
master
host1
host2
Ansible is installed and configured correctly; no issue with that part.
So I have this command I need to run. Before executing it with Ansible, I will show how I do it without Ansible:
I want to update the packages first, so I log into the VM as a normal user (ubuntu) and execute:
sudo apt-get update -y
As you can see, I executed a sudo command while logged in as the ubuntu user, not as root.
So the above command is executed with sudo, but the user is ubuntu.
I want this scenario to be done using Ansible.
My question: how do I execute a command that needs sudo using Ansible, while logged in as a different user?
So my playbook will look like this:
- hosts: all
  tasks:
    - name: Update and upgrade apt packages
      become: ubuntu
      apt:
        upgrade: yes
        update_cache: yes
But where can I add the sudo part in the above playbook? Is it added automatically?
What if I want to execute a command like this, with the logged-in user being ubuntu:
sudo apt-get install docker
How can I do the sudo part in Ansible while using become: ubuntu?
To invoke a command with sudo, you can do something like this:
- hosts: all
  tasks:
    - name: Update and upgrade apt packages
      become: ubuntu
      # this will fail because /opt needs permissions
      # shell: echo hello > /opt/testfile
      # this will work
      shell: "sudo sh -c 'echo hello > /opt/testfile'"
      # use this to verify output
      register: out
    - debug:
        msg: "{{ out }}"
NOTE: you will get this warning:
"Consider using 'become', 'become_method', and 'become_user' rather
than running sudo"
The following will work; you can find a detailed explanation and examples in the Ansible documentation:
https://docs.ansible.com/ansible/latest/user_guide/become.html
- hosts: all
  tasks:
    - name: Update and upgrade apt packages
      become: true
      become_user: ubuntu
      shell: echo hello > /opt/testfile
      # use this to verify output
      register: out
    - debug:
        msg: "{{ out }}"
testuser is a sudo user,
sudo cat /etc/sudoers.d/90-cloud-init-testuser
testuser ALL=(ALL) NOPASSWD:ALL
I can log in as testuser manually and run the following without a password:
sudo -H apt-get update
sudo -H apt-get upgrade
But if I run the following Ansible code, although I see the whoami command return testuser, the code stops with a fatal error (see the code and error below).
Must I set become_user to root in order to run it (see the lines I commented out)? Note that I CAN log in as testuser manually and run sudo commands; can't I use become_user=testuser to install packages with apt? I think remote_user does not matter, because the whoami output only depends on become_user. In fact, I feel remote_user is useless; it just logs me in. If become_user is unset, whoami returns root; if become_user is set to testuser, whoami returns testuser.
- hosts: all
  remote_user: ubuntu
  become: yes
  become_user: testuser
  gather_facts: yes
  become_method: sudo
  tasks:
    - name: test which user I am
      shell: whoami
      register: hello
    - debug: msg="{{ hello.stdout }}"
    - name: Update and upgrade apt.
      # become_user: root
      # become: yes
      apt: update_cache=yes upgrade=dist cache_valid_time=3600
TASK [Update and upgrade apt.]
********************************
fatal: [XX.XX.XX.XX]: FAILED! => {"changed": false, "msg":
"'/usr/bin/apt-get dist-upgrade' failed: E: Could not open lock file
/var/lib/dpkg/lock - open (13: Permission denied)\nE: Unable to lock
the administration directory (/var/lib/dpkg/), are you root?\n", "rc":
100, "stdout": "", "stdout_lines": []}
You need to connect with an account which has sudo permissions, in your case testuser, and then run the play/task with elevated permissions (become: true and become_user: root, which is the default), so:
either add sudo permissions to ubuntu,
or connect with testuser (see the sketch below).
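A minimal sketch of the second option, assuming testuser can SSH to the hosts:
- hosts: all
  remote_user: testuser
  become: yes
  tasks:
    - name: Update and upgrade apt.
      apt: update_cache=yes upgrade=dist cache_valid_time=3600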
sudo does not work the way you imply in the question.
Any command runs in the context of a specific user: either testuser, or ubuntu, or root. There is no such thing as running a command as a "sudo testuser".
sudo executes a command as a different user (root by default). The user executing sudo must have the appropriate permissions.
If you log in as testuser and execute sudo -H apt-get update, it is (almost*) the same as if you logged in as root and ran apt-get update.
If you log in as ubuntu and run sudo -u testuser apt-get update (which is the shell counterpart to the Ansible tasks in the question), it is (almost*) the same as if you logged in as testuser and ran apt-get update.
testuser running apt-get update will get an error, and this is what you get.
* "almost", because it depends on settings regarding environment variables, which are not relevant to the problem here.
I'm trying to install java on several hosts with Ansible.
I looked for some examples of the expect module to provide answers to the prompts.
I think this syntax is quite fine:
- hosts: datanode
  sudo: yes
  sudo_user: root
  tasks:
    - expect:
      name: install java jdk 7
      command: apt-get install openjdk-7-jdk
      responses:
        Question:
          'Do you want to continue? [Y/n]': 'Y'
But when I try to execute ansible-playbook file.yml I receive the error:
ERROR! conflicting action statements (expect, command)
The error appears to have been in '/root/scp.yml': line 5, column 5, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
tasks:
- expect:
^ here
Where is the problem?
(I have installed ansible 2.0.1.0, pexpect, python)
Thanks!
NOTE that Ansible works with YAML files, and these files are indentation-sensitive. This means that the spaces you put before each statement are important for Ansible to understand how things are nested. More info about YAML.
Corrected task:
- hosts: datanode
  sudo: yes
  sudo_user: root
  tasks:
    - name: install java jdk 7
      expect:
        command: apt-get install openjdk-7-jdk
        responses:
          Question:
            - 'Y'
            - 'n'
This will avoid your syntax error.
Source: http://docs.ansible.com/ansible/expect_module.html
Alternatively, if you always want to say "yes" to your apt-get install commands, you can add the -y argument:
apt-get install -y openjdk-7-jdk
Or even better, use the apt Ansible module.
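For instance, a rough sketch of the same task using the apt module under the same play (apt runs non-interactively here, so no responses block is needed):
- name: install java jdk 7
  apt:
    name: openjdk-7-jdk
    state: present
    update_cache: yes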
My task looks something like this:
- name: Create a started container
  lxc_container:
    name: test-container-started
    container_log: true
    template: ubuntu
    state: started
    template_options: --release trusty
I get the following errors:
> TASK: [lxc | Create a started container]
> ************************************** failed: [localhost] => {"failed": true, "parsed": false}
> BECOME-SUCCESS-azsyqtknxvlowmknianyqnmdhuggnanw failed=True msg='The
> lxc module is not importable. Check the requirements.' The lxc module
> is not importable. Check the requirements.
>
>
> FATAL: all hosts have already failed -- aborting
If you use Debian or Ubuntu as the host system, there is already a python-lxc package available. The best way to install it is with Ansible, of course, just before the lxc_container tasks:
tasks:
  - name: Install provisioning dependencies
    apt: name=python-lxc
  - name: Create docker1.lxc container
    lxc_container:
      name: docker1
      ...
I just ran into the same issue and solved it by making sure to pip install lxc-python2 system-wide on the target machine.
In my case the target was localhost via an ansible_connection=local host, and I thought I could get away with pip: name=lxc-python2, but that installed into my current virtualenv.
lxc_container is relatively new.
To run the lxc_container module you need to make sure the pip package is installed on the target host, as well as lxc.
e.g.
pip install lxc-python2
apt-get install lxc
The correct answer to the issue is to add:
become: yes
just before the lxc_container module section, so as to get root privileges to run LXC commands on the target host.
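For instance, applied to the task from the question, that would look like this:
- name: Create a started container
  become: yes
  lxc_container:
    name: test-container-started
    container_log: true
    template: ubuntu
    state: started
    template_options: --release trusty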
It is definitely not required to install any python2-lxc support libraries on the target hosts to use the lxc_container module; such support only needs to be available on the management host. Only Python 2 must be available on the target host for it to be usable with current Ansible versions.
See the Ansible documentation for more info about the requirements of the lxc_container module (on the host that executes the module):
lxc >= 1.0 # OS package
python >= 2.6 # OS Package
lxc-python2 >= 0.1 # PIP Package from https://github.com/lxc/python2-lxc