Python module firewalld not found - Ansible

Computer running ansible-playbook: MacBook with Python 3.9
Target machine: Debian 10 with Python 2.7.16 and Python 3.7.3
When I tried to open a port in the firewall:
- name: Open port 80 for http access
  firewalld:
    service: http
    permanent: true
    state: enabled
I got this error:
fatal: [virtual_server]: FAILED! => {"changed": false, "msg": "Python Module not found: firewalld and its python module are required for this module, version 0.2.11 or newer required (0.3.9 or newer for offline operations)"}
I also tried ansible.posix.firewalld after installing the collection with ansible-galaxy collection install ansible.posix on the MacBook, and still got this error.
Can anybody tell me what is wrong?

If you have your playbook vars like this:
---
- hosts: testbench
  vars:
    ansible_python_interpreter: /usr/bin/python3
then your firewalld task should look like this:
- name: open ports
  ansible.posix.firewalld:
    permanent: true
    immediate: true
    port: "{{ item }}/tcp"
    state: enabled
  become: true
  vars:
    ansible_python_interpreter: /usr/bin/python
  with_items:
    - tcp-port-1
    - tcp-port-2
    - tcp-port-3
The task-level vars: block overrides the play-level interpreter for just this task, so the firewalld module runs under the Python that actually has the firewall bindings installed.
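For what it's worth, on Ansible 2.5 and newer the same with_items list can be written with the loop keyword, which is the currently recommended form; it is a drop-in replacement here:
  loop:
    - tcp-port-1
    - tcp-port-2
    - tcp-port-3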

ansible.posix.firewalld depends on the Python firewalld bindings, which are missing for the Python version Ansible is running under.
See https://bugzilla.redhat.com/show_bug.cgi?id=2091931 for a similar problem on systems using the EPEL8 ansible package, where the python3-firewall package is built against Python 3.6 but Ansible uses Python 3.8.
ansible --version or head -1 $(which ansible) will tell you which version of Python Ansible uses.
On Red Hat systems, dnf repoquery -l python3-firewall will tell you which version of Python python3-firewall is built against.
The solution is to install the firewalld Python bindings package for your OS (python3-firewall on Red Hat systems) that matches the version of Python Ansible is using, if one exists.
If a compatible package does not exist, you can configure Ansible to use a different version of Python by setting the ansible_python_interpreter variable or the interpreter_python setting in ansible.cfg (see https://docs.ansible.com/ansible/latest/reference_appendices/interpreter_discovery.html).
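For example (a sketch; point the path at whichever Python actually has the firewall bindings), the interpreter can be pinned per host in the inventory:
virtual_server ansible_python_interpreter=/usr/bin/python3
or globally in ansible.cfg:
[defaults]
interpreter_python = /usr/bin/python3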

The problem is that you probably have AWX installed in Docker and it doesn't have that Galaxy collection. Do this:
1. Go to the main server:
> docker images
and find something like this:
ansible/awx 17.1.0 {here_id_of_image} 16 months ago 1.41GB
2. Connect to that Docker image:
> docker run -it {here_id_of_image} bash
3. Run the command to install the collection:
> ansible-galaxy collection install ansible.posix
Done, now run your playbook.
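One caveat: docker run starts a new, throwaway container from the image, so a collection installed there is not visible to the AWX services that are already running. To install into the live container instead, exec into it (in AWX 17.x docker-compose deployments the task container is commonly named awx_task, but check docker ps for the actual name):
> docker exec -it awx_task bash
> ansible-galaxy collection install ansible.posix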

I fixed this problem by switching the ansible_connection mode from paramiko to ssh, on Ansible 5.10.0 and Ubuntu 22.04.
My changes.
[ chusiang@ubuntu-22.04 ~ ]
$ vim ansible-pipeline.cfg
[defaults]
- ansible_connection = paramiko
- transport = paramiko
+ ansible_connection = ssh
+ transport = ssh
Ansible version.
[ chusiang@ubuntu-22.04 ~ ]
$ ansible --version
ansible [core 2.12.10]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/chusiang/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/chusiang/.local/lib/python3.10/site-packages/ansible
ansible collection location = /home/chusiang/.ansible/collections:/usr/share/ansible/collections
executable location = /home/chusiang/.local/bin/ansible
python version = 3.10.6 (main, Nov 2 2022, 18:53:38) [GCC 11.3.0]
jinja version = 3.0.3
libyaml = True
Pip versions of ansible.
[ chusiang@ubuntu-22.04 ~ ]
$ pip list | grep -i ansible
ansible 5.10.0
ansible-core 2.12.10
ansible-inventory-to-ssh-config 1.0.1
ansible-lint 3.5.1
Enjoy it.

Related

How can I avoid getting a .travis.yml file automatically created inside my ansible role?

I have created several roles with ansible-galaxy init my_role, and after a while I realized a .travis.yml file was created automatically for basic syntax testing. I am not planning to use Travis CI, so this .travis.yml file is useless to me.
I am sure there is a good reason why this happens, so if you could also explain it I would be thankful.
This is the file:
---
language: python
python: "2.7"
# Use the new container infrastructure
sudo: false
# Install ansible
addons:
  apt:
    packages:
      - python-pip
install:
  # Install ansible
  - pip install ansible
  # Check ansible version
  - ansible --version
  # Create ansible.cfg with correct roles_path
  - printf '[defaults]\nroles_path=../' >ansible.cfg
script:
  # Basic role syntax check
  - ansible-playbook tests/test.yml -i tests/inventory --syntax-check
notifications:
  webhooks: https://galaxy.ansible.com/api/v1/notifications/
It's part of the default role skeleton. If you want to use a different skeleton, pass it to ansible-galaxy using the --role-skeleton flag:
ansible-galaxy init --role-skeleton ~/ansible-skeletons/role roles/foo
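If all you want is to drop .travis.yml, one approach (paths here are illustrative) is to copy the bundled default skeleton, delete the file, and make your custom skeleton the default via ansible.cfg:
# copy the default skeleton that ships inside the ansible package, then prune it
cp -r "$(python3 -c 'import ansible, os; print(os.path.dirname(ansible.__file__))')/galaxy/data/default/role" ~/ansible-skeletons/role
rm ~/ansible-skeletons/role/.travis.yml
# ansible.cfg
[galaxy]
role_skeleton = ~/ansible-skeletons/role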

Is there a way to allow downgrades with apt Ansible module?

I have an Ansible playbook to deploy a specific version of Docker. I want the apt module to allow downgrades when the target machine has a higher version installed. I browsed the documentation but couldn't find a suitable way to do it. The YAML file has lines like:
- name: "Install specific docker ce"
  become: true
  apt:
    name: docker-ce=5:18.09.1~3-0~ubuntu-bionic
    state: present
For Docker CE on Ubuntu there are two packages, docker-ce and docker-ce-cli. You can see which versions are currently installed with:
$ apt list --installed | grep docker
docker-ce/xenial,now 5:18.09.7~3-0~ubuntu-xenial amd64 [installed,upgradable to: 5:19.03.1~3-0~ubuntu-xenial]
docker-ce-cli/xenial,now 5:18.09.7~3-0~ubuntu-xenial amd64 [installed,upgradable to: 5:19.03.1~3-0~ubuntu-xenial]
You need to force the same version for both packages, e.g. on Ubuntu Xenial:
main.yml
- name: Install docker-ce
  apt:
    state: present
    force: True
    name:
      - "docker-ce=5:18.09.7~3-0~ubuntu-xenial"
      - "docker-ce-cli=5:18.09.7~3-0~ubuntu-xenial"
  notify:
    - Restart docker
  when: ansible_os_family == "Debian" and ansible_distribution_version == "16.04"
handler.yml
# Equivalent to `systemctl daemon-reload && systemctl restart docker`
- name: Restart docker
  systemd:
    name: docker
    daemon_reload: True
    state: restarted
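As an aside, force: True is a blunt instrument (it corresponds to apt-get's --force-yes). Newer Ansible releases added a narrower allow_downgrade option to the apt module (around ansible-core 2.12; check the docs for your version), so a sketch under that assumption would be:
- name: Install docker-ce
  apt:
    state: present
    allow_downgrade: true
    name:
      - "docker-ce=5:18.09.7~3-0~ubuntu-xenial"
      - "docker-ce-cli=5:18.09.7~3-0~ubuntu-xenial"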

[Ansible][Fedora 24] DNF Module Requires python2-dnf but it is Already Installed

I have computers on an internal network that is not connected to the internet, and I have copied the relevant RPM packages into my local dnf repository. I'm trying to get Ansible's dnf module to install a program on another computer, but it fails with an error stating that python2-dnf is not installed. Yet when I try to install python2-dnf, it is already installed. Does anyone have an idea what's going wrong?
I've placed the error code below.
Background:
Ansible Control machine: Fedora 24 4.5.5-300.fc24.x86_64 GNU/LINUX
Ansible Client machine: Fedora 24 4.5.5-300.fc24.x86_64 GNU/LINUX
yum/dnf local repository: Centos 7 3.10.0-327.28.3.el7.x86_64 GNU/LINUX
[root@localhost ansible]# ansible all -m dnf -a "name=vim state=present"
192.168.10.10 | FAILED! => {
    "changed": false,
    "failed": true,
    "msg": "python2-dnf is not installed, but it is required for the Ansible dnf module."
}
[root@localhost ansible]# yum install python2-dnf
Yum command has been deprecated, redirecting to '/usr/bin/dnf install python2-dnf'.
See 'man dnf' and 'man yum2dnf' for more information.
To transfer transaction metadata from yum to DNF, run:
'dnf install python-dnf-plugins-extras-migrate && dnf-2 migrate'
Last metadata expiration check: 0:12:08 ago on Tue Oct 18 04:57:11 2016.
Package python2-dnf-1.1.9-2.fc24.noarch is already installed, skipping.
Dependencies resolved.
Nothing to do.
Complete!
As pointed out by @GUIDO, you need python2-dnf on the target host.
The Ansible module documentation states:
Requirements (on host that executes module)
python >= 2.6
python-dnf
In the context of Ansible, "on host that executes module" means the target host of the play, not the control host.
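If the bindings really are missing on a target, a common bootstrap trick (a sketch, assuming the target can reach your local repository) is to install them with the raw module, which does not depend on any Python modules on the target:
ansible all -m raw -a "dnf install -y python2-dnf"
After that, the regular dnf module tasks should work.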

Cannot install packages with Ansible

The issue: when I try to install a package, Ansible does not proceed. The CLI just sits there, idle.
SSH is configured to connect without prompting for a password. I have created a user called "test" and my sudoers file has the following configuration:
test ALL=(ALL) NOPASSWD:ALL
Also in /etc/ansible/ansible.cfg
inventory = /etc/ansible/hosts
#library = /usr/share/my_modules/
remote_tmp = $HOME/.ansible/tmp
pattern = *
forks = 5
poll_interval = 15
sudo_user = root
#ask_sudo_pass = True
#ask_pass = True
transport = smart
#remote_port = 22
module_lang = C
When as user "test" I do
yum install lynx
the package specified gets installed.
But if I do
ansible local -s -m shell -a 'yum install lynx'
Nothing happens.
I am not sure what is going on :(
Try using the yum module instead:
ansible local -s -m yum -a 'name=lynx state=present'
You have to say "yes" to yum:
Try this instead:
ansible local -s -m shell -a 'yum install lynx -y'
You should be using either the yum module or, even better, the package module, which is OS-generic.
On the other hand, your other option is the raw module, which runs a bare SSH command against your nodes, bypassing the module subsystem entirely!
This is an example of using the package module in a task:
---
- hosts: <hosts_names>
  sudo: yes
  tasks:
    - name: install lynx
      register: result
      package: name=lynx state=latest
      become: yes
      become_user: root
      become_method: sudo
      ignore_errors: True
Using the above playbook hopefully helps you install your desired package, and even if it doesn't, you'll be able to easily go through the result variable and see what's going wrong.
And of course, before all this, you should make sure that your yum repositories work properly!
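For example, a quick debug task right after the install will dump the registered variable so you can inspect the failure:
- name: Show the result of the install
  debug:
    var: result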

Ansible Error : The lxc module is not importable. Check the requirements

My task looks something like this
- name: Create a started container
  lxc_container:
    name: test-container-started
    container_log: true
    template: ubuntu
    state: started
    template_options: --release trusty
I get the following errors:
TASK: [lxc | Create a started container] **************************************
failed: [localhost] => {"failed": true, "parsed": false}
BECOME-SUCCESS-azsyqtknxvlowmknianyqnmdhuggnanw
failed=True msg='The lxc module is not importable. Check the requirements.'
The lxc module is not importable. Check the requirements.

FATAL: all hosts have already failed -- aborting
If you use Debian or Ubuntu as the host system, there is already a python-lxc package available. The best way to install it is with Ansible, of course, just before the lxc_container tasks:
tasks:
  - name: Install provisioning dependencies
    apt: name=python-lxc
  - name: Create docker1.lxc container
    lxc_container:
      name: docker1
      ...
I just ran into the same issue and I solved it by making sure to pip install lxc-python2 system-wide on the target machine.
In my case the target was localhost via an ansible_connection=local host and I thought I could get away with pip: name=lxc-python2, but that installed into my current virtualenv.
lxc_container is relatively new.
To run the lxc_container module you need to make sure the pip package is installed on the target host, as well as lxc itself, e.g.:
pip install lxc-python2
apt-get install lxc
The correct answer to the issue is to add:
become: yes
just before the lxc_container module section, so as to get root privileges to run LXC commands on the target host.
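Applied to the task from the question, that would look like:
- name: Create a started container
  become: yes
  lxc_container:
    name: test-container-started
    container_log: true
    template: ubuntu
    state: started
    template_options: --release trusty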
It is definitely not required to install any python2-lxc support libraries on the target hosts to use the lxc_container module; that support only needs to be available on the management host. Only Python 2 itself must be available on the target host for it to be usable with current Ansible versions.
See the Ansible documentation for more info about the requirements of the lxc_container module (on host that executes module):
lxc >= 1.0 # OS package
python >= 2.6 # OS Package
lxc-python2 >= 0.1 # PIP Package from https://github.com/lxc/python2-lxc
