Required pip package missing when actually present? - pip

I'm running the Ansible playbook below and am getting the following message:
TASK [Set up pipeline] ***********************************************************************************************************************************************************************************************************************
fatal: [35.153.53.5]: FAILED! => {"changed": false, "msg": "python-jenkins required for this module. see http://python-jenkins.readthedocs.io/en/latest/install.html"}
to retry, use: --limit @~/Repositories/terraform-jenkins/ansible/jenkins.retry
Funny thing is... it's actually present:
[ec2-user@ip-172-31-43-13 ~]$ pip list |grep jenkins
jenkins-python 1.1
python-jenkins 1.3.0
[ec2-user@ip-172-31-43-13 ~]$ sudo !!
sudo pip list |grep jenkins
jenkins-python 1.1
python-jenkins 1.3.0
Ansible Version
ansible 2.6.4
config file = None
configured module search path = [u'/Users/jddaniel/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.15 (default, Jul 23 2018, 21:27:06) [GCC 4.2.1 Compatible Apple LLVM 9.1.0 (clang-902.0.39.2)]
Verbose log
https://gist.github.com/ehime/ee08545fcb8e13d16ca801d1771d7461
Here's my playbook
#
# Ansible to provision Jenkins on remote host
#
- name: Install Jenkins and its plugins
  hosts: all
  become: yes
  become_method: sudo
  gather_facts: yes
  pre_tasks:
    - name: CA-Certificates update command line execution
      command: /bin/update-ca-trust
  vars:
    jenkins_hostname: localhost
    jenkins_http_port: 8080
  roles:
    - geerlingguy.repo-epel # required for pip
    - geerlingguy.java
    - geerlingguy.jenkins
  tasks:
    # TODO fix upstream
    - name: Make Groovy folder writable
      file:
        path: /var/lib/jenkins/init.groovy.d
        state: directory
        # TODO verify this is what it should be
        mode: 0777
    - name: Install dependencies
      yum:
        name:
          - git
          - python2-pip
    - name: Force upgrade pip
      pip:
        name: pip
        extra_args: --upgrade
    - name: Install dependencies for Jenkins modules
      pip:
        name: python-jenkins
    - name: Install build pipeline
      jenkins_plugin:
        name:
          - build-pipeline-plugin
          - workflow-aggregator
        url_username: "{{ jenkins_admin_username }}"
        url_password: "{{ jenkins_admin_password }}"
    - name: Set up pipeline
      jenkins_job:
        config: "{{ lookup('file', '_files/jobs.xml') }}"
        name: test-auto
        user: "{{ jenkins_admin_username }}"
        password: "{{ jenkins_admin_password }}"
What could possibly be going on here?
Here's the jobs.xml if you want to try it on your lonesome
<?xml version='1.0' encoding='UTF-8'?>
<flow-definition plugin="workflow-job@2.10">
  <actions/>
  <description></description>
  <keepDependencies>false</keepDependencies>
  <properties>
    <org.jenkinsci.plugins.workflow.job.properties.PipelineTriggersJobProperty>
      <triggers/>
    </org.jenkinsci.plugins.workflow.job.properties.PipelineTriggersJobProperty>
  </properties>
  <definition class="org.jenkinsci.plugins.workflow.cps.CpsFlowDefinition" plugin="workflow-cps@2.30">
    <script>
      node { echo &apos;You get a pipeline, she gets a pipeline... you all get pipelines...&apos; }
    </script>
    <sandbox>true</sandbox>
  </definition>
  <triggers/>
</flow-definition>
OS info
[ec2-user@ip-172-31-43-13 ~]$ hostnamectl
Static hostname: ip-172-31-43-13.ec2.internal
Icon name: computer
Chassis: n/a
Machine ID: 8df22ad8f77c4d84bc36f0456b1fd0d7
Boot ID: 4981022b5a1c4d0d8bd659ca4ceeb071
Operating System: Red Hat Enterprise Linux Server 7.0 (Maipo)
CPE OS Name: cpe:/o:redhat:enterprise_linux:7.0:GA:server
Kernel: Linux 3.10.0-123.8.1.el7.x86_64
Architecture: x86_64
The OS info is probably a bit sketchy since lsb_release isn't available... I'm using ami-a8d369c0 on AWS, which says it's RHEL 7.0... probably a stripped-down AMI? Not sure.

No idea what the issue really was, but installing the package with yum instead of pip worked.
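For reference, here is a minimal sketch of what that swap could look like. The RPM name python2-jenkins is an assumption for RHEL 7 with EPEL enabled (verify it with yum search python-jenkins first); the likely reason the swap helps is that pip installed python-jenkins for a different interpreter than the one the jenkins_job module runs under, while the distro package lands in the system Python the module actually uses.

# Sketch only: replaces the pip-based "Install dependencies for Jenkins modules"
# task above. The package name python2-jenkins is an assumption; check it first.
- name: Install python-jenkins via yum instead of pip
  yum:
    name: python2-jenkins
    state: present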

Related

AWX Ansible - Doesn't read galaxy collections

I'm using AWX version 20.0 with Ansible on Kubernetes.
My playbook:
---
- name: Install 7zip with offline package chocolatey
  hosts: all
  become: true
  gather_facts: false
  tasks:
    - name: Create folder
      win_file:
        path: 'C:/Instalki'
        state: directory
    - name: Copy installer
      become: true
      win_copy:
        src: "../playbooksWindows/installers/7zip.22.01.nupkg"
        dest: "C:/Instalki/7zip.22.01.nupkg"
    - name: install 7zip packages
      win_chocolatey:
        name: "7zip"
        state: present
        source: "C:/Instalki/7zip.22.01.nupkg"
    - name: clear folder
      win_file:
        path: "C:/Instalki/7zip.22.01.nupkg"
        state: absent
Error:
/usr/local/lib/python3.8/site-packages/paramiko/transport.py:236: CryptographyDeprecationWarning: Blowfish has been deprecated
"class": algorithms.Blowfish,
No config file found; using defaults
SSH password:
BECOME password[defaults to SSH password]:
ERROR! couldn't resolve module/action 'win_chocolatey'. This often indicates a misspelling, missing collection, or incorrect module path.
The error appears to be in '/runner/project/playbooksWindows/Install_7zip.yml': line 27, column 7, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: install 7zip packages
^ here
My actual Ansible collections on the awx-ee container:
# /usr/share/ansible/collections/ansible_collections
Collection Version
----------------------- -------
amazon.aws 4.1.0
ansible.posix 1.4.0
ansible.windows 1.11.1
awx.awx 21.5.0
azure.azcollection 1.13.0
community.vmware 2.9.1
google.cloud 1.0.2
kubernetes.core 2.3.2
openstack.cloud 1.9.1
ovirt.ovirt 2.2.3
redhatinsights.insights 1.0.7
theforeman.foreman 3.6.0
# /home/runner/.ansible/collections/ansible_collections
Collection Version
--------------------- -------
ansible.windows 1.11.1
chocolatey.chocolatey 1.3.0
I was installing the collection with:
ansible-galaxy collection install chocolatey.chocolatey
Any ideas how to fix it? On Docker with AWX 17.01 everything works fine :/
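One note worth adding here (my addition, not confirmed by the original poster): collections installed by hand into /home/runner/.ansible on the awx-ee container are generally not visible to jobs, because each job spins up its own execution-environment container. A common approach is to commit a collections/requirements.yml to the project repository so AWX installs the collection at project sync time (assuming collection downloads are enabled in the AWX job settings):

# <project root>/collections/requirements.yml -- sketch only
---
collections:
  - name: chocolatey.chocolatey
    version: ">=1.3.0"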

Failed to import the required Python library (botocore or boto3)

I'm running a playbook on localhost, and I've already installed Ansible and the required packages like boto3. The playbook works fine when it is performing tasks on a remote host, but outputs the following error when running locally.
Command:
ansible-playbook app.yaml
Error:
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Failed to import the required Python library (botocore or boto3) on DESKTOP-9NTDHK1's Python /usr/bin/python3. Please read the module documentation and install it in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter"}
Ansible version:
ansible 2.10.3
config file = /home/user/ansible/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/user/ansible/venv/lib/python3.8/site-packages/ansible
executable location = /home/user/ansible/venv/bin/ansible
python version = 3.8.5 (default, Jul 28 2020, 12:59:40) [GCC 9.3.0]
Boto3 version:
(venv) user@DESKTOP-9NTDHK1:~/ansible$ pip show boto3
Name: boto3
Version: 1.16.18
Summary: The AWS SDK for Python
Home-page: https://github.com/boto/boto3
Author: Amazon Web Services
Author-email: UNKNOWN
License: Apache License 2.0
Location: /home/user/ansible/venv/lib/python3.8/site-packages
Requires: s3transfer, botocore, jmespath
Required-by:
app.yaml
---
- hosts: localhost
  connection: local
  gather_facts: no
  vars:
    user: *****
    AWS_PREFIX: *****
  tasks:
    # provision AWS
    - name: Provision VPC
      ec2_vpc_net:
        cidr_block: *****
        region: *****
        name: *****
        state: present
      register: vpc_data
I was running Ansible in a Docker container where Python and pip were also installed. Running the command below solved my problem:
pip install boto3
Here is an extract of a playbook; note the vars section. Ansible is not aware of the venv in which it is launched, so the interpreter has to be pointed at the venv explicitly.
- hosts: all
  connection: local
  become_user: tim
  vars: # local connection defaults to using the system python
    ansible_python_interpreter: /home/tim/pycharm_projects/django_api_sync/ansible/venv/bin/python3
  vars_files:
    - includes/secret_variables.yml
  tasks:
    - name: create a DigitalOcean Droplet
      community.digitalocean.digital_ocean_droplet:
        state: present
        name: "{{ droplet_name }}"
        oauth_token: "{{ digital_ocean_token }}"
        size: "s-2vcpu-2gb"
        region: SGP1
        monitoring: yes
        unique_name: yes
        image: ubuntu-20-04-x64
        wait_timeout: 500
        ssh_keys: [ "{{ digital_ocean_ssh_fingerprint }}" ]
      register: my_droplet
    - name: Print IP address
      ansible.builtin.debug:
        msg: Droplet IP address is {{ my_droplet.data.ip_address }}

Install nginx on Ubuntu 20.04 using an Ansible playbook?

Hi, I am new to Ansible. I have to deploy Node.js 12.8.4, SSL and the latest nginx to an Ubuntu 20.04 server. Can someone guide me on how to do it? Thank you.
This is my YAML file:
- hosts: all
  become: true
  tasks:
    - name: install nodejs prerequisites
      apt:
        name:
          - apt-transport-https
          - gcc
          - g++
          - make
        state: present
    - name: add nodejs apt key
      apt_key:
        url: https://deb.nodesource.com/gpgkey/nodesource.gpg.key
        state: present
    - name: add nodejs repository
      apt_repository:
        repo: deb https://deb.nodesource.com/node_12.x {{ ansible_lsb.codename }} main
        state: present
        update_cache: yes
    - name: install nodejs
      apt:
        name: nodejs
        state: present
It installs Node.js 12. Now I want to install nginx in the same file; how do I add a new task?
Try the Ansible NGINX Role. See the details on GitHub.
Q: "I want to install Nginx in the same file; how do I add a new task?"
A: Include the role:
- include_role:
    name: nginx
Download the role. See Roles and Using Roles in particular.
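If you would rather keep a plain task in the same file instead of pulling in a role, a minimal sketch (my addition; it installs whatever nginx version Ubuntu 20.04 ships, not necessarily the very latest) is to append one more apt task to the existing tasks list:

# Sketch: add this to the end of the tasks list in the playbook above.
- name: install nginx
  apt:
    name: nginx
    state: present
    update_cache: yes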

Ansible complains about "The MySQL-python module is required"

I have Ansible 2.6.1 installed on my local machine (WSL; Ubuntu):
ansible 2.6.1
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/user/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609]
My target machine is running Ubuntu 16.04-LTS.
I'm using this task to install python3-mysqldb:
- name: "Debian | Install Mysql Client package"
apt:
name: "{{ item }}"
state: present
with_items:
- mysql-client
- python3-dev
- libmysqlclient-dev
- python3-mysqldb
when:
- zabbix_server_database == 'mysql'
tags:
- zabbix-server
- init
- database
It fails in this task:
- name: "MySQL | Create database and import file >= 3.0"
mysql_db:
name: "{{ zabbix_server_dbname }}"
encoding: "{{ zabbix_server_dbencoding }}"
collation: "{{ zabbix_server_dbcollation }}"
state: import
target: "{{ ls_output_create.stdout }}"
when:
- zabbix_version is version_compare('3.0', '>=')
- zabbix_database_sqlload
- not done_file.stat.exists
delegate_to: "{{ delegated_dbhost }}"
tags:
- zabbix-server
- database
Here's the fail message:
fatal: [target_host -> target_host-db]: FAILED! => {"changed": false, "msg": "The MySQL-python module is required."}
I can confirm python3-mysqldb was indeed installed:
/.
/usr
/usr/share
/usr/share/doc
/usr/share/doc/python3-mysqldb
/usr/share/doc/python3-mysqldb/changelog.Debian.gz
/usr/share/doc/python3-mysqldb/copyright
/usr/lib
/usr/lib/python3
/usr/lib/python3/dist-packages
/usr/lib/python3/dist-packages/mysqlclient-1.3.7.egg-info
/usr/lib/python3/dist-packages/mysqlclient-1.3.7.egg-info/top_level.txt
/usr/lib/python3/dist-packages/mysqlclient-1.3.7.egg-info/PKG-INFO
/usr/lib/python3/dist-packages/mysqlclient-1.3.7.egg-info/dependency_links.txt
/usr/lib/python3/dist-packages/_mysql_exceptions.py
/usr/lib/python3/dist-packages/_mysql.cpython-35m-x86_64-linux-gnu.so
/usr/lib/python3/dist-packages/MySQLdb
/usr/lib/python3/dist-packages/MySQLdb/connections.py
/usr/lib/python3/dist-packages/MySQLdb/release.py
/usr/lib/python3/dist-packages/MySQLdb/cursors.py
/usr/lib/python3/dist-packages/MySQLdb/constants
/usr/lib/python3/dist-packages/MySQLdb/constants/ER.py
/usr/lib/python3/dist-packages/MySQLdb/constants/CLIENT.py
/usr/lib/python3/dist-packages/MySQLdb/constants/REFRESH.py
/usr/lib/python3/dist-packages/MySQLdb/constants/FIELD_TYPE.py
/usr/lib/python3/dist-packages/MySQLdb/constants/FLAG.py
/usr/lib/python3/dist-packages/MySQLdb/constants/__init__.py
/usr/lib/python3/dist-packages/MySQLdb/constants/CR.py
/usr/lib/python3/dist-packages/MySQLdb/converters.py
/usr/lib/python3/dist-packages/MySQLdb/compat.py
/usr/lib/python3/dist-packages/MySQLdb/__init__.py
/usr/lib/python3/dist-packages/MySQLdb/times.py
I also tried installing the Python package MySQL-python using pip, but I got the same error message.
I'm stumped. I don't know what to do anymore.
EDIT: I also tried installing Python 2.7.x on the target machine and made sure that /usr/bin/python is symlinked to Python 2.7.x, but I'm still getting the same error. I'm using DJ Wasabi's zabbix-server role.
I think you are mixing things up with your delegation. I would simplify things.
Option one: run everything locally. Assumes your DB server is reachable through the network:
- hosts: localhost
  connection: local
  tasks:
    - name: "Debian | Install Mysql Client package"
      apt:
        name: "{{ item }}"
        state: present
      with_items:
        - mysql-client
        - python3-dev
        - libmysqlclient-dev
        - python3-mysqldb
      when:
        - zabbix_server_database == 'mysql'
      tags:
        - zabbix-server
        - init
        - database
    - name: "MySQL | Create database and import file >= 3.0"
      mysql_db:
        name: "{{ zabbix_server_dbname }}"
        encoding: "{{ zabbix_server_dbencoding }}"
        collation: "{{ zabbix_server_dbcollation }}"
        state: import
        target: "{{ ls_output_create.stdout }}"
      when:
        - zabbix_version is version_compare('3.0', '>=')
        - zabbix_database_sqlload
        - not done_file.stat.exists
      tags:
        - zabbix-server
        - database
Option two: run the SQL commands from the DB server (then you don't need mysql-python on your local machine, but you need python and mysql-python on the remote server hosting MySQL):
- hosts: dbserver
  tasks:
    - name: "Debian | Install Mysql Client package"
      apt:
        name: "{{ item }}"
        state: present
      with_items:
        - mysql-client
        - python3-dev
        - libmysqlclient-dev
        - python3-mysqldb
      when:
        - zabbix_server_database == 'mysql'
      tags:
        - zabbix-server
        - init
        - database
    - name: "MySQL | Create database and import file >= 3.0"
      mysql_db:
        name: "{{ zabbix_server_dbname }}"
        encoding: "{{ zabbix_server_dbencoding }}"
        collation: "{{ zabbix_server_dbcollation }}"
        state: import
        target: "{{ ls_output_create.stdout }}"
      when:
        - zabbix_version is version_compare('3.0', '>=')
        - zabbix_database_sqlload
        - not done_file.stat.exists
      tags:
        - zabbix-server
        - database
Add ansible_python_interpreter in your inventory as follows:
[test-server]
server1 ansible_ssh_host=x.x.x.x ansible_ssh_user=test ansible_python_interpreter=/usr/bin/python3

Why Ansible keeps giving me error "Could not find the requested service httpd: cannot check nor set state"?

I am doing a dry run of installing the Apache web server on a CentOS 7 box.
This is the webserver.yml file:
--- # Outline to Playbook Translation
- hosts: apacheWeb
  user: aleatoire
  sudo: yes
  gather_facts: no
  tasks:
    - name: date/time stamp for when the playbook starts
      raw: /bin/date > /home/aleatoire/playbook_start.log
    - name: install the apache web server
      yum: pkg=httpd state=latest
    - name: start the web service
      service: name=httpd state=started
    - name: install client software - telnet
      yum: pkg=telnet state=latest
    - name: install client software - lynx
      yum: pkg=lynx state=latest
    - name: log all the packages installed on the system
      raw: yum list installed > /home/aleatoire/installed.log
    - name: date/time stamp for when the playbook ends
      raw: /bin/date > /home/aleatoire/playbook_end.log
When I do a dry run with:
ansible-playbook webserver.yml --check
I keep getting this error:
fatal: [<ip_address>]: FAILED! => {"changed": false, "failed": true, "msg": "Could not find the requested service httpd: cannot check nor set state"}
to retry, use: --limit @/home/aleatoire/Outline/webserver.retry
I tried adding ignore_errors: true and that did not work either.
--check is not going to actually install the httpd package if it's not there yet, so the service: task then fails because there is no httpd unit file installed.
You can use the --syntax-check option instead if you only want to validate the playbook.
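If you still want --check to get past that task, one workaround (a sketch of mine, not part of the original answer) is to skip the service handling during a dry run, since the unit file only exists after the yum task has really run:

# Sketch: skip starting httpd in check mode, because --check never
# actually installs the package that provides the unit file.
- name: start the web service
  service:
    name: httpd
    state: started
  when: not ansible_check_mode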
