Module apt_repository: Why is validate_certs not working? - ansible

I have an Ansible playbook that adds an Apt repository.
The repo is located on my own server and accessed over HTTPS (with a self-signed cert).
If I add the repo manually to the host's sources.list, I get this error when updating the cache:
Certificate verification failed: The certificate is NOT trusted. The certificate issuer is unknown. The name in the certificate does not match the expected.
OK, it's self-signed. Disable the check and all is fine:
apt -o "Acquire::https::Verify-Peer=false" update
Very good - now I try to handle it with Ansible.
- name: add my repo and update cache
  apt_repository:
    repo: deb https://myserver/debian bullseye main
    filename: myrepo
    validate_certs: false
    state: present
Even with validate_certs: false set, it fails every time.
FAILED! => {"changed": false, "msg": "Failed to update apt cache: unknown reason"}
If I make the repo public via HTTP and change the URL then all is fine.
Also, if I keep the self-signed HTTPS repo and set
Acquire::https::Verify-Peer "false";
in the apt config of the host whose cache is being updated, it works too.
But I expect validate_certs to handle that.
So why is it not working?
debian 11.6
ansible 2.10.8
python 3.9.2

The validate_certs argument to apt_repository does not affect the configuration of apt or the repository; it only controls whether certificates are validated during the module's own fetching of PPA info.
If you want apt to ignore validation failures, you need to configure apt accordingly.
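One way to do that from the same playbook, as a rough sketch: write the exact directive the question already verified into an apt configuration snippet before adding the repository. The drop-in file name 99-myrepo-noverify is an assumption; importing the self-signed CA into the host's trust store would be the cleaner long-term fix.
- name: Tell apt to skip TLS peer verification (self-signed cert)
  copy:
    dest: /etc/apt/apt.conf.d/99-myrepo-noverify   # assumed file name
    content: |
      Acquire::https::Verify-Peer "false";
    mode: '0644'

- name: add my repo and update cache
  apt_repository:
    repo: deb https://myserver/debian bullseye main
    filename: myrepo
    state: present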

Related

Problems when installing git on my targets

I have some problems when I try to install git on my target node.
1st method: I used the ansible command
ansible <ip-node> -u root -b -K -m raw -a "apt install -y git"
and I get this error in my controller node's terminal:
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
2nd method: I ran the following playbook:
- name: This sets up git
  hosts: vm2
  tasks:
    - name: install git
      apt:
        name: git
        state: present
        cache_update: True
I get another error:
fatal: [192.168.57.10]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (apt) module: cache_update. Supported parameters include: allow_downgrade (allow-downgrade, allow-downgrades, allow_downgrades), policy_rc_d, autoremove, force_apt_get, update_cache_retry_max_delay, fail_on_autoremove, install_recommends (install-recommends), update_cache_retries, default_release (default-release), state, autoclean, cache_valid_time, only_upgrade, deb, purge, allow_unauthenticated (allow-unauthenticated), lock_timeout, upgrade, dpkg_options, package (name, pkg), force, update_cache (update-cache)."})
My questions are:
How can I debug the above errors?
I'm not sure but I suspect my errors happen because my target node does not have internet access. For example, ping google.fr fails. Could this be the issue?
What should I change in my target node network configuration to fix my issues?
Note that you used cache_update; it should be update_cache, as the error message tells you. Tip: the package module chooses the package manager on your target machine (apt, yum, etc.), as in the task below.
- name: Install git
  become: true
  become_method: sudo
  become_user: root
  package:
    name: git
    state: present
    update_cache: true
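For reference, a corrected version of the playbook from the question, assuming the same vm2 host group; only the misspelled parameter changes, plus become: true since installing packages needs root:
- name: This sets up git
  hosts: vm2
  become: true
  tasks:
    - name: install git
      apt:
        name: git
        state: present
        update_cache: true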

Ansible: 'shell' not executing

I'm trying to run a shell command to install a package downloaded from my local Artifactory repository, as I don't have access to download it straight from the internet.
When I run the command directly on the node as such
rpm -ivh kubectl-1.1.1.x86_64.rpm --nodigest --nofiledigest
It works perfectly.
But when put in an Ansible playbook as such
- name: Install Kubectl
  shell: rpm -ivh kubectl-1.1.1.x86_64.rpm --nodigest --nofiledigest
Nothing happens.
It doesn't error; it just doesn't install.
I've tried the command and ansible.builtin.shell modules as well, but nothing works.
Is there a way to do this please?
There are different topics in your question.
Regarding
to install a package downloaded from my local Artifactory repository, as I don't have access to download it straight from the internet.
you can use different approaches.
1. Direct download
- name: Make sure package becomes installed from internal repository
  yum:
    name: https://{{ REPOSITORY_URL }}/artifactory/kube/kubectl-{{ KUBE_VERSION }}.x86_64.rpm
    state: present
2. Configure local repository
The next approach is to provide a .repo template file like
[KUBE]
name = Kubectl - $basearch
baseurl = https://{{ REPOSITORY_URL }}/artifactory/kube/
username = {{ API_USER }}
password = {{ API_KEY }}
sslverify = 1
enabled = 1
gpgcheck = 1
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-KUBE
and to perform
- name: Make sure package becomes installed from internal repository
  yum:
    name: kubectl
    state: present
This is possible because JFrog Artifactory can provide local RPM repositories if configured correctly. For more information, consult the documentation there, since this is almost entirely a matter of proper configuration.
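The steps above do not show how the .repo file actually gets onto the host; a minimal sketch using the template module could look like the following, where the source file name kube.repo.j2 and the destination path are assumptions:
- name: Deploy the internal KUBE repository definition
  template:
    src: kube.repo.j2            # assumed name of the .repo template shown above
    dest: /etc/yum.repos.d/kube.repo
    owner: root
    group: root
    mode: '0644'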
Regarding
Nothing happens. It doesn't error.. It just doesn't install.
you can use several tasks to split up your steps, make them idempotent, and get better insight into how they are working.
3. shell, rpm and debug
- name: Make sure destination folder for package download (/opt/packages) exists
  file:
    path: "/opt/packages/"
    state: directory
- name: Download RPM to remote hosts
  get_url:
    url: "https://{{ REPOSITORY_URL }}/artifactory/kube/kubectl-{{ KUBE_VERSION }}.x86_64.rpm"
    dest: "/opt/packages/kubectl-{{ KUBE_VERSION }}.x86_64.rpm"
- name: Check package content
  shell:
    cmd: "rpm -qlp /opt/packages/kubectl-{{ KUBE_VERSION }}.x86_64.rpm"
  register: rpm_qlp
- name: STDOUT rpm_qlp
  debug:
    msg: "{{ rpm_qlp.stdout.split('\n')[:-1] }}"
- name: Install RPM using 'rpm -ivh'
  shell:
    cmd: "rpm -ivh /opt/packages/kubectl-{{ KUBE_VERSION }}.x86_64.rpm"
  register: rpm_ivh
- name: STDOUT rpm_ivh
  debug:
    msg: "{{ rpm_ivh.stdout.split('\n')[:-1] }}"
Depending on the RPM package, environment, and configuration, this may all just work fine.
Try using the command module and register the output. I use it to install oci8 via pecl for an Oracle database on Linux.
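A rough sketch of that suggestion, with the package path and the installed binary path as placeholders; creates: keeps the task idempotent, and the registered output can be inspected afterwards:
- name: Install kubectl RPM with the command module
  command: rpm -ivh /opt/packages/kubectl-1.1.1.x86_64.rpm --nodigest --nofiledigest
  args:
    creates: /usr/bin/kubectl    # assumed path; skip the install if it already exists
  register: rpm_install

- name: Show installer output
  debug:
    msg: "{{ rpm_install.stdout_lines | default([]) }}"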

How to list for all Openssl certificates through ansible-playbook

I want to know how to write an Ansible playbook that lists all the OpenSSL certificates on the localhost and their expiration dates.
As I will have a similar use case (and probably more in the future), I've set up a short test in a RHEL 7.9 environment.
Since I am behind a proxy, I had to specify it in Bash for the CLI, as well as for the Ansible modules.
export HTTP_PROXY="http://localhost:3128"
export HTTPS_PROXY="http://localhost:3128"
The x509_certificate_info module comes from the community.crypto collection, therefore an installation is necessary first.
ansible-galaxy collection install community.crypto # --ignore-certs
Process install dependency map
Starting collection install process
Installing 'community.crypto:1.9.3' to '/home/${USER}/.ansible/collections/ansible_collections/community/crypto'
As well as adding the collection path to the library path:
vi ansible.cfg
...
[defaults]
library = /usr/share/ansible/plugins/modules:~/.ansible/plugins/modules:~/.ansible/collections/ansible_collections/
...
On the remote host the Python cryptography module needs to be installed.
- name: Make sure dependencies are resolved
  pip:
    name: cryptography
    extra_args: '--index-url https://"{{ ansible_user }}":"{{ API_KEY }}"@repository.example.com/artifactory/api/pypi/pypi-remote/simple --upgrade'
  tags: check_certs
... I am using an internal private repository service even for Python packages, so I do not need to specify the proxy for the pip module; I set the index-url here instead of in /etc/pip.conf. Otherwise the proxy could be set on the task via:
  environment:
    HTTP_PROXY: "localhost:3128"
    HTTPS_PROXY: "localhost:3128"
Then it is simply possible to gather and list certificate information.
- name: Get certificate information
  community.crypto.x509_certificate_info:
    path: /etc/pki/ca-trust/source/anchors/DC_com_DC_example_CA_cert.pem
  register: cert_info
  tags: check_certs
- name: Show certificate information
  debug:
    msg: "{{ cert_info.not_after }}"
  tags: check_certs
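Since the question asks for all certificates and their expiration dates rather than a single file, a sketch that walks a directory could look like this; the anchors directory and the *.pem/*.crt patterns are assumptions:
- name: Find certificate files
  find:
    paths: /etc/pki/ca-trust/source/anchors   # assumed location of the certificates
    patterns: '*.pem,*.crt'
  register: cert_files
  tags: check_certs

- name: Get certificate information for each file
  community.crypto.x509_certificate_info:
    path: "{{ item.path }}"
  loop: "{{ cert_files.files }}"
  register: all_cert_info
  tags: check_certs

- name: Show file name and expiration date
  debug:
    msg: "{{ item.item.path }} expires {{ item.not_after }}"
  loop: "{{ all_cert_info.results }}"
  tags: check_certs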
... Due to some interoperability problems, I found that it seems best to use only the POSIX portable filename character set for certificate file names.
Further reading, implementation details, and enhancements I'll leave to you.
Documentation:
Ansible pip_module
Ansible x509_certificate_module

Adding NewRelic PHP agent via Ansible

I've been trying to install the NewRelic agent for PHP on Amazon Linux 2 the "ansible way", but I cannot get it to work with either rpm_key or yum_repository. I've also tried just copying the repo file to /etc/yum.repos.d/newrelic.repo, but it's supposed to use a GPG key, and the only one I found is 548C16BF.gpg; at that point I felt this was getting too hacky.
My current setup is:
- name: add the new relic repository
  # noqa 303
  command: rpm -Uvh http://yum.newrelic.com/pub/newrelic/el5/x86_64/newrelic-repo-5-3.noarch.rpm
but that doesn't sit well with ansible-lint (hence the rule exception).
Am I missing something here, or is my preconception of what the "ansible way" would be incorrect? Asking for a friend (with a lot of Ansible experience).
To add the GPG key:
- name: Adding RPM key
  rpm_key:
    state: present
    key: https://download.newrelic.com/548C16BF.gpg
and add the repository:
- name: Add repository
  yum_repository:
    name: newrelic
    description: NewRelic YUM repo
    baseurl: http://yum.newrelic.com/pub/newrelic/el5/x86_64/
Finally, install the package with yum:
- name: install NewRelic
  yum:
    name: newrelic
    state: present
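Since the question is specifically about the PHP agent, note that the package in that repository is, to my knowledge, called newrelic-php5 (worth verifying with yum search newrelic), so the last task would rather be:
- name: install the NewRelic PHP agent
  yum:
    name: newrelic-php5   # assumed package name; check with 'yum search newrelic'
    state: present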

Ansible yum disablerepo does not work if repo is not installed

I have a number of different CentOS 7 servers running. I like to use Ansible to update them all at once.
One of my servers has an additional repository enabled which I do not want to update, so I've added the option to disable this repo to the playbook. This works as expected.
However, on my other servers I did not install or enable this repo. When using disablerepo in my Ansible playbook, I get an error: repository not found.
How do I solve this in the playbook? Is it possible to add a condition like: if the repo is installed, then disablerepo, else do nothing?
Is it possible to ignore these errors?
ansible-playbook:
---
- hosts: [all]
  tasks:
    - name: update all packages to latest version
      yum:
        name: '*'
        state: latest
        disablerepo: sernet-samba-4.2
You can put ignore_errors: yes as in the link from the comment, or you can add a when condition so the repo is only disabled if a certain package is installed. You have to register that check in a variable first; I'm thinking something like:
- name: check if installed
  shell: rpm -q sernet-samba-4.2
  register: is_installed
  failed_when: false
- name: update all packages to latest version
  yum:
    name: '*'
    state: latest
    disablerepo: sernet-samba-4.2
  when: is_installed.rc == 0
Warning: Untested.
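A variant of the same idea that checks for the repo definition itself instead of a package, so the update still runs everywhere; the path under /etc/yum.repos.d/ is an assumption and has to match the actual repo file name:
- name: check whether the sernet-samba repo is configured
  stat:
    path: /etc/yum.repos.d/sernet-samba-4.2.repo   # assumed repo file name
  register: sernet_repo

- name: update all packages to latest version
  yum:
    name: '*'
    state: latest
    disablerepo: "{{ 'sernet-samba-4.2' if sernet_repo.stat.exists else omit }}"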
After a day of research on the internet and experiments, I finally found a solution that worked. Try using a wildcard; then it will not fail when the repo is missing.
yum:
  name: '*'
  state: latest
  disablerepo: sernet-samba*
