Ansible "apt cache update failed" during postgresql repo install - ansible

I'm trying to install postgresql on my remote host using Ansible.
I have two solutions, but both only partly work as I expect.
1. Solution one (using the Ansible apt_repository module):
- name: Get Postgres latest repository release
  become: yes
  apt_repository:
    repo: deb https://apt.postgresql.org/pub/repos/apt/ $(lsb_release -cs)-pgdg main
    state: present
    filename: pgdg.list
This option looks successful on execution at first, but eventually fails with an error (apt cache update failed):
2. Solution two (using the shell module):
- name: Get Postgres latest repository release
  become: yes
  shell: sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
This accomplishes exactly the same thing (correct me if I'm wrong), but later fails with a different error (No package matching):
So I'm not getting any further with my playbook, any tips?
My play to install postgresql-13 (or 14):
- name: Install Postgres dependencies (if not installed)
  become: yes
  apt:
    name:
      - postgresql-{{ postgres_version }}
      - postgresql-doc-{{ postgres_version }}
      - postgresql-contrib
      - zabbix-sql-scripts
  when:
    - "not 'postgresql-13' in ansible_facts.packages"
    - "not 'postgresql-doc-13' in ansible_facts.packages"
    - "not 'postgresql-contrib' in ansible_facts.packages"
    - "not 'zabbix-sql-scripts' in ansible_facts.packages"
Repository keys are added in another play, which runs without errors.
This is executed on a host with Ubuntu 20.04 LTS, Ansible version 2.9.6, Python version 3.8.10.
I have searched for both individual problems but did not find a specific answer that solved my problem.
If I perform the same steps manually, everything installs fine (commands used):
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update
sudo apt-get -y install postgresql-13 postgresql-doc-13
Edit:
It turned out I also had a quote (") missing at the start of my repo value (before deb https://...).
I had: repo: deb https://apt.postgresql.org/pub/repos/apt/ $(lsb_release -cs)-pgdg main"
Should be: repo: "deb https://apt.postgresql.org/pub/repos/apt/ $(lsb_release -cs)-pgdg main"

Basically, it must be said that an Ansible module such as apt_repository is generally preferable to the shell module.
However, you have a few mistakes in your approach.
As you can see from the error, apt_repository automatically updates the package sources after adding the repository, which means the repository signing key must already be present. You can add the key with the apt_key module, but that in turn requires the gpg package, so you should ensure gpg is installed first.
From this follows:
install gpg
- name: Ensure gpg is installed
  apt:
    name: gpg
add repository key
- name: Add repository signing key
  apt_key:
    url: "https://www.postgresql.org/media/keys/ACCC4CF8.asc"
    state: present
add repository source
- name: Add postgresql repository
  apt_repository:
    repo: "deb https://apt.postgresql.org/pub/repos/apt/ {{ ansible_distribution_release }}-pgdg main"
    state: present
    filename: pgdg
One of your problems when using apt_repository was that you left $(lsb_release -cs) unchanged; that command substitution only works in a shell. In Ansible you have to use the variable ansible_distribution_release instead. Also, the filename parameter does not need an extension (.list), as the documentation tells you: "The .list extension will be automatically added." Your specification creates the file pgdg.list.list.
Again, as a whole playbook:
- name: Database Server
  hosts: databaseserver
  become: yes
  tasks:
    - name: Ensure gpg is installed
      apt:
        name: gpg

    - name: Add repository signing key
      apt_key:
        url: "https://www.postgresql.org/media/keys/ACCC4CF8.asc"
        state: present

    - name: Add postgresql repository
      apt_repository:
        repo: "deb https://apt.postgresql.org/pub/repos/apt/ {{ ansible_distribution_release }}-pgdg main"
        state: present
        filename: pgdg
Now you should be able to install the desired packages, if they are provided by the included repository.
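For completeness, a follow-up install task could look roughly like the sketch below; the package names and the postgres_version variable are taken from your question, so treat them as assumptions about your setup:
- name: Install PostgreSQL packages
  apt:
    name:
      - postgresql-{{ postgres_version }}
      - postgresql-doc-{{ postgres_version }}
      - postgresql-contrib
    update_cache: yes
    state: present
This task would go at the end of the tasks list in the playbook above.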

Maybe this is what caused your problem: answer
Before adding the repository you need to:
1. Add the APT key.
2. Add the repository.
3. Install the packages.

Related

yum_repository module fails to properly insert *.repo configuration file

Hi there, I am having an issue; if anybody has encountered and solved it, please share your knowledge.
Machine:
CentOS Linux release 7.6.1810 (Core)
NAME="CentOS Linux"
epel.yml
- name: Add repository
  yum_repository:
    name: epel
    description: epel-repo
    baseurl: https://download.fedoraproject.org/pub/epel/$releasever/$basearch/
ansible-playbook epel.yml (I have removed the unnecessary parts of epel.yml)
When run, the code above successfully creates epel.repo in the /etc/yum.repos.d/ folder. However, when I try to install any package it gives me an error: "Failed to connect. Network is unreachable".
I have checked #cat /etc/yum.repos.d/epel.repo
baseurl=https://download.fedoraproject.org/pub/epel///
I searched for where the $releasever and $basearch variables come from, but did not find very concrete answers.
Please help.
It seems like yum couldn't determine $releasever and $basearch. Check this post for the possible reasons why that might happen.
To workaround the problem, you could try using the yum module instead:
- name: install the latest version of epel
  yum:
    name: epel-release
    state: latest
Or install it directly from the rpm package:
- name: install from url
  yum:
    name: https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
    state: present
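If you would rather keep the yum_repository task, one possible workaround (just a sketch, not tested against your setup) is to substitute Ansible facts for the yum variables; here ansible_distribution_major_version and ansible_architecture are assumed to expand to 7 and x86_64 on your machine:
- name: Add repository using Ansible facts instead of yum variables
  yum_repository:
    name: epel
    description: epel-repo
    baseurl: https://download.fedoraproject.org/pub/epel/{{ ansible_distribution_major_version }}/{{ ansible_architecture }}/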

How to pass options to dnf when using Ansible

When installing packages through Ansible's dnf module, how do I pass options like --nobest to dnf? Is there an alternative to using raw shell commands?
I had a similar problem (but I'm using the yum package manager) and figured out a workaround here.
The issue is that docker-ce has not been packaged for CentOS 8 yet.
An ugly workaround is to install containerd.io manually beforehand:
pre_tasks:
  - yum:
      name: https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm
So, try setting the full package URL as the package name and it should work.
nobest can be passed as a parameter when using the dnf module in a playbook; setting nobest: true corresponds to dnf's --nobest flag.
You can refer to the dnf_module documentation for other options/parameters that can be passed to the dnf Ansible module.
For example:
- name: Install the latest version of Apache from the testing repo
  dnf:
    name: httpd
    enablerepo: testing
    state: present

- name: Install the latest version of Apache
  dnf:
    name: httpd
    state: present
    nobest: false
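Tying this back to the question, a minimal sketch of passing the --nobest behaviour through the module might look like this (docker-ce is just an example package taken from the other answer, and the nobest parameter assumes an Ansible version whose dnf module supports it):
- name: Install docker-ce while allowing non-best candidates
  dnf:
    name: docker-ce
    state: present
    nobest: true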

Skip package download if already installed

I am using ansible to install a deb package and I do not want to download the remote file if the package is already installed. Currently I am doing it like this:
- name: Check if elasticsearch installed
  become: true
  stat: path=/etc/elasticsearch/elasticsearch.yml
  register: st

- name: Install elasticsearch
  become: yes
  apt:
    deb: https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.6.12.deb
    update_cache: true
    state: present
  when: not st.stat.exists
Is there a better way to skip the download of the deb package if it is already installed?
You'll want package_facts or, of course, to just cheat and shell out with something like command: dpkg --search elasticsearch
- name: gather installed packages
  package_facts:

- name: Install elasticsearch
  when: "'elasticsearch' not in ansible_facts.packages"
That is, unless your question is about how to do that when elasticsearch might have been installed by hand and not through dpkg, in which case your stat: and register: approach is a sensible one. You may even want to use a with_items: loop to check a few places the file might have been installed, depending on your circumstances.
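Putting the package_facts approach together with the deb URL from your question, a rough sketch (assuming the package registers itself as elasticsearch in dpkg) could be:
- name: gather installed packages
  package_facts:

- name: Install elasticsearch
  become: yes
  apt:
    deb: https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.6.12.deb
  when: "'elasticsearch' not in ansible_facts.packages"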

How to cache the downloaded packages of the apt-get in travis?

I am using travis-ci to test my GitHub repository and I found out that 3 to 10 minutes are spent downloading deb packages, even though they are only 127 MB in total.
I checked travis-ci/Caching Dependencies and Directories, but there is no support for apt-get.
How can I do this?
It is possible to achieve this by caching the content in another folder that is accessible to non-root users: mv all the .deb files from it into /var/cache/apt/archives/ before installation, and cp them back to that folder afterwards.
Note:
I recommend you place YOUR_DIR_FOR_DEB_PACKAGES somewhere under ~.
# .travis.yml
sudo: required

cache:
  directories:
    - $YOUR_DIR_FOR_DEB_PACKAGES # This must be accessible for non-root users

addons:
  apt:
    sources:
      # Whatever source you need

# Download the dependencies if they are not cached.
# All the "echo" and "ls" in "before_script" can be removed; they are only used for debugging.
before_script:
  - echo "Print content of $YOUR_DIR_FOR_DEB_PACKAGES"
  - ls $YOUR_DIR_FOR_DEB_PACKAGES
  - echo "Check whether apt-get has no cache"
  - ls /var/cache/apt/archives

  - echo "Start loading cache"
  - |
    exist() {
      [ -e "$1" ]
    }
  - |
    if exist ~/$YOUR_DIR_FOR_DEB_PACKAGES/*.deb
    then
      sudo mv ~/$YOUR_DIR_FOR_DEB_PACKAGES/*.deb /var/cache/apt/archives/
      ls /var/cache/apt/archives
    fi

  - echo "Start to install software"
  - sudo apt-get update
  - sudo apt-get install -y --no-install-recommends --no-install-suggests $THE_PACKAGES_REQUIRED

  - echo "Start updating the cache"
  - cp /var/cache/apt/archives/*deb ~/$YOUR_DIR_FOR_DEB_PACKAGES/

script:
  - # Do whatever you want here.
It seems to me like this is not a recommended practice. According to the official documentation:
Large files that are quick to install but slow to download do not benefit from caching, as they take as long to download from the cache as from the original source:
Debian packages

Ansible: prevent 'changed' when looping

I like this pattern in Ansible:
tasks:
  - name: install packages
    apt:
      name: "{{ item }}"
      update_cache: yes
      cache_valid_time: 3600
      state: latest
    with_items:
      - build-essential
      - git
      - libjpeg-dev
However, I get a changed notification every time, even though nothing has been installed. I don't want to set changed_when: False. Is there a way to get the proper changed status from this loop?
Update If anything was installed by apt, I want changed to be True. If everything was already installed, and apt did not do any work when looping over this list, then I want changed to be False.
I'm using Ansible 2.0.1.0.
The changed indication is due to the apt cache update (apt-get update) being executed each time the task runs. If you only want an indication of whether packages were installed, remove the update_cache: yes directive or set it to update_cache: no.
Here's what it should be, after applying the accepted answer by #Petro026.
tasks:
  - name: install packages
    apt:
      name: "{{ item }}"
      state: latest
    with_items:
      - build-essential
      - git
      - libjpeg-dev
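As a side note, a sketch of an alternative that avoids the loop entirely, assuming an Ansible version whose apt module accepts a list for name (this also yields a single changed status for the whole set):
tasks:
  - name: install packages
    apt:
      name:
        - build-essential
        - git
        - libjpeg-dev
      state: latest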

Resources