Ansible "apt" puts unwanted second repository into sources list - ansible

On Linux Mint 21 I am trying to install signed packages from external repos.
I have the same problem with 5 different repos.
I can get the PGP key and add the repo to the /etc/apt/sources.list.d/ directory, but when I call apt, it makes another entry in the sources directory (but without the pointer to the key).
This causes the install to fail.
If I remove the second entry, then the package installs correctly.
I have tried several of the parameters to apt, but without success.
Here is an example, to install Chrome:
- name: Add Chrome signing key
  get_url:
    url: https://dl.google.com/linux/linux_signing_key.pub
    dest: /usr/share/keyrings/google-chrome.asc
    mode: '0644'
    force: true

- name: Add Chrome repository
  apt_repository:
    repo: deb [arch=amd64 signed-by=/usr/share/keyrings/google-chrome.asc] https://dl.google.com/linux/chrome/deb/ stable main
    state: present
At this point I correctly have:
/etc/apt/sources.list.d/dl_google_com_linux_chrome_deb.list
which correctly contains:
deb [arch=amd64 signed-by=/usr/share/keyrings/google-chrome.asc] https://dl.google.com/linux/chrome/deb/ stable main
After calling
- name: Add Chrome package
  apt:
    name: "google-chrome-stable"
there is a second list in the sources directory:
/etc/apt/sources.list.d/dl_google_com_linux_chrome_deb.list
/etc/apt/sources.list.d/google-chrome.list
This second list points to the repo, but without the key:
### THIS FILE IS AUTOMATICALLY CONFIGURED ###
# You may comment out this entry, but any other modifications may be lost.
deb [arch=amd64] https://dl.google.com/linux/chrome/deb/ stable main
Like I say, if I remove the second entry, then the package installs correctly.
Question: How do I stop this incorrect list from being added?
Further testing:
I used Ansible to get the key and add the repo to /etc/apt/sources.list.d/ and then I manually called
sudo apt install google-chrome-stable
It correctly installed but then also added the google-chrome.list file.
The same thing happens when I try to install Docker, TeamViewer, VS Code, and 1Password so it isn't just Chrome.
So how do I use Ansible to install signed external packages?

I have experienced this behavior a few times when manually installing .deb packages: when the package is installed, a sources-list file is created automatically.
I can think of a few possibilities you could test:
Obviously, when Chrome is installed, the file google-chrome.list is created. You could test whether your own file survives the installation by naming it google-chrome.list instead of dl_google_com_linux_chrome_deb.list. To do that, add the filename parameter (without the file extension):
- name: Add Chrome repository
  apt_repository:
    repo: deb [arch=amd64 signed-by=/usr/share/keyrings/google-chrome.asc] https://dl.google.com/linux/chrome/deb/ stable main
    state: present
    filename: google-chrome
As long as the existing file is not overwritten, everything should be fine. If it is overwritten again, you could try running the apt_repository task once more after the installation, as sketched below.
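A minimal sketch of that re-run, reusing the tasks from above (untested; whether apt_repository restores the signed-by entry after the installer rewrites the file is exactly what would need testing):

- name: Add Chrome package
  apt:
    name: "google-chrome-stable"

- name: Re-add Chrome repository after installation
  apt_repository:
    repo: deb [arch=amd64 signed-by=/usr/share/keyrings/google-chrome.asc] https://dl.google.com/linux/chrome/deb/ stable main
    state: present
    filename: google-chrome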
You could delete the file created by Chrome after the installation. However, given the note in the file ("any other modifications may be lost."), I don't know whether that deletion would stick.
- name: Remove google-chrome.list file
  file:
    path: /etc/apt/sources.list.d/google-chrome.list
    state: absent
You could comment out the deb line in the new file after installation. As the file itself suggests, that modification should then hopefully be preserved.
- name: Comment out Chrome's default source.
  lineinfile:
    path: /etc/apt/sources.list.d/google-chrome.list
    regexp: '^(deb .*)$'
    line: '# \g<1>'
    backrefs: yes
However, you should still test the behavior during an update, i.e. whether the file is overwritten or created again every time. Whether there is a switch to prevent the creation of the sources-list file during the installation itself, I don't know.
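One Chrome-specific lead worth testing, purely as an assumption on my part: Google's packaging is commonly reported to consult /etc/default/google-chrome and to skip re-adding the repo when repo_add_once is disabled there. I have not verified this, but pre-seeding that file before installing would look roughly like:

- name: Pre-seed Chrome packaging defaults to suppress the auto-added repo
  copy:
    dest: /etc/default/google-chrome
    # Assumption, untested: this file is reportedly read by Google's
    # maintainer/cron scripts, which re-add the repo unless disabled here.
    content: |
      repo_add_once=false
    mode: '0644'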
Edit: Combination of solution 1 and 3:
- name: Add Chrome repository
  apt_repository:
    repo: deb [arch=amd64 signed-by=/usr/share/keyrings/google-chrome.asc] https://dl.google.com/linux/chrome/deb/ stable main
    state: present
    filename: google-chrome

- name: Add Chrome package
  apt:
    name: "google-chrome-stable"

- name: Comment out Chrome's default source.
  lineinfile:
    path: /etc/apt/sources.list.d/google-chrome.list
    regexp: '^(deb \[arch=amd64\] .*)$'
    line: '# \g<1>'
    backrefs: yes
I have adjusted the regexp so that only the deb line without the signing key matches.

Related

Ansible ERROR! no action detected in task if I use full module name

I'm trying to do a simple apt update on a remote system. My playbook looks like this:
---
- hosts: all
  become: true
  tasks:
    - name: update
      ansible.builtin.apt:
        update_cache: yes
When I run it, I get the error:
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
The error appears to have been in '/opt/LabOS/ansible/test_pb.yml': line 6, column 5, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
tasks:
  - name: update
    ^ here
However, if I remove the module path (ansible.builtin.), it runs just fine. According to the documentation, both notations should work. What am I missing here?
My ansible-playbook version is 2.7.7.
The (optional) fully qualified module name, like ansible.builtin.apt, is only available starting from Ansible version 2.10.
Since you are on an older version, it gives an error, and you can only use the short name of the module, for example apt.
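For illustration, the playbook from the question rewritten with the short module name, which works on 2.7.7:

---
- hosts: all
  become: true
  tasks:
    - name: update
      apt:
        update_cache: yes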

Do anchors and aliases in YAML store the string or the interpolated string?

When I run the following job on CircleCI the cache key mysteriously changes between the cache being read and the cache being written. The only explanation I can think of is that my understanding of anchors and aliases in YAML is incorrect.
I'm using a YAML anchor and alias to capture the cache key when I read it and then to read it back when I write it. And yet somehow the two keys are different (as if the line is being re-evaluated).
commands:
  npm_install:
    description: Install and cache (cached) node modules
    steps:
      - restore_cache:
          key: &NPM_CACHE_KEY v8-npm--{{ .Branch }}-{{arch}}-{{checksum "~/project/angular/package-lock.json"}}
      - run:
          name: Install Node modules
          command: |
            cd ~/project/angular
            npm install ci
      - save_cache:
          key: *NPM_CACHE_KEY
          paths:
            - ~/project/angular/node_modules/
            - /usr/local/bin/node_modules
            - /home/circleci/.cache/
Running this shows different keys in the job output. Zooming in on the cache key lines, you can see that the cache key differs:
# When reading the cache
v8-npm--add_angular_version_display-arch1-linux-amd64-6_85-IXQLtfeuATYKtD1dRoCJu_FbRnfhYZb1+XkVFRET5Pc=

# When writing the cache
v8-npm--add_angular_version_display-arch1-linux-amd64-6_85-0wC66hCG9ZcSN9wfW4NlpxQswHV+n4foEcQ15cWRxqg=
Does the YAML anchor/alias method save the interpolated or un-interpolated string?
Leaving aside the question of why the package-lock.json file might have changed in the first place, there's another question: why are these two keys different? Is the YAML being re-interpolated?
Is what's being saved to &NPM_CACHE_KEY the interpolated value, i.e. v8-npm--add_angular_version_display-arch1-linux-amd64-6_85-IXQLtfeuATYKtD1dRoCJu_FbRnfhYZb1+XkVFRET5Pc=,
or is it saving the raw template v8-npm--{{ .Branch }}-{{arch}}-{{checksum "~/project/angular/package-lock.json"}} and re-evaluating it the second time round?
If anchors/aliases re-interpolate, then what doesn't?
If the anchor is re-interpolating, then how can I get it to simply store the output?
(I also have another problem, which is why the hash of package-lock.json is changing, but that's a separate problem.)
The answer is that the anchor stores the template for the interpolation and not the interpolated string.
I discovered this by using @jonrsharpe's suggestion to look at the expanded version of the config file. You can view this inside CircleCI, and when viewed it was very clear that the problem was that the key is being re-interpolated.
# Expanded view snippet of config.yml
- restore_cache:
    key: v8-npm--{{ .Branch }}-{{arch}}-{{checksum "~/project/angular/package-lock.json"}}
- run:
    name: Install Node modules
    command: |
      cd ~/project/angular
      npm ci
- save_cache:
    key: v8-npm--{{ .Branch }}-{{arch}}-{{checksum "~/project/angular/package-lock.json"}}
    paths:
      - ~/project/angular/node_modules/
      - /usr/local/bin/node_modules
      - /home/circleci/.cache/
(NB: the checksums differed because I should have been running npm ci rather than the npm install ci I was actually using.)
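To illustrate the general rule with a minimal standalone YAML sketch (unrelated to CircleCI): an alias reproduces the anchored node verbatim at parse time, so any template placeholders inside it are duplicated un-evaluated, and each consumer interpolates its own copy:

# anchors-demo.yml
restore_key: &CACHE_KEY v1-{{ checksum "package-lock.json" }}
save_key: *CACHE_KEY
# After YAML parsing, both keys hold the identical raw string:
#   v1-{{ checksum "package-lock.json" }}
# The {{ ... }} parts are opaque text to YAML; whoever consumes the document
# (e.g. the CI system) evaluates them, once per occurrence.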

Using github actions to publish documentation

What I considered:
github offers github pages to host documentation, in either a folder on my master branch or a dedicated gh-pages branch, but that would mean committing build artifacts
I can also let readthedocs build and host docs for me through webhooks, but that means learning how to configure Yet Another Tool at a point in time where I try to consolidate everything related to my project in github-actions
I already have a documentation-building process that works for me (using sphinx as the builder) and that I can also test locally, so I'd rather just leverage that. It has all the rules set up and drops some static html in an artifact; it just doesn't get served anywhere. Handling it in the workflow, where all the other deployment configuration of my project lives, feels better than scattering it over different tools or github-specific options.
Is there already an action in the marketplace that allows me to do something like this?
name: CI
on: [push]
jobs:
  ... # do stuff like building my-project-v1.2.3.whl, testing, etc.
  release_docs:
    steps:
      - uses: actions/sphinx-to-pages@v1  # I wish this existed
        with:
          dependencies:
            - some-sphinx-extension
            - dist/my-project*.whl
          apidoc_args:
            - "--no-toc"
            - "--module-first"
            - "-o docs/autodoc"
            - "src/my-project"
          build_args:
            - "docs"
            - "public"  # the content of this folder will then be served at
                        # https://my_gh_name.github.io/my_project/
In other words, I'd like to still have control over how the build happens and where artifacts are dropped, but do not want to need to handle the interaction with readthedocs or github-pages.
Actions that I tried
❌ deploy-to-github-pages: runs the docs build in an npm container, so making it work with python and sphinx would be inconvenient
❌ gh-pages-for-github-action: no documentation
❌ gh-pages-deploy: seems to target host envs like jekyll instead of static content, and correct usage with yml syntax not yet documented - I tried a little and couldn't get it to work
❌ github-pages-deploy: looks good, but correct usage with yml syntax not yet documented
✅ github-pages: needs a custom PAT in order to trigger rebuilds (which is inconvenient) and uploads broken html (which is bad, but might be my fault)
✅ deploy-action-for-github-pages: also works, and looks a little cleaner in the logs. Same limitations as the previous solution though: it needs a PAT, and the served html is still broken.
The eleven other results when searching for github+pages on the action marketplace all look like they want to use their own builder, which sadly never happens to be sphinx.
In the case of managing sphinx with pip (requirements.txt), pipenv, or poetry, we can deploy our documentation to GitHub Pages as follows. The workflow works the same way for other Python-based static site generators like pelican and MkDocs. Here is a simple example for MkDocs. We just add the workflow as .github/workflows/gh-pages.yml
For more options, see the latest README: peaceiris/actions-gh-pages: GitHub Actions for GitHub Pages 🚀 Deploy static files and publish your site easily. Static-Site-Generators-friendly.
name: github pages
on:
  push:
    branches:
      - main
jobs:
  deploy:
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@v2
      - name: Setup Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.8'
      - name: Upgrade pip
        run: |
          # install pip>=20.1 to use "pip cache dir"
          python3 -m pip install --upgrade pip
      - name: Get pip cache dir
        id: pip-cache
        run: echo "::set-output name=dir::$(pip cache dir)"
      - name: Cache dependencies
        uses: actions/cache@v2
        with:
          path: ${{ steps.pip-cache.outputs.dir }}
          key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }}
          restore-keys: |
            ${{ runner.os }}-pip-
      - name: Install dependencies
        run: python3 -m pip install -r ./requirements.txt
      - run: mkdocs build
      - name: Deploy
        uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./site
I got it to work, but there is no dedicated action to build and host sphinx docs on either github pages or readthedocs as of yet, so as far as I am concerned there is quite a bit left to be desired here.
This is my current release_sphinx job that uses the deploy-action-for-github-pages action and uploads to github-pages:
release_sphinx:
  needs: [build]
  runs-on: ubuntu-latest
  container:
    image: python:3.6
    volumes:
      - dist:dist
      - public:public
  steps:
    # check out sources that will be used for autodocs, plus readme
    - uses: actions/checkout@v1
    # download wheel that was built and uploaded in the build step
    - uses: actions/download-artifact@v1
      with:
        name: distributions
        path: dist
    # didn't need to change anything here, but had to add sphinx.ext.githubpages
    # to my conf.py extensions list. that fixes the broken uploads
    - name: Building documentation
      run: |
        pip install dist/*.whl
        pip install sphinx Pallets-Sphinx-Themes
        sphinx-apidoc --no-toc --module-first -o docs/autodoc src/stenotype
        sphinx-build docs public -b dirhtml
    # still need to build and set the PAT to get a rebuild on the pages job,
    # apart from that quite clean and nice
    - name: github pages deploy
      uses: peaceiris/actions-gh-pages@v2.3.1
      env:
        PERSONAL_TOKEN: ${{ secrets.PAT }}
        PUBLISH_BRANCH: gh-pages
        PUBLISH_DIR: public
    # since gh-pages has a history, this step might no longer be necessary.
    - uses: actions/upload-artifact@v1
      with:
        name: documentation
        path: public
Shoutout to the deploy action's maintainer, who resolved the upload problem within 8 minutes of me posting it as an issue.

Ansible 'ini_file' module not creating file if not exists

Ok, so I need to update a flag inside the config file /etc/letsencrypt/dnscloudflare.ini with a new value, and also create the aforementioned file if it doesn't already exist.
So I wrote the task with the ini_file module as below:
- name: Update the "letsencrypt cloudflare plugin"'s config
  ini_file:
    path: /etc/letsencrypt/dnscloudflare.ini
    section: null
    option: "dns_cloudflare_api_key"
    value: "my-key-here"
    mode: 0600
    backup: yes
    create: yes
  become: yes
  become_user: root
Now, the file isn't there by default, so it should be created in the process. But no matter what I do, the file just doesn't get created.
Note: I found this bug report, solution for which at the moment is still not merged.
So, as a workaround, I am now manually creating the file in one task and then updating it in the next.
So:
Why is this happening? Am I missing something?
Is there any known solution for the moment?
This is a documented bug, and the discussion is ongoing as of 29-JUN-2018.
So, as an alternative for the time being, you can copy a dummy file (if not present) with the same filename and then proceed to update it, or you can use other file-modification modules like lineinfile. A sketch of the first approach follows.
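A minimal sketch of the copy-then-update workaround, reusing the path from the question (the copy task with force: no only creates the file when it is missing, so repeated runs stay idempotent):

- name: Create an empty config file if it does not exist yet
  copy:
    content: ""
    dest: /etc/letsencrypt/dnscloudflare.ini
    force: no
    mode: 0600
  become: yes

- name: Update the "letsencrypt cloudflare plugin"'s config
  ini_file:
    path: /etc/letsencrypt/dnscloudflare.ini
    section: null
    option: "dns_cloudflare_api_key"
    value: "my-key-here"
    backup: yes
  become: yes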

How do I prevent module.run in saltstack if my file hasn't changed?

In the 2014.7 version of SaltStack, the onchanges element is available for states. However, that version isn't available for Windows yet, so that's right out.
And unfortunately salt doesn't use the zipfile module to extract zipfiles. So I'm trying to do this:
/path/to/nginx-1.7.4.zip:
  file.managed:
    - source: http://nginx.org/download/nginx-1.7.4.zip
    - source_hash: sha1=747987a475454d7a31d0da852fb9e4a2e80abe1d

extract_nginx:
  module.run:
    - name: extract.zipfile
    - archive: /path/to/nginx-1.7.4.zip
    - path: /path/to/extract
    - require:
      - file: /path/to/nginx-1.7.4.zip
But this tries to extract the files every time. I don't want it to do that; I only want it to extract the file if the .zip file changes, because once it's been extracted it'll be running (I've got something set up to take care of that). And once it's running, I can't overwrite nginx.exe because Windows is awesome like that.
So how can I extract the file only if it's a newer version of nginx?
I would probably use jinja to test for the existence of a file that you know would only exist if the zip file has been extracted.
{% if not salt['file.exists']('/path/to/extract/known_file.txt') %}
extract_nginx:
  module.run:
    - name: extract.zipfile
    - archive: /path/to/nginx-1.7.4.zip
    - path: /path/to/extract
    - require:
      - file: /path/to/nginx-1.7.4.zip
{% endif %}
This will cause the extract_nginx state to not appear in the final rendered sls file if the zip file has been extracted.
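For reference, on Salt versions where onchanges is available for states (the 2014.7 release line onward), the same intent can be expressed declaratively, reusing the asker's custom extract.zipfile module; a sketch:

/path/to/nginx-1.7.4.zip:
  file.managed:
    - source: http://nginx.org/download/nginx-1.7.4.zip
    - source_hash: sha1=747987a475454d7a31d0da852fb9e4a2e80abe1d

extract_nginx:
  module.run:
    - name: extract.zipfile
    - archive: /path/to/nginx-1.7.4.zip
    - path: /path/to/extract
    # onchanges replaces the jinja guard: the module runs only when the
    # file.managed state above reports a change (i.e. a new or updated zip).
    - onchanges:
      - file: /path/to/nginx-1.7.4.zip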
