I'm having some trouble automating an installation with Ansible.
I use this role (https://github.com/elastic/ansible-elasticsearch) to install Elasticsearch on my Ubuntu 16.04 server.
The role depends on the package python-jmespath, as mentioned in the documentation.
The role DOES NOT install the package itself, so I try to install it before the role is executed.
- hosts: elasticsearch_master_servers
  become: yes
  tasks:
    - name: preinstall jmespath
      command: "apt-get install python-jmespath"

    - name: Run the equivalent of "apt-get update" as a separate step
      apt:
        update_cache: yes

- hosts: elasticsearch_master_servers
  become: yes
  roles:
    - role: elastic.elasticsearch
  vars:
    ...
When running the playbook I expect the python-jmespath package to be installed before the role is executed, but the role execution fails with
You need to install \"jmespath\" prior to running json_query filter"
When I check manually with dpkg -s python-jmespath, I can see the package is installed correctly.
A second run of the playbook (with the package already installed) doesn't fail.
Am I missing an Ansible configuration that updates the list of installed packages during the playbook run?
Am I doing something wrong in general?
Thanks in advance.
FWIW, it's possible to tag the installation tasks and install the packages in a first step. For example:
- name: install packages
  package:
    name: "{{ item.name }}"
    state: "{{ item.state | default('present') }}"
  loop: "{{ packages_needed_by_this_role }}"
  tags: manage_packages
Install the packages first
shell> ansible-playbook my-playbook.yml -t manage_packages
and then run the playbook
shell> ansible-playbook my-playbook.yml
Notes
This approach makes checking the playbooks with --check much easier.
Checking idempotency is also easier.
With tags: [manage_packages, never], the package task will be skipped unless explicitly selected, which speeds up the playbook (see the sketch below).
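A minimal sketch of that skip-by-default variant (packages_needed_by_this_role is just the example variable name from above):
- name: install packages
  package:
    name: "{{ item.name }}"
    state: "{{ item.state | default('present') }}"
  loop: "{{ packages_needed_by_this_role }}"
  tags: [manage_packages, never]
With the never tag, a plain ansible-playbook my-playbook.yml skips this task; it only runs when selected with -t manage_packages.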
Related
I'm trying to create an Ansible playbook that reads the contents of a file and uses those contents to install packages on a target machine.
In simpler terms, I want to convert this command into an Ansible playbook:
cat ./meta/install-list/apt | xargs apt install -y
./meta/install-list/apt
neofetch
tmux
git
./ansible/playbooks/apt.yaml
- hosts: all
  become: true
  tasks:
    - name: Extract APT packages to install
      command: cat ../../meta/install-list/apt
      register: _pkgs
      delegate_to: localhost
      run_once: true

    - name: Install APT packages
      apt:
        name: "{{ _pkgs.stdout_lines }}"
        state: latest
./ansible.cfg
[defaults]
inventory = ./ansible/inventory/hosts.yaml
./ansible/inventory/hosts.yaml
---
all:
  children:
    group-machines:
      hosts:
        target-machine.local:
Command to run playbook
ansible-playbook --ask-become-pass ./ansible/playbooks/apt.yaml --limit group-machine
When running the command, it gets stuck on Extract APT packages to install.
NOTE:
The files mentioned above exist only on the machine that runs the command. If possible, I'd like to avoid copying files to the target machines before running the playbook tasks.
PS: I'm new to Ansible.
I don't see anything in your "Extract APT packages to install" task that should cause it to get stuck... but you don't need that task in any case; you can combine your two tasks into a single task like this:
- hosts: all
  become: true
  tasks:
    - name: Install APT packages
      apt:
        name: "{{ packages }}"
        state: latest
      vars:
        packages: "{{ lookup('file', '../../meta/install-list/apt').splitlines() }}"
Here we're using a file lookup to read the contents of a file. Lookups always run on the local (control) host.
Note that you could write the above like this as well...
- hosts: all
  become: true
  tasks:
    - name: Install APT packages
      apt:
        name: "{{ lookup('file', '../../meta/install-list/apt').splitlines() }}"
        state: latest
...but I like to keep longer jinja expressions in vars in order to keep the rest of the arguments more readable.
The above answer by @Zeitounator is more than enough. But if you reformat your original package-list file as below
packages:
  - neofetch
  - tmux
  - git
then you can simply run the playbook like below:
- hosts: all
  become: true
  vars_files:
    - ../../meta/install-list/apt
  tasks:
    - name: Install APT packages
      apt:
        name: "{{ packages }}"
        state: latest
Now suppose you don't want to do the formatting; then the playbook below will also do the trick. It's much cleaner and more scalable in my opinion.
---
- name: Show the packages list
  hosts: localhost
  become: true
  tasks:
    - name: View the packages list file
      shell: cat ../../meta/install-list/apt
      register: output

    - name: Install the package
      apt:
        name: "{{ output.stdout_lines }}"
        state: latest
I'm working on a project that is still running Python 2. I'm trying to use Ansible to set up new test servers. The base Linux installation that I start with only has Python 3, so I need my very first "bootstrap" playbook to use Python 3, but then I want subsequent playbooks to use Python 2.
I can specify the version of python in my inventory file like this:
[test_server:vars]
ansible_python_interpreter=/usr/bin/python3
[test_server]
test_server.example.com
But then I have to go edit the inventory file to make sure that I'm using Python 3 for the bootstrap playbook, and then edit it again for the rest of my playbooks. That seems odd. I've tried a few different versions of changing ansible_python_interpreter within my playbooks, like
- hosts: test_server
  ansible_python_interpreter: /usr/bin/python
and
- hosts: test_server
  tasks:
    - name: install pip
      ansible_python_interpreter: /usr/bin/python
      apt:
        name: python-pip
but ansible complains that
ERROR! 'ansible_python_interpreter' is not a valid attribute for a Task
Even though https://docs.ansible.com/ansible/latest/reference_appendices/interpreter_discovery.html says that
You can still set ansible_python_interpreter to a specific path at any variable level (for example, in host_vars, in vars files, in playbooks, etc.).
What's the invocation to do this correctly?
Q: "What's the invocation to do this correctly?"
- hosts: test_server
  tasks:
    - name: install pip
      ansible_python_interpreter: /usr/bin/python
      apt:
        name: python-pip
ERROR! 'ansible_python_interpreter' is not a valid attribute for a Task
A: ansible_python_interpreter is not a Playbook keyword. It is a variable and must be declared as such. For example, in the scope of a task
- hosts: test_server
  tasks:
    - name: install pip
      apt:
        name: python-pip
      vars:
        ansible_python_interpreter: /usr/bin/python
, or in the scope of a playbook
- hosts: test_server
  vars:
    ansible_python_interpreter: /usr/bin/python
  tasks:
    - name: install pip
      apt:
        name: python-pip
, or in any other suitable place. See Variable precedence: Where should I put a variable?
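As one more illustration, here is a hypothetical host_vars file (the host name is just an example) that sets the interpreter for a single host:
# host_vars/test_server.example.com.yml
ansible_python_interpreter: /usr/bin/python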
See:
INTERPRETER_PYTHON
INTERPRETER_PYTHON_FALLBACK
Notes:
Quoting from Changelog core 2.14:
configuration entry INTERPRETER_PYTHON_DISTRO_MAP is now 'private' and won't show up in normal configuration queries and docs, since it is not 'settable' this avoids user confusion.
Search for _INTERPRETER_PYTHON_DISTRO_MAP in ansible/lib/ansible/config/base.yml to see the current value of INTERPRETER_PYTHON_DISTRO_MAP.
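For completeness, a minimal sketch of setting the interpreter globally via the INTERPRETER_PYTHON entry in ansible.cfg (auto_silent is just one of the documented values; a specific path works as well):
[defaults]
interpreter_python = auto_silent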
I am trying to write an ansible playbook which installs some RPMs for me after they have been copied to a known location by a Jenkins job. The problem is, I'm not sure how to get the name of the RPM to install without hard coding it.
Here is what I have now:
- hosts: localhost
  roles:
    - { role: some_role, artifacts: "{{ rpm_path }}/prefix_.*.rpm" }
In this case, rpm_path would be something like:
"/home/jenkins/workspace/rpm_install/artifacts"
The role that is called in this example handles the yum install part:
- name: Install RPMs
yum: name={{item}} state=present
with_items:
- "{{ artifacts }}"
I'd rather not have to hard code RPM names since they come from Jenkins and they are always different. But is there a way either through the yum module, or when I call the role where the regular expression or glob can be interpreted so the full path (rpm name included) is handed to yum?
You should use with_fileglob instead of with_items, something like:
- name: Install RPMs
  yum: name="{{ item }}" state=present
  with_fileglob:
    - "{{ artifacts }}"
I am trying to get all the installed YUM packages on a RHEL machine. I can easily get it using a shell command, which is not idempotent, and would like to use the yum module instead.
The shell command works fine:
- name: yum list packages
  shell: yum list installed > build_server_info.config
But when I try to use the yum module, it just executes and does not give any results:
- name: yum_command
  action: yum list=${pkg} list=available
I can easily get it using a shell command, which is not idempotent
You can't really talk about idempotence when you are querying the current state of a machine.
"Idempotent" means that the task will ensure the machine is in the desired state no matter how many times you run it.
When you query the current state, you don't describe a desired state. No matter what method you use, the term "idempotent" is simply not applicable.
Regarding your example, which does not give you results: you have repeated the same argument (list) twice, and the task should fail (it doesn't, which looks like an Ansible quirk).
To get a list of installed packages, you should use:
- name: yum_command
  yum:
    list: installed
  register: yum_packages

- debug:
    var: yum_packages
It saves a list of dictionaries describing each package to the variable yum_packages.
You can then use a JSON Query Filter to get a single package (tar):
- debug: var=item
  with_items: "{{ yum_packages | json_query(jsonquery) }}"
  vars:
    jsonquery: "results[?name=='tar']"
to get a result like this:
"item": {
"arch": "x86_64",
"epoch": "2",
"name": "tar",
"nevra": "2:tar-1.26-31.el7.x86_64",
"release": "31.el7",
"repo": "installed",
"version": "1.26",
"yumstate": "installed"
}
Or only its version:
- debug: var=item
  with_items: "{{ yum_packages | json_query(jsonquery) }}"
  vars:
    jsonquery: "results[?name=='tar'].version"
"item": "1.26"
Since Ansible 2.5, you can also use the package_facts module: it will gather the list of installed packages as Ansible facts.
Example:
- name: get the rpm package facts
  package_facts:
    manager: rpm

- name: show them
  debug: var=ansible_facts.packages
Well, the official Ansible documentation for the yum module describes list as:
"Various (non-idempotent) commands for usage with /usr/bin/ansible and not playbooks."
so you're going to be out of luck with finding an idempotent list invocation.
If you just want to suppress the changed output, set the changed_when parameter to False.
(Also, having the duplicate list parameter is suspicious.)
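A minimal sketch of that, applied to the shell-based task from the question:
- name: yum list packages
  shell: yum list installed > build_server_info.config
  changed_when: false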
If your requirement is to check for only one package, and based on that you want to execute another task, you can use the package_facts module as follows:
- name: install docker community edition in rhel8
  hosts: localhost
  connection: local
  tasks:
    - name: Gather the rpm package facts
      package_facts:
        manager: auto

    - name: check if docker-ce is already installed
      debug:
        var: ansible_facts.packages['docker-ce']

    - name: using command module to install docker-ce
      command: yum install docker-ce --nobest -y
      when: "'docker-ce' not in ansible_facts.packages"
      register: install_res

    - name: whether docker is installed now
      debug:
        var: install_res
I am trying to install Apache2 with Ansible. I have a role and handler for Apache.
My playbook (site.yml) contains:
---
- hosts: webservers
  remote_user: ansrun
  become: true
  become_method: sudo
The Ansible role file contains:
---
- name: Install Apache 2
  apt: name={{ item }} update_cache=yes state=present
  with_items:
    - apache2
  when: ansible_distribution == "Ubuntu"

- name: Enable mod_rewrite
  apache2_module: name=rewrite state=present
  notify:
    - reload apache2
Whenever I run the playbook, I get this message, but nothing has changed.
changed: [10.0.1.200] => (item=[u'apache2'])
I think this has something to do with the conditional.
You are running into a problem introduced in Ansible 2.2.0 (and fixed in 2.2.1).
With update_cache=yes, the apt module was made to return a changed status whenever an APT cache update occurred, not only when the actual package was installed or upgraded.
You need to do one of the following:
- upgrade Ansible to at least 2.2.1 (officially released on Jan 16th);
- downgrade Ansible to 2.1.3;
- retain Ansible 2.2.0 and split the Install Apache 2 task into two, as sketched below:
  one task for the cache update only (maybe with changed_when set to false),
  one task for the actual apache2 package installation (without update_cache=yes), calling the handler.
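A minimal sketch of that split (task names are just examples; the notify placement follows the wording of the option above):
- name: Update APT cache
  apt:
    update_cache: yes
  changed_when: false
  when: ansible_distribution == "Ubuntu"

- name: Install Apache 2
  apt:
    name: apache2
    state: present
  when: ansible_distribution == "Ubuntu"
  notify:
    - reload apache2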