Iterate over a dict object using Ansible

I have a local.yml file, shown below, where I define the variable JDK_VERSION as a list and call the role from a different repo:

- role: jdk_install
  vars:
    JDK_VERSION: [ 'jdk1.8', 'jdk11', 'jdk17' ]
The list items from local.yml are defined as keys in the role's vars.yml:

JDK_VERSIONS:
  "jdk1.8": ["1.8.0_352", "1.8.0_322"]
  "jdk11": ["11.0.17_8", "11.0.18_8"]
  "jdk17": "17.0.5_8"

1.8.0_352:
  "package_name": "sincro-jdk-1.8.0_352"
  "jdk_dirname": "jdk1.8.0_352"
  "sym_link": "/opt/jdk1.8"
  "installer": "rpm"

1.8.0_322:
  "package_name": "sincro-jdk-1.8.0_322"
  "jdk_dirname": "jdk1.8.0_322"
  "sym_link": "/opt/jdk1.8"
  "installer": "rpm"

11.0.17_8:
  "download_url": https://artifactory.sincrod.com/artifactory/github-releases/adoptium/temurin11-binaries/releases/download/jdk-11.0.17+8/OpenJDK11U-jdk_x64_linux_hotspot_11.0.17_8.tar.gz
  "package_name": "OpenJDK11U-jdk_x64_linux_hotspot_11.0.17_8.tar.gz"
  "jdk_dirname": "jdk-11.0.17+8"
  "sym_link": "/opt/jdk11"
  "installer": "tar"

17.0.5_8:
  "download_url": https://artifactory.sincrod.com/artifactory/github-releases/adoptium/temurin17-binaries/releases/download/jdk-17.0.5+8/OpenJDK17U-jdk_x64_linux_hotspot_17.0.5_8.tar.gz
  "package_name": "OpenJDK17U-jdk_x64_linux_hotspot_17.0.5_8.tar.gz"
  "jdk_dirname": "jdk-17.0.5+8"
  "sym_link": "/opt/jdk17"
  "installer": "tar"
The main.yml file looks like this:

---
- name: Install Java
  include: install_jdk.yml
  vars:
    install_jdk: "{{ item }}"
  with_items:
    - "{{ JDK_VERSION }}"

- name: Setting default jdk
  include: default_jdk.yml
  vars:
    default_jdk: "{{ JDK_VERSION.0 }}"
And the install_jdk.yml file looks like this:

---
- name: JDK version
  debug:
    msg: "{{ install_jdk }}"

- name: Process JDK details
  set_fact:
    jdk_details: "{{ lookup('vars', JDK_VERSIONS[install_jdk], default='1.8.0_352') }}"

- name: Print JDK version
  debug:
    msg: "{{ jdk_details }}"

- name: Installing JDK from rpm
  yum:
    name: "{{ item }}"
    update_cache: true
    state: installed
  when: jdk_details.installer == "rpm"

- name: Installing JDK from source
  block:
    - name: download jdk tar
      get_url:
        url: "{{ jdk_details.download_url }}"
        dest: /tmp
        mode: 0755
        group: root
        owner: root

    - name: Create jdk installation directory path
      file:
        path: "/opt/data/services/jdks/"
        state: directory

    - name: Untar JDK installation files
      unarchive:
        src: /tmp/{{ jdk_details.package_name }}
        dest: /opt/data/services/jdks
        remote_src: True
  when: jdk_details.installer == "tar"

- name: create symlink for JDK version
  file:
    src: "/opt/data/services/jdks/{{ jdk_details.jdk_dirname }}"
    dest: "{{ jdk_details.sym_link }}"
    state: link
    force: yes
    follow: False
I want to iterate over the list defined in local.yml. Each item of that list also acts as a key in vars.yml, and vars.yml (vars/main.yml) maps each key to multiple values. So I essentially want to iterate over a nested structure, something like list[dict[list]]. In the example above the key "jdk1.8": ["1.8.0_352", "1.8.0_322"] has two items, so it should install two packages.
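One way to get this nested iteration (a sketch, not from the original post — the filters are standard Jinja/Ansible, the variable layout is the asker's) is to expand each major version into its build list and flatten the result, so every build becomes one loop item:

```yaml
# main.yml - sketch: JDK_VERSION is ['jdk1.8', 'jdk11', 'jdk17'];
# extract maps each key to its entry in JDK_VERSIONS, flatten merges
# the per-version lists (plain strings like "17.0.5_8" pass through).
- name: Install Java
  include_tasks: install_jdk.yml
  vars:
    install_jdk: "{{ item }}"
  loop: "{{ JDK_VERSION | map('extract', JDK_VERSIONS) | flatten }}"
```

Inside install_jdk.yml, install_jdk is then a single build key such as 1.8.0_352, so the details could be fetched with something like jdk_details: "{{ lookup('vars', install_jdk) }}". For the jdk1.8 entry with two builds this yields two iterations, hence two installed packages.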

Related

In Ansible loop, test existence of files from registered results

I have several files that I need to back up in different directories. I tried the code below, but it is not working for me:
vars:
  file_vars:
    - {name: /file1}
    - {name: /etc/file2}
    - {name: /etc/file/file3}
tasks:
  - name: "Checking if config files exists"
    stat:
      path: "{{ item.name }}"
    with_items: "{{ file_vars }}"
    register: stat_result

  - name: Backup Files
    copy: src={{ item.name }} dest={{ item.name }}{{ ansible_date_time.date }}.bak
    with_items: "{{ file_vars }}"
    remote_src: yes
    when: stat_result.stat.exists == True
The problem is the condition
when: stat_result.stat.exists == True
There is no attribute stat_result.stat. Instead, the attribute stat_result.results is a list of the results from the loop. It's possible to create a dictionary of files and their statuses. For example
- set_fact:
    files_stats: "{{ dict(my_files|zip(my_stats)) }}"
  vars:
    my_files: "{{ stat_result.results|json_query('[].item.name') }}"
    my_stats: "{{ stat_result.results|json_query('[].stat.exists') }}"
Then simply use this dictionary in the condition
when: files_stats[item.name]
Below is a shorter version which creates the dictionary more efficiently
- set_fact:
    files_stats: "{{ dict(stat_result.results|
                          json_query('[].[item.name, stat.exists]')) }}"
Please try the below; it worked for me:
---
- name: Copy files
  hosts: localhost
  become: yes
  become_user: root
  vars_files:
    - files.yml
  tasks:
    - name: "Checking if config files exists"
      stat:
        path: "{{ item }}"
      with_items: "{{ files }}"
      register: stat_result

    - name: Ansible
      debug:
        msg: "{{ stat_result }}"

    - name: Backup Files
      copy:
        src: "{{ item.item }}"
        dest: "{{ item.item }}.bak"
      with_items: "{{ stat_result.results }}"
      when: item.stat.exists
and files.yml will look like:
---
files:
  - /tmp/file1
  - /tmp/file2
You can check your playbook syntax using the command below:
ansible-playbook copy.yml --syntax-check
You can also do a dry run of your playbook before the actual execution:
ansible-playbook -i localhost copy.yml --check

Is it possible to install custom Ansible plugin from git

I'd like to share a custom inventory plugin across multiple playbooks and users.
Is it possible to host a custom inventory plugin on git and, like roles with requirements.yml, do something like:
ansible-galaxy install -r requirements.yml
I tried to embed it in a role using:
/myrole/library/inventory_plugins/custom_inventory.py
/myrole/plugins/inventory_plugins/custom_inventory.py
/myrole/inventory_plugins/custom_inventory.py
but so far no luck.
Q: "Is it possible to install custom Ansible plugin from git?"
A: Yes. It's possible. For example
1) Download and extract the plugins
vars:
  ma_src_path: "/usr/local/ansible/src"
  ma_plugins_path: "/usr/local/ansible/plugins"
  map_mitogen_ver: 0.2.8
  map_mitogen_sha256: "sha256:1bfca66bcc522346c9167a3a9829feac5ee3b84431e49354fb780e4b9a4b0eee"
  ma_plugins:
    - archive: mitogen-{{ map_mitogen_ver }}.tar.gz
      archive_url: https://networkgenomics.com/try/mitogen-{{ map_mitogen_ver }}.tar.gz
      checksum: "{{ map_mitogen_sha256 }}"
      plugins:
        - path: mitogen-{{ map_mitogen_ver }}/ansible_mitogen/plugins/strategy
          ini_key: strategy_plugins
          enable: true
tasks:
  - name: "plugins: Download archives"
    get_url:
      url: "{{ item.archive_url }}"
      dest: "{{ ma_src_path }}"
      checksum: "{{ item.checksum }}"
    loop: "{{ ma_plugins }}"

  - name: "plugins: Extract archives"
    unarchive:
      src: "{{ ma_src_path }}/{{ item.archive }}"
      dest: "{{ ma_plugins_path }}"
    loop: "{{ ma_plugins }}"
2) Configure the plugins with template ansible-plugins.cfg.j2
vars:
  ma_config:
    - path: "/etc/ansible/ansible.cfg"
      template: "ansible-plugins.cfg.j2"
      owner: "root"
      group: "root"
      mode: "0644"
      config:
        - { section: "defaults", key: "inventory", value: "/etc/ansible/hosts" }
        - { section: "defaults", key: "strategy", value: "mitogen_linear" }
tasks:
  - name: "configure: Ansible configuration from template"
    template:
      src: "{{ item.template }}"
      dest: "{{ item.path }}"
      owner: "{{ item.owner }}"
      group: "{{ item.group }}"
      mode: "{{ item.mode }}"
      backup: "{{ ma_backup_conf }}"
    loop: "{{ ma_config }}"
    when: ma_config|length > 0
See the complete role at Ansible Galaxy.
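As an aside (not part of the original answer; the repository URLs below are placeholders): ansible-galaxy can also install a role straight from git via requirements.yml, and since Ansible 2.10 a collection — which, unlike a role, can ship inventory plugins under plugins/inventory/ — can be installed from git as well:

```yaml
# requirements.yml - sketch with placeholder repository URLs
roles:
  - src: https://github.com/example/myrole.git
    scm: git
    version: master
    name: myrole

collections:
  - name: https://github.com/example/my_collection.git
    type: git
    version: main
```

Roles likely cannot provide inventory plugins at all, because inventory is parsed before any role content is loaded — which would explain why the /myrole/... attempts in the question did not work; a collection is the supported distribution vehicle for them.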

Ansible find module giving error "does not seem to be a valid directory or it cannot be accessed" absolute path

The Ansible find module isn't working as expected.
I have three instances: one test node, a second controller node, and a third from where I am running my Ansible playbook.
I am trying to generate ssh keys on the test nodes and then fetch the public keys from those nodes. This is working fine.
Then I am trying to append these public keys to the authorized_keys file of a different host (controller_node). For this, I am using the find module to get the list of files and then loop over these files in the authorized_key module.
I was using:
- name: Set authorized key file taken from file
  authorized_key:
    user: absrivastava
    key: "{{ lookup('file','item') }}"
    state: present
  #with_file:
  #  - "/home/absrivastava/ANSIBLE/ssh-keys/*/home/ribbon/.ssh/id_rsa.pub"  # This didn't work
  #with_filetree:
  #  - "/home/absrivastava/ANSIBLE/ssh-keys/*/home/ribbon/.ssh/id_rsa.pub"  # This was not appending data
But it didn't seem to work, so I am using find to get the list of files and then iterate over them:
- name: Generate ssh keys
  hosts: media_nodes
  gather_facts: false
  tasks:
    - name: key generation
      openssh_keypair:
        path: ~/.ssh/id_ssh_rsa
        force: True
      register: public_key

    - debug:
        var: public_key.public_key

    - name: fetch public key from all nodes
      fetch:
        src: ~/.ssh/id_ssh_rsa.pub
        dest: ssh-keys/

- name: Controller play
  hosts: controller
  gather_facts: false
  tasks:
    - name: Find list of public key files
      find:
        paths: /home/abhilasha/ANSIBLE/ssh-keys/
        file_type: file
        recurse: yes
        patterns: ".*pub"
        use_regex: yes
      register: files_matched

    - name: debug files matched
      debug:
        var: files_matched.files

    - name: Debug files_matched loop
      debug:
        var: item.path
      loop: "{{ files_matched.files|flatten(levels=1) }}"
      loop_control:
        label: "{{ item.path }}"

    - name: Set authorized key file taken from file
      authorized_key:
        key: "{{ lookup('file','item') }}"
        state: present
      with_file:
        - "{{ files_matched.files }}"
The "Find list of public key files" task is not working, giving this error:
TASK [Find list of public keys] *****************************************************************************************************************************************************************************************************************
ok: [test_controller] => {"changed": false, "examined": 0, "files": [], "matched": 0, "msg": "/home/abhilasha/ANSIBLE/ssh-keys/ was skipped as it does not seem to be a valid directory or it cannot be accessed\n"}
Okay, so I got the issue: I was using hosts: controller for this play, but the files are on my test VM instance.
But I am still not sure how to solve my problem. I want to use the public keys on my local machine and then append them to the controller server.
- name: Fetch public key files from localhost
  gather_facts: false
  hosts: 127.0.0.1
  connection: local
  tasks:
    - name: Find list of public keys
      find:
        paths: ssh-keys/
        file_type: file
        recurse: yes
        patterns: "pub"
        use_regex: yes
        hidden: yes
      register: files_matched

    - name: Debug files_matched loop
      debug:
        var: item.path
      loop: "{{ files_matched.files|flatten(levels=1) }}"
      loop_control:
        label: "{{ item.path }}"

- name: Add Public keys to controller authorized keys
  hosts: controller
  gather_facts: false
  tasks:
    - name: Set authorized key file taken from file
      authorized_key:
        key: "{{ lookup('file','item') }}"
        state: present
      with_file:
        - "{{ files_matched.files }}"
I am unable to use the files_matched variable outside the scope of that play. How can I make this work? Thanks in advance.
Q: "msg": "/home/abhilasha/ANSIBLE/ssh-keys/ was skipped as it does not seem to be a valid directory or it cannot be accessed\n"
A: Take a look at the directory ssh-keys/ on the controller and check its content. Instead of
paths: /home/abhilasha/ANSIBLE/ssh-keys/
search the same relative path the keys were fetched to, i.e.
paths: ssh-keys/
because the fetch task used
dest: ssh-keys/
Can you change the paths as below and try:
- name: fetch public key from all nodes
  fetch:
    src: ~/.ssh/id_ssh_rsa.pub
    dest: /tmp/ssh-keys/

- name: Find list of public key files
  find:
    paths: /tmp/ssh-keys/
    file_type: file
    recurse: yes
    patterns: ".*pub"
    use_regex: yes
  register: files_matched
If you are trying to copy/find files from your local machine to the remote, by default the find module will run on the remote, not your local machine. As a result, the error will be thrown if those directories don't exist on your remote.
So you just tell it to "find" on your local machine by specifying delegate_to: localhost and it should work.
tasks:
  - find:
      paths:
        - local_dir1
        - local_dir2
      file_type: file
      patterns: '*.tgz'
    register: files_output
    # Execute task on this host instead of the target (inventory_hostname).
    delegate_to: localhost

  - block:
      - set_fact:
          files: "{{ files_output.files | map(attribute='path') }}"

      - set_fact:
          files: '{{ files + more }}'
        vars:
          more:
            - '{{ playbook_dir }}/remote.sh'

      - debug:
          msg: '{{ item }}'
        loop: '{{ files }}'

Problem with creating consul cluster using ansible

I'm trying to create a Consul cluster using Ansible, following this example: https://github.com/brianshumate/ansible-consul . I'm using its Vagrantfile to bring up 3 Ubuntu machines.
The problem is that the task Install unzip package seems to always fail with this error message:
fatal: [consul1.consul -> localhost]: FAILED! => {"changed": false, "msg": "Could not detect which package manager to use. Try gathering facts or setting the \"use\" option."}
Ansible seems unable to recognize the package manager, even though ansible localhost -m setup | grep mgr shows that the variable ansible_pkg_mgr has the value apt.
I'm not sure what the source of the problem could be. I tried bringing up 3 Debian machines instead and I still have the same problem.
UPDATE:
Here's the task file for Consul:
---
# File: install.yml - package installation tasks for Consul

- name: Install OS packages
  package:
    name: "{{ item }}"
    state: present
  with_items: "{{ consul_os_packages }}"
  tags: installation

- name: Read package checksum file
  local_action:
    module: stat
    path: "{{ role_path }}/files/consul_{{ consul_version }}_SHA256SUMS"
  become: no
  run_once: true
  register: consul_checksum
  tags: installation

- name: Download package checksum file
  local_action:
    module: get_url
    url: "{{ consul_checksum_file_url }}"
    dest: "{{ role_path }}/files/consul_{{ consul_version }}_SHA256SUMS"
  become: no
  run_once: true
  tags: installation
  when: not consul_checksum.stat.exists | bool

- name: Read package checksum
  local_action:
    module: shell
    cmd: grep "{{ consul_pkg }}" "{{ role_path }}/files/consul_{{ consul_version }}_SHA256SUMS" | awk '{print $1}'
  become: no
  run_once: true
  register: consul_sha256
  tags: installation

- name: Check Consul package file
  local_action:
    module: stat
    path: "{{ role_path }}/files/{{ consul_pkg }}"
  become: no
  run_once: true
  register: consul_package
  tags: installation

- name: Download Consul package
  local_action:
    module: get_url
    url: "{{ consul_zip_url }}"
    dest: "{{ role_path }}/files/{{ consul_pkg }}"
    checksum: "sha256:{{ consul_sha256.stdout }}"
    timeout: "42"
  become: no
  run_once: true
  tags: installation
  when: not consul_package.stat.exists | bool

- name: Update alpine package manager (apk)
  local_action:
    module: apk
    update_cache: yes
  run_once: true
  when: lookup('file','/etc/alpine-release')

- name: Install unzip package
  local_action:
    module: package
    name: unzip
    state: present
  run_once: true
  when:
    - consul_install_dependencies | bool

- name: Unarchive Consul package
  local_action:
    module: unarchive
    src: "{{ role_path }}/files/{{ consul_pkg }}"
    dest: "{{ role_path }}/files/"
    creates: "{{ role_path }}/files/consul"
  become: no
  run_once: true
  tags: installation

- name: Install Consul
  copy:
    src: "{{ role_path }}/files/consul"
    dest: "{{ consul_bin_path }}/consul"
    owner: "{{ consul_user }}"
    group: "{{ consul_group }}"
    mode: 0755
  tags: installation

- name: Daemon reload systemd in case the binaries upgraded
  command: systemctl daemon-reload
  become: yes
  notify: restart consul
  when:
    - ansible_service_mgr == "systemd"
    - consul_install_upgrade

- name: Cleanup
  local_action: file path="{{ item }}" state="absent"
  become: no
  with_fileglob: "{{ role_path }}/files/consul"
  run_once: true
  tags: installation
The problem was with the Alpine package manager (apk): somehow it caused an error on Ubuntu, so all I did was remove that task and use apt instead of the generic package module for unzip.
Here's the new version of the task file:
---
# File: install.yml - package installation tasks for Consul

- name: Install OS packages
  package:
    name: "{{ item }}"
    state: present
  with_items: "{{ consul_os_packages }}"
  tags: installation

- name: Read package checksum file
  local_action:
    module: stat
    path: "{{ role_path }}/files/consul_{{ consul_version }}_SHA256SUMS"
  become: no
  run_once: true
  register: consul_checksum
  tags: installation

- name: Download package checksum file
  local_action:
    module: get_url
    url: "{{ consul_checksum_file_url }}"
    dest: "{{ role_path }}/files/consul_{{ consul_version }}_SHA256SUMS"
  become: no
  run_once: true
  tags: installation
  when: not consul_checksum.stat.exists | bool

- name: Read package checksum
  local_action:
    module: shell
    cmd: grep "{{ consul_pkg }}" "{{ role_path }}/files/consul_{{ consul_version }}_SHA256SUMS" | awk '{print $1}'
  become: no
  run_once: true
  register: consul_sha256
  tags: installation

- name: Check Consul package file
  local_action:
    module: stat
    path: "{{ role_path }}/files/{{ consul_pkg }}"
  become: no
  run_once: true
  register: consul_package
  tags: installation

- name: Download Consul package
  local_action:
    module: get_url
    url: "{{ consul_zip_url }}"
    dest: "{{ role_path }}/files/{{ consul_pkg }}"
    checksum: "sha256:{{ consul_sha256.stdout }}"
    timeout: "42"
  become: no
  run_once: true
  tags: installation
  when: not consul_package.stat.exists | bool

- name: Install unzip package
  apt:
    name: unzip
    state: present
  run_once: true
  when:
    - consul_install_dependencies | bool

- name: Unarchive Consul package
  local_action:
    module: unarchive
    src: "{{ role_path }}/files/{{ consul_pkg }}"
    dest: "{{ role_path }}/files/"
    creates: "{{ role_path }}/files/consul"
  become: no
  run_once: true
  tags: installation

- name: Install Consul
  copy:
    src: "{{ role_path }}/files/consul"
    dest: "{{ consul_bin_path }}/consul"
    owner: "{{ consul_user }}"
    group: "{{ consul_group }}"
    mode: 0755
  tags: installation

- name: Daemon reload systemd in case the binaries upgraded
  command: systemctl daemon-reload
  become: yes
  notify: restart consul
  when:
    - ansible_service_mgr == "systemd"
    - consul_install_upgrade

- name: Cleanup
  local_action: file path="{{ item }}" state="absent"
  become: no
  with_fileglob: "{{ role_path }}/files/consul"
  run_once: true
  tags: installation
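The likely underlying cause (my reading, not stated in the original answer): the failing task runs the generic package module via local_action, and package picks its backend from ansible_pkg_mgr, which is only populated by fact gathering for localhost — this play gathered facts for the Vagrant hosts, not for the control node. The error message itself points at the two fixes; a sketch:

```yaml
# Sketch - two ways to make the generic package module work via local_action:

# (a) set the backend explicitly with the "use" option from the error message
#     (assumption: an apt-based control node)
- name: Install unzip package
  local_action:
    module: package
    name: unzip
    state: present
    use: apt
  run_once: true

# (b) or gather facts for localhost first, so ansible_pkg_mgr gets populated
- name: Gather facts for the control node
  setup:
  delegate_to: localhost
  run_once: true
```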

Trying to ignore task on certain playbook

Right now I have two playbooks that I execute. One for a code_update and one for the entire site update. I'd like to have a "Quick Code Update", that doesn't run any of the NPM tasks.
Currently the task looks as such:
---
- name: Create app database
  mysql_db: name={{ app_db_name }} state=present

- name: Copy deploy key
  copy:
    src: templates/path.pem
    dest: "{{ ssh_key_path }}"
    owner: "{{ app_user }}"
    mode: 0600

- name: Create app dir
  file:
    path: "{{ app_dir }}"
    state: directory
    mode: 0755
    owner: "{{ app_user }}"

- name: Create log dir
  file:
    path: "{{ log_dir }}"
    state: directory
    recurse: yes
    mode: 0755
    owner: "{{ app_user }}"

- name: Pull sixnexus app
  git:
    repo: "{{ app_repo }}"
    dest: "{{ app_dir }}"
    version: master
    force: yes
    accept_hostkey: yes
    ssh_opts: "-i {{ ssh_key_path }}"

- name: Create venv
  pip:
    virtualenv: "{{ app_env }}"
    requirements: "{{ app_dir }}/requirements.txt"

- name: Copy local_settings.py
  template:
    src: templates/local_settings.py.j2
    dest: "{{ app_dir }}/local_settings.py"
    owner: "{{ app_user }}"
    mode: 0755

- name: Run migrations
  django_manage:
    command: migrate
    app_path: "{{ app_dir }}"
    virtualenv: "{{ app_env }}"

# Partially doing this to save on memory
- name: Stop Elasticsearch
  service: name=elasticsearch state=stopped

- name: Install react deps
  command: npm install chdir=/home/ubuntu/path/app/react_ui

- name: Package react app
  command: npm run package chdir=/home/ubuntu/path/app/react_ui

- name: Start Elasticsearch
  service: name=elasticsearch state=started

- name: Run collectstatic
  django_manage:
    command: collectstatic
    app_path: "{{ app_dir }}"
    virtualenv: "{{ app_env }}"
I'd like to edit the play to look like this:
---
- name: Create app database
  mysql_db: name={{ app_db_name }} state=present

- name: Copy deploy key
  copy:
    src: templates/sixnexus_deploy_key.pem
    dest: "{{ ssh_key_path }}"
    owner: "{{ app_user }}"
    mode: 0600

- name: Create app dir
  file:
    path: "{{ app_dir }}"
    state: directory
    mode: 0755
    owner: "{{ app_user }}"

- name: Create log dir
  file:
    path: "{{ log_dir }}"
    state: directory
    recurse: yes
    mode: 0755
    owner: "{{ app_user }}"

- name: Pull sixnexus app
  git:
    repo: "{{ app_repo }}"
    dest: "{{ app_dir }}"
    version: master
    force: yes
    accept_hostkey: yes
    ssh_opts: "-i {{ ssh_key_path }}"

- name: Create venv
  pip:
    virtualenv: "{{ app_env }}"
    requirements: "{{ app_dir }}/requirements.txt"

- name: Copy local_settings.py
  template:
    src: templates/local_settings.py.j2
    dest: "{{ app_dir }}/local_settings.py"
    owner: "{{ app_user }}"
    mode: 0755

- name: Run migrations
  django_manage:
    command: migrate
    app_path: "{{ app_dir }}"
    virtualenv: "{{ app_env }}"

# Partially doing this to save on memory
- name: Stop Elasticsearch
  service: name=elasticsearch state=stopped

- name: Start Elasticsearch
  service: name=elasticsearch state=started

- name: Run collectstatic
  django_manage:
    command: collectstatic
    app_path: "{{ app_dir }}"
    virtualenv: "{{ app_env }}"
But I cannot figure out how to incorporate a second file for my tasks.
You have to tag the related tasks:

- name: Install react deps
  command: npm install chdir=/home/ubuntu/path/app/react_ui
  tags:
    - npm

- name: Package react app
  command: npm run package chdir=/home/ubuntu/path/app/react_ui
  tags:
    - npm
And when you call your playbook, you have to specify the --skip-tags option:
$ ansible-playbook main.yml --skip-tags "npm"
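For completeness (not part of the original answer): tags are additive by default, so the full site update needs no extra flags, and the same tag can also be used to run only the NPM steps on their own:

```shell
# Full site update: tagged tasks still run by default
ansible-playbook main.yml

# Quick code update: everything except the npm-tagged tasks
ansible-playbook main.yml --skip-tags "npm"

# Only the npm tasks (e.g. to rebuild the react app alone)
ansible-playbook main.yml --tags "npm"
```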
