ansible error conflicting hosts, tasks when I include playbooks in role - ansible

My playbook directory structure:
/ansible_repo/
└── playbooks/
    ├── playbooks1.yml
    ├── playbooks2.yml
    ├── somerole.yml --> main playbook with roles
    └── roles/
        └── somerole
            ├── default
            │   └── main.yml
            ├── handler
            │   └── main.yml
            ├── tasks
            │   └── main.yml
            └── vars
                └── main.yml
playbooks1.yml:
---
- hosts: all
  tasks:
    - pause:
        minutes: 3
    - name: ping host
      win_ping:
somerole.yml:
---
- hosts: ci_host
  roles:
    - somerole
somerole/tasks/main.yml:
---
- include: playbooks/playbooks1.yml
When I run the role on some host:
ansible-playbook role-test.yml -vv --limit somehost
I get this error:
fatal: [somehost]: FAILED! =>
  reason: |-
    conflicting action statements: hosts, tasks
If I change the role's tasks/main.yml like this, it passes:
- pause:
    minutes: 3
- name: ping host
  win_ping:
I am trying to understand how to set hosts and tasks in both the role's tasks/main.yml and the playbook, while including the playbook from the role's task file.
If they conflict, can I configure some kind of host hierarchy?

The error indicates that you are including a playbook inside a role; a role's task file may contain only tasks, so play-level keywords such as hosts and tasks are not allowed there.
As somerole.yml is your main playbook, you can invoke other playbooks and roles as necessary.
Example:
- name: run playbook playbook1
  import_playbook: playbooks/playbooks1.yml

- hosts: ci_host
  roles:
    - somerole

- name: run playbook playbook2
  import_playbook: playbooks/playbooks2.yml
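If the goal is instead to reuse those tasks inside the role itself, they can live in a plain task file (tasks only, no play keywords) that the role pulls in with include_tasks. A minimal sketch, assuming the shared tasks are copied into a hypothetical file somerole/tasks/common.yml:

# somerole/tasks/common.yml -- contains only tasks, so it is valid inside a role
- pause:
    minutes: 3
- name: ping host
  win_ping:

# somerole/tasks/main.yml
- include_tasks: common.yml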

Related

Why does including var files using vars_files not work in Ansible?

I want to setup a server using Ansible. This is my file structure:
group_vars/
  all.yml
  development.yml
  production.yml
  vault/
    all.yml
    development.yml
    production.yml
playbooks/
  development.yml
  production.yml
roles/
  common/
    tasks/
      main.yml
    vars/
      main.yml
ansible.cfg
hosts
This is my ansible.cfg:
[defaults]
vault_password_file = ./vault_pass.txt
host_key_checking = False
inventory = ./hosts
The development.yml playbook:
- hosts: all
  name: Development Playbook
  become: true
  roles:
    - ../roles/common
  vars_files:
    - ../group_vars/development.yml
    - ../group_vars/all.yml
    - ../group_vars/vault/development.yml
    - ../group_vars/vault/all.yml
And the tasks/main.yml file of the common role:
# Set hostame
- name: Set hostname
  become: true
  ansible.builtin.hostname:
    name: "{{ server.hostname }}"

# Set timezone
- name: Set timezone
  become: true
  community.general.timezone:
    name: "{{ server.timezone }}"

# Update all packages
- name: Update all packages
  become: true
  ansible.builtin.apt:
    upgrade: dist
    update_cache: true
The group_vars/all.yml file looks like this:
server:
  hostname: "myhostname"
  timezone: "Europe/Berlin"
When running the playbook using ansible-playbook playbooks/development.yml, I get this error:
fatal: [default]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'hostname'. 'dict object' has no attribute 'hostname'\n\nThe error appears to be in '/ansible/roles/common/tasks/main.yml': line 6, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n# Set hostame\n- name: Set hostname\n ^ here\n"}
Can someone explain to me why the vars_files does not work and how to fix this?
Ansible automatically imports files and directories in group_vars that match the name of an active group. That is, if you are targeting the production group, and you have a file group_vars/production.yaml, the variables defined in this file will be imported automatically.
If instead of a file you have the directory group_vars/production/, then all files in that directory will be imported for hosts in the production group.
So your files in group_vars/vault/ will only be imported automatically for hosts in the vault hostgroup, which isn't the behavior you want.
Without knowing all the details about your deployment, I would suggest:
Create the directories group_vars/{all,development,production}.
Rename the inventory variables file group_vars/all.yml to group_vars/all/common.yml, and similarly for development.yml and production.yml (the name common.yml isn't special; you can use whatever name you want).
Rename group_vars/vault/all.yml to group_vars/all/vault.yml, and similarly for the other files.
This will give you the following layout:
group_vars/
├── all
│   ├── common.yml
│   └── vault.yml
├── development
│   ├── common.yml
│   └── vault.yml
└── production
    ├── common.yml
    └── vault.yml
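For this layout to take effect, the hosts must actually be members of the development and production inventory groups, since group_vars files and directories are matched by group name. A minimal inventory sketch (the hostnames are placeholders, not from the original question):

[development]
dev.example.com

[production]
prod.example.com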

How can I make my Ansible config less tightly coupled?

I'm trying to build an Ansible configuration with loosely coupled roles, but I'm struggling to find the best layout.
First of all, I created roles that are as elementary as possible; here is what my Ansible folder looks like:
.
├── group_vars
│   └── all
├── inventory.yml
├── playbook.yml
└── roles
    ├── app_1
    │   └── defaults
    ├── app_2
    │   └── defaults
    ├── postgres
    │   └── defaults
    ├── rabbitmq
    │   └── defaults
    └── redis
        └── defaults
inventory.yml
all:
  children:
    db:
      hosts:
        db.domain.com:
    app1:
      hosts:
        app1.domain.com:
    app2:
      hosts:
        app2.domain.com:
    cache:
      hosts:
        cache.domain.com:
playbook.yml
- name: "play db"
hosts: db
roles:
- postgres
- name: "play cache"
hosts: cache
roles:
- redis
- name: "play app1"
hosts: app1
roles:
- app_1
- rabbitmq
- name: "play app2"
hosts: app2
roles:
- app_2
- rabbitmq
The problem here is that I have no idea how different roles can share variables when they run on different hosts. For example, app_1 and app_2 need variables defined in redis and postgres.
I have two solutions:
Define all variables in group_vars/all => the problem is that there are a lot of variables, so the file will become too big, besides the duplication of variables (locally in the role + globally).
In each role I could say: if you need a variable from postgres, then use hostvars from the group "db". But I think the role is not supposed to know anything about host configuration.
I really have no idea how to solve this problem and end up with a clean config.
Thank you!
For testing purposes, each role needs to have its own variables, so you can test roles individually.
Variables also have scope and precedence; see: variable precedence.
So when you declare a variable at role scope, it will not be available to other roles. If you need a variable to be global, add it to the group_vars scope, host_vars scope, play scope or extra_vars scope (CLI). Either way, you will need to include it.
One way to reuse the variables from other roles or group_vars is to use vars_files to load them for the play you want.
For example, if your app1 hosts require variables defined in redis/defaults/main.yml:
- name: "play app1"
hosts: app1
vars_files:
- roles/redis/defaults/main.yml
roles:
- app_1
- rabbitmq
A better option, in my opinion, would be to segregate shared variables into group_vars and load them the same way for the other hosts.
- name: "play app2"
hosts: app2
vars_files:
- group_vars/db.yml
roles:
- app_2
- rabbitmq
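In that setup, group_vars/db.yml holds the values shared between the db play and the app plays. A minimal sketch of what it might contain (the variable names are illustrative, not taken from the original roles):

# group_vars/db.yml -- shared database settings (hypothetical names)
postgres_host: db.domain.com
postgres_port: 5432
postgres_db_name: app_db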

Ansible - malformed block was encountered while loading a block

Trying to run a playbook:
---
- name: azure authorization
  hosts: localhost
  become: yes
  gather_facts: true
  tasks:
    - azure_authorization_configuration
where the task looks like:
---
- name:
  stat: >
    path="{{ azure_subscription_authorization_configuration_file_dir }}"
  register: stat_dir_result
  tags:
    - azure
and the defaults main file looks like:
---
azure_subscription_authorization_configuration_file_dir: '~/.azure/'
The directory tree looks like:
├── hosts
├── playbooks
│   └── azure_authorization_playbook.yml
├── roles
│   ├── az_auth
│   │   ├── defaults
│   │   │   └── main.yml
│   │   └── tasks
│   │       └── main.yml
Ansible version: 2.9.1
Ansible playbook command line snippet:
/> ansible-playbook "/Users/user/Dev/Ansible/playbooks/azure_authorization_playbook.yml"
Output:
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
ERROR! A malformed block was encountered while loading a block
I don't have any idea which block was encountered while loading which block. Can anyone tell me where the issue is? Thanks!
The error is clearly coming from your playbook, because it doesn't call any roles or load any other playbooks. That is, if I put this in a file:
---
- name: azure authorization
  hosts: localhost
  become: yes
  gather_facts: true
  tasks:
    - azure_authorization_configuration
and try to run it, I get the same error. The issue is the entry in your tasks block: a task should be a dictionary, but you've provided only a string:
tasks:
  - azure_authorization_configuration
You include an example of a correctly written task in your question. If we put that into your playbook, it would look like:
- name: azure authorization
  hosts: localhost
  become: yes
  gather_facts: true
  tasks:
    - name:
      stat: >
        path="{{ azure_subscription_authorization_configuration_file_dir }}"
      register: stat_dir_result
      tags:
        - azure
I got this error because I had a syntax error in my playbook. Note the use of colons (':') in your playbook.
OK, now I know how my playbook should look. It was:
---
- name: azure authorization
  hosts: localhost
  become: yes
  gather_facts: true
  tasks:
    - azure_authorization_configuration
Should be:
---
- name: azure authorization
  hosts: localhost
  become: yes
  gather_facts: true
  roles:
    - azure_authorization_configuration
In my case the error was in the role: I missed a ':'.
The wrong code:
$ cat main.yml
---
# tasks file for db.local
- include pre_install.yml
- include my_sql.yml
The good code:
$ cat main.yml
---
# tasks file for db.local
- include: pre_install.yml
- include: my_sql.yml
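A side note beyond the original answers: the plain include action shown above is deprecated in newer Ansible releases (and removed in recent ansible-core versions) in favour of import_tasks / include_tasks, so a modern equivalent of the fixed task file would be:

---
# tasks file for db.local
- import_tasks: pre_install.yml
- import_tasks: my_sql.yml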

Including sub-roles in ansible playbook?

I have an Ansible role that requires other roles, but it is not loading correctly from my playbook. My role is here: Jenkins Role. It has dependencies on two other geerlingguy roles, which are listed here:
meta/main.yml
galaxy_info:
  author: Jd Daniel
  description: Jenkins installer
  company: GE Health
  min_ansible_version: 2.2
  role_name: jenkins
  license: GPLv3

  platforms:
    - name: EL
      versions:
        - 7

  galaxy_tags:
    - jenkins
    - deployment
    - continuous-deployment
    - cd

  dependencies:
    - { role: geerlingguy.repo-epel }
    - { role: geerlingguy.jenkins, jenkins_plugins: [] }
The ansible.cfg in this role also points to the roles/ directory:
[defaults]
# without this, the connection to a new instance is interactive
host_key_checking = False
roles_path = roles/
and the roles are downloaded into the roles/ folder
┌─[10:25:24]─[ehime#GC02WW38KHTD6E]─[~/Repositories/Infra/Ansible/ansible-role-jenkins]
└──> tree -L 2
.
├── [ehime 34K] LICENSE
├── [ehime 1.3K] README.md
├── [ehime 128] ansible.cfg
├── [ehime 96] defaults
│   └── [ehime 32] main.yml
├── [ehime 96] files
│   └── [ehime 633] job.xml
├── [ehime 96] handlers
│   └── [ehime 33] main.yml
├── [ehime 96] meta
│   └── [ehime 417] main.yml
├── [ehime 160] roles
│   ├── [ehime 384] geerlingguy.java
│   ├── [ehime 416] geerlingguy.jenkins
│   └── [ehime 320] geerlingguy.repo-epel
├── [ehime 96] tasks
│   └── [ehime 737] main.yml
├── [ehime 352] tests
│   ├── [ehime 669] README.md
│   ├── [ehime 276] Vagrantfile
│   ├── [ehime 121] ansible.cfg
│   ├── [ehime 203] inventory
│   ├── [ehime 221] requirements.yml
│   ├── [ehime 96] roles
│   ├── [ehime 10] test_7_default.retry
│   └── [ehime 182] test_7_default.yml
└── [ehime 96] vars
    └── [ehime 91] main.yml
My tasks should be pulling these in then, right?
tasks/main.yml
---
- name: Install required roles
  include_role:
    name: "{{ roles }}"
  vars:
    roles:
      - geerlingguy.epel-repo
      - geerlingguy.jenkins

.... other tasks ....
When running my playbook though...
jenkins.yml
#
# Ansible to provision Jenkins on remote host
#
- name: Install Jenkins and its plugins
  hosts: all
  become: yes
  become_method: sudo
  gather_facts: yes
  vars:
    jenkins_hostname: localhost
    jenkins_http_port: 8080
  roles:
    - ehime.jenkins
  pre_tasks:
    - name: CA-Certificates update command line execution
      command: /bin/update-ca-trust
  tasks:
    - name: Set up pipeline
      jenkins_job:
        config: "{{ lookup('file', 'files/job.xml') }}"
        name: test-auto
        user: "{{ jenkins_admin_username }}"
        password: "{{ jenkins_admin_password }}"
With my playbook's config...
ansible.cfg
[defaults]
# without this, the connection to a new instance is interactive
host_key_checking = False
roles_path = roles/
remote_user = ec2-user
private_key_file = ../_keys/test-jenkins
I get the following error....
error
TASK [include_role : {{ roles }}] ************************************************************************************************************************************************************************************************************
ERROR! Invalid role definition: [u'geerlingguy.epel-repo', u'geerlingguy.jenkins']
It's evidently NOT seeing the roles in roles/ehime.jenkins/roles, but I'm not sure how to get those working. It also seems like it ignores my meta/main.yml for a galaxy install? Should these be in requirements.yml?
Like an idiot, I had my dependencies tabbed in too far...
galaxy_info:
  author: Jd Daniel
  description: Jenkins installer
  company: GE Health
  min_ansible_version: 2.2
  role_name: jenkins
  license: GPLv3

  platforms:
    - name: EL
      versions:
        - 7

  galaxy_tags:
    - jenkins
    - deployment
    - continuous-deployment
    - cd

  # Issue is here....
  dependencies:
    - { role: geerlingguy.repo-epel }
    - { role: geerlingguy.jenkins, jenkins_plugins: [] }
should have been
galaxy_info:
  author: Jd Daniel
  description: Jenkins installer
  company: GE Health
  min_ansible_version: 2.2
  role_name: jenkins
  license: GPLv3

  platforms:
    - name: EL
      versions:
        - 7

  galaxy_tags:
    - jenkins
    - deployment
    - continuous-deployment
    - cd

# Issue is here....
dependencies:
  - { role: geerlingguy.repo-epel }
  - { role: geerlingguy.jenkins, jenkins_plugins: [] }
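On the closing question about requirements.yml: dependencies declared in meta/main.yml are resolved when the role runs, but the depended-upon roles still have to be present locally. One common approach (a sketch, not the only option) is to list them in a requirements.yml:

# requirements.yml
- src: geerlingguy.repo-epel
- src: geerlingguy.jenkins

and install them into the roles/ path referenced by roles_path with:
ansible-galaxy install -r requirements.yml -p roles/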

Is there any way to run multiple Ansible playbooks as multiple users more efficiently?

Currently my playbook structure is like this:
~/test_ansible_roles ❯❯❯ tree .
.
├── checkout_sources
│   └── tasks
│       └── main.yml
├── install_dependencies
│   └── tasks
│       └── main.yml
├── make_dirs
│   └── tasks
│       └── main.yml
├── setup_machine.yml
One of my roles installs dependencies on my box, so for this I need sudo. Because of that, in all of my other tasks I need to include the stanza:
become: yes
become_user: my_username
Is there a better way to do this?
You can set the become options per:
play
role
task
Per play:
- hosts: whatever
  become: true
  become_user: my_username
  roles:
    - checkout_sources
    - install_dependencies
    - make_dirs
Per role:
- hosts: whatever
  roles:
    - checkout_sources
    - role: install_dependencies
      become: true
      become_user: my_username
    - make_dirs
Per task:
- shell: do something
  become: true
  become_user: my_username
You can combine this however you like. The play can run as user A, a role as user B and finally a task inside the role as user C.
Defining become per play or role is rarely needed. If a single task inside a role requires sudo it should only be defined for that specific task and not the role.
If multiple tasks inside a role require become, blocks come in handy to avoid recurrence:
- block:
    - shell: do something
    - shell: do something
    - shell: do something
  become: true
  become_user: my_username
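To illustrate the combination mentioned above (the play as user A, a role as user B, a task inside that role as user C), a small sketch with placeholder user names:

- hosts: whatever
  become: true
  become_user: user_a        # the play runs as user A
  roles:
    - checkout_sources
    - role: install_dependencies
      become_user: user_b    # this role runs as user B

# inside install_dependencies/tasks/main.yml
- shell: do something
  become_user: user_c        # this single task runs as user C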
