I need to perform a few tasks on the same hosts but want to group the tasks into different roles, which need to share some output with each other. Consider the example below:
├── hosts
├── playbooks
│   └── Playbook1.yml
└── roles
    ├── role1
    │   ├── files
    │   │   └── project1.conf
    │   ├── handlers
    │   │   └── main.yml
    │   ├── meta
    │   │   └── main.yml
    │   ├── tasks
    │   │   └── main.yml   [creates variables role1_a and role1_b]
    │   ├── templates
    │   └── vars
    │       └── main.yml
    ├── role2
    │   ├── handlers
    │   │   └── main.yml
    │   ├── meta
    │   │   └── main.yml
    │   ├── tasks
    │   │   └── main.yml   [uses role1_a from role1 and creates variable role2_c]
    │   ├── templates
    │   └── vars
    │       └── main.yml
    └── role3
        ├── handlers
        │   └── main.yml
        ├── meta
        │   └── main.yml
        ├── tasks
        │   └── main.yml   [uses role1_b from role1 and role2_c from role2]
        ├── templates
        └── vars
            └── main.yml
Is there any way to collect the output of role1 and pass it to role2 and role3, like this:
- hosts: localhost
roles:
- role: role1_a, role1_b = {role1}
- role: role2_c = {role2 role1_a: role1_a, role1_b: role1_b}
- role: {role2 role1_b: role1_b role2_c: role2_c}
or any other mechanism to share variables between the roles?
What about including tasks in role1 that store the values of role1_a and role1_b in the variable definitions of role2 and role3, by combining local_action with lineinfile?
- name: Copy value of role1_a to var definition of role2
local_action:
module: lineinfile
path: ~/ansible/roles/role2/vars/main.yml
regexp: '^role1_a'
line: "role1_a: {{ role1_a }}"
- name: Copy value of role1_b to var definition of role3
local_action:
module: lineinfile
dest: ~/ansible/roles/role3/vars/main.yml
regexp: '^role1_b'
line: "role1_b: {{ role1_b }}"
I have the following inventory
[foo]
foosrv ansible_host=11.22.33.44
[bar]
barsrv ansible_host=44.11.22.33
[zoo]
zoosrv ansible_host=21.21.21.21
and the following file structure
.
├── ansible.cfg
├── files
│ └── aws.yaml
├── host_vars
│ ├── foosrv
│ │ ├── templates
│ │ │ └── repoConf.yaml.j2
│ │ └── vault
│ └── barsrv
│ ├── templates
│ │ └── repoConf.yaml.j2
│ └── vault
├── inventory
├── site.yaml
├── templates
│ ├── amend.py.j2
│ └── config.j2
└── vars
├── ansible_vars.yaml
└── vault
My problem is that any variables under ./vars/vault are not recognized by Ansible.
Any hints about what might be the issue here?
Ansible doesn't automatically load variables from a vars directory.
You have several options for loading variables:
Ansible will load variables from a file (or directory) in host_vars matching the inventory hostname.
Ansible will load variables from a file (or directory) in group_vars matching the names of groups of which a host is a member.
You can use the include_vars task to explicitly load variables from a file.
If you want to load the variables for all hosts automatically, then place them in group_vars/all.yaml (or in files under a group_vars/all/ directory).
Ansible will automatically detect (and decrypt, if you've provided a password) files that are vault encrypted.
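For example, an explicit include_vars task would look roughly like this (the play and file paths are illustrative, taken from the tree above):
- hosts: all
  tasks:
    - name: Load shared (possibly vault-encrypted) variables
      include_vars:
        file: vars/ansible_vars.yaml
Alternatively, moving the files under group_vars/all/ makes Ansible pick them up for every host automatically:
group_vars/
└── all/
    ├── ansible_vars.yaml
    └── vault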
I'm looking for a way to load a local ansible.cfg from the playbook directory (playbook_dir).
This is the folder structure of my playbooks:
ansible
├── deploy_manager
│ ├── ansible.cfg
│ ├── deploy_manager.yml
│ ├── environments
│ │ ├── demo
│ │ │ ├── group_vars
│ │ │ │ └── demo.yml
│ │ │ ├── inventory.yml
│ │ │ └── vars
│ │ │ └── vault.yml
│ │ ├── int
│ │ │ ├── group_vars
│ │ │ │ └── int.yml
│ │ │ ├── inventory.yml
│ │ │ └── vars
│ │ │ └── vault.yml
│ │ └── prod
│ │ ├── group_vars
│ │ │ └── prod.yml
│ │ ├── inventory.yml
│ │ └── vars
│ │ └── vault.yml
│ ├── README.md
│ └── roles
│ ├── create_instance
│ │ └── tasks
│ │ └── main.yml
When I execute the playbook with the ansible-playbook CLI, there is an ansible.cfg in the current directory, so that ansible.cfg is loaded.
When I execute the playbook from AWX, the project lives in /tmp/cev039fj/awx_1900_tw78u5vh/project.
There is no ansible.cfg in /tmp/cev039fj/awx_1900_tw78u5vh, so /etc/ansible/ansible.cfg is loaded instead.
I have an ansible.cfg with different parameters in each playbook directory, so how can I point ANSIBLE_CONFIG at the playbook directory's ansible.cfg when a playbook is launched by AWX?
I ran some tests with the ANSIBLE_CONFIG setting, without success.
Do you have any ideas?
Thanks
EDIT:
I have opened an issue on the AWX GitHub repository: https://github.com/ansible/awx/issues/10398
This is a limitation of AWX, which only loads the ansible.cfg at the root of the project, because the playbook is executed from the project root.
Usually I run a playbook from its own directory, which is why my specific ansible.cfg normally causes no problem.
I made two proposals to resolve this problem; I will post the final outcome here once it is settled.
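For reference, outside AWX the per-directory config can be forced with the ANSIBLE_CONFIG environment variable; a minimal sketch, assuming the repository is checked out under $HOME/ansible:
export ANSIBLE_CONFIG=$HOME/ansible/deploy_manager/ansible.cfg
ansible-playbook -i environments/int/inventory.yml deploy_manager.yml
Getting an equivalent override applied inside an AWX job is exactly what the issue above is about.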
Okay, I know that ansible-runner won't respect your callback_plugins setting here. It tries to respect the user settings for callback plugins, and append its own to the end of the list.
https://github.com/ansible/ansible-runner/blob/4926a6afda13e768f55010f46509d2888e83e9a7/ansible_runner/config/_base.py#L242
But the issue is that you didn't set the env var ANSIBLE_CALLBACK_PLUGINS; you set the config-file setting for the same thing. To merge with that, ansible-runner would have to read the config file, and it doesn't.
However, AWX does read the config file in some cases so that it doesn't clobber the user settings. This is the wrong place to do it. That logic needs to move into ansible-runner so that it can take responsibility for properly overriding existing user settings. Or else Ansible core needs to add some syntax to denote "existing values of a list-valued setting".
There is one other way we could fix this: we could put the ansible-runner callback plugin in a collection at an expected location. See ansible/ansible-runner#482. This avoids the need for ansible-runner to change the ANSIBLE_CALLBACK_PLUGINS setting at all, and it just references the standard-out callback plugin. However, it would still need to set the ANSIBLE_STDOUT_CALLBACK setting.
You are also setting the stdout callback setting. This doesn't make a lot of sense to me. There can only be one stdout callback plugin, because that's what dictates what gets written to standard out. I toyed with ideas for layering them, but they would step on each other's toes. Instead, I think you should consider writing your plugin as a "normal" callback plugin instead of a stdout callback plugin. If you do this, you can still enable it by changing the AWX_TASK_ENV setting without ansible-runner clobbering it. You can't do that with the config file, and that's our bug.
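For completeness, a sketch of that last suggestion (paths and names here are illustrative, not AWX defaults): with the plugin written as a normal callback plugin, enabling it through AWX_TASK_ENV amounts to injecting environment variables such as:
ANSIBLE_CALLBACK_PLUGINS: /path/to/your/callback_plugins
ANSIBLE_CALLBACKS_ENABLED: your_callback_name    # ANSIBLE_CALLBACK_WHITELIST on older Ansible releases
Because these arrive as environment variables rather than config-file entries, ansible-runner appends its own plugin path to yours instead of clobbering it.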
Based on the following Ansible inventory directory tree:
inventories/region
├── staging
│   ├── group_vars
│   │   └── all.yml
│   ├── hosts.yml
│   └── site1
│       ├── group_vars
│       │   └── all.yml
│       └── hosts.yml
├── group_vars
│   └── all.yml
└── prod
    ├── group_vars
    │   └── all.yml
    ├── hosts.yml
    ├── site1
    │   ├── group_vars
    │   │   └── all.yml
    │   └── hosts.yml
    └── site2
        ├── group_vars
        │   └── all.yml
        └── hosts.yml
Is there a way for me to load vars from inventories/region/group_vars/all.yml and inventories/region/env/group_vars/all.yml inside inventories/region/env/siteX/group_vars/all.yml? I will be calling playbooks with references to the site-specific inventory files, such as inventories/region/prod/site2/group_vars/all.yml. I'm trying to avoid maintaining the same variables with the same values across multiple var files (group_vars/all.yml in this example) for each site and environment.
The value of the variables will change for different regions.
Thanks
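One pattern that might help here (a sketch, not verified against this exact tree): group_vars directories are resolved relative to each inventory source, so passing the environment-level inventory alongside the site-specific one pulls in both levels of group_vars/all.yml:
ansible-playbook site.yml \
  -i inventories/region/prod/hosts.yml \
  -i inventories/region/prod/site2/hosts.yml
The top-level inventories/region/group_vars/all.yml has no inventory file next to it in this tree, so it would still need to be loaded another way, for example via vars_files or include_vars in the playbook. The playbook name site.yml is just a placeholder.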
I have a problem with Ansible.
I have a couple of group_vars folders, and in these folders there are files encrypted by ansible-vault, with different passwords for prod and test:
├── group_vars
│ ├── app1_prod
│ │ ├── application.yml <- Encrypted by Ansible Vault prod pass
│ │ └── service.yml
│ ├── app1_test
│ │ ├── application.yml <- Encrypted by Ansible Vault test pass
│ │ └── service.yml
│ ├── app2_prod
│ │ ├── application.yml <- Encrypted by Ansible Vault prod pass
│ │ └── service.yml
│ └── app2_test
│ ├── application.yml <- Encrypted by Ansible Vault test pass
│ └── service.yml
And my inventory file looks like:
[test_hosts]
test_host1
test_host2
[prod_hosts]
prod_host1
prod_host2
[app1_test:children]
test_hosts
[app2_test:children]
test_hosts
[app1_prod:children]
prod_hosts
[app2_prod:children]
prod_hosts
When I run the playbook command:
ansible-playbook app1_playbook.yml -i ./inventory/hosts -l app1_test -u ssh_user -k --vault-password-file path_to_vault_key
I get an error saying the vault password is wrong for a file, and it points to a file in prod, from a different group:
Decryption failed on ansible/group_vars/app1_prod/application.yml
I don't know how to fix this.
Personally, I think your inventory structure is a Bad Idea. I do not condone having PROD and TEST servers in the same inventory, and I see no good reason for it.
I would restructure your system like this:
├── prod
│ ├── ansible.cfg
│ ├── group_vars
│ │ ├── app1
│ │ │ ├── application.yml <- Encrypted by Ansible Vault prod pass
│ │ │ └── service.yml
│ │ ├── app2
│ │ │ ├── application.yml <- Encrypted by Ansible Vault prod pass
│ │ │ └── service.yml
├── test
│ ├── ansible.cfg
│ ├── group_vars
│ │ ├── app1
│ │ │ ├── application.yml <- Encrypted by Ansible Vault test pass
│ │ │ └── service.yml
│ │ ├── app2
│ │ │ ├── application.yml <- Encrypted by Ansible Vault test pass
│ │ │ └── service.yml
And, of course, there would be two host files:
PROD:
[hosts]
prod_host1
prod_host2
[app1:children]
hosts
[app2:children]
hosts
TEST:
[hosts]
test_host1
test_host2
[app1:children]
hosts
[app2:children]
hosts
Have an ansible.cfg file in each inventory directory with the lines:
inventory = .
vault_password_file = /path/to/vault_password_file
remote_user = ssh_user
ask_pass = True
(Best if you just copy /etc/ansible/ansible.cfg to the inventory directory and change what you need to change.)
Once you have that set up, you go into the prod or test directory and execute the playbook from there. Of course, you will need to specify the path to the playbooks:
cd prod
ansible-playbook /path/to/playbooks/app_playbook.yml
cd test
ansible-playbook /path/to/playbooks/app_playbook.yml
Trust me, life is much easier with inventory separation.
Good luck!
I have a playbook like so:
- hosts: "{{env}}"
name: "REDIS Playbook"
sudo: no
vars:
product: redis
roles:
- redis
And I call it with: ansible-playbook pb_redis.yml -i inventory/redis -e env=qa -v
I have a directory structure like:
.
├── group_vars
│ ├── qa
│ │ ├── common
│ │ | └── redis.yml
│ │ ├── products
│ │ | └── abc-1.yml
│ │ | └── xyz-2.yml
│ ├── test
│ │ ├── common
│ │ | └── redis.yml
│ │ ├── products
│ │ | └── abc-1.yml
│ │ | └── xyz-2.yml
├── inventory
└── roles
└── redis
├── files
├── handlers
├── meta
├── tasks
├── templates
└── vars
And I have an inventory like:
[qa:children]
qa_redis
[qa_redis]
mybox.1.space
mybox.2.space
mybox.3.space
My Issue: When I run ansible-playbook pb_redis.yml -i inventory/redis -e env=qa -v, I'm still picking up group_vars defined in the ../test/common/redis.yml instead of ../qa/common/redis.yml -- am I misunderstanding how this should work? The correct hosts get picked up from the inventory file, but not the correct group_vars. Should I place the redis.yml under ../qa/products/ instead?
Inventory (host and group) variables in Ansible are bound to hosts. Group variables exist for convenience.
If a host is in multiple groups at the same time, all of those groups' variables are applied to that host.
If different groups define the same variables, they overwrite each other during the inventory load process.
So if you have mybox.1.space in groups qa and test, variables from groups qa and test are applied to this host.
Usually you want separate inventories for different deployment environments, and groups to separate different logical units inside an inventory.
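A minimal sketch of that recommendation, adapted to the layout in the question (names are illustrative): give each environment its own inventory directory with its own group_vars, and point -i at the environment you want, so the other environment's variables are never loaded at all:
inventories/
├── qa
│   ├── hosts                 # contains [qa_redis] mybox.1.space ...
│   └── group_vars
│       └── qa_redis
│           └── redis.yml
└── test
    ├── hosts
    └── group_vars
        └── test_redis
            └── redis.yml
ansible-playbook pb_redis.yml -i inventories/qa -v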