Use custom ansible variables per customer - ansible

I'm trying to figure out the best strategy for using custom variables in my ansible-playbooks/ansible-roles.
What I'm doing at this time is:
Customer1:
Create a new inventory file with Customer1 devices at inventories/customer1.ini
Overwrite customer variables in vars/controls.yml
Execute my ansible-playbook/ansible-role for Customer1: ansible-playbook -i inventories/customer1.ini site.yml
Customer2:
Create a new inventory file with Customer2 devices at inventories/customer2.ini
Overwrite customer variables in vars/controls.yml
Execute my ansible-playbook/ansible-role for Customer2: ansible-playbook -i inventories/customer2.ini site.yml
Customer N:
What I want to do is just create a different control-variables file (vars/controls.yml) per customer:
vars/controls-customer1.yml
vars/controls-customer2.yml
And have my ansible-playbook/ansible-role read it without any change (reusability).
I hope you can give me some light on this.
Thank you!

You can also put the following into your ansible.cfg for this/these playbooks:
[defaults]
inventory = ./inventories/
This will load all inventory files inside the directory "inventories". You don't have to use the -i option on the command line unless you specifically want one particular inventory to be used.
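To address the original per-customer question directly, one common pattern (a sketch; the "customer" variable name and the role name are assumptions, not from your setup) is to parameterize vars_files in the playbook and select the file on the command line:

```yaml
# site.yml — hedged sketch; "customer" is a hypothetical extra variable
- hosts: all
  vars_files:
    - "vars/controls-{{ customer }}.yml"   # e.g. vars/controls-customer1.yml
  roles:
    - your_role                            # placeholder role name
```

You would then run ansible-playbook -i inventories/customer1.ini site.yml -e customer=customer1, and the playbook itself never changes between customers.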

Related

Ansible: Configure global variable which "environments" to set up

I want to be able to set up different "environments" (sets of the same variables) via Ansible, e.g. production, testing, staging, etc.
I would like to:
1. define which environments to set up (setup_environments) at some global place
2. run a role r1 once for each environment in setup_environments
3. run a role r2 without setting its environment, but r2 should have access both to the variable setup_environments AND be able to load the defined environments for certain tasks.
Is this possible? Item 3 should be possible with include_vars, but I mention it to show that I really need the setup_environments variable and can't simply run ansible multiple times, including each environment separately with -i.
Reading your requirement, it seems that you only need to structure your inventory. Let's assume your inventory looks like
inventory.ini
[environment:children]
prod
qa
dev
webserver
database
[environment:vars]
# No global variables here / yet
[prod:children]
webserver_prod
database_prod
[qa:children]
webserver_qa
database_qa
[dev:children]
webserver_dev
database_dev
# Logical services nodes
[webserver:children]
webserver_prod
webserver_qa
webserver_dev
[database:children]
database_prod
database_qa
database_dev
# List of all hosts
# [prod:children]
[webserver_prod]
web.prod.example.com
[database_prod]
primary.prod.example.com
secondary.prod.example.com
# [qa:children]
[webserver_qa]
web.qa.example.com
[database_qa]
primary.qa.example.com
secondary.qa.example.com
# [dev:children]
[webserver_dev]
web.dev.example.com
[database_dev]
primary.dev.example.com
secondary.dev.example.com
You can then get familiar with the structure by running
ansible-inventory --graph dev
ansible-inventory --graph database_dev
...
It will then be possible to apply playbooks, tasks, etc. to just certain nodes via
ansible-playbook site.yml --limit database_dev ...
Furthermore, you should introduce group_vars files like
all
prod
qa
dev
webserver_prod
database_prod
...
It all depends on the structure of the inventory, but it makes things easier afterwards.
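As a sketch of what such group_vars files could contain (the variable names and values here are placeholders for illustration):

```yaml
# group_vars/prod.yml — placeholder values
deploy_env: prod
log_level: warning

# group_vars/dev.yml — same variables, overridden per environment
# deploy_env: dev
# log_level: debug
```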

How to save Ansible ad-hoc result to JSON fast?

For example,
I have 100 IPs and I want to collect their hostnames or product types. I know SaltStack has this feature via a simple --output=json # or yaml.
How can I save this result to JSON format, or even a CSV file, quickly?
According to the Ansible callback documentation, I can edit ansible.cfg to use a JSON callback. But can I use a specific callback only from time to time? I don't want to use a specific callback all the time.
If fact_caching is enabled, you can, following the setup module examples, just run something like
ansible test --ask-pass -m setup --args 'filter=ansible_fqdn'
and then find, via cat /tmp/ansible/facts_cache/test.example.com, a result like
ansible_fqdn: test.example.com
discovered_interpreter_python: /usr/bin/python
depending on your configuration. See Enabling inventory cache plugins for more information.
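A minimal ansible.cfg fragment that enables such a cache (a sketch; the path matches the example above, and the exact on-disk format depends on the cache plugin you pick):

```ini
[defaults]
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible/facts_cache
fact_caching_timeout = 86400
```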

Use ansible for manual staged rollout using `serial` and unknown inventory size

Consider an Ansible inventory with an unknown number of servers in a nodes group.
The script I'm writing should be usable with different inventories that should be as simple as possible and are out of my control, so I don't know the number of nodes ahead of time.
My command to run the playbook is pretty vanilla and I can freely change it. There could be two separate commands for both rollout stages.
ansible-playbook -i $INVENTORY_PATH playbooks/example.yml
And the playbook is pretty standard as well and can be adjusted:
- hosts: nodes
vars:
...
remote_user: '{{ sudo_user }}'
gather_facts: no
tasks:
...
How would I go about implementing a staged execution without changing the inventory?
I'd like to run one command to execute the playbook for 50% of the inventory first. Here the result needs to be checked manually by a human. Then I'd like to use another command to execute the playbook for the other half. The author of the inventory should not have to worry about this. All machines below the nodes key are the same.
I've looked into the serial keyword, but it doesn't seem like I could automatically end execution after one batch and then later come back to continue with the second half.
Maybe something creative could be done with variables passed to ansible-playbook? I'm just wondering, shouldn't this be a common use-case? Are all staged rollouts supposed to be fully automated?
Without even using serial, here is a possible, very simple scenario.
First, get a calculation of $half of the inventory by inspecting the inventory itself. The following enables the json callback plugin for the ad hoc command and makes sure it is the only plugin enabled. It also uses jq to parse the result. You can adapt this to any other JSON parser (or even use the yaml callback with a YAML parser if you prefer). Anyway, adapt it to your own needs.
half=$( \
ANSIBLE_LOAD_CALLBACK_PLUGINS=1 \
ANSIBLE_STDOUT_CALLBACK=json \
ANSIBLE_CALLBACK_WHITELIST=json \
ansible localhost -i yourinventory.yml -m debug -a "msg={{ (groups['nodes'] | length / 2) | round(0, 'ceil') | int }}" \
| jq -r ".plays[0].tasks[0].hosts.localhost.msg" \
)
Then launch your playbook, limiting it to the first $half nodes with whatever vars are needed for the human check, and launch it again for the remaining nodes without the check.
ansible-playbook -i yourinventory.yml example_playbook.yml -l nodes[0:$(($half-1))] -e human_check=true
ansible-playbook -i yourinventory.yml example_playbook.yml -l nodes[$half:] -e human_check=false
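If you already know the node count (or obtain it some other way), the round(0, 'ceil') in the Jinja2 expression above boils down to integer ceiling division, which plain shell arithmetic can also do; a minimal sketch with a hypothetical count of 5 nodes:

```shell
# Ceiling half via integer arithmetic: (count + 1) / 2 rounds up
# for odd counts, matching (count / 2) | round(0, 'ceil') | int.
count=5
half=$(( (count + 1) / 2 ))
echo "$half"   # prints 3
```

This avoids the extra ad hoc run and the jq dependency when the inventory size is known up front.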

Ansible aws_ec2 plugin how to set credentials for discovered hosts?

I need to launch ad-hoc commands like "-m ping" on existing EC2 instances, but it requires a key pair.
How do I set the key pair for boto, like the "aws_access_key_id" stored in ~/.aws/credentials?
I also have a problem with the inventory:
I've got an "inventory" folder near Ansible, where both local hosts and the aws_ec2.yml file are stored. But ansible-inventory --list works only for the aws_ec2.yml file...
You can declare the appropriate credentials and other host-specific variables right in the inventory file, i.e.:
[ec2fleet]
35.... ansible_ssh_user=ec2-user
15.... ansible_ssh_user=ubuntu
[ec2fleet:vars]
ansible_user=deployer
ansible_ssh_private_key_file=/home/deployer/.ssh/deployer.pem
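If you use the aws_ec2 inventory plugin, the same connection variables can be attached to every discovered host in the plugin config itself (a sketch; the region, user, and key path are placeholders, while AWS API credentials still come from ~/.aws/credentials via boto):

```yaml
# aws_ec2.yml — hedged sketch of plugin-level connection settings
plugin: aws_ec2
regions:
  - us-east-1                 # placeholder region
compose:
  # compose entries are Jinja2 expressions, hence the quoted literals
  ansible_user: "'ec2-user'"
  ansible_ssh_private_key_file: "'/home/deployer/.ssh/deployer.pem'"
```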

Why does the playbook take wrong values for group variables?

I have a problem with group variables.
Example: I have two inventory groups, group_A and group_B, and also files with the same names in group_vars:
inventories/
hosts.inv
[group_A]
server1
server2
[group_B]
server3
server4
group_vars/
group_A - file
var_port: 9001
group_B - file
var_port: 9002
The problem is that when I execute:
ansible-playbook playbooks/playbook.yml -i inventories/hosts.inv -l group_B
the playbook is executed for the proper scope of servers (server3, server4), but it takes variables from the group variables file for group_A.
Expected result: var_port: 9002
In reality: var_port: 9001
ansible 2.4.2.0
BR Oleg
I enabled ANSIBLE_DEBUG, and this is what I found:
2018-05-03 15:23:23,663 p=129458 u=user | 129458 1525353803.66336: Loading data from /ansible/inventories/prod/group_vars/group_B.yml
2018-05-03 15:23:23,663 p=129458 u=user | 129661 1525353803.66060: in run() - task 00505680-eccc-d94e-2b1b-0000000000f4
2018-05-03 15:23:23,664 p=129458 u=user | 129661 1525353803.66458: calling self._execute()
2018-05-03 15:23:23,665 p=129458 u=user | 129458 1525353803.66589: Loading data from /ansible/inventories/prod/group_vars/group_A.yml
On playbook execution, Ansible scans all files with variables in the group_vars folder that contain the variable "var_port", and the last one loaded wins...
As you can find in another topic:
Ansible servers/groups in development/production
and from documentation:
http://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable
Note
Within any section, redefining a var will overwrite the previous instance. If multiple groups have the same variable, **the last one loaded wins**. If you define a variable twice in a play’s vars: section, the **2nd one wins**.
It is now not clear to me how to manage configuration files. In this case I would have to use unique variable names for each group, but that is not possible with roles. Or should I use include_vars when I call the playbook?
A great example of how to manage variable files in a multistage environment is DigitalOcean's
How to Manage Multistage Environments with Ansible
I believe that the problem here, while not explicitly stated in the original question, is that server{1,2} and server{3,4} are actually the same servers in two different groups at the same level.
I ran into this problem myself, which caused me to do some digging. I don't agree with the behavior, but it is as designed. It was even fixed with full backward compatibility, and the pull request was still rejected:
Discussion
Pull Request
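If the colliding groups really are at the same level, one documented knob (a sketch; the priority value 10 is arbitrary) is ansible_group_priority, which decides which same-depth group's variables win the merge (higher wins, the default is 1):

```yaml
# group_vars/group_B.yml — make group_B win over group_A for shared hosts
ansible_group_priority: 10
var_port: 9002
```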
