How to save Ansible ad-hoc result to JSON fast? - ansible

For example, I have 100 IPs and I want to collect their hostnames or product types. I know SaltStack has this feature via a simple --output=json # or yaml.
How can I save this result in JSON format, or even as a CSV file, quickly?
According to Ansible Callback, I can edit ansible.cfg to use a JSON callback. But can I enable a specific callback only from time to time? I don't want to use a specific callback all the time.

If fact_caching is enabled then, according to the setup module examples, you can just run something like
ansible test --ask-pass -m setup --args 'filter=ansible_fqdn'
and find, via cat /tmp/ansible/facts_cache/test.example.com, a result like
ansible_fqdn: test.example.com
discovered_interpreter_python: /usr/bin/python
depending on your configuration. See Enabling inventory cache plugins for more information.
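For reference, a minimal ansible.cfg fragment that enables a JSON-file fact cache at the path used above might look like this (a sketch only; the cache path matches the example, and the timeout value is an arbitrary choice):

```
[defaults]
gathering = smart
# cache gathered facts as JSON files on disk
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible/facts_cache
# keep cached facts for one day
fact_caching_timeout = 86400
```

With this in place, an ad-hoc setup run populates one JSON file per host under the cache directory.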

Related

Use ansible for manual staged rollout using `serial` and unknown inventory size

Consider an Ansible inventory with an unknown number of servers in a nodes key.
The script I'm writing should be usable with different inventories that should be as simple as possible and are out of my control, so I don't know the number of nodes ahead of time.
My command to run the playbook is pretty vanilla and I can freely change it. There could be two separate commands for both rollout stages.
ansible-playbook -i $INVENTORY_PATH playbooks/example.yml
And the playbook is pretty standard as well and can be adjusted:
- hosts: nodes
  vars:
    ...
  remote_user: '{{ sudo_user }}'
  gather_facts: no
  tasks:
    ...
How would I go about implementing a staged execution without changing the inventory?
I'd like to run one command to execute the playbook for 50% of the inventory first. Here the result needs to be checked manually by a human. Then I'd like to use another command to execute the playbook for the other half. The author of the inventory should not have to worry about this. All machines below the nodes key are the same.
I've looked into the serial keyword, but it doesn't seem like I could automatically end execution after one batch and then later come back to continue with the second half.
Maybe something creative could be done with variables passed to ansible-playbook? I'm just wondering, shouldn't this be a common use-case? Are all staged rollouts supposed to be fully automated?
Without even using serial, here is a possible, very simple scenario.
First, calculate $half of the inventory by inspecting the inventory itself. The following enables the json callback plugin for the ad hoc command, making sure it is the only stdout plugin enabled, and uses jq to parse the result. You can adapt this to any other JSON parser (or even use the yaml callback with a YAML parser if you prefer). Anyway, adapt it to your own needs.
half=$( \
ANSIBLE_LOAD_CALLBACK_PLUGINS=1 \
ANSIBLE_STDOUT_CALLBACK=json \
ANSIBLE_CALLBACK_WHITELIST=json \
ansible localhost -i yourinventory.yml -m debug -a "msg={{ (groups['nodes'] | length / 2) | round(0, 'ceil') | int }}" \
| jq -r ".plays[0].tasks[0].hosts.localhost.msg" \
)
Then launch your playbook limited to the first $half nodes with whatever vars are needed for the human check, and launch it again for the remaining nodes without the check.
ansible-playbook -i yourinventory.yml example_playbook.yml -l nodes[0:$(($half-1))] -e human_check=true
ansible-playbook -i yourinventory.yml example_playbook.yml -l nodes[$half:] -e human_check=false
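The slicing arithmetic above can be sanity-checked outside Ansible. Here is a short Python sketch (illustrative only; the host names are made up) showing how the ceil-rounded half splits an inventory of any size into two covering, non-overlapping batches:

```python
import math

def split_inventory(nodes):
    """Split a host list into a first batch of ceil(n/2) hosts and a
    second batch with the remainder, mirroring the nodes[0:$half-1]
    and nodes[$half:] patterns used on the command line."""
    half = math.ceil(len(nodes) / 2)
    return nodes[:half], nodes[half:]

first, second = split_inventory([f"node{i}" for i in range(5)])
print(first)   # → ['node0', 'node1', 'node2']
print(second)  # → ['node3', 'node4']
```

Note the off-by-one handling: Ansible's nodes[0:x] host pattern is inclusive of x, hence the $(($half-1)) in the first command, while Python slices exclude the end index.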

Ansible aws_ec2 plugin how to set credentials for discovered hosts?

I need to launch ad hoc commands like "-m ping" on existing EC2 instances, but that requires a key pair.
How do I set credentials for boto, like the "aws_access_key_id" stored in ~/.aws/credentials?
I also have a problem with the inventory:
I have an "inventory" folder next to Ansible, where both local hosts and an aws_ec2.yml file are stored. But ansible-inventory --list works only for the aws_ec2.yml file...
You can declare the appropriate credentials and other host-specific variables right in the inventory file.
I.e.:
[ec2fleet]
35.... ansible_ssh_user=ec2-user
15.... ansible_ssh_user=ubuntu
[ec2fleet:vars]
ansible_user=deployer
ansible_ssh_private_key_file=/home/deployer/.ssh/deployer.pem
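If you are using the aws_ec2 inventory plugin rather than a static file, the AWS credentials can also be set in the plugin configuration itself. A minimal aws_ec2.yml sketch (the region and profile name here are assumptions, not from the question):

```
plugin: amazon.aws.aws_ec2
regions:
  - eu-west-1
# either point at a named profile from ~/.aws/credentials ...
aws_profile: default
# ... or set the keys directly (avoid committing these to version control)
# aws_access_key: ...
# aws_secret_key: ...
```

Note that these options only cover API credentials for discovering hosts; SSH access to the discovered instances is still configured via variables such as ansible_user and ansible_ssh_private_key_file, as in the static example above.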

Use custom ansible variables per customer

I'm trying to figure out the best strategy for using custom variables in my Ansible playbooks/roles.
What I'm doing at the moment is:
Customer1:
Create a new inventory file with Customer1 devices on inventories/customer1.ini
Overwrite customer variables on vars/controls.yml
Execute my ansible-playbook/ansible-role for Customer1 ansible-playbook -i inventories/customer1.ini site.yml
Customer2:
Create a new inventory file with Customer2 devices on inventories/customer2.ini
Overwrite customer variables on vars/controls.yml
Execute my ansible-playbook/ansible-role for Customer2 ansible-playbook -i inventories/customer2.ini site.yml
Customer N:
What I want to do is just create a different controls variable file per customer:
vars/controls-customer1.yml
vars/controls-customer2.yml
And have my ansible-playbook/ansible-role read the right one without any change (reusability).
I hope you can give me some light on this.
Thank you!
You can also put the following into your ansible.cfg for this/these playbooks:
[defaults]
inventory = ./inventories/
This will then load all inventory files inside the "inventories" directory. You don't have to use the -i option on the command line unless you specifically want one particular inventory to be used.
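To get a per-customer variable file without editing the playbook for each run, one common pattern is to key vars_files on a variable. A sketch (the variable name customer is an assumption):

```
- hosts: all
  vars_files:
    - "vars/controls-{{ customer }}.yml"
  roles:
    - my_role
```

You would then run, for example, ansible-playbook -i inventories/customer1.ini site.yml -e customer=customer1, and the play picks up vars/controls-customer1.yml with no change to the playbook or role.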

Why playbook take wrong values for group variables?

I have a problem with group variables.
Example: I have two inventory groups, group_A and group_B, and also files with the same names in group_vars:
inventories/
  hosts.inv:
    [group_A]
    server1
    server2
    [group_B]
    server3
    server4
  group_vars/
    group_A (file):
      var_port: 9001
    group_B (file):
      var_port: 9002
The problem is that when I execute:
ansible-playbook playbooks/playbook.yml -i inventories/hosts.inv -l group_B
the playbook is executed for the proper scope of servers (server3, server4), but it takes variables from the group variables file group_A.
Expected result: var_port: 9002
In reality: var_port: 9001
ansible 2.4.2.0
BR, Oleg
I enabled ANSIBLE_DEBUG, and here is what I found:
2018-05-03 15:23:23,663 p=129458 u=user | 129458 1525353803.66336: Loading data from /ansible/inventories/prod/group_vars/group_B.yml
2018-05-03 15:23:23,663 p=129458 u=user | 129661 1525353803.66060: in run() - task 00505680-eccc-d94e-2b1b-0000000000f4
2018-05-03 15:23:23,664 p=129458 u=user | 129661 1525353803.66458: calling self._execute()
2018-05-03 15:23:23,665 p=129458 u=user | 129458 1525353803.66589: Loading data from /ansible/inventories/prod/group_vars/group_A.yml
On playbook execution, Ansible scans all files with variables in the group_vars folder that define "var_port", and the last one loaded wins...
As you can find in another topic:
Ansible servers/groups in development/production
and from documentation:
http://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable
Note
Within any section, redefining a var will overwrite the previous instance. If multiple groups have the same variable, **the last one loaded wins**. If you define a variable twice in a play’s vars: section, the **2nd one wins**.
It is now NOT clear to me how to manage configuration files. In this case I would have to use unique variable names for each group, but that is not possible with roles. Or should I use include_vars when I call the playbook?
There is a great example of how to manage variable files in a multistage environment from DigitalOcean:
How to Manage Multistage Environments with Ansible
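If you do go the include_vars route mentioned above, one way to make the right file win is to load it explicitly at the start of the play, since variables set by an include_vars task override group_vars in Ansible's precedence order. A sketch (the target_group variable name and path layout are assumptions):

```
- hosts: "{{ target_group }}"
  pre_tasks:
    - name: Explicitly load this group's variable file, overriding merge order
      include_vars: "inventories/group_vars/{{ target_group }}.yml"
```

Run with, for example, -e target_group=group_B to guarantee var_port comes from group_B's file regardless of load order.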
I believe that the problem here, while not explicitly stated in the original question, is that Server{1,2} and Server{3,4} are actually the same servers in two different groups at the same level.
I ran into this problem, which caused me to do some digging. I don't agree with the behavior, but it is as designed. A fully backward-compatible fix was even submitted, and the pull request was rejected:
Discussion
Pull Request

Run Ansible playbook on UNIQUE user/host combination

I've been trying to introduce Ansible to our team to manage different kinds of application concerns, such as configuration files for products and applications, the distribution of maintenance scripts, ...
We don't like to work with hostnames in our team because we have 300+ of them with meaningless names. Therefore, I started out creating aliases for them in the Ansible hosts file like:
[bpm-i]
bpm-app1-i1 ansible_user=bpmadmin ansible_host=el1001.bc
bpm-app1-i2 ansible_user=bpmadmin ansible_host=el1003.bc
[bpm-u]
bpm-app1-u1 ansible_user=bpmadmin ansible_host=el2001.bc
bpm-app1-u2 ansible_user=bpmadmin ansible_host=el2003.bc
[bpm-all:children]
bpm-i
bpm-u
Meaning we have a BPM application named "app1" and it's deployed on two hosts in integration testing and on two hosts in user acceptance testing. So far so good. Now I can run an Ansible playbook to (for example) set up SSH access (authorized_keys) for team members or push a maintenance script. I can run those PBs on each host separately, on all hosts in ITT or UAT, or even everywhere.
But, typically, we'll have to install the same application app1 again on an existing host but with a different purpose, say a "training" environment. My reflex would be to do this:
[bpm-i]
bpm-app1-i1 ansible_user=bpmadmin ansible_host=el1001.bc
bpm-app1-i2 ansible_user=bpmadmin ansible_host=el1003.bc
[bpm-u]
bpm-app1-u1 ansible_user=bpmadmin ansible_host=el2001.bc
bpm-app1-u2 ansible_user=bpmadmin ansible_host=el2003.bc
[bpm-t]
bpm-app1-t1 ansible_user=bpmadmin ansible_host=el2001.bc
bpm-app1-t2 ansible_user=bpmadmin ansible_host=el2003.bc
[bpm-all:children]
bpm-i
bpm-u
bpm-t
But ... running PBs now becomes a mess and causes errors. Logically I have two alias names reaching the same user/host combination: bpm-app1-u1 and bpm-app1-t1. I don't mind, that's perfectly logical, but if I were to test a new maintenance script, I would first push it to bpm-app1-i1 for testing and, when OK, I would probably run the PB against bpm-all. But because of the non-unique user/host combinations for some aliases, the PB would run multiple times on the same user/host. Depending on the actions in the PB this may work coincidentally, but it may also fail horribly.
Is there no way to tell Ansible "Run on ALL - UNIQUE user/host combinations" ?
Since most tasks change something on the remote host, you could use Conditionals to check for that change on the host before running.
For example, if your playbook has a task to run a script that creates a file on the remote host, you can add a when clause to "skip the task if file exists" and check for the existence of that file with a stat task before that one.
- name: Check whether the script has run in a previous instance by looking for a file
  stat:
    path: /path/to/something
  register: something

- name: Run script when the file above does not exist
  command: bash myscript.sh
  when: not something.stat.exists
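Beyond per-task conditionals, the underlying problem of collapsing aliases down to unique user/host combinations is simple enough to sketch. An illustrative Python version (the alias data is made up, mirroring the inventory above), which could be used to precompute a --limit list:

```python
def unique_combinations(hosts):
    """Keep the first alias seen for each (user, host) pair, so a play
    limited to the result runs exactly once per real login target."""
    seen = set()
    result = []
    for alias, user, host in hosts:
        key = (user, host)
        if key not in seen:
            seen.add(key)
            result.append(alias)
    return result

aliases = [
    ("bpm-app1-u1", "bpmadmin", "el2001.bc"),
    ("bpm-app1-u2", "bpmadmin", "el2003.bc"),
    ("bpm-app1-t1", "bpmadmin", "el2001.bc"),  # same user/host as u1
]
print(unique_combinations(aliases))  # → ['bpm-app1-u1', 'bpm-app1-u2']
```

The deduplicated alias list could then be joined with colons and passed to ansible-playbook via --limit.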
