How to get variables in a script to work - Ansible

I have the following Ansible command.
ansible localhost -m known_hosts -a "path=/home/vagrant/.ssh/known_hosts name=web key=\"{{ lookup('pipe', 'ssh-keyscan ' + item) }}\" state=present with_items={{hosts}}" --user vagrant -e "{hosts:[web, db]}"
Essentially it is meant to add a bunch of known hosts to the known_hosts file, but I can't seem to get it to work with the array. I have, however, managed to get it working for a single host.
ansible localhost -m known_hosts -a "path=/home/vagrant/.ssh/known_hosts name=web key=\"{{ lookup('pipe', 'ssh-keyscan ' + host) }}\" state=present" --user vagrant -e "host='web'"
Any ideas how I can get it working with the array?

This might not be an answer to your question; it's more of a proposed alternative.
Why are you trying to do it with Ansible? Ansible is a nice tool for getting tasks done quickly on remote hosts, but I don't see how you would benefit from it in this situation.
Here's a one-liner that's even shorter than your Ansible command:
for HOST in web db; do grep -q "^$HOST " /home/vagrant/.ssh/known_hosts || ssh-keyscan "$HOST" >> /home/vagrant/.ssh/known_hosts 2>/dev/null; done


bash - executing ssh command in a loop

I have the loop:
IFS=','
for host in host1,host2
do
  ssh root@$host script.sh
done
When I execute the loop, ssh command works fine for host1, but for host2 I see:
bash: host2: command not found
Could you tell me where can be the problem?
Just do it like this:
for host in host1 host2
do
  ssh root@$host script.sh
done
Your script above effectively runs a command like this:
ssh root@host1 host2 script.sh
Here host2 is treated as an extra argument (a command for the remote shell to run), which it cannot find, hence the error you are getting.
Assuming your use case is to run the shell script on both hosts, the following might help you:
IFS=', ' read -r -a array <<< "host1,host2"
for host in "${array[@]}"; do ssh root@$host script.sh; done
The resulting commands will be similar to:
ssh root@host1 script.sh
ssh root@host2 script.sh
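The IFS/read split can also be done with bash parameter expansion; a minimal sketch, with echo standing in for the real ssh call and host1/host2 as placeholders:

```shell
hosts="host1,host2"
# ${hosts//,/ } replaces every comma with a space, so the unquoted
# expansion word-splits into one loop iteration per host
for host in ${hosts//,/ }; do
  # swap echo for: ssh "root@$host" script.sh
  echo "root@$host"
done
```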
PSSH (https://github.com/lilydjwg/pssh)
Create a file with your hosts/IP addresses, then try the following:
pssh -h hostfile.txt -l root -i "echo 'hello world'; another_command; exit"
-h = host file with IP addresses
-l = username to use
-i = display output inline as each host completes
-A = prompt for a password
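If installing pssh is not an option, plain shell background jobs give a rough parallel-ssh equivalent; a minimal sketch with placeholder hosts, where echo stands in for the ssh call:

```shell
# Start each per-host command as a background job, then wait for all;
# like pssh, output arrives in completion order, not list order
for host in host1 host2; do
  # swap echo for: ssh "root@$host" uptime
  echo "$host" &
done
wait
```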

Ansible: How to use inventory variables in ad-hoc commands

I want to run an ad-hoc command over a host file. In the host file I have defined a variable for each host; how can I use that variable while executing the ad-hoc command?
For example:
ansible -i /home/bob/hosts_file -m shell -a "$VAR/project run"
I have defined $VAR for each host in "hosts_file", and it is different for every host in the inventory file. How can I get that variable substituted per host when the ad-hoc command executes?
With ad-hoc commands, Ansible can make use of all the same variables that are available in a playbook.
Examples
#1. group_names
$ ansible -i inventory/lab -m debug -a "var=group_names" all | head -10
es-master-01.mydom.local | SUCCESS => {
    "group_names": [
        "elasticsearch",
        "engineering",
        "lab",
        "lab-es-master"
    ]
}
Here I'm querying the servers in an inventory to find out what groups the server is assigned to in the inventory file. This variable group_names shows this from my inventory file.
#2. inventory_hostname
Here's another example where I'm using the variable inventory_hostname and accessing it using Jinja2 notation:
$ ansible -i inventory/nyc1 -l ocp-app* all -c local -m shell -a "echo {{ inventory_hostname }}"
ocp-app-01e.nyc1.dom.us | CHANGED | rc=0 >>
ocp-app-01e.nyc1.dom.us
ocp-app-01c.nyc1.dom.us | CHANGED | rc=0 >>
ocp-app-01c.nyc1.dom.us
ocp-app-01d.nyc1.dom.us | CHANGED | rc=0 >>
ocp-app-01d.nyc1.dom.us
ocp-app-01a.nyc1.dom.us | CHANGED | rc=0 >>
ocp-app-01a.nyc1.dom.us
ocp-app-01b.nyc1.dom.us | CHANGED | rc=0 >>
ocp-app-01b.nyc1.dom.us
ocp-app-01f.nyc1.dom.us | CHANGED | rc=0 >>
ocp-app-01f.nyc1.dom.us
Host variables are available to Ansible even while running an ad-hoc command. You reference them just as you would in a playbook, using a Jinja2 template:
ansible all -i /home/bob/hosts_file -m shell -a "{{ VAR }}/project run"
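For the per-host $VAR from the question, that means defining it as a host variable in the inventory file; a hypothetical hosts_file (group name, hostnames, and paths are invented for illustration):

```ini
[app]
web1 VAR=/opt/app1
web2 VAR=/opt/app2
```

With that in place, the ad-hoc command above substitutes each host's own value of VAR.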

Ansible-playbook run against arbitrary host which is not in inventory file and include group vars

When I run a playbook runrole.yml this way:
ansible-playbook -i '192.168.0.7,' runrole.yml -e "ROLE=allwindows" -e "TARGETIP=192.168.0.7" -e "ansible_port=5986" --ask-vault-pass
runrole.yml has:
- hosts: '{{TARGETIP}}'
  roles:
    - { role: '{{ROLE}}' }
It works (i.e. it runs against 192.168.0.7), but it fails because I have not provided all the additional variables:
ansible_user: Administrator
ansible_password: SecretPasswordGoesHere
ansible_connection: winrm
I would like Ansible to use the variables defined in group_vars/allwindows.yml.
It works if I add 192.168.0.7 to the group [allwindows] in the inventory file:
[allwindows]
host1
...
hostN
192.168.0.7
and run using:
ansible-playbook runrole.yml -e "ROLE=allwindows" -e "TARGETIP=192.168.0.7" -e "ansible_port=5986" --ask-vault-pass
It works fine, as it detects that 192.168.0.7 belongs to the group allwindows.
In certain scenarios I would like to run a role against a host without touching the inventory file. How do I specify to include group allwindows to use all variables from group_vars/allwindows.yml without modifying the inventory file?
I have found a hack for doing that. It is not as nice as @techraf's answer, but it works with ansible-playbook:
ATARGETIP=192.168.0.7 && \
echo "[allwindows]" > tmpinventory && \
echo "$ATARGETIP" >> tmpinventory && \
ansible-playbook -i tmpinventory runrole.yml -e "ROLE=allwindows" \
  -e "TARGETIP=$ATARGETIP" -e "ansible_port=5986" --ask-vault-pass && \
rm tmpinventory
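A variant of the same hack, using bash process substitution so no temporary file is ever written to disk (same hypothetical IP and extra vars as above):

```shell
TARGETIP=192.168.0.7
# Build the throwaway inventory text: the group header plus the target host
printf '[allwindows]\n%s\n' "$TARGETIP"
# Feed the same text straight to ansible-playbook via <(...):
#   ansible-playbook -i <(printf '[allwindows]\n%s\n' "$TARGETIP") runrole.yml \
#     -e "ROLE=allwindows" -e "TARGETIP=$TARGETIP" \
#     -e "ansible_port=5986" --ask-vault-pass
```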

Ansible adhoc command in sequence

I want to run an Ansible ad-hoc command on a list of EC2 instances. I want Ansible to run it in sequence, but it runs them in random order. For example:
13:42:21 #cnayak ansible :► ansible aws -a "hostname"
ec2 | SUCCESS | rc=0 >>
ip-172-31-36-255
ec3 | SUCCESS | rc=0 >>
ip-172-31-45-174
13:42:26 #cnayak ansible :► ansible aws -a "hostname"
ec3 | SUCCESS | rc=0 >>
ip-172-31-45-174
ec2 | SUCCESS | rc=0 >>
ip-172-31-36-255
Any way to make them run in order?
By default Ansible runs tasks in parallel. If you want them to be executed serially, you can limit the number of workers running at the same time with the --forks option.
Adding --forks 1 to your Ansible invocation should run your command sequentially on all hosts (in the order defined by the inventory).
You can use --forks with an ad-hoc command, or serial: 1 inside a playbook.
On adhoc command:
ansible aws -a "hostname" --forks=1
Inside the playbook:
- hosts: aws
  become: yes
  gather_facts: yes
  serial: 1
  tasks:
    - YOUR TASKS HERE
--forks=1 hasn't been sorting the inventory for me in recent versions of Ansible (2.7).
Another approach I find useful is using the "oneline" output callback, so I can use the standard sort and grep tools on the output:
ANSIBLE_LOAD_CALLBACK_PLUGINS=1 \
ANSIBLE_STDOUT_CALLBACK=oneline \
ansible all \
-m debug -a "msg={{ansible_host}}\t{{inventory_hostname}}" \
| sort \
| grep -oP '"msg": \K"[^"]*"' \
| xargs -n 1 echo -e
This has been useful for quick-n-dirty reports on arbitrary vars or (oneline) shell command outputs.
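To illustrate what that pipeline does, here is the same extraction stage run against two fabricated oneline result lines (hostnames and IPs are invented):

```shell
# Two fake 'oneline' result lines, deliberately out of order
printf '%s\n' \
  'ocp-b | SUCCESS => {"msg": "10.0.0.2\tocp-b"}' \
  'ocp-a | SUCCESS => {"msg": "10.0.0.1\tocp-a"}' \
  | sort \
  | grep -oP '"msg": \K"[^"]*"' \
  | xargs -n 1 echo -e   # xargs strips the quotes, echo -e expands the \t
```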

Ansible - List available hosts

I have some hosts in my Ansible inventory which the Ansible control machine cannot connect to (no public key has been deployed).
How do I list all of them? (List the unreachable hosts.)
Maybe there is a way to generate an inventory file containing all of those hosts?
(The less elegant way is to write a playbook and copy the command-line output, but is there a better way?)
To list them, you can use the ping module and pipe the output:
ANSIBLE_NOCOWS=1 ansible -m ping all 2>&1 | grep 'FAILED => SSH' | cut -f 1 -d' '
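Note that the FAILED => SSH marker comes from older Ansible output; newer releases print UNREACHABLE! instead, so the grep pattern needs adjusting. A sketch of the adjusted filter, run against two fabricated result lines (hostnames are invented):

```shell
# Simulated output from a newer Ansible: one unreachable host, one healthy
printf '%s\n' \
  'web1 | UNREACHABLE! => {"changed": false, "unreachable": true}' \
  'db1 | SUCCESS => {"changed": false, "ping": "pong"}' \
  | grep 'UNREACHABLE!' | cut -f 1 -d' '   # → web1
```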
If you want to generate an inventory, you can just redirect the output to a file:
ANSIBLE_NOCOWS=1 ansible -m ping all 2>&1 | grep 'FAILED => SSH' | cut -f 1 -d' ' > hosts_without_key
Then you can use it later by providing the -i switch to ansible commands:
ansible-playbook -i hosts_without_key deploy_keys.yml
If you can ssh using passwords, and assuming you have a key-deploying playbook (e.g. deploy_keys.yml), you can issue:
ansible-playbook -i hosts_without_key deploy_keys.yml -kKu someuser
But if the point is to deploy keys on hosts that don't have them, remember that Ansible is idempotent: it does no harm to run the deploy_keys.yml playbook everywhere (it just takes a bit longer).
Good luck.
