I have a number of Raspberry Pis that I swap out (only one running at a time) and run Ansible against. Most Pis respond to ping raspberrypi, but I have one that responds to ping raspberrypi.local.
Rather than remembering to manually ping the correct hostname before executing the playbook, is there a way in ansible to run a playbook against a different hostname if the first fails?
Currently my playbook is
---
- hosts: raspberrypi
and /etc/ansible/hosts
[raspberrypi]
raspberrypi
#raspberrypi.local
If I uncomment the second hostname and the first fails, then the playbook fails and does not run on the .local hostname.
I am not sure if this is directly possible in ansible.
But a hack I can think of is to create a list of hosts, store them in a variable, and ping each one from localhost. If the ping is successful, create a custom host group and run the tasks you want against it; a sketch follows below.
Also, are you executing your playbook with serial: 1?
Hope this helps.
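A minimal sketch of that hack, assuming the two hostnames from the question and an invented group name reachable_pis:

- hosts: localhost
  gather_facts: false
  vars:
    candidate_hosts:
      - raspberrypi
      - raspberrypi.local
  tasks:
    - name: see which candidate answers a single ping
      command: ping -c 1 {{ item }}
      register: ping_result
      changed_when: false
      ignore_errors: true
      loop: "{{ candidate_hosts }}"

    - name: put the names that answered into a dynamic group
      add_host:
        name: "{{ item.item }}"
        groups: reachable_pis
      loop: "{{ ping_result.results }}"
      when: item.rc == 0

- hosts: reachable_pis
  tasks:
    - name: run the real work on whichever Pi answered
      ping: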
You could run the play against both names with a single host pattern:
- hosts: raspberrypi:raspberrypi.local
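For that pattern to target both machines, both names need to be uncommented in the inventory from the question:
[raspberrypi]
raspberrypi
raspberrypi.local
The name that does not resolve is then reported as unreachable, while the play still runs on the host that answers.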
Related
I have some old equipment that I manage and it is running VxWorks. I am able to ssh into it and run commands, but the commands and/or the prompts are not standard. I would like to use Ansible to automate some of the tasks that I do, but I am not sure what module to use for this.
Is there a way to just ssh into a box and start running commands with Ansible for non-Linux boxes?
Is there a way to download files via scp/sftp with Ansible for non-Linux boxes?
How can I get the raw output? The commands I run are generally show commands, and I need to see their output.
Is there a way to just ssh into a box and start running commands with Ansible for non-Linux boxes?
That's what Ansible's raw module is for: it's a minimal wrapper for ssh <somehost> <somecommand>.
Is there a way to download files via scp/sftp with Ansible for non-Linux boxes?
If the remote system supports scp or sftp, your Ansible playbooks can just run the appropriate scp/sftp command on your local system. E.g.,
- hosts: localhost
  tasks:
    - name: copy a file from the remote system
      command: scp myserver:somefile.txt .
How can I get the raw output? The commands I run are generally show commands, and I need to see their output.
When you run a command in a playbook, you can register the result, and for raw tasks that registered variable will have stdout and stderr attributes containing the output from the command. For example:
- hosts: myserver
  gather_facts: false
  tasks:
    - name: demonstrate the raw module
      raw: date
      register: date
    - debug:
        var: date.stdout
The output from that playbook will include:
TASK [demonstrate the raw module] ************************************************************************************************************************************************************
changed: [myserver]
TASK [debug] *********************************************************************************************************************************************************************************
ok: [myserver] => {
"date.stdout": "Fri Dec 18 09:09:34 PM EST 2020\r\n"
}
The gather_facts: false part is critical, because that prevents Ansible from trying to implicitly run the setup module on your target host.
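A small addition that is not part of the original answer: the registered result from raw also exposes stdout_lines, which splits the output into a list and is often easier to scan. You could append another task to the playbook above:

    - debug:
        var: date.stdout_lines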
Without writing an ansible-playbook, why is Ansible not able to ping locally?
Problem:
I have one EC2 instance whose IP is 52.15.160.250, and I have installed Ansible on it. Inside the inventory file (/etc/ansible/hosts) I have:
[localhost]
52.15.160.250
Then I ran visudo (screenshot of the configuration not shown).
I tried to ping the local host:
ansible -m ping all
or
ansible -m ping 52.15.160.250
I am getting the following error (screenshot of the error output not shown).
Try adding it like this:
[localhost]
52.15.160.250 ansible_connection=local
This way, Ansible will not attempt to connect over SSH; it will use the local connection instead.
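Alternatively (a small addition, not part of the original answer), you can leave the inventory line alone and force the local connection from the command line with the -c/--connection flag:
ansible -m ping all -c local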
Having hit a brick wall with troubleshooting why one shell script is hanging when I'm trying to run it via Ansible on the remote host, I've discovered that if I run it in an ssh session from the ansible host it executes successfully.
I now want to build that into a playbook as follows:
- name: Run script
local_action: shell ssh $TARGET "/home/ansibler/script.sh"
I just need to know how to access the $TARGET that this playbook is running on from the selected/limited inventory so I can concatenate it into that local_action.
Is there an easy way to access that?
Try with ansible_host:
- name: Run script
local_action: 'shell ssh {{ ansible_host }} "/home/ansibler/script.sh"'
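If you prefer delegate_to over local_action (they do the same thing here), a hedged variant that also falls back to inventory_hostname when ansible_host is not set in the inventory:

- name: Run script
  delegate_to: localhost
  shell: 'ssh {{ ansible_host | default(inventory_hostname) }} "/home/ansibler/script.sh"'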
I'm using Vagrant to create EC2 virtual machines and ansible to provision them. I'm using this guide, along with the ec2.py script for inventory.
I am currently provisioning one host with ansible, to which I've given a tag named Purpose (let's say the value is "Machine Purpose") so that I can do this in my ansible file (the ec2.py script provides this):
- hosts: tag_Purpose_Machine_Purpose
My problem is that if I want to add another server, and I want to provision that, I can't do that using vagrant provision server2, because that will run the ansible script, which will match the first host, too, and provision that one as well.
The reason I want to avoid that is that, even though the ansible instructions are mostly idempotent, not all of them are, so I will unnecessarily move some files etc. on node1, and more importantly, also restart the service already running there.
Is there a way to make ansible only provision the servers I specify on the command line?
You can limit the Ansible play with the parameter --limit. It's not very well documented but you can feed it group names as well as host names.
ansible-playbook ... --limit hostA
Multiple hostnames separated by commas are also possible:
ansible-playbook ... --limit hostA,hostB,hostC
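Group names work the same way, so you could limit the run to the EC2 tag group from the question:
ansible-playbook ... --limit tag_Purpose_Machine_Purpose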
You can set it in the Vagrantfile:
v.vm.provision "ansible" do |ansible|
  ansible.limit = 'all' # Change this
end
And you can load it from the command line via an environment variable:
v.vm.provision "ansible" do |ansible|
  ansible.limit = (ENV['ANSIBLE_LIMIT'] || 'all')
end
With
ANSIBLE_LIMIT='x' vagrant provision
I'm new to Ansible. I'm trying to start a process on a remote host using a very simple Ansible Playbook.
Here is what my playbook looks like:
-
  hosts: somehost
  gather_facts: no
  user: ubuntu
  tasks:
    - name: change directory and run jetty server
      shell: cd /home/ubuntu/code; nohup ./run.sh
      async: 45
run.sh calls a java server process with a few parameters.
My understanding was that using async my process on the remote machine would continue to run even after the playbook has completed (which should happen after around 45 seconds.)
However, as soon as my playbook exits, the process started by run.sh on the remote host terminates as well.
Can anyone explain what's going on and what I am missing here?
Thanks.
I have an Ansible playbook to deploy my Play application, and I use the shell's command substitution to achieve this; it does the trick for me. I think this is because command substitution spawns a new sub-shell instance to execute the command.
-
  hosts: somehost
  gather_facts: no
  user: ubuntu
  tasks:
    - name: change directory and run jetty server
      shell: dummy=$(nohup ./run.sh &) chdir=/home/ubuntu/code
Give async a longer time, say six months or a year or even more, and add poll: 0 so the playbook does not sit and wait for the task; that should be fine.
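A minimal sketch of that async/poll: 0 variant, reusing the paths from the question; the timeout value is just an arbitrarily large number (roughly six months), and with async the nohup trick should not be needed because Ansible detaches the job itself:

    - name: run the jetty server and do not wait for it
      shell: ./run.sh
      args:
        chdir: /home/ubuntu/code
      async: 15552000
      poll: 0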
Or convert this process to an init script and use the service module.
I'd concur. Since it's long-running, I'd call it a service and run it like so: create an init.d script, push that out with copy, then start the service.
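A minimal sketch of that service-based approach; the init script name, its local source path, and the become/sudo details are assumptions rather than anything from the answers above:

- hosts: somehost
  gather_facts: no
  user: ubuntu
  tasks:
    - name: push the init script for the jetty server
      copy:
        src: files/jetty-init.sh
        dest: /etc/init.d/jetty
        mode: '0755'
      become: yes

    - name: make sure the service is running
      service:
        name: jetty
        state: started
      become: yes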