Ansible: execute command locally and then on remote server

I am trying to start a server using the Ansible shell module with ipmitool, and then make configuration changes on that server once it's up.
The server with Ansible installed also has ipmitool.
On the server running Ansible I need to execute ipmitool to power on the target server, and then run playbooks against it.
Is there a way to execute local IPMI commands on the server running Ansible to start the target server, and then execute all playbooks over SSH on the target server?

You can run any command locally by providing the delegate_to parameter.
- shell: ipmitool ...
  delegate_to: localhost
If ansible complains about connecting to localhost via ssh, you need to add an entry in your inventory like this:
localhost ansible_connection=local
or in host_vars/localhost:
ansible_connection: local
See behavioral parameters.
Next, you're going to need to wait until the server is booted and accessible through SSH. Here is an article from Ansible covering this topic, and this is the task they have listed:
- name: Wait for Server to Restart
  local_action:
    wait_for
    host={{ inventory_hostname }}
    port=22
    delay=15
    timeout=300
  sudo: false
If that doesn't work (since it is an older article and I think I previously had issues with this solution) you can look into the answers of this SO question.
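Putting the pieces together, a minimal sketch of the whole flow might look like the play below. The ipmitool arguments, the ipmi_* variables, and the target-server inventory name are illustrative assumptions rather than anything from the question, and fact gathering is disabled because the target is still powered off when the play starts:

- hosts: target-server
  gather_facts: false      # the target is down until the IPMI power-on succeeds
  tasks:
    - name: Power on the target server over IPMI (runs on the Ansible host)
      shell: ipmitool -I lanplus -H {{ ipmi_host }} -U {{ ipmi_user }} -P {{ ipmi_password }} chassis power on
      delegate_to: localhost

    - name: Wait until the target answers on SSH
      wait_for:
        host: "{{ inventory_hostname }}"
        port: 22
        delay: 15
        timeout: 300
      delegate_to: localhost

    - name: From here on, tasks run over SSH on the target as usual
      ping: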

Related

Is there a way to set up an SSH connection (passwordless login) between host A and host B while running a playbook from host C, using Ansible only?

I am trying to set up passwordless login (copy id_rsa.pub from server A to server B) from server A to server B while running a playbook from controller machine C. The playbook cannot have an inventory file; the host IP will be passed to it on the command line as:
ansible-playbook -i '<host IP>,' test.yml
Server A's DNS name or IP address will be hardcoded in my playbook.
I have tried:
Using the fetch module to pull the SSH key (id_rsa_serverA.pub) from server A to controller C, and then the copy module to push the key (id_rsa_ServerA) to server B (a rough sketch of this appears below). While it did the work, it does not adhere to the guidelines of the project I am working on.
The synchronize module with Ansible 2.5. It fails.
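For reference, that first attempt could look roughly like the plays below. The user name, key path, and temporary file on the controller are illustrative assumptions, and authorized_key is used instead of a raw copy so the key actually lands in authorized_keys:

- hosts: serverA
  tasks:
    - name: Pull server A's public key down to the controller
      fetch:
        src: /home/deploy/.ssh/id_rsa.pub
        dest: /tmp/id_rsa_serverA.pub
        flat: yes

- hosts: serverB
  tasks:
    - name: Add server A's public key to server B's authorized_keys
      authorized_key:
        user: deploy
        key: "{{ lookup('file', '/tmp/id_rsa_serverA.pub') }}"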
I did a similar thing:
I use the user module on server A with the option generate_ssh_key: yes and register: user_pubkey.
Then I use the authorized_key module with delegate_to: serverB, setting the key to "{{ user_pubkey.ssh_public_key }}" for the needed user.
You can pass the IP of server B as an extra var at launch time: ansible-playbook ... -e serverB=<serverB IP>
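A minimal sketch of that approach, assuming a user named deploy and serverB passed in as an extra var (both names are illustrative):

- hosts: serverA
  tasks:
    - name: Ensure the user exists on server A and generate an SSH key pair for it
      user:
        name: deploy
        generate_ssh_key: yes
      register: user_pubkey

    - name: Authorize server A's new public key on server B
      authorized_key:
        user: deploy
        key: "{{ user_pubkey.ssh_public_key }}"
      delegate_to: "{{ serverB }}"

Note that delegate_to works with a host that is not in the play's host list, but Ansible still needs to be able to connect to it, either with default SSH settings or with connection variables you supply.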
hope this helps
cheers

Is it possible to call ansible or ansible-playbook directly on a target host using a script or ansible itself?

I need to know if it's possible to call / execute Ansible playbooks from the target machine. I think I saw a vendor do it, or at least something similar: they downloaded a script and it ran the playbook.
If this is possible, how would it be done?
My goal is to run Ansible as a centralized server in AWS to perform tasks in multiple environments. Most are behind firewalls; any recommendations/thoughts would be appreciated.
Sure. If your host installs Ansible on the target and feeds it all the playbooks, then you can run it as any other executable. Whether you should do that is another story, but technically there's no obstacle.
You can run ansible and ansible-playbook as you would any other binary on the target's $PATH, so any tool that facilitates running remote commands would work.
Because you are in AWS, one way might be to use AWS Systems Manager.
If you wanted to use Ansible itself to do this you could use the shell or command modules:
- hosts: target
  become: false
  gather_facts: false
  tasks:
    - name: ansible in ansible
      command: ansible --version

    - name: ansible-playbook in ansible
      command: ansible-playbook --version
Though, as with any situation where you reach for the shell or command modules, you have to be vigilant to maintain playbook idempotency yourself.
If your requirement is just the ability to execute Ansible commands remotely, you might look into AWX, which is the upstream project for Red Hat's Ansible Tower. It wraps Ansible in a nice user interface that lets you trigger playbooks remotely, with nice-to-haves like RBAC.
If you're OK with executing tasks remotely over SSH, take a look at Sparrowdo; it has out-of-the-box facilities to run bash scripts (read: the ansible executable) remotely from one master host to another. You can even use it to install all the Ansible dependencies, or whatever else you need for your scope.

Ansible command to trigger registration on another server

I can't find any documentation on how to include a secondary server in a playbook.
Say, for instance, I want to install sssd on SERVERA and register it with a FreeIPA server.
On the FreeIPA server (only), I need to:
get a Kerberos ticket (via kinit)
check if SERVERA is already in IPA instance
delete SERVERA from IPA if true
Since this is an installation playbook run against SERVERA, it doesn't seem right to include the IPA server in the hostlist...but nor can I see any "third party servers" module?
I presume you are searching for the delegate_to option, which allows you to delegate a task to a host that is not in the play's host list.
Often used to run things on localhost (the host running Ansible), it can also be used to push a task to any other host. That host has to be in the inventory file, though.
Example:
- name: Ping the other host
  ping:
  delegate_to: otherhost.com # This is where you set it
More info: http://docs.ansible.com/ansible/latest/user_guide/playbooks_delegation.html#delegation
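For the specific steps you list, a rough sketch of delegating them to the IPA server could look like this. The ipa.example.com inventory name, the admin principal, and the ipa_admin_password variable are assumptions for illustration, and ipa host-show / ipa host-del stand in for "check if enrolled" and "delete if present":

- hosts: SERVERA
  tasks:
    - name: Get a Kerberos ticket on the IPA server
      shell: echo '{{ ipa_admin_password }}' | kinit admin
      delegate_to: ipa.example.com
      no_log: true

    - name: Check whether SERVERA is already enrolled in IPA
      command: ipa host-show {{ inventory_hostname }}
      delegate_to: ipa.example.com
      register: ipa_host_check
      failed_when: false
      changed_when: false

    - name: Remove the stale host entry if it exists
      command: ipa host-del {{ inventory_hostname }}
      delegate_to: ipa.example.com
      when: ipa_host_check.rc == 0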

Ansible differentiate local and remote connection

We have lots of playbooks that we use to setup our remote instances. We would like to use those playbooks when bringing up our local environment for test purposes as well.
Is it possible to differentiate between a playbook running locally and remotely?
I'm looking for something like:
- name: install apache2
  apt: name=apache2 update_cache=yes state=latest
  when: ansible.connection_type == 'local'
Which means I only want to install apache when running ansible against my local environment.
I will then execute it with:
ansible-playbook -i /root/ansible-config/ec2.py -c local myplaybook.yml
Is it possible?
There is an ansible_connection variable for every host.
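A minimal sketch of how that could be used with the task from the question (the exact test is an assumption about what you want to key on):

- name: install apache2
  apt:
    name: apache2
    update_cache: yes
    state: latest
  when: ansible_connection == 'local'

Note that this relies on ansible_connection actually being defined for the host, for example via localhost ansible_connection=local in the inventory or a host_vars file; depending on the Ansible version, passing only -c local on the command line may not populate the variable.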

Ansible Delegate_to WinRM

I am using the Ansible vsphere_guest module to spin up a base Windows machine in a VMware environment. In my playbook I set hosts: 127.0.0.1 and connection: local to do this. The reason is that I'm not targeting this playbook at any particular host, as I don't have one yet; I instead want to run the playbook locally.
When this runs, I get a shiny new Windows Server VM. What I now want to do is rename that VM's computer name. To do this I am trying to upload and run a PowerShell script, like rename_host.ps1 $newHostname. As I understand it, I need to use the script module for this. However, this time I want to target my brand new VM, whose IP address I get through a fact, {{ newvm_ipaddress }}.
However, when I try to run this script with delegate_to: "{{ newvm_ipaddress }}", it tries to connect over SSH. SSH won't work; I'm targeting a Windows machine that needs remote PowerShell.
Is there any way to set the connection to use WinRM in the context of delegate_to? Or perhaps there is a better way of doing this?
Thank you for your help.
I managed to work out how to solve it. The answer is the Ansible add_host module. Below the vsphere_guest play I have the following task, which creates a new in-memory host that can then be targeted by a different play:
- add_host: group=new_machine name={{ vm_ipaddress }} ansible_connection=winrm
After this, I then have a new play that can target this host:
- hosts: new_machine
Also note that variables do not span across different hosts. The solution was to use the set_fact module in play A, which can then be accessed from within play B:
- set_fact:
    vm_ipaddress: "{{ hw_eth0.ipaddresses[1] }}" # hw_eth0 is the fact returned by the vsphere_guest module
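Pulling those pieces together, a rough sketch of the two plays might look like the following; the WinRM port, the credential variables, and the rename step are assumptions for illustration:

- hosts: 127.0.0.1
  connection: local
  tasks:
    # ... the vsphere_guest task that creates the VM goes here ...

    - name: Capture the new VM's IP address
      set_fact:
        vm_ipaddress: "{{ hw_eth0.ipaddresses[1] }}"

    - name: Register the new VM as an in-memory WinRM host
      add_host:
        name: "{{ vm_ipaddress }}"
        groups: new_machine
        ansible_connection: winrm
        ansible_port: 5986
        ansible_user: "{{ win_admin_user }}"
        ansible_password: "{{ win_admin_password }}"
        ansible_winrm_server_cert_validation: ignore

- hosts: new_machine
  gather_facts: false
  tasks:
    - name: Rename the computer with the uploaded PowerShell script
      script: rename_host.ps1 {{ new_hostname }}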
What about updating the inventory with the new host's name and its WinRM connection params before using delegate_to, or perhaps setting some default catch-all naming scheme with these params?
For example:
[databases]
db-[a:f].example.com:5986 ansible_user=Administrator ansible_connection=winrm ansible_winrm_server_cert_validation=ignore
