Is there any way to automatically add a host if it is not defined in /etc/ansible/hosts?
I have a playbook that creates a user on a remote host when triggered.
Command:
# ansible-playbook -e "username=test_account host=10.0.1.123 acc_exp=2021-06-28" register_user.yml
I tried to create a task to register the host but the play stopped prematurely before getting into the task.
tasks:
  - name: Register host IP address for the first time
    add_host:
      name: "{{ host }}"
      group: register_user_hosts
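For reference, here is a minimal sketch of the structure I am aiming for: the add_host task runs in a play targeting localhost, and a second play then targets the register_user_hosts group (the user task is illustrative and omits the expiry handling):
---
- name: Register the host passed in via extra vars
  hosts: localhost
  gather_facts: no
  tasks:
    - name: Register host IP address for the first time
      add_host:
        name: "{{ host }}"
        group: register_user_hosts

- name: Create the user on the newly registered host
  hosts: register_user_hosts
  tasks:
    - name: Create the user account   # illustrative; acc_exp handling omitted
      user:
        name: "{{ username }}"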
UPDATE:
By "prematurely" I mean the play ended right after the ansible-playbook command was executed.
Error:
PLAY [register_user.yml] *******************************************************
skipping: no hosts matched
I left the [register_user_hosts] group in the Ansible hosts file empty so that the playbook would append the host automatically.
The reason for this approach is that I will not need to register the host manually every time a new instance is provisioned. This saves time and allows fast deployment, since nobody has to log in to the Ansible controller just to append a new host.
Please advise.
Thank you.
Related
I am trying to set up passwordless login from server A to server B (copy id_rsa.pub from server A to server B) while running a playbook from controller machine C. The playbook:
cannot have an inventory file. The host IP will be passed from the command line to the playbook as:
ansible-playbook -i , test.yml
Server A DNS name or IP address will be hardcoded in my playbook.
I have tried:
Using the fetch module, I fetched the SSH key (id_rsa_serverA.pub) from server A to controller C and then used the copy module to copy that key to server B. While it did the job, it does not adhere to the guidelines of the project I am working on.
I also tried the synchronize module with Ansible 2.5; it fails.
I did a similar thing:
I use the user module on serverA with the option generate_ssh_key: yes and register the result as user_pubkey.
Then I use the authorized_key module with delegate_to: serverB, setting the key to the registered public key (user_pubkey.ssh_public_key rather than stdout) for the needed user, as sketched below.
You can pass the IP of serverB as an extra var at launch time: ansible-playbook ... -e serverB=<serverB_IP>
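Here is a rough sketch of that (serverA is an illustrative inventory name, deploy is an illustrative user; the public key comes back in the registered result's ssh_public_key field):
- hosts: serverA
  tasks:
    - name: Ensure the user exists and generate an SSH key pair
      user:
        name: deploy               # illustrative user name
        generate_ssh_key: yes
      register: user_pubkey

    - name: Authorize that key for the user on serverB
      authorized_key:
        user: deploy               # illustrative user name
        key: "{{ user_pubkey.ssh_public_key }}"
      delegate_to: "{{ serverB }}"  # passed at launch time with -e serverB=<IP>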
hope this helps
cheers
I am running Ansible Tower v3.4.1 with Ansible v2.7.6 on an Ubuntu 16.04 VM running in VirtualBox. I run a playbook that works when I run it from the command line using ansible-playbook, but it fails when I try to run it from Ansible Tower. I know I must have something misconfigured in Ansible Tower, but I can't find it.
I get this warning no matter what changes I make to the inventory (hosts) file.
$ ansible-playbook 2.7.6
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/var/lib/awx/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.12 (default, Nov 12 2018, 14:36:49) [GCC 5.4.0 20160609]
Using /etc/ansible/ansible.cfg as config file
SSH password:
**/tmp/awx_74_z6yJB4/tmpVlXGCX did not meet host_list requirements**, check plugin documentation if this is unexpected
Parsed /tmp/awx_74_z6yJB4/tmpVlXGCX inventory source with script plugin
PLAYBOOK: addpool.yaml *********************************************************
1 plays in addpool.yaml
[WARNING]: **Could not match supplied host pattern, ignoring: bigip**
PLAY [Sample pool playbook] ****************************************************
17:05:43
skipping: no hosts matched
I have enabled inventory plugins for YAML, and made my hosts file into a hosts.yml file.
Here's my hosts file:
192.168.68.253
192.168.68.254
192.168.1.165
[centos]
dad2 ansible_ssh_host=192.168.1.165
[bigip]
bigip1 ansible_host=192.168.68.254
bigip2 ansible_host=192.168.68.253
Here's my playbook:
---
- name: Sample pool playbook
  hosts: bigip
  connection: local

  tasks:
    - name: create web servers pool
      bigip_pool:
        name: web-servers2
        lb_method: ratio-member
        password: admin
        user: admin
        server: '{{ inventory_hostname }}'
        validate_certs: no
I replaced hosts: bigip with hosts: all and specified the inventory in Tower as bigip which contains only the two hosts I want to change. This seems to provide the output I am looking for.
For the ansible-playbook command line, I added --limit bigip and this seems to provide the output I am looking for.
So things appear to be working, I just don't know whether this is best practice use.
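For reference, the command-line version looks something like this (the inventory path is illustrative):
ansible-playbook -i hosts.yml addpool.yaml --limit bigip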
If you get the error below while running a playbook with the command
ansible-playbook -i test-project/inventory.txt playbook.yml
{"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host 172.31.24.10 port 22: Connection timed out", "unreachable": true}
The solution is to add, in the file ansible.cfg:
[defaults]
inventory=/etc/ansible/hosts
I think you need to remove the connection: local.
You have specified in hosts: bigip that you want these tasks to run only on hosts in the bigip group. You then specify connection: local, which causes the tasks to run on the controller node (i.e. localhost) rather than on the nodes in the bigip group. Localhost is not a member of the bigip group, so none of the tasks in the play will trigger.
Check for special characters in the absolute path of the hosts file or playbook. In case you copied the path directly from PuTTY, try copying and pasting it from Notepad or another editor.
For me the issue was the format of the /etc/ansible/hosts file. You should use the :children suffix in order to use groups of groups like this:
[dev1]
dev_1 ansible_ssh_host=192.168.1.55 ansible_connection=ssh ansible_ssh_user={{username}} ansible_ssh_pass={{password}}
[dev2]
dev_2 ansible_ssh_host=192.168.1.68 ansible_connection=ssh ansible_ssh_user={{username}} ansible_ssh_pass={{password}}
[devs:children]
dev1
dev2
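With that in place, a play can target the parent group directly (a minimal illustrative play):
- hosts: devs
  tasks:
    - ping: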
Reference: here
I am using the Ansible vsphere_guest module to spin up a base Windows machine in a VMware environment. To do this, in my playbook I set hosts: 127.0.0.1 and connection: local. The reason I am doing this is that I believe I am not targeting this playbook at any particular host, as I don't have one yet; I instead want to run the playbook locally.
When this runs, I get a shiny new Windows Server VM. What I now want to do is rename that VM's computer name. To do this I am trying to upload and run a PowerShell script, like so: rename_host.ps1 $newHostname. As I understand it, I need to use the script module for this. However, this time I want to target my brand new VM, whose IP address I get through a fact, {{ newvm_ipaddress }}.
However, when I try to run this script with delegate_to: "{{ newvm_ipaddress }}", it tries to connect over SSH. SSH won't work; I am targeting a Windows machine that needs remote PowerShell.
Is there any way to set the connection to use WinRM in the context of delegate_to? Or perhaps there is a better way of doing this?
Thank you for your help
I managed to work out how to solve it. The answer is the Ansible add_host module. I have a task after the vsphere_guest task as follows. This creates a new in-memory host, which can then be targeted by a different play.
- add_host group=new_machine name={{ vm_ipaddress }} ansible_connection=winrm
After this, I have a new play that can target this host.
- hosts: new_machine
Also note that variables do not span across different hosts. The solution was to use the set_fact module in play A; the fact can then be accessed from within play B:
- set_fact:
    vm_ipaddress: "{{ hw_eth0.ipaddresses[1] }}"  # hw_eth0 is the fact returned by the vsphere_guest module
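Putting it together, the overall flow looks roughly like this (a sketch: the vsphere_guest parameters are trimmed, and the WinRM credential variables you would normally also set via add_host are omitted):
---
- hosts: 127.0.0.1
  connection: local
  tasks:
    # ... vsphere_guest task that creates the VM and returns the hw_eth0 fact ...

    - set_fact:
        vm_ipaddress: "{{ hw_eth0.ipaddresses[1] }}"

    - add_host:
        name: "{{ vm_ipaddress }}"
        group: new_machine
        ansible_connection: winrm

- hosts: new_machine
  tasks:
    - name: Rename the computer
      script: rename_host.ps1 {{ new_hostname }}   # new_hostname is an illustrative variable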
What about updating the inventory with the new host's name and its WinRM connection parameters before using delegate_to, or perhaps setting up some default catch-all naming scheme with these parameters?
For example:
[databases]
db-[a:f].example.com:5986 ansible_user=Administrator ansible_connection=winrm ansible_winrm_server_cert_validation=ignore
I have two Ansible playbooks that run one after another. After playbook 1 finishes, I want to run the second one only on the hosts for which the first playbook fully succeeded. Looking through the Ansible docs, I can't find any accessible info on which hosts failed a specific playbook. How can this be done?
FYI, I need separate playbooks because the second one must be run in serial, which is only available at the playbook level.
Since failed hosts = all hosts - successful hosts, you can use the following task to take the difference between two special variables: ansible_play_hosts_all (all hosts in the play, including failed ones) and ansible_play_batch (the hosts that have not yet failed). Note that use of serial will affect the result.
- name: Print play hosts that failed
  debug:
    msg: "The hosts that failed are {{ ansible_play_hosts_all | difference(ansible_play_batch) | join('\n') }}"
Source: https://docs.ansible.com/ansible/latest/reference_appendices/special_variables.html
Honestly, the best way is to have some queryable state on each host. A simple method is to check for the existence of a file that is created once your first playbook succeeds. You can then have a task that checks for that state and notifies a handler when it changes, which will get you what you want; a rough sketch of this marker-file idea follows below.
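One simple realization uses a marker file checked with stat and a when condition instead of a handler (paths and names are arbitrary).
In playbook 1, as the final task:
    - name: Drop a marker file once everything above has succeeded
      file:
        path: /var/tmp/playbook1_succeeded
        state: touch

In playbook 2, gate the real work on that marker:
    - name: Check for the marker left by playbook 1
      stat:
        path: /var/tmp/playbook1_succeeded
      register: marker

    - name: Only act on hosts where playbook 1 succeeded
      debug:
        msg: "playbook 1 succeeded here"
      when: marker.stat.exists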
As an aside, I stopped using Ansible because it wasn't configurable enough, and I also had issues getting the parallelism controls I wanted. You may try hitting up the Ansible Project Google Group to put in a feature suggestion or describe your use case.
There is a difference between a play and a playbook. The serial keyword is available at the play level, and a playbook may contain multiple plays.
---
- name: Play 1
  hosts: some_hosts
  tasks:
    - debug:

- name: Play 2
  hosts: some_hosts
  serial: 1
  tasks:
    - debug:
...
Hosts which failed in play 1 will not be processed in play 2.
If you really want to have separate playbooks, I see two options:
Create a callback plugin. You can register a function that gets fired when a task or host fails, store this info locally, and use it in the next playbook run.
Activate Ansible logging. It will log pretty much the same stuff you see as raw output when running ansible-playbook.
The second option is a bit ugly, but easier than creating a callback plugin.
In both cases you then need to create a dynamic inventory script that checks the previously saved data and returns only the valid hosts. Alternatively, return all hosts but set a property to mark the failed ones; you can then use group_by to create an ad-hoc group, as sketched below.
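A sketch of the group_by variant, assuming the inventory script exposes a host variable such as previous_run_ok:
- hosts: all
  gather_facts: no
  tasks:
    - name: Build ad-hoc groups from the marker variable
      group_by:
        key: "prev_{{ 'ok' if (previous_run_ok | default(false)) else 'failed' }}"

- hosts: prev_ok
  serial: 1
  tasks:
    - debug:
        msg: "This host succeeded in the previous run"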
The Question
When using the rax module to spin up servers and get inventory, how do I tell Ansible to connect to the IP address on an isolated network rather than the server's public IP?
Note: Ansible is being run from a server on the same isolated network.
The Problem
I spin up a server in the Rackspace Cloud using Ansible with the rax module, and I add it to an isolated/private network. I then add it to inventory and begin configuring it. The first thing I do is lock down SSH, in part by telling it to bind only to the IP address given to the host on the isolated network. The catch is, that means ansible can't connect over the public IP address, so I also set ansible_ssh_host to the private IP. (This happens when I add the host to inventory.)
- name: Add servers to group
  local_action:
    module: add_host
    hostname: "{{ item.name }}"
    ansible_ssh_host: "{{ item.rax.addresses.my_network_name[0].addr }}"
    groups: launched
  with_items: rax_response.success
  when: rax_response.action == 'create'
This works just fine on that first run of creating and configuring new instances. Unfortunately, the next time I try to connect to these servers, the connection is refused because Ansible is trying an IP address on which SSH isn't listening. This happens because:
Ansible tries to connect to ansible_ssh_host...
But the rax.py inventory script has set ansible_ssh_host to the accessIPv4 returned by Rackspace...
And Rackspace has set accessIPv4 to the public IP address of the server.
Now, I'm not sure what to do about this. Rackspace does allow an API call to update a server and set its accessIPv4, so I thought I could run another local_action after creating the server to do that. Unfortunately, the rax module doesn't appear to allow updating a server, and even if it did, it depends on pyrax, which in turn depends on novaclient, and novaclient only allows updating the server's name, not its accessIPv4.
Surely someone has done this before. What is the right way to tell Ansible to connect on the isolated network when getting dynamic inventory via the rax module?
You can manually edit the rax.py file and change line 125 and line 163 from:
hostvars['ansible_ssh_host'] = server.accessIPv4
to:
hostvars['ansible_ssh_host'] = server.addresses['private'][0]['addr']
This should make the value of ansible_ssh_host the private IP.
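To verify the change, you can run the inventory script directly and check the values it now reports (assuming rax.py is executable on the controller):
python rax.py --list | grep ansible_ssh_host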
My first thought on this is to treat it like you have a tunnel you need to set up.
When you use the rax module, it creates a group called "raxhosts". By default, these hosts are accessed using their public IPv4 addresses.
You could create another group from that group (via add_host), but specify the IP you actually want to access each host through.
- name: Redirect to raxprivhosts
  local_action:
    module: add_host
    hostname: "{{ item.name }}"
    ansible_ssh_host: "{{ item.rax_addresses['private'][0]['addr'] }}"
    ansible_ssh_pass: "{{ item.rax_adminpass }}"
    groupname: raxprivhosts
  with_items: raxhosts
Then apply playbooks against those groups as your follow-on actions.
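For example, a follow-on play can simply target the new group (a minimal sketch):
- hosts: raxprivhosts
  tasks:
    - name: Continue configuration over the private address
      ping: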
Let me know how this works out for you; I'm just throwing it out as an alternative to changing your rax.py manually.
You can set an environment variable in /etc/tower/settings.py to use the private network. It would either be
AWX_TASK_ENV['RAX_ACCESS_NETWORK'] = 'YOURNETWORK'
or
AWX_TASK_ENV['RAX_ACCESS_NETWORK'] = 'private'
You would then restart services afterwards with ansible-tower-service restart
You then need to refresh the inventory which you can do through the Tower interface. You'll now see that the host IP is set to the network you specified in the variable.
In the latest version of ansible you need to change:
hostvars['ansible_ssh_host'] = server.accessIPv4
to:
hostvars['ansible_ssh_host'] = server.addresses['private'][0]['addr']
AND
hostvars[server.name]['ansible_ssh_host'] = server.accessIPv4
to:
hostvars[server.name]['ansible_ssh_host'] = server.addresses['private'][0]['addr']