I have set up two servers for testing purposes, both behind a NAT network, so I configured port forwarding to the SSH port for each of them.
My inventory file looks like this:
[webservers]
example.com:12021
example.com:12121
[webservers:vars]
ansible_user=root
ansible_ssh_private_key_file=~/test/keys/id_ed25519
But Ansible only recognizes one of them (whichever is first in the list). My "hack" to run ansible-playbook commands on both is to change the order in the host list and run the playbook twice.
So, is there any way to identify the hosts by port number rather than by hostname?
Thanks in advance.
Use any label you like:
[webservers]
server1 ansible_host=example.com ansible_port=12021
server2 ansible_host=example.com ansible_port=12121
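With aliases like these, Ansible treats each line as a distinct host even though both point at the same hostname. To confirm both entries are reachable, a minimal play (just a sketch, assuming the inventory above) can ping them:

```yaml
- hosts: webservers
  gather_facts: false
  tasks:
    - name: Verify SSH connectivity to both aliases
      ping:
```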
I have a host under two group names (in the example below). If group_1 is targeted I want Ansible to connect to it via SSH, but if group_2 is targeted I want it to use a local connection. However, Ansible seems to be merging the two hosts' variables together even though they're in different groups: it's using a local connection for group_1 as well.
How can I prevent this?
[group_1]
example.com ansible_user=ansible ansible_ssh_private_key_file="{{ lookup('env','PATH_TO_KEYS') }}"/my.pem
[group_2]
example.com ansible_port=8081 ansible_connection=local
The inventory hostname can be arbitrary, but it is the key identifier for the host, so the variables are aggregated as described here: https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable
We can use an arbitrary alias combined with ansible_host to nudge the system into doing what you want: https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html#hosts-and-non-standard-ports
Also, note that ansible_connection=local executes code on your localhost without any connection or service daemon (SSH or otherwise), so ansible_port is not necessary.
[group_1]
example_ssh ansible_host=example.com ansible_user=ansible ansible_ssh_private_key_file="{{ lookup('env','PATH_TO_KEYS') }}"/my.pem
[group_2]
example_local_8081 ansible_host=example.com ansible_port=8081 ansible_connection=local
I have a Dynamic Inventory Script which outputs the following.
"NODE_A": {
"inventory_hostname": "10.0.2.6",
"inventory_lparname": "NODE_A"
}
The nodes are not resolvable via DNS or anything else, as this network is an isolated "management" LAN.
Until now I had a play running which modified the local /etc/hosts file to enable name resolution.
As the Ansible controller is going to move to a foreign machine, this is no longer possible.
So the big question is how to proceed. How do I instruct Ansible to connect to the IP address instead of the hostname? In other words, can I use "inventory_hostname" instead of "ansible_hostname" as the connection string, but keep the hostname displayed in the play recap?
The normal way you handle this is to set the inventory hostname to the "friendly" name, and then set ansible_host to the IP address. That is, if your inventory script reports a host named "host0", then when called with --host host0 it should produce:
{
    "inventory_hostname": "host0",
    "ansible_host": "10.0.2.6"
}
You will see the name host0 in playbook output, but Ansible will use the IP address for connections.
An option would be to extend the dynamic inventory script to add or replace nodes in an inventory directory. For example, in INI format:
NODE_A ansible_host=10.0.2.6
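As an illustration (a minimal sketch, not the poster's actual script), a dynamic inventory script that reports friendly names and puts the IP in ansible_host could look like this; the node data is hard-coded here for brevity, whereas a real script would query the management system:

```python
#!/usr/bin/env python3
"""Minimal dynamic inventory sketch: friendly names, IPs in ansible_host."""
import json
import sys

# Hypothetical node data; replace with a lookup against your real source.
NODES = {
    "NODE_A": {"ansible_host": "10.0.2.6"},
}


def main():
    if len(sys.argv) > 1 and sys.argv[1] == "--list":
        # Full inventory: group membership plus per-host vars under _meta.
        inventory = {
            "all": {"hosts": list(NODES)},
            "_meta": {"hostvars": NODES},
        }
        print(json.dumps(inventory))
    elif len(sys.argv) > 2 and sys.argv[1] == "--host":
        # Per-host vars for one inventory name.
        print(json.dumps(NODES.get(sys.argv[2], {})))
    else:
        print(json.dumps({}))


if __name__ == "__main__":
    main()
```

With this shape, the play recap shows NODE_A while the SSH connection goes to 10.0.2.6.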
I have an Ansible playbook that calls an API, with delegate_to: localhost.
I want to run my playbook "against" every host in the linux group.
I want the API call to do 2 things.
I want it to run once for every host in the Linux group
For every run, it should have a different host IP/hostname as the input
My linux group looks like:
[linux]
10.234.0.13
10.234.0.12
I run my playbook like this: ansible-playbook -i linux my_playbook
How do I make the inventory group an input?
When Ansible executes on a remote host, it collects some metadata about the host. This metadata is called Ansible facts, and there is a fact that holds the IP address:
ansible_default_ipv4.address
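Note that facts require connecting to the host and gathering them first; since the inventory entries here are already IP addresses, the play can simply use inventory_hostname. A sketch of what such a play could look like (the API URL and payload are placeholders, not a real endpoint): run against the linux group and delegate each call to localhost, passing the current host via inventory_hostname:

```yaml
- hosts: linux
  gather_facts: false
  tasks:
    - name: Call the API once per host in the linux group
      uri:
        url: "https://api.example.com/register"   # hypothetical endpoint
        method: POST
        body_format: json
        body:
          host: "{{ inventory_hostname }}"        # current host's IP from the inventory
      delegate_to: localhost
```

The task runs once per host in the group, each time with a different inventory_hostname, while the actual HTTP request is made from the controller.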
I have inventory file which has multiple users for a server, like below.
[TEST]
server1 ansible_user=user1
server1 ansible_user=user2
server1 ansible_user=user3
server1 ansible_user=user4
When I run playbook using this inventory, it only runs on "server1 ansible_user=user4", ignoring first 3 users. How can I run playbook on all 4 users?
With this inventory you have a single inventory entry, server1, and each new line overrides the ansible_user variable.
If you really want to make this happen (what is the use case?), use host aliasing:
[TEST]
s1_u1 ansible_host=server1 ansible_user=user1
s1_u2 ansible_host=server1 ansible_user=user2
s1_u3 ansible_host=server1 ansible_user=user3
s1_u4 ansible_host=server1 ansible_user=user4
But be prepared for possible concurrency issues, such as contention for the APT lock.
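If those concurrency issues do bite (several aliases racing on the same physical machine), one option is to serialize the play so only one alias runs at a time. A sketch, assuming a play over the TEST group:

```yaml
- hosts: TEST
  serial: 1   # finish the whole play for one alias before starting the next
  tasks:
    - name: Tasks that would otherwise race on the same machine go here
      debug:
        msg: "Running as {{ ansible_user }} on {{ ansible_host }}"
```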
I have a complex Ansible setup with several hosts in my group file. Something like this.
# hosts/groups
[local]
127.0.0.1
[server1]
server1.domain.com
[server2]
server2.domain.com
[group1:children]
local
server1
[group2:children]
local
server2
This way, I can run both groups against localhost:2222, which is my Vagrant box; however, they will both be executed. For testing I would very much prefer to choose which setting to test. I have experimented with --extra-vars arguments and conditionals, which is pretty ugly. Is there a way to use the extra_vars argument with the host configuration, using a command like ...
ansible-playbook playbook.yml -i hosts -l 127.0.0.1:2222 --extra-vars "vhost=server1.domain.com"
Or am I entirely wrong?
I don't think there's an easy way to do this by altering how you execute Ansible.
The best option I can think of involves potentially re-organizing your playbooks. If you create group1.yaml and group2.yaml, each containing the statements necessary for setting up group1 and group2, respectively, then you can run something like
[$]> ansible-playbook -l 127.0.0.1:2222 group1.yaml
to only run the group1 configuration against your development instance.
If you still want a convenient way to run all tasks, alter your playbook.yaml to import the other playbooks:
- import_playbook: group1.yaml
- import_playbook: group2.yaml