Ansible hostname as a variable in ansible_user

Need help with Ansible. In our company we use the following method to SSH to a server.
If the IP of the server is 172.16.1.8, the username would be "empname~id~serverIP", e.g. john~1234~172.16.1.8. So the following ssh command is used -
> ssh john~1234~172.16.1.8@172.16.1.8 -i key.pem
Basically, the username contains the hostname as a variable.
Now our inventory has just the IPs, under the group web.
> cat inventory.txt
[web]
172.16.1.8
172.16.x.y
172.16.y.z
My playbook YAML has ansible_user as follows.
> cat uptime.yml
- hosts: web
  vars:
    ansible_ssh_port: xxxx
    ansible_user: john~1234~{{inventory_hostname}}
  tasks:
    - name: Run uptime command
      shell: uptime
However, when I use the following ansible-playbook command, it gives an error for an incorrect username.
> ansible-playbook -v uptime.yml -i inventory.txt --private-key=key.pem
Please help me find the correct way to write an ansible_user in the playbook that has the hostname as a variable inside it.

You can define ansible_user in group_vars/web.yml
group_vars/web.yml:
---
ansible_ssh_port: xxxx
ansible_user: "john~1234~{{ inventory_hostname }}"

Using a group var helped -
ansible_ssh_port: xxxx
ansible_user: "john~1234~{{ inventory_hostname }}"
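If you prefer to keep everything in the inventory instead, the same templated value can be set as group variables there; a minimal sketch, assuming the INI inventory from the question:
[web]
172.16.1.8
172.16.x.y
172.16.y.z

[web:vars]
ansible_ssh_port=xxxx
ansible_user=john~1234~{{ inventory_hostname }}
Inventory variables are templated lazily, per host, so {{ inventory_hostname }} resolves to each server's own IP at connection time.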

Related

How to let Ansible run a command with host as argument locally

I installed servers on multiple hosts using an Ansible playbook. The hosts are defined in the inventory file:
[services]
host_ip1
host_ip2
...
Now I need to test if each host works properly, using the command:
service-cli -h <host_ip1> -p 6380 ping
...
How do I write that using Ansible? The command will certainly run locally, not on the remote hosts, so how do I pass host_ipX to Ansible?
I know how to use delegate_to: 127.0.0.1, but I do not know how to get the IPs from the inventory file using with_items. What keyword should I use? services?
Your use case is literally documented in the examples for the extract filter:
- debug:
    msg: service-cli -h {{ item }} -p 6380 ping
  with_items: '{{ groups["services"] | map("extract", hostvars, "ansible_host") }}'
  delegate_to: localhost
  # this is required if you want the task as part of a playbook
  # that contains "hosts: all"
  run_once: yes
or, as an alternative to the "run_once" you can isolate that as its own play:
- hosts: localhost
  gather_facts: no
  tasks:
    - debug:
        msg: service-cli -h {{ item }} -p 6380 ping
      with_items: '{{ groups["services"] | map("extract", hostvars, "ansible_host") }}'

- hosts: all
  ... # and now back to normal life
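If you want to actually run the check rather than just print the command, a minimal sketch swapping debug for command (this assumes service-cli is installed on the control node):
- command: service-cli -h {{ item }} -p 6380 ping
  with_items: '{{ groups["services"] | map("extract", hostvars, "ansible_host") }}'
  delegate_to: localhost
  run_once: yes
  changed_when: false  # a ping probe changes nothing, so don't report "changed"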

Ansible declare host from variable

So I'm running an Ansible playbook which creates a server (using Terraform) and saves the IP address of the server into a variable. I'd like to execute another task on the given IP address. How do I declare the new host?
I've tried:
- hosts: "{{ remotehost }}"
tasks:
- name: test
lineinfile:
path: /etc/environment
line: test1234
I run the playbook with: ansible-playbook variable.yaml --extra-vars='playbook=ip-address'
If you just want to execute a single task, you can use delegate_to.
For example:
tasks:
  - name: another host execute
    command: ls -ltr
    delegate_to: "{{ remotehost }}"
The server should already have a working SSH connection to the new hosts.
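If you need to run more than one task on the new machine, a sketch using add_host to register it in the in-memory inventory (server_ip is a placeholder for whatever variable your Terraform step saved the address into):
- hosts: localhost
  tasks:
    - add_host:
        name: "{{ server_ip }}"
        groups: provisioned

- hosts: provisioned
  tasks:
    - name: test
      lineinfile:
        path: /etc/environment
        line: test1234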

Can Ansible deploy public SSH key asking password only once?

I wonder how to copy my SSH public key to many hosts using Ansible.
First attempt:
ansible all -i inventory -m local_action -a "ssh-copy-id {{ inventory_hostname }}" --ask-pass
But I get the error: The module local_action was not found in configured module paths.
Second attempt using a playbook:
- hosts: all
  become: no
  tasks:
    - local_action: command ssh-copy-id {{ inventory_hostname }}
In the end, I resorted to entering my password for each managed host:
ansible all -i inventory --list-hosts | while read h ; do ssh-copy-id "$h" ; done
How can I enter the password only once while deploying my public SSH key to many hosts?
EDIT: I succeeded in copying my SSH public key to multiple remote hosts using the following playbook, based on Konstantin Suvorov's answer.
- hosts: all
  tasks:
    - authorized_key:
        key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
The user field should be mandatory according to the documentation, but it seems to work without it. Therefore the above generic playbook may be used for any user with this command line:
ansible-playbook -i inventory authorized_key.yml -u "$USER" -k
Why don't you use the authorized_key module?
- hosts: all
  tasks:
    - authorized_key:
        user: remote_user_name
        state: present
        key: "{{ lookup('file', '/local/path/.ssh/id_rsa.pub') }}"
and run the playbook with -u remote_user_name -k
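For example (a usage sketch, assuming the playbook is saved as authorized_key.yml as in the edit above):
ansible-playbook -i inventory authorized_key.yml -u remote_user_name -k
The -k option prompts for the SSH password a single time and reuses it for every host, which is exactly the "enter the password only once" behaviour asked for.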

Ansible: Provided hosts list is empty

I have the below playbook where the remote host is a user input; subsequently I am trying to gather facts about the remote host and copy them to a local file:
---
- hosts: localhost
  vars_prompt:
    - name: hostname
      prompt: "Enter Hostname"
  tasks:
    - name: Add hosts to known_hosts file
      add_host: name={{ hostname }} groups=new
    - name: Check if Host is reachable
      shell: ansible -m ping {{ hostname }}
    - name: Remove existing remote hosts
      shell: ssh-keygen -R {{ hostname }}
    - name: Setup passwordless SSH login
      shell: ssh-copy-id -i ~/.ssh/id_rsa user@{{ hostname }}
    - name: Display facts
      command: ansible {{ groups['new'] }} -m setup
      register: output
    - copy: content="{{ output }}" dest=/var/tmp/dir/Node_Health/temp
...
I get the below error in the temp file:
Node_Health]# cat temp
{"start": "2016-06-17 09:26:59.174155", "delta": "0:00:00.279268", "cmd": ["ansible", "[udl360x4675]", "-m", "setup"], "end": "2016-06-17 09:26:59.453423", "stderr": " [WARNING]: provided hosts list is empty, only localhost is available", "stdout": "", "stdout_lines": [], "changed": true, "rc": 0, "warnings":
I also tried the below playbook, which gives the same error:
---
- hosts: localhost
  vars_prompt:
    - name: hostname
      prompt: "Enter Hostname"
  tasks:
    - name: Add hosts to known_hosts file
      add_host: name={{ hostname }} groups=new
    - name: Check if Host is reachable
      shell: ansible -m ping {{ hostname }}
    - name: Remove existing remote hosts
      shell: ssh-keygen -R {{ hostname }}
    - name: Setup passwordless SSH login
      shell: ssh-copy-id -i ~/.ssh/id_rsa user@{{ hostname }}

- hosts: new
  tasks:
    - name: Display facts
      command: ansible {{ groups['new'] }} -m setup
      register: output
    - local_action: copy content="{{ output }}" dest=/var/tmp/dir/Node_Health/temp
...
Any help will be appreciated.
Ansible assumes that you have all your hosts in an inventory file somewhere.
add_host only adds your host to the currently running Ansible, and that doesn't propagate to the copy of Ansible you call.
You're going to have to either:
change the command to use an inline inventory list, like ansible all -i '{{ hostname }},' -m setup (note the trailing comma, which marks the argument as an inline host list rather than an inventory file)
or write out the hostname to a file, and use that as your inventory file
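A minimal sketch of the first option, applied to the question's "Display facts" task (the trailing comma after the hostname is what makes Ansible treat it as an inline host list):
- name: Display facts
  command: ansible all -i "{{ hostname }}," -m setup
  register: output
- copy: content="{{ output }}" dest=/var/tmp/dir/Node_Health/temp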
Put your hosts in a hosts.ini file, with the following syntax:
[nodes]
node_u1 ansible_user=root ansible_host=127.0.0.1
node_u2 ansible_user=root ansible_host=127.0.1.1
node_u3 ansible_user=root ansible_host=127.0.2.1
node_u4 ansible_user=root ansible_host=127.0.3.1
node_u5 ansible_user=root ansible_host=127.0.4.1
To run Ansible, use:
ansible-playbook -i hosts.ini <playbook.yml>
You can also save your hosts file as /etc/ansible/hosts to avoid passing the inventory as a parameter; Ansible looks there by default. Then just run:
ansible-playbook <playbook.yml>
For me it looks like Ansible doesn't know where the inventory file is located. I used the command:
ansible-playbook name.yml -i hostfile
The parent directory for the Ansible files should be located at $HOME.

Override hosts variable of Ansible playbook from the command line

This is a fragment of a playbook that I'm using (server.yml):
- name: Determine Remote User
  hosts: web
  gather_facts: false
  roles:
    - { role: remote-user, tags: [remote-user, always] }
My hosts file has different groups of servers, e.g.
[web]
x.x.x.x
[droplets]
x.x.x.x
Now I want to execute ansible-playbook -i hosts/<env> server.yml and override hosts: web from server.yml to run this playbook for [droplets].
Can I just override it as a one-off, without editing server.yml directly?
Thanks.
I don't think Ansible provides this feature, which it should. Here's something that you can do:
hosts: "{{ variable_host | default('web') }}"
and you can pass variable_host from either command-line or from a vars file, e.g.:
ansible-playbook server.yml --extra-vars "variable_host=newtarget(s)"
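Or from a vars file (a sketch; the @ prefix tells --extra-vars to read the variables from a YAML file, and overrides.yml is a placeholder name):
ansible-playbook server.yml --extra-vars "@overrides.yml"
where overrides.yml contains:
---
variable_host: droplets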
For anyone who might come looking for the solution:
Playbook
- hosts: '{{ host }}'
  tasks:
    - debug: msg="Host is {{ ansible_fqdn }}"
Inventory
[web]
x.x.x.x
[droplets]
x.x.x.x
Command: ansible-playbook deployment.yml -i hosts --extra-vars "host=droplets"
So you can specify the group name in the extra vars.
We use a simple fail task to force the user to specify the Ansible limit option, so that we don't execute on all hosts by default/accident.
The easiest way I found is this:
---
- name: Force limit
  # 'all' is okay here, because the fail task will force the user
  # to specify a limit on the command line, using -l or --limit
  hosts: 'all'
  tasks:
    - name: checking limit arg
      fail:
        msg: "you must use -l or --limit - when you really want to use all hosts, use -l 'all'"
      when: ansible_limit is not defined
      run_once: true
Now we must use the -l (= --limit option) when we run the playbook, e.g.
ansible-playbook playbook.yml -l www.example.com
Limit option docs:
Limit to one or more hosts. This is required when one wants to run a playbook against a host group, but only against one or more members of that group.
Limit to one host
ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit "host1"
Limit to multiple hosts
ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit "host1,host2"
Negated limit. NOTE: Single quotes MUST be used to prevent bash interpolation.
ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit 'all:!host1'
Limit to host group
ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit 'group1'
This is a bit late, but I think you could use the --limit or -l option to limit the pattern to more specific hosts. (version 2.3.2.0)
You could have
- hosts: all  # or a group
  tasks:
    - some_task
and then ansible-playbook playbook.yml -l some_more_strict_host_or_pattern
and use the --list-hosts flag to see on which hosts this configuration would be applied.
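For example, a dry check that only prints the matched hosts without running any task:
ansible-playbook playbook.yml -l some_more_strict_host_or_pattern --list-hosts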
Another solution is to use the special variable ansible_limit, which is the contents of the --limit CLI option for the current execution of Ansible.
- hosts: "{{ ansible_limit | default(omit) }}"
If the --limit option is omitted, then Ansible issues a warning, but does nothing since no host matched.
[WARNING]: Could not match supplied host pattern, ignoring: None
PLAY ****************************************************************
skipping: no hosts matched
I'm using another approach that doesn't need any inventory and works with this simple command:
ansible-playbook site.yml -e working_host=myhost
To perform that, you need a playbook with two plays:
the first play runs on localhost and adds a host (from the given variable) to a known group in the in-memory inventory
the second play runs on that known group
A working example (copy it and run it with the previous command):
- hosts: localhost
  connection: local
  tasks:
    - add_host:
        name: "{{ working_host }}"
        groups: working_group
      changed_when: false

- hosts: working_group
  gather_facts: false
  tasks:
    - debug:
        msg: "I'm on {{ ansible_host }}"
I'm using ansible 2.4.3 and 2.3.3
I changed mine to default to no host and added a check to catch it. That way the user or cron is forced to provide a single host or group etc. I like the logic from the comment by @wallydrag. The empty_group contains no hosts in the inventory.
- hosts: "{{ variable_host | default('empty_group') }}"
Then add the check in tasks:
tasks:
  - name: Fail script if required variable_host parameter is missing
    fail:
      msg: "You have to add the --extra-vars='variable_host='"
    when: (variable_host is not defined) or (variable_host == "")
Just came across this while googling for a solution. Actually, there is one in Ansible 2.5. You can specify your inventory file with --inventory, like this: ansible --inventory configs/hosts --list-hosts all
If you want to run a task that's associated with a host, but on a different host, you should try delegate_to.
In your case, you should delegate to localhost (the Ansible control machine) and call the ansible-playbook command from there.
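A minimal sketch of that idea (the playbook and inventory names are placeholders):
- name: Run a second playbook from the control machine
  command: ansible-playbook other-playbook.yml -i inventory
  delegate_to: localhost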
I am using Ansible 2.5 (2.5.3 exactly), and it seems that the vars file is loaded before the hosts param is evaluated. So you can set the host in a vars.yml file and just write hosts: "{{ host_name }}" in your playbook.
For example, in my playbook.yml:
---
- hosts: "{{ host_name }}"
  become: yes
  vars_files:
    - vars/project.yml
  tasks:
    ...
And inside vars/project.yml:
---
# general
host_name: your-fancy-host-name
Here's a cool solution I came up with to safely specify hosts via the --limit option. In this example, the play will end if the playbook was executed without any hosts specified via the --limit option.
This was tested on Ansible version 2.7.10
---
- name: Playbook will fail if hosts not specified via --limit option.
  # Hosts must be set via limit.
  hosts: "{{ play_hosts }}"
  connection: local
  gather_facts: false
  tasks:
    - set_fact:
        inventory_hosts: []
    - set_fact:
        inventory_hosts: "{{ inventory_hosts + [item] }}"
      with_items: "{{ hostvars.keys() | list }}"
    - meta: end_play
      when: "(play_hosts | length) == (inventory_hosts | length)"
    - debug:
        msg: "About to execute tasks/roles for {{ inventory_hostname }}"
This worked for me, as I am using Azure DevOps to deploy an application with CI/CD pipelines. I had to make the hosts entry in the YAML file more dynamic so that I can set its value in the release pipeline, for example:
--extra-vars "host=$(target_host)"
My Ansible playbook looks like this:
- name: Apply configuration to test nodes
  hosts: '{{ host }}'
