- debug:
    msg: "Setting up passwordless ssh"

- name: Create SSH key
  command: ssh-keygen -b 2048 -t rsa -f /root/.ssh/id_rsa -q -N ""
  args:
    creates: /root/.ssh/id_rsa

- name: Fetch the keyfile from the node to master
  fetch:
    src: "/root/.ssh/id_rsa.pub"
    dest: "buffer/{{ inventory_hostname }}-id_rsa.pub"
    flat: yes

- name: Copy the key add to authorized_keys using Ansible module
  authorized_key:
    user: root
    state: present
    key: "{{ lookup('file','buffer/{{item}}-id_rsa.pub') }}"
  when: "{{ item != inventory_hostname }}"
  with_items:
    - "{{ groups[group_names[0]] }}"
  register: ssh_setup_result

- name: Add host to known hosts list
  shell: ssh-keyscan -H "{{ inventory_hostname }}" >> ~/.ssh/known_hosts

- name: Add host to known hosts list
  shell: ssh-keyscan -H "{{ hostname_domain }}" >> ~/.ssh/known_hosts

- name: get key
  command: ssh-keyscan {{ inventory_hostname }}
  register: keys

- name: add key to known_hosts
  lineinfile:
    path: /root/.ssh/known_hosts
    line: "{{ keys.stdout }}"
    insertbefore: BOF
  when: "{{ item != inventory_hostname }}"
  with_items:
    - "{{ groups[group_names[0]] }}"
The SSH keys are not being copied across hosts: for example, keys from node1 are not being added to the known_hosts of node2, and vice versa. How do I resolve this?
Ansible version used:
ansible [core 2.11.11]
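One possible fix, sketched here as an assumption rather than taken from the post: the known_hosts tasks only ever scan inventory_hostname, so each node never learns its peers' keys. Scanning every other host in the group instead, mirroring the question's own loop, would look roughly like:

- name: Add every other host in the group to known_hosts
  shell: ssh-keyscan -H "{{ item }}" >> /root/.ssh/known_hosts
  when: item != inventory_hostname
  with_items:
    - "{{ groups[group_names[0]] }}"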
Related
I am trying to write a playbook to setup mysql master-slave replication with multiple slave servers. For each slave server, I need access to a variable called next_id that should be incremented before use for each host. For example, for the first slave server, next_id should be 2 and for the second slave server it should be 3 and so on. What is the way to achieve this in Ansible?
This is the yaml file I use to run my role.
- name: Setup the environment
  hosts: master, slave_hosts
  serial: 1
  roles:
    - setup-master-slave
  vars:
    master_ip_address: "188.66.192.11"
    slave_ip_list:
      - "188.66.192.17"
      - "188.66.192.22"
This is the yaml file where I use the variable.
- name: Replace conf file with template
  template:
    src: masterslave.j2
    dest: /etc/mysql/mariadb.conf.d/50-server.cnf
  when: inventory_hostname != 'master'
  vars:
    - ip_address: "{{ master_ip_address }}"
    - server_id: "{{ next_id }}"
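As an aside, a hedged alternative sketch that avoids shared state altogether: derive server_id from the host's position in the play, via the ansible_play_hosts_all magic variable. This assumes the master is the first host in the play order, so the slaves get 2, 3, and so on:

- name: Replace conf file with template
  template:
    src: masterslave.j2
    dest: /etc/mysql/mariadb.conf.d/50-server.cnf
  when: inventory_hostname != 'master'
  vars:
    ip_address: "{{ master_ip_address }}"
    server_id: "{{ ansible_play_hosts_all.index(inventory_hostname) + 1 }}"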
I can suggest an approach that will work, per your requirement, for any number of slave servers. It is not based on any particular module; it simply keeps a counter in a local file.
Here my master_ip_address is 10.x.x.x, and input is the starting value of next_id to be incremented for every slave server.
- hosts: master,slave1,slave2,slave3,slave4
  serial: 1
  gather_facts: no
  tasks:
    - shell: echo "{{ input }}" > yes.txt
      delegate_to: localhost
      when: inventory_hostname == '10.x.x.x'

    - shell: cat yes.txt
      register: var
      delegate_to: localhost
      when: inventory_hostname != '10.x.x.x'

    - shell: echo "$(({{ var.stdout }}+1))"
      register: next_id
      delegate_to: localhost
      when: inventory_hostname != '10.x.x.x'

    - shell: echo "{{ next_id.stdout }}" > yes.txt
      delegate_to: localhost
      when: inventory_hostname != '10.x.x.x'

    - name: Replace conf file with template
      template:
        src: masterslave.j2
        dest: 50-server.cnf
      when: inventory_hostname != '10.x.x.x'
      vars:
        - ip_address: "{{ master_ip_address }}"
        - server_id: "{{ next_id.stdout }}"
[ansibleserver@172 test_increment]$ cat masterslave.j2
- {{ ip_address }}
- {{ server_id }}
Now, if you run
ansible-playbook increment.yml -e 'master_ip_address=10.x.x.x input=1'
slave1 server
[root@slave1 ~]# cat 50-server.cnf
- 10.x.x.x
- 2
slave2 server
[root@slave2 ~]# cat 50-server.cnf
- 10.x.x.x
- 3
slave3 server
[root@slave3 ~]# cat 50-server.cnf
- 10.x.x.x
- 4
and so on
"serial" is available in a playbooks only and not working in roles
therefore, for fully automatic generation of server_id for MySQL in Ansible roles, you can use the following:
roles/my_role/defaults/main.yml
---
cluster_alias: test_cluster
mysql_server_id_config: "settings/mysql/{{ cluster_alias }}/server_id.ini"
roles/my_role/tasks/server-id.yml
---
- name: Ensures '{{ mysql_server_id_config | dirname }}' dir exists
  file:
    path: "{{ mysql_server_id_config | dirname }}"
    state: directory
    owner: root
    group: root
    mode: 0755
  delegate_to: localhost

- name: Ensure mysql server id config file exists
  copy:
    content: "0"
    dest: "{{ mysql_server_id_config }}"
    force: no
    owner: root
    mode: '0755'
  delegate_to: localhost

- name: server-id
  include_tasks: server-id-tasks.yml
  when: inventory_hostname == item
  with_items: "{{ ansible_play_hosts }}"
  tags:
    - server-id
roles/my_role/tasks/server-id-tasks.yml
# tasks file
---
- name: get last server id
  shell: >
    cat "{{ mysql_server_id_config }}"
  register: _last_mysql_server_id
  check_mode: no
  delegate_to: localhost
  tags:
    - server-id

- name: get new server id
  shell: >
    echo "$(({{ _last_mysql_server_id.stdout }}+1))"
  register: _new_mysql_server_id
  check_mode: no
  delegate_to: localhost
  tags:
    - server-id

- name: save new server id
  shell: >
    echo -n "{{ _new_mysql_server_id.stdout }}" > "{{ mysql_server_id_config }}"
  check_mode: no
  delegate_to: localhost
  tags:
    - server-id

- debug:
    var: _new_mysql_server_id.stdout
  tags:
    - server-id

- name: Replace conf file with template
  template:
    src: server-id.j2
    dest: server-id.cnf
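The server-id.j2 template itself is not shown in the answer; a hypothetical minimal version (my assumption, not the author's file) could be:

# hypothetical server-id.j2
[mysqld]
server_id = {{ _new_mysql_server_id.stdout }}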
I have a playbook that checks hard drive space on a group of servers, sends out an email, and it attaches a file containing the output contents. Is there a way to place the contents of the file in the body itself? I would like the file contents to be able to be seen at a glance in the email. The content would be the registered variable, result.
The current tasks:
---
- name: Check for output file
  stat:
    path: /tmp/ansible_output.txt
  register: stat_result
  delegate_to: localhost

- name: Create file if it does not exist
  file:
    path: /tmp/ansible_output.txt
    state: touch
    mode: '0666'
  when: stat_result.stat.exists == False
  delegate_to: localhost

- name: Check hard drive info
  become: yes
  become_user: root
  shell: cat /etc/hostname; echo; df -h | egrep 'Filesystem|/dev/sd'
  register: result

- debug: var=result.stdout_lines

- local_action: lineinfile line={{ result.stdout_lines | to_nice_json }} dest=/tmp/ansible_output.txt

- name: Email Result
  mail:
    host: some_email_host
    port: some_port_number
    username: my_username
    password: my_password
    to:
      - first_email_address
      - second_email_address
      - third_email_address
    from: some_email_address
    subject: My Subject
    body: <syntax for file contents in body here> <--- What is the syntax?
    attach:
      /tmp/ansible_output.txt
  run_once: true
  delegate_to: localhost

- name: Remove Output File
  file:
    path: /tmp/ansible_output.txt
    state: absent
  run_once: true
  delegate_to: localhost
Edit: I tried
body: "{{ result.stdout_lines | to_nice_json }}"
but it only sends me the output of the first host in the group.
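That is expected: result is registered separately on each host, and the run_once mail task only sees the copy belonging to the single host it executes for. A sketch of one way around it, assuming every host in the play registered result, is to aggregate across hosts with hostvars in the body:

body: |
  {% for host in ansible_play_hosts %}
  ===== {{ host }} =====
  {{ hostvars[host].result.stdout }}
  {% endfor %}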
Ok, I figured it out. I created a files directory and sent the output to a file in that directory using the {{ role_path }} variable. In the body portion of the email task, I used the lookup plugin to grab the contents of the file.
Here is the updated playbook with the original lines commented out:
---
- name: Check for output file
  stat:
    #path: /tmp/ansible_output.txt
    path: "{{ role_path }}/files/ansible_output.txt"
  register: stat_result
  delegate_to: localhost

- name: Create file if it does not exist
  file:
    #path: /tmp/ansible_output.txt
    path: "{{ role_path }}/files/ansible_output.txt"
    state: touch
    mode: '0666'
  when: stat_result.stat.exists == False
  delegate_to: localhost

- name: Check hard drive info
  become: yes
  become_user: root
  shell: cat /etc/hostname; echo; df -h | egrep 'Filesystem|/dev/sd'
  register: result

- debug: var=result.stdout_lines

#- local_action: lineinfile line={{ result.stdout_lines | to_nice_json }} dest=/tmp/ansible_output.txt
- local_action: lineinfile line={{ result.stdout_lines | to_nice_json }} dest="{{ role_path }}/files/ansible_output.txt"

- name: Email Result
  mail:
    host: some_email_host
    port: some_port_number
    username: my_username
    password: my_password
    to:
      - first_email
      - second_email
      - third_email
    from: some_email_address
    subject: Ansible Disk Space Check Result
    #body: "{{ result.stdout_lines | to_nice_json }}"
    body: "{{ lookup('file', role_path + '/files/ansible_output.txt') }}"
    #attach:
    #/tmp/ansible_output.txt
    attach:
      "{{ role_path }}/files/ansible_output.txt"
  run_once: true
  delegate_to: localhost

- name: Remove Output File
  file:
    #path: /tmp/ansible_output.txt
    path: "{{ role_path }}/files/ansible_output.txt"
    state: absent
  run_once: true
  delegate_to: localhost
Now, my email contains the attachment, as well as the contents in the body, and I didn't have to change much in the playbook. :-)
I'm trying to build an inventory file from an Ansible playbook run.
I want to list all the KVM hosts and the guests running on them, by running service libvirtd status and, if successful, virsh list --all, and storing the values in a file on the Ansible host.
I've tried a few different playbook structures, but none have succeeded in writing the file (using local_action wrote the ansible_hostname from just one host).
Please can someone guide me on what I'm doing wrong?
This is what I'm running:
- name: Determine KVM hosts
  hosts: all
  become: yes
  #gather_facts: false
  tasks:
    - name: Check if libvirtd service exists
      shell: "service libvirtd status"
      register: libvirtd_status
      failed_when: not(libvirtd_status.rc == 0)
      ignore_errors: true

    - name: List KVM guests
      shell: "virsh list --all"
      register: list_vms
      when: libvirtd_status.rc == 0
      ignore_errors: true

    - name: Write hostname to file
      lineinfile:
        path: /tmp/libvirtd_hosts
        line: "{{ ansible_hostname }} kvm guests: "
        create: true
      #local_action: copy content="{{ item.value }}" dest="/tmp/libvirtd_hosts"
      with_items:
        - variable: ansible_hostname
          value: "{{ ansible_hostname }}"
        - variable: list_vms
          value: "{{ list_vms }}"
      when: libvirtd_status.rc == 0 or list_vms.rc == 0
Was able to cobble something that's mostly working:
- name: Check if libvirtd service exists
  shell: "service libvirtd status"
  register: libvirtd_status
  failed_when: libvirtd_status.rc not in [0, 1]

- name: List KVM guests
  #shell: "virsh list --all"
  virt:
    command: list_vms
  register: all_vms
  when: libvirtd_status.rc == 0
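For reference, the virt module should return the guest names under a list_vms key, so the registered variable can be inspected with something like this (a sketch; check the return shape on your version):

- debug:
    var: all_vms.list_vms
  when: libvirtd_status.rc == 0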
---
- name: List all KVM hosts
  hosts: production, admin_hosts, kvm_hosts
  become: yes
  tasks:
    - name: create file
      file:
        dest: /tmp/libvirtd_hosts
        state: touch
      delegate_to: localhost

    - name: Copy VMs list
      include_tasks: run_libvirtd_commands.yaml

    - name: saving cumulative result
      lineinfile:
        line: '{{ ansible_hostname }} has {{ all_vms }}'
        dest: /tmp/libvirtd_hosts
        insertafter: EOF
      delegate_to: localhost
      when: groups["list_vms"] is defined and (groups["list_vms"] | length > 0)
Now if only I could clean up the output to filter out false positives (machines that have no libvirtd service, or an empty/no list of VMs), because the above doesn't really handle those.
But at least there is output from all the KVM hosts!
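One hedged sketch of that filtering: build the whole file in a single run_once task delegated to localhost, pulling each host's registered result out of hostvars and skipping hosts that never produced a guest list (this assumes the virt-based variant above registered all_vms on the KVM hosts):

- name: Write cumulative KVM inventory
  copy:
    dest: /tmp/libvirtd_hosts
    content: |
      {% for host in ansible_play_hosts %}
      {% if hostvars[host].all_vms is defined and hostvars[host].all_vms.list_vms is defined %}
      {{ host }} has {{ hostvars[host].all_vms.list_vms | join(', ') }}
      {% endif %}
      {% endfor %}
  run_once: true
  delegate_to: localhost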
I need to add a host from the user's input. Now I'm trying to use the ansible in-memory inventory, add_host module and prompt to add the target host to execute the remaining tasks. This is the content of my playbook:
Deploy.yml
- name: Adding the host server
  hosts: localhost
  - vars_prompt:
    - name: "Server IP"
      prompt: "Server"
      private: no
    - name: "Username (default: Ubuntu)"
      prompt: "User"
      default: "Ubuntu"
      private: no
    - name: "Password"
      prompt: "Passwd"
      private: yes
      encrypt: "sha512_crypt"
    - name: "Identity file path"
      prompt: "IdFile"
      private: no
      when: Passwd is undefined
  tasks:
    - name: Add host server
      add_host:
        name: "{{ Server }}"
        ansible_ssh_user: "{{ User }}"
        ansible_ssh_private_key_file: "{{ IdFile }}"
      when: IdFile is defined
    - name: Add host server
      add_host:
        name: "{{ Server }}"
        ansible_ssh_user: "{{ User }}"
        ansible_ssh_pass: "{{ Passwd }}"
      when: Passwd is defined

- hosts: "{{ Server }}"
  tasks:
    - name: Copy the script file to the server
      copy:
        src: script.sh
        dest: "{{ ansible_env.HOME }}/folder/"
        mode: 755
        force: yes
        attr:
          - +x
When I run this playbook with the command $ ansible-playbook Deploy.yml, the output is:
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Adding the host server] ***********************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************
ok: [localhost]
Server: <server-ip>
User [Ubuntu]:
Passwd:
IdFile: <path/to/id/file>
ERROR! the field 'hosts' is required but was not set
I don't know why it throws this error:
ERROR! the field 'hosts' is required but was not set
How can I do what I need to do?
UPDATE:
It's still not working. This is the content of my playbook:
Deploy.yml
- name: Adding the host server
  hosts: localhost
  vars_prompt:
    - name: "Server"
      prompt: "Server IP"
      private: no
    - name: "User"
      prompt: "Username"
      default: "Ubuntu"
      private: no
    - name: "Passwd"
      prompt: "Password"
      private: yes
      encrypt: "sha512_crypt"
    - name: "IdFile"
      prompt: "Identity file path"
      private: no
      when: Passwd is undefined
  tasks:
    - name: Add host server
      add_host:
        name: "{{ Server }}"
        ansible_ssh_user: "{{ User }}"
        ansible_ssh_private_key_file: "{{ IdFile }}"
      when: IdFile is defined
    - name: Add host server
      add_host:
        name: "{{ Server }}"
        ansible_ssh_user: "{{ User }}"
        ansible_ssh_pass: "{{ Passwd }}"
      when: IdFile is undefined

- hosts: "{{ Server }}"
  tasks:
    - name: Copy the script file to the server
      copy:
        src: script.sh
        dest: "{{ ansible_env.HOME }}/folder/"
        mode: 755
        force: yes
        attr:
          - +x
When I run this playbook with the command $ ansible-playbook Deploy.yml, the output is:
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Adding the host server] ***********************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************
ok: [localhost]
Server IP: <server-ip>
Username [Ubuntu]:
Password:
Identity file path: <path/to/id/file>
ERROR! the field 'hosts' is required but was not set
I don't know why it throws this error:
ERROR! The field 'hosts' has an invalid value, which includes an undefined variable. The error was: 'Server' is undefined
Here is a flowchart of how the playbook should work:
+------------------+     +----------------+     +-----------------+
|Use ansible to run|     |Get host IP from|     |Get ssh User from|
|  this playbook   +---->+  user's input  +---->+  user's input   |
+------------------+     +----------------+     +--------+--------+
                                                         |
                                                         v
                                              +----------+----------+
                                              |Get ssh password from|
                                              |    user's input     |
                                              +----------+----------+
                                                         |
                                                         v
                +---------------+             ***********+*************
                |Add a host with|    Yes      |  Did the user input   |
        +-------+   password    +<------------+      a password?      |
        |       +---------------+             ***********+*************
        v                                                | No
+----------------+                                       v
||Run some tasks||     +---------------+      +----------+----------+
||in recently   ||     |Add a host with|      |Get ssh identity file|
||added host    ||<----+ identity file +<-----+  from user's input  |
+----------------+     +---------------+      +---------------------+
Ok I've updated my answer to suit the changes in your question, with the original answer left for historic reasons.
To solve the substitution error you are seeing, which results in an empty host list in your second play, I would instead use an inventory group.
There are also two other syntax errors in the second play
The file mode needs to be octal (e.g. 0700)
The attribute is invalid. My assumption is you are trying to make the file executable, so fix the file mode and remove the attribute.
Here is an updated playbook:
- name: Adding the host server
  hosts: localhost
  vars_prompt:
    - name: "Server"
      prompt: "Server IP"
      private: no
    - name: "User"
      prompt: "Username"
      default: "Ubuntu"
      private: no
    - name: "Passwd"
      prompt: "Password"
      private: yes
      encrypt: "sha512_crypt"
    - name: "IdFile"
      prompt: "Identity file path"
      private: no
      when: Passwd is undefined
  tasks:
    - name: Add host server
      add_host:
        name: "{{ Server }}"
        ansible_ssh_user: "{{ User }}"
        ansible_ssh_private_key_file: "{{ IdFile }}"
        group: added_hosts
      when: IdFile is defined
    - name: Add host server
      add_host:
        name: "{{ Server }}"
        ansible_ssh_user: "{{ User }}"
        ansible_ssh_pass: "{{ Passwd }}"
        group: added_hosts
      when: IdFile is undefined

- hosts: added_hosts
  tasks:
    - name: Copy the script file to the server
      copy:
        src: script.sh
        dest: "{{ ansible_env.HOME }}/folder/"
        mode: 0755
        force: yes
=== OLD ANSWER ===
User input is stored in whatever variable you are using for the name attribute in each of the variable prompts.
You need to switch around your name and prompt values under vars_prompt.
There are also YAML formatting issues.
For example:
- vars_prompt:
  - name: "Server IP"
    prompt: "Server"
    private: no
should be:
vars_prompt:
  - name: "server"
    prompt: "Server IP"
    private: no
Then you can refer to the {{ server }} variable in your tasks
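For instance, a hypothetical task just to illustrate the reference:

- name: Add host server
  add_host:
    name: "{{ server }}"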
Your Ansible playbook has a problem:
vars_prompt:
Remove the - from the vars_prompt line and it will work properly. I tried the same playbook on my local server and it works fine.
- name: Adding the host server
  hosts: localhost
  vars_prompt:
    - name: "Server"
      prompt: "Server IP"
      private: no
    - name: "User"
      prompt: "Username"
      default: "Ubuntu"
      private: no
    - name: "Passwd"
      prompt: "Password"
      private: yes
      encrypt: "sha512_crypt"
    - name: "IdFile"
      prompt: "Identity file path"
      private: no
      when: Passwd is undefined
  tasks:
    - name: Add host server
      add_host:
        name: "{{ Server }}"
        ansible_ssh_user: "{{ User }}"
        ansible_ssh_private_key_file: "{{ IdFile }}"
      when: IdFile is defined
    - name: Add host server
      add_host:
        name: "{{ Server }}"
        ansible_ssh_user: "{{ User }}"
        ansible_ssh_pass: "{{ Passwd }}"
      when: Passwd is defined
    - name: Create a file
      shell: touch newfile
      delegate_to: "{{ Server }}"
Update the last task as follows and run it:
- name: Create a file
  shell: touch newfile
  delegate_to: "{{ Server }}"
I am trying to create a keypair in AWS using Ansible, but it throws the error below:
ERROR! with_dict expects a dict
Below is my keypair.yml
---
- hosts: localhost
  connection: local
  gather_facts: no
  vars:
    region: ap-southeast-1
    keyname: aws_ansible
  tasks:
    - name: create a key-pair
      local_action:
        module: ec2_key
        region: "{{ region }}"
        name: "{{ keyname }}"
        state: present
      register: mykey

    - name: write private key to a file
      local_action: shell echo -e "{{ item.value.private_key }}" > ~/.ssh/"{{ keyname }}".pem &&
        chmod 600 ~/.ssh/"{{ keyname }}".pem
      with_dict: mykey
      when: item.value.private_key is defined
I'm assuming you're using Ansible 2.x so try:
with_dict: "{{ mykey }}"
If that still doesn't work, update your question with the output of:
- name: create a key-pair
  ...

- debug:
    var: mykey

- name: write private key to a file
  ...
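As a hedged aside: on creation, ec2_key returns the private key under key.private_key, so the with_dict loop can be avoided entirely with something like the following sketch (verify the return shape against your Ansible version):

- name: write private key to a file
  copy:
    content: "{{ mykey.key.private_key }}"
    dest: "~/.ssh/{{ keyname }}.pem"
    mode: '0600'
  when: mykey.key.private_key is defined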