I'm using Ansible 2.9.10 and have this playbook that runs a PowerShell script on a VM through vCenter.
The playbook works for some machines, while others get stuck forever without any error.
All VMs are identical (created from the same Windows template).
---
- hosts: localhost
  connection: local
  vars_files:
    - vars.yml
  tasks:
    - name: "load VPN"
      vmware_vm_shell:
        cluster: "{{ cluster }}"
        datacenter: "{{ datacenter }}"
        hostname: "{{ vcenter_server }}"
        username: "{{ vcenter_user }}"
        password: "{{ vcenter_pass }}"
        folder: "{{ folder }}"
        vm_id: "{{ name }}"
        vm_username: "{{ vm_username }}"
        vm_password: "{{ vm_password }}"
        vm_shell: 'C:\Windows\System32\WindowsPowershell\v1.0\powershell.exe'
        vm_shell_args: '-ExecutionPolicy Bypass -File C:\SW\vpn.ps1'
        vm_shell_cwd: 'C:\MYDIR'
        wait_for_process: yes
        validate_certs: no
      delegate_to: localhost
      register: shell_command_output

    - debug:
        msg: "{{ shell_command_output }}"
If I run it with verbose output, I can see it gets stuck here:
TASK [load VPN] **********************************************************************************************************************
task path: /root/ansible/api/base/vm-playbooks/install_vpn.yml:8
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /tmp/.ansible/tmp/ansible-tmp-1615384930.8184772-37874402704912 `" && echo ansible-tmp-1615384930.8184772-37874402704912="` echo /tmp/.ansible/tmp/ansible-tmp-1615384930.8184772-37874402704912 `" ) && sleep 0'
Using module file /usr/local/lib/python3.6/dist-packages/ansible/modules/cloud/vmware/vmware_vm_shell.py
<localhost> PUT /tmp/.ansible/tmp/ansible-local-103875urmwcpf8/tmpsckzefh6 TO /tmp/.ansible/tmp/ansible-tmp-1615384930.8184772-37874402704912/AnsiballZ_vmware_vm_shell.py
<localhost> EXEC /bin/sh -c 'chmod u+x /tmp/.ansible/tmp/ansible-tmp-1615384930.8184772-37874402704912/ /tmp/.ansible/tmp/ansible-tmp-1615384930.8184772-37874402704912/AnsiballZ_vmware_vm_shell.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/env python3 /tmp/.ansible/tmp/ansible-tmp-1615384930.8184772-37874402704912/AnsiballZ_vmware_vm_shell.py && sleep 0'
How can I solve this?
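One workaround I'm considering (untested sketch; the 600-second timeout is an arbitrary value I picked) is wrapping the same task in async, so that a hung guest operation eventually fails with a timeout instead of blocking the play forever:

- name: "load VPN (bounded by a timeout)"
  vmware_vm_shell:
    cluster: "{{ cluster }}"
    datacenter: "{{ datacenter }}"
    hostname: "{{ vcenter_server }}"
    username: "{{ vcenter_user }}"
    password: "{{ vcenter_pass }}"
    folder: "{{ folder }}"
    vm_id: "{{ name }}"
    vm_username: "{{ vm_username }}"
    vm_password: "{{ vm_password }}"
    vm_shell: 'C:\Windows\System32\WindowsPowershell\v1.0\powershell.exe'
    vm_shell_args: '-ExecutionPolicy Bypass -File C:\SW\vpn.ps1'
    vm_shell_cwd: 'C:\MYDIR'
    wait_for_process: yes
    validate_certs: no
  delegate_to: localhost
  register: shell_command_output
  async: 600   # fail after 10 minutes instead of hanging forever
  poll: 10     # check the background job every 10 seconds

That would at least surface an error, but I'd still like to understand why the task hangs only on some VMs.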
Related
I have the following scenario:
The inventory file is laid out as below:
[my_host]
host
server
[host:children]
hostm
hosts
[hostm]
host01
[hosts]
host02
host03
The group_vars file(s) are as below:
host.yml
user: user01
ansible_python_interpreter: /usr/bin/python3
The host_vars file(s) are as below:
host01.yml
id: ABC
nr: 00
host02.yml
id: DEF
nr: 20
host03.yml
id: GHI
nr: 02
Now using the above, I'm trying to run a playbook as described below:
custom.yml
- hosts: "{{ v_host | default([]) }}"
  remote_user: root
  tasks:
    - name: Run the shell script
      become: true
      become_user: "{{ user }}"
      become_method: su
      become_exe: "su -"
      ansible.builtin.shell: cleanipc {{ item.nr }} remove
      with_items:
        - "{{ v_host }}"
      register: shell_result
      no_log: false
      changed_when: false

    - name: print message
      ansible.builtin.debug:
        var: shell_result.stdout_lines
To run the playbook, I use the below command:
>ansible-playbook -i /path-to-inventory-file/file custom.yml -e 'v_host=host'
I'm trying to get the playbook to run the shell command on all child nodes of 'host', i.e., 'host01', 'host02' and 'host03', with the value of the variable 'nr' automatically substituted for each host.
I tried changing the lookup of the variable using hostvars as below:
ansible.builtin.shell: cleanipc {{ hostvars[item]['nr'] }} remove
But this didn't work either. Thank you for any help or guidance you can provide.
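To make the intent concrete, this is roughly the shape I'm aiming for (untested sketch): each targeted host uses its own nr from host_vars directly, without looping over the group at all:

- hosts: "{{ v_host | default([]) }}"
  remote_user: root
  tasks:
    - name: Run the shell script with this host's own nr
      become: true
      become_user: "{{ user }}"
      become_method: su
      become_exe: "su -"
      ansible.builtin.shell: cleanipc {{ nr }} remove   # nr comes from each host's host_vars
      register: shell_result
      changed_when: false

    - name: print message
      ansible.builtin.debug:
        var: shell_result.stdout_lines

If that is the idiomatic way, I'd still like to understand why the hostvars lookup above fails.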
- debug:
    msg: "Setting up passwordless ssh"

- name: Create SSH key
  command: ssh-keygen -b 2048 -t rsa -f /root/.ssh/id_rsa -q -N ""
  args:
    creates: /root/.ssh/id_rsa

- name: Fetch the keyfile from the node to master
  fetch:
    src: "/root/.ssh/id_rsa.pub"
    dest: "buffer/{{ inventory_hostname }}-id_rsa.pub"
    flat: yes

- name: Copy the key and add it to authorized_keys using the Ansible module
  authorized_key:
    user: root
    state: present
    key: "{{ lookup('file', 'buffer/' + item + '-id_rsa.pub') }}"
  when: item != inventory_hostname
  with_items:
    - "{{ groups[group_names[0]] }}"
  register: ssh_setup_result

- name: Add host to known hosts list
  shell: ssh-keyscan -H "{{ inventory_hostname }}" >> ~/.ssh/known_hosts

- name: Add host domain to known hosts list
  shell: ssh-keyscan -H "{{ hostname_domain }}" >> ~/.ssh/known_hosts

- name: get key
  command: ssh-keyscan {{ inventory_hostname }}
  register: keys

- name: add key to known_hosts
  lineinfile:
    path: /root/.ssh/known_hosts
    line: "{{ keys.stdout }}"
    insertbefore: BOF
  when: item != inventory_hostname
  with_items:
    - "{{ groups[group_names[0]] }}"
The SSH keys are not getting copied across hosts: keys from node1 are not added to the known_hosts of node2, and vice versa. How do I resolve this?
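For what it's worth, I suspect the known_hosts tasks need to loop over the peer hosts rather than scanning only the host itself, something along these lines (untested sketch):

- name: Add peer hosts to known hosts list
  shell: ssh-keyscan -H "{{ item }}" >> /root/.ssh/known_hosts
  loop: "{{ groups[group_names[0]] }}"
  when: item != inventory_hostname

Is that the right direction, or is there a better way to distribute the keys?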
Ansible version used:
ansible [core 2.11.11]
I have created the following Ansible role for Samba, which works without problems.
The user is created and the password is set:
role:
---
- name: "Include OS-specific variables"
  include_vars: "{{ ansible_os_family }}.yml"

- name: "Ensure Samba-related packages are installed"
  apt:
    name:
      - samba
      - samba-common
    state: present
  when: ansible_os_family == 'Debian'

- name: "Configure smb.conf"
  template:
    src: etc/samba/smb.conf
    dest: /etc/samba/smb.conf
  notify: restart_smbd

- name: "Create system user and home directory"
  ansible.builtin.user:
    name: "{{ item.name }}"
    home: "{{ item.home }}"
    system: yes
    skeleton: no
    shell: /sbin/nologin
    group: nogroup
    state: present
  with_items: "{{ smb.user_details }}"

- name: "Create samba users"
  shell: >
    set -e -o pipefail
    && (pdbedit --user={{ item.name }} 2>&1 > /dev/null)
    || (echo '{{ item.smbpassword }}'; echo '{{ item.smbpassword }}')
    | smbpasswd -s -a {{ item.name }}
  args:
    executable: /bin/bash
  register: samba_create_users
  changed_when: "'Added user' in samba_create_users.stdout"
  loop: "{{ smb.user_details }}"

- name: "Set samba passwords correctly"
  shell: >
    set -e -o pipefail
    && (smbclient -U {{ item.name }}%{{ item.smbpassword }} -L 127.0.0.1 2>&1 > /dev/null)
    || (echo '{{ item.smbpassword }}'; echo '{{ item.smbpassword }}')
    | smbpasswd {{ item.name }}
  args:
    executable: /bin/bash
  register: samba_verify_users
  changed_when: "'New SMB password' in samba_verify_users.stdout"
  loop: "{{ smb.user_details }}"

- name: "Ensure Samba is running and enabled"
  service:
    name: "{{ samba_daemon }}"
    state: started
    enabled: true
host vars:
.
.
.
smb:
  user_details:
    - name: test
      home: /backup/test
      smbpassword: testtest
Now, when I want to improve the role so that an absent user gets deleted, it does not work:
- name: "Configure smb.conf"
template:
src: etc/samba/smb.conf
dest: /etc/samba/smb.conf
notify: restart_smbd
- name: "Create system user and home directory"
become: yes
user:
name: "{{ item.key }}"
state: "{{ item.value.state | default('present') }}"
append: yes
system: yes
skeleton: no
group: nogroup
home: "{{ item.value.home }}"
shell: "{{ item.value.shell | default('/sbin/nologin') }}"
loop: "{{ samba_users | dict2items }}"
when: "'state' not in item.value or item.value.state == 'present'"
- name: "Create samba users"
shell: >
set -e -o pipefail
&& (pdbedit --user={{ item.key }} 2>&1 > /dev/null)
|| (echo '{{ item.value.name }}'; echo '{{ item.smbpassword }}')
| smbpasswd -s -a {{ item.key }}
args:
executable: /bin/bash
register: samba_create_users
changed_when: "'Added user' in samba_create_users.stdout"
loop: "{{ samba_users }}"
- name: "Set samba passwords correctly"
shell: >
set -e -o pipefail
&& (smbclient -U {{ item.key }}%{{ item.smbpassword }} -L 127.0.0.1 2>&1 > /dev/null)
|| (echo '{{ item.smbpassword }}'; echo '{{ item.smbpassword }}')
| smbpasswd {{ item.key }}
args:
executable: /bin/bash
register: samba_verify_users
changed_when: "'New SMB password' in samba_verify_users.stdout"
loop: "{{ samba_users }}"
- name: "Remove unwanted users."
become: yes
user:
name: "{{ item.key }}"
state: "{{ item.value.state | default('absent') }}"
remove: true
loop: "{{ samba_users | dict2items }}"
when: "'state' in item.value and item.value.state == 'absent'"
- name: "Ensure Samba is running and enabled"
service:
name: "{{ samba_daemon }}"
state: started
enabled: true
...
host vars:
.
.
.
samba_users:
  test:
    state: absent
The following error happens:
msg": "Invalid data passed to 'loop', it requires a list, got this instead: {'test': {'state': 'absent'}}. Hint: If you passed a list/dict of just one element, try adding wantlist=True to your lookup invocation or use q/query instead of lookup."
Is it even possible to do this easily?
The problem is that if the user is absent, the "Create samba users" task should not run at all.
I would have to write a second task that deletes the Samba user.
But how can I control things so that, when the user is absent, only the new task, e.g. "Delete samba users", runs?
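To make the goal concrete, this is roughly the structure I have in mind (untested sketch): the create/password tasks loop over the same samba_users | dict2items list and filter on state, and a separate task removes the Samba account (I assume pdbedit -x -u can be used for that):

- name: "Create samba users"
  shell: >
    set -e -o pipefail
    && (pdbedit --user={{ item.key }} 2>&1 > /dev/null)
    || (echo '{{ item.value.smbpassword }}'; echo '{{ item.value.smbpassword }}')
    | smbpasswd -s -a {{ item.key }}
  args:
    executable: /bin/bash
  register: samba_create_users
  changed_when: "'Added user' in samba_create_users.stdout"
  loop: "{{ samba_users | dict2items }}"
  when: item.value.state | default('present') == 'present'

- name: "Delete samba users"
  shell: pdbedit -x -u {{ item.key }}   # assumption: removes the account from the Samba passdb
  args:
    executable: /bin/bash
  register: samba_delete_users
  changed_when: samba_delete_users.rc == 0
  failed_when: false   # ignore "user not found" when the account is already gone
  loop: "{{ samba_users | dict2items }}"
  when: item.value.state | default('present') == 'absent'

Is splitting the loop like this the recommended pattern?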
I am trying to write a playbook to set up MySQL master-slave replication with multiple slave servers. For each slave server, I need access to a variable called next_id that should be incremented before use for each host. For example, next_id should be 2 for the first slave server, 3 for the second, and so on. What is the way to achieve this in Ansible?
This is the YAML file I use to run my role:
- name: Setup the environment
  hosts: master, slave_hosts
  serial: 1
  roles:
    - setup-master-slave
  vars:
    master_ip_address: "188.66.192.11"
    slave_ip_list:
      - "188.66.192.17"
      - "188.66.192.22"
This is the YAML file where I use the variable:
- name: Replace conf file with template
  template:
    src: masterslave.j2
    dest: /etc/mysql/mariadb.conf.d/50-server.cnf
  when: inventory_hostname != 'master'
  vars:
    ip_address: "{{ master_ip_address }}"
    server_id: "{{ next_id }}"
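In essence I'm hoping for something that boils down to this (untested sketch, assuming all slaves sit in a slave_hosts inventory group), but I don't know whether deriving the id from the host's position in the group is the right approach:

- name: Replace conf file with template
  template:
    src: masterslave.j2
    dest: /etc/mysql/mariadb.conf.d/50-server.cnf
  when: inventory_hostname != 'master'
  vars:
    ip_address: "{{ master_ip_address }}"
    # first slave gets 2, second gets 3, and so on
    server_id: "{{ groups['slave_hosts'].index(inventory_hostname) + 2 }}"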
I can suggest a way that will work for your requirement with any number of slave servers; it is not based on any particular module, just a self-made approach.
Here my master_ip_address is 10.x.x.x and input is the starting value of next_id that you want incremented for every slave server.
- hosts: master,slave1,slave2,slave3,slave4
  serial: 1
  gather_facts: no
  tasks:
    - shell: echo "{{ input }}" > yes.txt
      delegate_to: localhost
      when: inventory_hostname == '10.x.x.x'

    - shell: cat yes.txt
      register: var
      delegate_to: localhost
      when: inventory_hostname != '10.x.x.x'

    - shell: echo "$(({{ var.stdout }}+1))"
      register: next_id
      delegate_to: localhost
      when: inventory_hostname != '10.x.x.x'

    - shell: echo "{{ next_id.stdout }}" > yes.txt
      delegate_to: localhost
      when: inventory_hostname != '10.x.x.x'

    - name: Replace conf file with template
      template:
        src: masterslave.j2
        dest: 50-server.cnf
      when: inventory_hostname != '10.x.x.x'
      vars:
        ip_address: "{{ master_ip_address }}"
        server_id: "{{ next_id.stdout }}"
[ansibleserver@172 test_increment]$ cat masterslave.j2
- {{ ip_address }}
- {{ server_id }}
Now, if you run
ansible-playbook increment.yml -e 'master_ip_address=10.x.x.x input=1'
slave1 server
[root@slave1 ~]# cat 50-server.cnf
- 10.x.x.x
- 2
slave2 server
[root@slave2 ~]# cat 50-server.cnf
- 10.x.x.x
- 3
slave3 server
[root@slave3 ~]# cat 50-server.cnf
- 10.x.x.x
- 4
and so on
"serial" is available in a playbooks only and not working in roles
therefore, for fully automatic generation of server_id for MySQL in Ansible roles, you can use the following:
roles/my_role/defaults/main.yml
---
cluster_alias: test_cluster
mysql_server_id_config: "settings/mysql/{{ cluster_alias }}/server_id.ini"
roles/my_role/tasks/server-id.yml
---
- name: Ensures '{{ mysql_server_id_config | dirname }}' dir exists
  file:
    path: '{{ mysql_server_id_config | dirname }}'
    state: directory
    owner: root
    group: root
    mode: 0755
  delegate_to: localhost

- name: Ensure mysql server id config file exists
  copy:
    content: "0"
    dest: "{{ mysql_server_id_config }}"
    force: no
    owner: root
    mode: '0755'
  delegate_to: localhost

- name: server-id
  include_tasks: server-id-tasks.yml
  when: inventory_hostname == item
  with_items: "{{ ansible_play_hosts }}"
  tags:
    - server-id
roles/my_role/tasks/server-id-tasks.yml
# tasks file
---
- name: get last server id
  shell: >
    cat "{{ mysql_server_id_config }}"
  register: _last_mysql_server_id
  check_mode: no
  delegate_to: localhost
  tags:
    - server-id

- name: get new server id
  shell: >
    echo "$(({{ _last_mysql_server_id.stdout }}+1))"
  register: _new_mysql_server_id
  check_mode: no
  delegate_to: localhost
  tags:
    - server-id

- name: save new server id
  shell: >
    echo -n "{{ _new_mysql_server_id.stdout }}" > "{{ mysql_server_id_config }}"
  check_mode: no
  delegate_to: localhost
  tags:
    - server-id

- debug:
    var: _new_mysql_server_id.stdout
  tags:
    - server-id

- name: Replace conf file with template
  template:
    src: server-id.j2
    dest: server-id.cnf
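For completeness, a minimal server-id.j2 for this setup might look like this (sketch, adapt to how the generated server-id.cnf is actually included by your MySQL configuration):

# server-id.j2 (assumed content)
[mysqld]
server-id = {{ _new_mysql_server_id.stdout }}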
I would like to test whether a user is able to SSH in using an SSH password; that's all I would like to do. I tried local_action and wait_for, but those didn't get me the results I wanted. The playbook output must simply tell me where a connection succeeded or failed when trying to SSH.
The requirement is to test which user account succeeds in making an SSH connection to the remote servers. The user running the Ansible script has multiple accounts on these servers, but the SSH login will succeed with only the right one, which the user doesn't know. The user accounts all have the same password.
The inventory file:
all:
  children:
    FXO-Test:
      hosts:
        host1.abcd.com:
        host2.abcd.com:
      vars:
        ansible_user: user1
The Playbook:
---
- hosts: "{{ targethosts }}"
  gather_facts: no
  tasks:
    - name: Test connection
      local_action: command ssh -q -o BatchMode=yes -o ConnectTimeout=3 {{ inventory_hostname }}
      register: test_user
      ignore_errors: true
      changed_when: false
Invoked using the command:
ansible-playbook checkLogin.yml -i ans_inventory_test --ask-pass --extra-vars "targethosts=FXO-Test" | tee verify_user.log
Expected to see which SSH connections failed and which ones worked.
Based on Vladimir Botka's response, I tweaked the playbook a bit further to pull the hostnames from an inventory file.
My updated playbook 'verifySSHLogin.yml':
- hosts: localhost
  gather_facts: no
  vars:
    my_users:
      - user1
      - user2
    my_hosts: "{{ query('inventory_hostnames', 'all') }}"
  tasks:
    - expect:
        command: "ssh -o PubkeyAuthentication=no -o StrictHostKeyChecking=no {{ item.0 }}@{{ item.1 }}"
        timeout: 2
        responses:
          (.*)password(.*):
            - "password"   # Fit the password
            - "\x03"       # Ctrl-C
          (.*)\$(.*): "exit"   # Fit the prompt
      loop: "{{ my_users | product(my_hosts) | list }}"
      register: result
      ignore_errors: yes

    - debug:
        msg: "{{ (item.rc == 0)|ternary(item.invocation.module_args.command ~ ' [OK]', item.invocation.module_args.command ~ ' [KO]') }}"
      loop: "{{ result.results }}"
Which I now invoke using the command below:
ansible-playbook verifySSHLogin.yml -i ans_inventory_test --extra-vars "targethosts=FXO-Test" | tee verify_user.log
I can then do a grep against verify_user.log like this:
grep '\"msg\": \"ssh' verify_user.log
Which gives me the result below, which is what I was expecting:
"msg": "ssh -o PubkeyAuthentication=no -o StrictHostKeyChecking=no user1#host1.abc.corp.com [OK]"
"msg": "ssh -o PubkeyAuthentication=no -o StrictHostKeyChecking=no user1#host2.abc.corp.com [OK]"
"msg": "ssh -o PubkeyAuthentication=no -o StrictHostKeyChecking=no user1#host3.abc.corp.com [KO]"
"msg": "ssh -o PubkeyAuthentication=no -o StrictHostKeyChecking=no user2#host1.abc.corp.com [KO]"
"msg": "ssh -o PubkeyAuthentication=no -o StrictHostKeyChecking=no user2#host2.abc.corp.com [KO]"
"msg": "ssh -o PubkeyAuthentication=no -o StrictHostKeyChecking=no user2#host3.abc.corp.com [KO]"
I tweaked the playbook further to avoid hard-coding the SSH password. The final playbook now looks like this:
- hosts: localhost
  gather_facts: no
  vars:
    my_users:
      - user1
      - user2
    my_hosts: "{{ query('inventory_hostnames', 'all') }}"
  tasks:
    - expect:
        command: "ssh -o PubkeyAuthentication=no -o StrictHostKeyChecking=no {{ item.0 }}@{{ item.1 }}"
        timeout: 2
        responses:
          (.*)password(.*):
            - "{{ ansible_password }}"   # Fit the password
            - "\x03"                     # Ctrl-C
          (.*)\$(.*): "exit"             # Fit the prompt
      loop: "{{ my_users | product(my_hosts) | list }}"
      register: result
      ignore_errors: yes

    - debug:
        msg: "{{ (item.rc == 0)|ternary(item.invocation.module_args.command ~ ' [OK]', item.invocation.module_args.command ~ ' [KO]') }}"
      loop: "{{ result.results }}"
The SSH password can be passed to the ansible-playbook command like this:
ansible-playbook verifySSHLogin.yml -i ans_inventory_test -k --extra-vars "targethosts=FXO-Test" | tee verify_user.log
The expect module shall do the job. Given that user1@test_01 is able to log in, the play below
- hosts: localhost
  vars:
    my_users:
      - user1
      - user2
    my_hosts:
      - test_01
      - test_02
  tasks:
    - expect:
        command: "ssh {{ item.0 }}@{{ item.1 }}"
        timeout: 2
        responses:
          (.*)password(.*):
            - "password"   # Fit the password
            - "\x03"       # Ctrl-C
          (.*)\$(.*): "exit"   # Fit the prompt
      with_nested:
        - "{{ my_users }}"
        - "{{ my_hosts }}"
      register: result
      ignore_errors: yes

    - debug:
        msg: "{{ (item.rc == 0)|ternary(item.invocation.module_args.command ~ ' [OK]',
          item.invocation.module_args.command ~ ' [KO]') }}"
      loop: "{{ result.results }}"
gives (grep msg):
"msg": "ssh user1#test_01 [OK]"
"msg": "ssh user1#test_02 [KO]"
"msg": "ssh user2#test_01 [KO]"
"msg": "ssh user2#test_02 [KO]"