I want to change the SSH server port on client systems to a custom one, 2202 (the port is defined in group_vars/all and also in roles/change-sshd-port/vars/main.yml). My requirement is that the playbook can also be run when the port is already set to the custom 2202 (in that case the playbook should do nothing).
I already used an Ansible role based on this solution: https://github.com/Forcepoint/fp-pta-ansible-change-sshd-port
The port is changed fine when I run the playbook for the first time (when it completes I can log in to the client node on the new port).
When I run the playbook again, it fails because it tries to run some tasks over the old port 22 instead of the new port 2202:
TASK [change-sshd-port : Confirm host connection works] ********************************************************************************************************************
fatal: [192.168.170.113]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host 192.168.170.113 port 22: Connection refused", "unreachable": true}
I can't figure out why it is trying to use port 22 when the ansible_port variable is set to the new port in roles/change-sshd-port/vars/main.yml:
---
# vars file for /home/me/ansible2/roles/change-sshd-port
ansible_port: 2202
The part of the role's roles/change-sshd-port/tasks/main.yml up to the failing ping task is:
- name: Set configured port fact
  ansible.builtin.set_fact:
    configured_port: "{{ ansible_port }}"

- name: Check if we're using the inventory-provided SSH port
  ansible.builtin.wait_for:
    port: "{{ ansible_port }}"
    state: "started"
    host: "{{ ansible_host }}"
    connect_timeout: "5"
    timeout: "10"
  delegate_to: "localhost"
  ignore_errors: "yes"
  register: configured_ssh

- name: SSH port is configured properly
  ansible.builtin.debug:
    msg: "SSH port is configured properly"
  when: configured_ssh is defined and
        configured_ssh.state is defined and
        configured_ssh.state == "started"
  register: ssh_port_set

- name: Check if we're using the default SSH port
  ansible.builtin.wait_for:
    port: "22"
    state: "started"
    host: "{{ ansible_host }}"
    connect_timeout: "5"
    timeout: "10"
  delegate_to: "localhost"
  ignore_errors: "yes"
  register: default_ssh
  when: configured_ssh is defined and
        configured_ssh.state is undefined

- name: Set inventory ansible_port to default
  ansible.builtin.set_fact:
    ansible_port: "22"
  when: default_ssh is defined and
        "state" in default_ssh and
        default_ssh.state == "started"
  register: ssh_port_set

- name: Fail if SSH port was not auto-detected (unknown)
  ansible.builtin.fail:
    msg: "The SSH port is neither 22 or {{ ansible_port }}."
  when: ssh_port_set is undefined

- name: Confirm host connection works
  ansible.builtin.ping:
Your question is missing a bunch of details (there's no way for us to
reproduce the problem from the information you've given in the
question), so I'm going to have to engage in some guesswork. There are
a couple of things that could be happening.
First, if you're mucking about with the ssh port in your playbooks,
you're going to need to disable fact gathering on the play. By
default, ansible runs the setup module on target hosts before
running the tasks in your play, and this is going to use whatever port
you've configured in your inventory. If sshd is running on a different
port than expected, this will fail.
Here's a playbook that ignores whatever port you have in your
inventory and will successfully connect to a target host whether sshd
is running on port 22 or port 2222 (it will fail with an error if sshd
is not running on either of those ports):
- hosts: target
  gather_facts: false
  vars:
    desired_port: 2222
    default_port: 22
  tasks:
    - name: check if ssh is running on {{ desired_port }}
      delegate_to: localhost
      wait_for:
        port: "{{ desired_port }}"
        host: "{{ ansible_host }}"
        timeout: 10
      ignore_errors: true
      register: desired_port_check

    - name: check if ssh is running on {{ default_port }}
      delegate_to: localhost
      wait_for:
        port: "{{ default_port }}"
        host: "{{ ansible_host }}"
        timeout: 10
      ignore_errors: true
      register: default_port_check

    - fail:
        msg: "ssh is not running (or is running on an unknown port)"
      when: default_port_check is failed and desired_port_check is failed

    - when: default_port_check is success
      block:
        - debug:
            msg: "ssh is running on default port"
        - name: configure ansible to use port {{ default_port }}
          set_fact:
            ansible_port: "{{ default_port }}"

    - when: desired_port_check is success
      block:
        - debug:
            msg: "ssh is running on desired port"
        - name: configure ansible to use port {{ desired_port }}
          set_fact:
            ansible_port: "{{ desired_port }}"

    - name: run a command on the target host
      command: uptime
      register: uptime

    - debug:
        msg: "{{ uptime.stdout }}"
Related
I have the following code, which runs on localhost (Linux), but needs to delegate some actions to a Windows server.
When I run the playbook, I pass as input the vault folder containing the credentials to connect to the Windows server. The problem is that in the rescue part I call a task file, error_thrower.yml, which requires an SSH connection to localhost. At that point I get an error, because Ansible doesn't find the right credentials in the vault to connect to localhost, which runs Linux. The error: fatal: [localhost -> localhost]: UNREACHABLE! =>.
- name: Disk Saturation
  hosts: localhost
  gather_facts: no
  tasks:
    - name: 'Actions on the server'
      delegate_to: "{{ target }}" # Windows server
      block:
        - name: 'Server connection test'
          win_ping:
          register: test_connection
          ignore_errors: yes
          ignore_unreachable: yes

        - name: 'Check total space of the disk'
          ansible.windows.win_shell: "Get-Volume -DriveLetter {{ partition_name }} | Select-Object Size | ConvertTo-Json"
          register: totalspace
      rescue:
        - name: 'Error thrower call'
          ansible.builtin.include_role:
            name: common
            tasks_from: error_thrower.yml
          vars:
            fail_message: Server is not accessible
            error_name: server_unreachable
            message_details: "{{ test_connection.msg }}"
          when: test_connection.unreachable is defined
Is there a way in Ansible to give two different credentials in the vault so it has the information to connect to both machines?
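One common pattern (a sketch based on general Ansible usage, not on this thread) is to give each host its own connection variables, with the credentials pulled from vaulted vars files, and to let localhost use a local connection so it needs no SSH credentials at all:

# inventory.yml – hostnames, address, and variable names are hypothetical
all:
  hosts:
    localhost:
      ansible_connection: local              # no SSH/vault credentials needed
    winserver:
      ansible_host: 192.168.1.50             # hypothetical address
      ansible_connection: winrm
      ansible_user: "{{ vault_win_user }}"   # defined in a vaulted vars file
      ansible_password: "{{ vault_win_password }}"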
I am trying to use Ansible to modify the DNS settings on a group of ESXi servers. I've been able to get my playbook to change the settings on a single server like this:
---
- hosts: localhost
  tasks:
    - name: Configure ESXi hostname and DNS servers
      vmware_dns_config:
        hostname: 'myesxiserver.domain.local'
        username: 'username'
        password: 'password'
        dns_servers:
          - x.x.x.x
          - x.x.x.x
      delegate_to: localhost
How can I get this to work for multiple servers? The Ansible documentation provides this example:
---
- hosts: localhost
  tasks:
    - name: Configure ESXi hostname and DNS servers
      vmware_dns_config:
        hostname: '{{ esxi_hostname }}'
        username: '{{ esxi_username }}'
        password: '{{ esxi_password }}'
        change_hostname_to: esx01
        domainname: foo.org
        dns_servers:
          - 8.8.8.8
          - 8.8.4.4
      delegate_to: localhost
I'm not clear on how to iterate through a list of hosts and pass the correct value into the variable '{{ esxi_hostname }}' for each of my servers. I'm assuming the variables can be passed using an inventory file, but I haven't found any good examples of how to do this for ESXi servers.
So I did get this working.
---
- hosts: localhost
  vars_files:
    - vars.yml
    - vars2.yml
  tasks:
    - name: Configure ESXi hostname and DNS servers
      vmware_dns_config:
        hostname: "{{ item }}"
        username: 'myadmin'
        password: "{{ Password }}"
        validate_certs: no
        change_hostname_to: "{{ item }}"
        domainname: foo.org
        dns_servers:
          - x.x.x.x
          - x.x.x.x
      delegate_to: localhost
      loop: "{{ esxihost }}"
I had to pass a list of host names using a vars file and iterate through it with the loop keyword. I tried to use the {{ inventory_hostname }} variable along with a standard inventory file, but because SSH is not generally enabled by default on ESXi servers, I would get an SSH connection error.
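For reference, the vars file then only needs to contain that list, for example (hypothetical hostnames):

# vars.yml
esxihost:
  - esxi01.foo.org
  - esxi02.foo.org
  - esxi03.foo.org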
I'm running an ansible-playbook configured to provision an EC2 instance and configure the machine. I set the connection to local for the playbook because there is no machine to manage before the script runs. Once the instance is provisioned, I'm supposed to create a directory on the remote server. Since the playbook runs with a local connection, I set delegate_to: {{ remote_host }} so the directory creation is executed on the remote host, but it still creates the directory on the control machine.
- name: provision instance for Apache
  hosts: localhost
  connection: local
  remote_user: ubuntu
  gather_facts: false
  vars_files:
    - vars/env.yml
  vars:
    allow_world_readable_tmpfiles: true
    key_name: ansible-test
    region: us-east-2
    image: ami-0e82959d4ed12de3f # Ubuntu 18.04
    id: "practice-akash-ajay"
    sec_group: "{{ id }}-sec"
    remote_host: ansible-test
    remaining_days: 20
    acme_directory: https://acme-staging-v02.api.letsencrypt.org/directory
    # acme_directory: https://acme-v02.api.letsencrypt.org/directory
    cert_name: "{{ app_slug }}.{{ app_domain }}"
    intermediate_path: /etc/pki/letsencrypt/intermediate.pem
    cert:
      common_name: "{{ app_slug }}.{{ app_domain }}"
      organization_name: PearlThoughts
      email_address: "{{ letsencrypt_email }}"
      subject_alt_name:
        - "DNS:{{ app_slug }}.{{ app_domain }}"
  roles:
    - aws
  tasks:
    - name: Create certificate storage directory
      file:
        dest: "{{ item.path }}"
        mode: 0750
        state: directory
      delegate_to: "{{ remote_host }}"
      with_items:
        - path: ~/lets-seng-test
When you set connection explicitly on the play, it will be used for all tasks in that play. So don't do that. Ansible will by default use a local connection for localhost unless you have explicitly changed that in your inventory (and again, don't do that).
If you remove the connection setting on your play, delegate_to might work the way you expect...but I don't think you want to do that.
If you have your playbook provisioning a new host for you, the way to target that host with Ansible is to have a new play with that host (or its corresponding group) listed in the target hosts: for the play. Conceptually, you want:
- hosts: localhost
  tasks:
    - name: provision an AWS instance
      aws_ec2: [...]
      register: hostinfo

    - add_host:
        name: myhost
        ansible_host: "{{ hostinfo... }}"

- hosts: myhost
  tasks:
    - name: do something on the new host
      command: uptime
You probably need some logic in between provisioning the host and executing tasks against it to ensure that it is up and ready to service requests.
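For example (a sketch, not part of the original answer), a wait_for_connection task at the top of the second play blocks until the new host actually accepts connections:

- hosts: myhost
  gather_facts: false
  tasks:
    - name: wait until the new instance accepts connections
      wait_for_connection:
        timeout: 300

    - name: gather facts once the host is reachable
      setup: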
Instead of using the add_host module, a better solution is often to rely on the appropriate inventory plugin.
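For AWS that would be the amazon.aws.aws_ec2 inventory plugin. A minimal sketch of a dynamic inventory file (the tag filter is hypothetical; the file name must end in aws_ec2.yml so the plugin is recognized):

# inventory.aws_ec2.yml
plugin: amazon.aws.aws_ec2
regions:
  - us-east-2
filters:
  tag:Name: practice-akash-ajay   # hypothetical tag set by the provisioning play

Running ansible-playbook -i inventory.aws_ec2.yml site.yml then targets whatever instances exist at runtime instead of hosts added by hand.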
I am trying to configure a VyOS VM that I've built from a template. The template is a fresh install without any configuration.
The VM doesn't have an IP configured, so I can't use the SSH options or the VyOS Ansible module. So I'm trying to use the vmware_vm_shell module, which lets me execute commands, but I can't enter conf mode for VyOS.
I've tried bash and vbash for my shell. I've tried setting the conf commands as environment vars to execute, and I've tried with_items, but it doesn't seem that will work with vmware_vm_shell.
The bare minimum I need is to configure an IP address so that I can then use SSH or the VyOS Ansible module to complete the configuration:
conf
set interfaces ethernet eth0 address 192.168.1.251/24
set service ssh port 22
commit
save
---
- hosts: localhost
  gather_facts: no
  connection: local
  vars:
    vcenter_hostname: "192.168.1.100"
    vcenter_username: "administrator@vsphere.local"
    vcenter_password: "SekretPassword!"
    datacenter: "Datacenter"
    cluster: "Cluster"
    vm_name: "router-01"
  tasks:
    - name: Run command inside a virtual machine
      vmware_vm_shell:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        datacenter: "{{ datacenter }}"
        validate_certs: False
        vm_id: "{{ vm_name }}"
        vm_username: 'vyos'
        vm_password: 'abc123!!!'
        vm_shell: /bin/vbash
        vm_shell_args: 'conf 2> myFile'
        vm_shell_cwd: "/tmp"
      delegate_to: localhost
      register: shell_command_output
This throws the error:
/bin/vbash: conf: No such file or directory
I have an EdgeRouter, which uses a Vyatta-derived operating system. The issue you're having is caused by the fact that conf (or configure for me) isn't actually the name of a command. The CLI features are implemented through a complex collection of bash functions that aren't loaded when you log in non-interactively.
There is a wiki page on vyos.net that suggests a solution. By sourcing in /opt/vyatta/etc/functions/script-template, you prepare the shell environment such that vyos commands will work as expected.
That is, you need to execute a shell script (with vbash) that looks like this:
source /opt/vyatta/etc/functions/script-template
conf
set interfaces ethernet eth0 address 192.168.1.251/24
set service ssh port 22
commit
save
exit
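If you can get that script onto the VM first (for example with community.vmware.vmware_guest_file_operation, assuming VMware Tools guest operations work), you could then execute it with vbash /tmp/configure.sh; the path and file name here are hypothetical.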
I'm not familiar with the vmware_vm_shell module, so I don't know exactly how you would do that, but for example this works for me to run a single command:
ssh ubnt@router 'vbash -c "source /opt/vyatta/etc/functions/script-template
configure
show interfaces
"'
Note the newlines in the above. That suggests that this might work:
- name: Run command inside a virtual machine
  vmware_vm_shell:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    datacenter: "{{ datacenter }}"
    validate_certs: False
    vm_id: "{{ vm_name }}"
    vm_username: 'vyos'
    vm_password: 'abc123!!!'
    vm_shell: /bin/vbash
    vm_shell_cwd: "/tmp"
    vm_shell_args: |-
      -c "source /opt/vyatta/etc/functions/script-template
      configure
      set interfaces ethernet eth0 address 192.168.1.251/24
      set service ssh port 22
      commit
      save"
  delegate_to: localhost
  register: shell_command_output
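Once eth0 has an address, a later play could switch to the native VyOS modules over SSH. A minimal sketch of that follow-up (my assumptions, not from the answer: the vyos.vyos and ansible.netcommon collections are installed and the router has an inventory entry):

- hosts: router-01
  gather_facts: no
  connection: ansible.netcommon.network_cli
  vars:
    ansible_network_os: vyos.vyos.vyos
    ansible_user: vyos
  tasks:
    - name: continue the configuration over SSH
      vyos.vyos.vyos_config:
        lines:
          - set system host-name router-01   # hypothetical follow-up command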
My script doesn't change the portgroup of a VM network adapter; what am I doing wrong?
Let's assume that I want to change the current portgroup named "A" to a different portgroup named "B".
---
- hosts: localhost
  gather_facts: no
  vars:
    vm_name: VM
  tasks:
    - name: Changing Portgroup for Network adapter 1
      vmware_guest_network:
        hostname: "{{ vc_host }}"
        username: "{{ vc_user }}"
        password: "{{ vc_pass }}"
        validate_certs: no
        name: "{{ vm_name }}"
        gather_network_info: false
        networks:
          - label: "Network adapter 1"
            name: "B"
            state: present
      delegate_to: localhost
      register: network_info
I'm getting output that something changed, but in VM Settings nothing changed.
TASK [Changing Portgroup for Network Adapter 1]
******************************************************************************
changed: [localhost -> localhost]
I found that removing and adding a Network adapter changes the portgroup, but when I do that I cannot add a Network adapter of type Flexible, which is what I had in the first place.
Edit 1: After updating Ansible to 2.9.12, I get an ok status when running the script, so it really isn't changing anything:
TASK [Changing Portgroup for Network Adapter 1] ******************************************************************************
ok: [localhost]
Edit 2: After a few days of searching I found that it isn't possible to just change the portgroup with Ansible, so I used PowerCLI to help me with the task.
---
- hosts: localhost
  gather_facts: no
  vars:
    vm_name: "VM"
  tasks:
    - name: "Changing the portgroup for {{ vm_name }}"
      win_command: 'powershell.exe -ExecutionPolicy ByPass -File C:\Scripts\change_portgroup.ps1 {{ vm_name }}'
      delegate_to: WIN_SRV
The PowerShell script goes like this:
$OldNetwork = "PG old"
$NewNetwork = "PG new"
Get-VM -Name $args[0] | Get-NetworkAdapter | Where { $_.NetworkName -eq $OldNetwork } | Set-NetworkAdapter -NetworkName $NewNetwork -Confirm:$false
Edit 3:
I got it working with the community.vmware collection module. (Thanks @sky-jokerxx.)
First I installed it with the command:
ansible-galaxy collection install community.vmware
Then I used the module like this:
- name: Change network
  community.vmware.vmware_guest_network:
    validate_certs: no
    hostname: '{{ vc_host }}'
    username: '{{ vc_user }}'
    password: '{{ vc_pass }}'
    name: '{{ vm_name }}'
    label: "Network adapter 1"
    network_name: "B"
    state: present
  delegate_to: localhost
The vmware_guest_network module has a lot of issues.
https://github.com/ansible-collections/community.vmware/issues/378
The following pull request fixes some of those issues in the vmware_guest_network module:
https://github.com/ansible-collections/community.vmware/pull/401
With the following procedure, you can use that fixed version of the module:
# ansible-galaxy collection install community.vmware -p collections
# mkdir library
# cd library/
# curl -L https://raw.githubusercontent.com/ansible-collections/community.vmware/7ac9ebb9bf5df0f1ead3ef1a3ed35f2d4ad45622/plugins/modules/vmware_guest_network.py -O
# cd ..
https://docs.ansible.com/ansible/latest/dev_guide/developing_locally.html
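Modules placed in a library/ directory next to the playbook are picked up automatically (that is what the developing-locally guide above describes), so the play can keep calling vmware_guest_network by its short name.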
The problem may already be fixed there, so how about trying it?