Ansible playbook example code fails to run

I've started trying Ansible, using example code from the Ansible documentation. After trying several examples, I get an error at the beginning of the code. It says:
- name: Change the hostname to Windows_Ansible
  ^ here (pointing at name)
Any advice would be appreciated.
I tried this one:
https://docs.ansible.com/ansible/latest/modules/win_hostname_module.html#win-hostname-module
---
- name: Change the hostname to Windows_Ansible
  win_hostname:
    name: "Windows_Ansible"
  register: res

- name: Reboot
  win_reboot:
  when: res.reboot_required

The docs page only shows the tasks; a playbook also needs a play with a hosts: line and a tasks: section, which is why the parser complains at the first name:. The playbook below will change the hostname of the server. Make sure you run it on a test server so that it won't create issues. If you just want to test a playbook, use the second one with win_command.
---
- hosts: <remote server name which needs to be added in the inventory>
  tasks:
    - name: Change the hostname to Windows_Ansible
      win_hostname:
        name: "Windows_Ansible"
      register: res

    - name: Reboot
      win_reboot:
      when: res.reboot_required
---
- hosts: <remote server name which needs to be added in the inventory>
  tasks:
    - name: Test
      win_command: whoami
      register: res
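The <remote server name> placeholder refers to a host or group from your inventory, and for a Windows target that entry needs WinRM connection variables (plus the pywinrm package on the control node). A minimal sketch, with the host name, address, and credentials as placeholder values:
# inventory.ini (hypothetical values)
[windows]
win-server-01 ansible_host=192.168.1.50

[windows:vars]
ansible_user=Administrator
ansible_password=<your password>
ansible_connection=winrm
ansible_winrm_server_cert_validation=ignore
With that in place you would run e.g. ansible-playbook -i inventory.ini hostname.yml.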

Error! conflicting action statements: project_path, state (ansible)

I'm trying to execute this code but keep getting the error mentioned above.
Code:
---
- hosts: localhost
  connection: local
  collections:
    - community.general.terraform
  tasks:
    - name: Execute Terraform Template
      project_path: '/Users/<username>/Desktop/<repository>/<file>'
      state: present
      force_init: true
The offending line appears to be:
tasks:
  - name: Execute Terraform Template
    ^ here
I've been trying to figure this one out. I am on macOS and have Ansible installed locally already; I'm trying to execute the code above but am unable to succeed.
Thank you in advance!
Your task is missing the module name (terraform:), so Ansible reads project_path and state as two conflicting actions; the indentation is also off.
It should work like this:
- name: Run Terraform
  gather_facts: no
  hosts: localhost
  tasks:
    - name: Execute Terraform Template
      terraform:
        project_path: '/Users/<username>/Desktop/<repository>/<file>'
        state: present
        force_init: true
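As an aside, the collections: keyword in the question expects a collection name (community.general), not a module path. You can equally drop the collections: block and call the module by its fully qualified name, which is the same playbook with the FQCN spelled out:
- name: Run Terraform
  gather_facts: no
  hosts: localhost
  tasks:
    - name: Execute Terraform Template
      community.general.terraform:
        project_path: '/Users/<username>/Desktop/<repository>/<file>'
        state: present
        force_init: true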

Ansible: Failed to restart apache2.service: Connection timed out

I am using Ansible AWX to restart an apache2 service on a host. The restart command is contained in a playbook.
---
- name: Manage Linux Services
  hosts: all
  tasks:
    - name: Restart a linux service
      command: systemctl restart '{{ service_name }}'
      register: result
      ignore_errors: yes

    - name: Show result of task
      debug:
        var: result
OR
---
- name: Manage Linux Services
  hosts: all
  tasks:
    - name: Restart a linux service
      ansible.builtin.service:
        name: '{{ service_name }}'
        state: restarted
      register: result
      ignore_errors: yes

    - name: Show result of task
      debug:
        var: result
However, when I run the command, I get the error below:
"Failed to restart apache2.service: Connection timed out",
"See system logs and 'systemctl status apache2.service' for details."
I have tried to figure out the issue, but no luck yet.
I later figured out the cause of the issue.
Here's how I fixed it:
The restart command requires sudo access to run, which was missing in my command.
All I had to do was add the become: true directive so that the command executes with root privileges.
So my playbook looked like this afterwards:
---
- name: Manage Linux Services
  hosts: all
  tasks:
    - name: Restart a linux service
      command: systemctl restart '{{ service_name }}'
      become: true
      register: result
      ignore_errors: yes

    - name: Show result of task
      debug:
        var: result
OR
---
- name: Manage Linux Services
  hosts: all
  tasks:
    - name: Restart a linux service
      ansible.builtin.service:
        name: '{{ service_name }}'
        state: restarted
      become: true
      register: result
      ignore_errors: yes

    - name: Show result of task
      debug:
        var: result
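Equivalently, become: true can be set once at the play level so that every task in the play escalates, which keeps the individual tasks shorter. A minimal sketch of the same play:
---
- name: Manage Linux Services
  hosts: all
  become: true
  tasks:
    - name: Restart a linux service
      ansible.builtin.service:
        name: '{{ service_name }}'
        state: restarted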
Another way if you want to achieve this on Ansible AWX is to tick the Privilege Escalation option in the job template.
If enabled, this runs the selected playbook in the job template as an administrator.
That's all.
I hope this helps
Restarting a service requires sudo privileges. Besides adding the become directive, if you would like to be prompted for the password, pass the -K flag (note: the uppercase K, short for --ask-become-pass):
$ ansible-playbook myplay.yml -i hosts -u myname --ask-pass -K
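If you'd rather not pass flags on every run, the same escalation settings can live in the inventory instead. A sketch using standard INI inventory behavioral variables:
# inventory (INI format): apply escalation settings to every host
[all:vars]
ansible_become=true
ansible_become_method=sudo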

How to wait for ssh to become available on a host before installing a role?

Is there a way to wait for ssh to become available on a host before installing a role? There's wait_for_connection but I only figured out how to use it with tasks.
This particular playbook spins up servers on a cloud provider before attempting to install roles, but it fails since the ssh service on the hosts isn't available yet.
How should I fix this?
---
- hosts: localhost
  connection: local
  tasks:
    - name: Deploy vultr servers
      include_tasks: create_vultr_server.yml
      loop: "{{ groups['vultr_servers'] }}"

- hosts: all
  gather_facts: no
  become: true
  tasks:
    - name: wait_for_connection # This one works
      wait_for_connection:
        delay: 5
        timeout: 600

    - name: Gather facts for first time
      setup:

    - name: Install curl
      package:
        name: "curl"
        state: present

  roles: # How to NOT install roles UNLESS the current host is available?
    - role: apache2
      vars:
        doc_root: /var/www/example
        message: 'Hello world!'
    - common-tools
Ansible play actions start with pre_tasks, then roles, followed by tasks and finally post_tasks. Move your wait_for_connection task into pre_tasks and it will block everything until a connection is available:
- hosts: all
  gather_facts: no
  become: true
  pre_tasks:
    - name: wait_for_connection # This one works
      wait_for_connection:
        delay: 5
        timeout: 600
  roles: ...
  tasks: ...
For more info on execution order, see the section on using roles at the play level in the roles documentation (the paragraph just above the notes).
Note: you probably want to move all your current example tasks into that section too, so that facts are gathered and curl is installed before anything else runs.
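Applied to the playbook in the question, the second play would then look roughly like this (same tasks and roles as above, only moved into pre_tasks):
- hosts: all
  gather_facts: no
  become: true
  pre_tasks:
    - name: wait_for_connection
      wait_for_connection:
        delay: 5
        timeout: 600
    - name: Gather facts for first time
      setup:
    - name: Install curl
      package:
        name: "curl"
        state: present
  roles:
    - role: apache2
      vars:
        doc_root: /var/www/example
        message: 'Hello world!'
    - common-tools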

How to change the host dynamically in ansible playbook

I need to change the host dynamically in an Ansible playbook.
Below is my sample playbook:
---
- name: Deployment Playbook
  hosts: "{{ Servers }}"
  tasks:
    - name: deployment
      shell: "deploy.sh {{ DEPLOY_NAME }}"
In the above play I need to pick the server based on DEPLOY_NAME.
For example:
if DEPLOY_NAME is APP, then Servers should be 172.17.65.17;
if DEPLOY_NAME is SCRIPT, then Servers should be 172.17.65.66.
Previously we passed this as inventory from AWX, but now we need to handle it in the playbook.
Please help me with this issue.
---
- name: Deployment Playbook targeting Servers_1, will be skipped if DEPLOY_NAME is not APP
  hosts: "{{ Servers_1 }}"
  tasks:
    - name: deployment
      shell: "deploy.sh {{ DEPLOY_NAME }}"
      when: DEPLOY_NAME == 'APP'

- name: Deployment Playbook targeting Servers_2, will be skipped if DEPLOY_NAME is not SCRIPT
  hosts: "{{ Servers_2 }}"
  tasks:
    - name: deployment
      shell: "deploy.sh {{ DEPLOY_NAME }}"
      when: DEPLOY_NAME == 'SCRIPT'
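Both Servers_1 and Servers_2, as well as DEPLOY_NAME, still come in as extra vars (from AWX or the command line). A hypothetical invocation with the addresses from the question, where deploy.yml is a placeholder for your playbook file:
$ ansible-playbook deploy.yml -e "DEPLOY_NAME=APP Servers_1=172.17.65.17 Servers_2=172.17.65.66"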
I don't think you can do that. What I think may work for you is to do this instead:
---
- name: Deployment Playbook
  hosts: localhost
  tasks:
    - name: deployment
      shell: ssh root@{{ item.server }} deploy.sh {{ item.app }}
      loop:
        - { server: 'server1', app: 'app_1' }
        - { server: 'server2', app: 'app_2' }
You could even improve this by using that "inventory from AWX", loading it via vars_files as a file which contains this list. Your final loop will then just iterate over that list, like this:
---
- name: Deployment Playbook
  hosts: localhost
  vars_files:
    - your_list_file.yml
  tasks:
    - name: deployment
      shell: ssh root@{{ item.server }} deploy.sh {{ item.app }}
      loop: "{{ your_list_variable }}"
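For reference, your_list_file.yml would then just define the list (a sketch, reusing the hypothetical names above):
# your_list_file.yml (hypothetical)
your_list_variable:
  - { server: 'server1', app: 'app_1' }
  - { server: 'server2', app: 'app_2' }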

Ansible playbook for remote copy and script execution

I wish to write a playbook in Ansible which will first transfer my package to remote hosts and then run a script. In detail: let's say I have an apache package on the local machine and need to scp/rsync it to remote nodes A and B. Then my script installs the package on both A and B, checks whether it was installed properly, scrutinizes the config file, and so on. This script should run only if the transfer was successful.
I have written the playbook below, which should meet the above requirement. Please confirm if it needs further improvement. Thanks in advance!
Playbook:
---
- hosts: droplets
  remote_user: root
  tasks:
    - name: Copy package to target machines
      synchronize: src=/home/luckee/apache.rpm dest=/var/tmp/

    - name: Run installation and verification script
      script: /home/luckee/apache_install.sh
      register: result

    - name: Show result
      debug: msg="{{ result.stdout }}"
...
This way the installation script will only run if the copy task changed (i.e. was actually executed in the process) and exited successfully:
---
- hosts: droplets
  remote_user: root
  tasks:
    - name: Copy package to target machines
      synchronize: src=/home/luckee/apache.rpm dest=/var/tmp/
      register: result_copy

    - name: Run installation and verification script
      script: /home/luckee/apache_install.sh
      register: result_run
      when: result_copy.changed

    - name: Show result
      debug: msg="{{ result_run.stdout }}"
...
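As an aside, since the payload is an rpm, the install half can also be expressed with the yum module instead of a shell script, which gives you idempotence and error reporting for free. A sketch under the same paths as the question (the yum module accepts a local rpm path as name; this swaps out the apache_install.sh approach, so verification steps from that script would still need their own tasks):
---
- hosts: droplets
  remote_user: root
  tasks:
    - name: Copy package to target machines
      synchronize:
        src: /home/luckee/apache.rpm
        dest: /var/tmp/
      register: result_copy

    - name: Install the package from the copied rpm
      yum:
        name: /var/tmp/apache.rpm
        state: present
      when: result_copy is changed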
