I need to change the host dynamically in an Ansible playbook.
Below is my sample playbook:
---
- name: Deployment Playbook
  hosts: "{{ Servers }}"
  tasks:
    - name: deployment
      shell: "deploy.sh {{ DEPLOY_NAME }}"
In the above play I need to select the server based on DEPLOY_NAME. For example:
If {{ DEPLOY_NAME }} = APP, then {{ Servers }} = 172.17.65.17
If {{ DEPLOY_NAME }} = SCRIPT, then {{ Servers }} = 172.17.65.66
Previously we passed this as an inventory from AWX, but now we need to handle it in the playbook. So please help me with this issue.
You can split this into two plays and guard each one with a when condition on DEPLOY_NAME:

---
- name: Deployment Playbook targeting Servers_1, skipped unless DEPLOY_NAME is APP
  hosts: "{{ Servers_1 }}"
  tasks:
    - name: deployment
      shell: "deploy.sh {{ DEPLOY_NAME }}"
      when: DEPLOY_NAME == 'APP'

- name: Deployment Playbook targeting Servers_2, skipped unless DEPLOY_NAME is SCRIPT
  hosts: "{{ Servers_2 }}"
  tasks:
    - name: deployment
      shell: "deploy.sh {{ DEPLOY_NAME }}"
      when: DEPLOY_NAME == 'SCRIPT'
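If you would rather keep a single play, here is a minimal sketch of my own (an assumption, not part of the answer above): since DEPLOY_NAME arrives as an extra var from AWX, hosts: can be templated from an inline mapping. The IPs are the ones from your question and must still resolve to hosts in the inventory:

---
# Sketch only: look up the target host from DEPLOY_NAME; both IPs are assumed
# to be present in the inventory that AWX supplies.
- name: Deployment Playbook
  hosts: "{{ {'APP': '172.17.65.17', 'SCRIPT': '172.17.65.66'}[DEPLOY_NAME] }}"
  tasks:
    - name: deployment
      shell: "deploy.sh {{ DEPLOY_NAME }}"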
I don't think you can do that. What I think may work for you is to do this instead:
---
- name: Deployment Playbook
  hosts: localhost
  tasks:
    - name: deployment
      shell: ssh root@{{ item.server }} deploy.sh {{ item.app }}
      loop:
        - { server: 'server1', app: 'app_1' }
        - { server: 'server2', app: 'app_2' }
You could even improve this by loading that "inventory from AWX" as a vars_files entry which contains this list, so your final loop just iterates over that list. Like this:
---
- name: Deployment Playbook
  hosts: localhost
  vars_files:
    - your_list_file.yml
  tasks:
    - name: deployment
      shell: ssh root@{{ item.server }} deploy.sh {{ item.app }}
      loop: "{{ your_list_variable }}"
Related
I am new to Ansible and AWX. I'm making good progress with my project but am getting stuck on one part and hoping you guys can help.
I have a Workflow Template as follows:
1. Deploy VM
2. Get the IP of the deployed VM and create a temporary host configuration
3. Change the deployed machine's hostname
Where I am getting stuck is that once I create the host entry, the group is missing when the next template kicks off. I assume this is because of some sort of runspace. How do I move this information around? I need this, as later on I want to get into more complicated flows.
What I have so far:
1.
- name: Deploy VM from Template
  hosts: xenservers
  tasks:
    - name: Deploy VM
      shell: xe vm-install new-name-label="{{ vm_name }}" template="{{ vm_template }}"

    - name: Start VM
      shell: xe vm-start vm="{{ vm_name }}"
2.
---
- name: Get VM IP
  hosts: xenservers
  remote_user: root
  tasks:
    - name: Get IP
      shell: xe vm-list name-label="{{ vm_name }}" params=networks | grep -oE '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | head -1
      register: vm_ip
      until: vm_ip.stdout != ""
      retries: 15
      delay: 5
Here I was setting the hosts, and this works locally within the template, but when the workflow moves on it fails. I tried this both as a task and as a play.
- name: Add host to group
  add_host:
    name: "{{ vm_ip.stdout }}"
    groups: deploy_vm

- hosts: deploy_vm

- name: Set hosts
  hosts: deployed_vm
  tasks:
    - name: Set a hostname
      hostname:
        name: "{{ vm_name }}"
      when: ansible_distribution == 'Rocky Linux'
Thanks
I have a long playbook with a number of roles defined. Now, for one role, I need to pass the host as a variable which is defined in an earlier role.
e.g. playbook:
---
- name: task1
  hosts: app1
  gather_facts: no
  any_errors_fatal: true
  roles:
    - role-1

- name: task2
  hosts: "{{ host }}"
  any_errors_fatal: true
  gather_facts: no
  roles:
    - role-2
My role-1
---
- name: setting the var
  set_fact:
    host: "app2"

- debug:
    var: host
My role-2
---
- debug:
    var: host

- name: do something
  file:
    path: /home/ec2-user/dir1
    state: directory
    mode: '0755'
However, when I try to run my playbook, role-2 gets skipped because no hosts matched. Can someone point me to how to get this setup working?
The thing you want is add_host:, and then use the newly created or assigned group as the hosts: of the second play:
- hosts: app1
  tasks:
    - add_host:
        name: app2
        groups:
          - my-group

- hosts: my-group
  tasks:
    - debug: var=ansible_host
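To tie this back to role-1: a minimal sketch (assuming the target host name is still computed at runtime rather than hardcoded) registers the computed value with add_host, so the second play can keep targeting the group:

# Sketch of role-1's tasks -- the my-group name matches the answer above
- name: setting the var
  set_fact:
    host: "app2"

- name: add the computed host to a group the next play can target
  add_host:
    name: "{{ host }}"
    groups:
      - my-group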
I have an Ansible playbook which does multiple things, as below:
Download artifacts from Nexus onto the local server (Ansible master).
Copy those artifacts onto multiple remote machines, let's say server1/2/3 etc.
I have used roles in my playbook, and I want the role (repodownload) which downloads the artifacts to run only once, because why would I want to download the same thing again? I have tried to use run_once: true, but I guess that won't work, because it only applies within one play run and my play runs multiple times for multiple hosts.
---
- name: Deploy my Application to tomcat nodes
  hosts: '{{ target_env }}'
  serial: 1
  roles:
    - role: repodownload
      tags:
        - repodownload
    - role: copyrepo
      tags:
        - copyrepo
    - role: stoptomcat
      tags:
        - stoptomcat
    - role: deploy
      tags:
        - deploy
Here target_env is being passed from the command line, and it's the remote host group.
Any help is appreciated.
Below is the code from main.yml of the repodownload role:
- connection: local
  name: Downloading files from Nexus to local server
  get_url: url="{{ nexus_url }}/{{ item }}/{{ myvm_release_version }}/{{ item }}-{{ release_ver }}.war" dest={{ local_server_location }}
  with_items:
    - "{{ temps }}"
This is a really simple one that I battled with too.
Try this:
- connection: local
  name: Downloading files from Nexus to local server
  get_url:
    url: "{{ nexus_url }}/{{ item }}/{{ myvm_release_version }}/{{ item }}-{{ release_ver }}.war"
    dest: "{{ local_server_location }}"
  with_items:
    - "{{ temps }}"
  run_once: true
Just something else, unrelated to your main question:
When you run a module that has really long args, like in your example above, rather break the params into their own lines nested under the module. It makes for easier reading, and it makes it easier to spot any potential typos or syntax errors early.
Okay, extending from your conversation with Zeitounator, the following workaround will work without changing your vars files. Just remember that this is a workaround and might not be the most efficient way to do the job.
---
- name: Download my repo to localhost
  # Executes only for the first host in target_env and has visibility of target_env's group vars
  hosts: '{{ target_env }}[0]'
  serial: 1
  roles:
    - role: repodownload
      tags:
        - repodownload

- name: Deploy my Application to tomcat nodes
  # Executes for all hosts in target_env
  hosts: '{{ target_env }}'
  serial: 1
  roles:
    - role: copyrepo
      tags:
        - copyrepo
    - role: stoptomcat
      tags:
        - stoptomcat
    - role: deploy
      tags:
        - deploy
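For context, a minimal sketch of how the [0] pattern resolves (the group and host names below are placeholders of mine; it assumes target_env is passed as an extra var, e.g. target_env=qa_tomcat):

# Illustrative YAML inventory only -- every name below is an assumption.
# With target_env=qa_tomcat, the first play ('{{ target_env }}[0]') runs only on
# qa-tomcat-01, while the second play runs on all three hosts.
all:
  children:
    qa_tomcat:
      hosts:
        qa-tomcat-01:
        qa-tomcat-02:
        qa-tomcat-03: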
I've started to try Ansible and am using example code from the Ansible documentation. After trying several examples, I get an error at the beginning of the code. It says:
- name: Change the hostname to Windows_Ansible
  ^ here (pointing at name)
Any advice would be appreciated.
I tried this one:
https://docs.ansible.com/ansible/latest/modules/win_hostname_module.html#win-hostname-module
---
- name: Change the hostname to Windows_Ansible
  win_hostname:
    name: "Windows_Ansible"
  register: res

- name: Reboot
  win_reboot:
  when: res.reboot_required
The below task will change the hostname of the server. Make sure you run it on a test server so that it won't create issues. If you just want to test some playbook, use the second playbook with win_command.
---
- hosts: <remote server name which needs to be added in the inventory>
  tasks:
    - name: Change the hostname to Windows_Ansible
      win_hostname:
        name: "Windows_Ansible"
      register: res

    - name: Reboot
      win_reboot:
      when: res.reboot_required
---
- hosts: <remote server name which needs to be added in the inventory>
  tasks:
    - name: Test
      win_command: whoami
      register: res
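A minimal sketch of what the inventory entry behind that <remote server name> placeholder might look like (the host name, address, and vault variable are assumptions of mine; win_hostname and win_command both need a WinRM connection):

# Illustrative inventory only -- every value below is a placeholder
all:
  children:
    windows:
      hosts:
        win-test-01:
          ansible_host: 203.0.113.10
      vars:
        ansible_connection: winrm
        ansible_user: Administrator
        ansible_password: "{{ vault_win_password }}"
        ansible_winrm_server_cert_validation: ignore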
I have an Ansible playbook YAML file which contains 3 plays.
The first and third plays run on localhost, but the second play runs on a remote machine, as you can see in the example below:
- name: Play1
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - ... task here

- name: Play2
  hosts: remote_host
  tasks:
    - ... task here

- name: Play3
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - ... task here
I found that, on the first run, Ansible executes Play1 and Play3 and skips Play2. Then, when I run it again, it executes all of them correctly.
What is wrong here?
The problem is that, in Play2, I use an EC2 dynamic-inventory group like tag_Name_my_machine, but that instance has not been created yet, because it is created by Play1's task.
Once Play1 finishes, Ansible runs Play2, but no host is found, so it silently skips this play.
The solution is to create a dynamic group and manually register the host in Play1's tasks.
The playbook may look like this:
- name: Play1
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Launch new ec2 instance
      register: ec2
      ec2: ...

    - name: create dynamic group
      add_host:
        name: "{{ ec2.instances[0].private_ip }}"
        group: host_dynamic_lastec2_created

- name: Play2
  user: ...
  hosts: host_dynamic_lastec2_created
  become: yes
  become_method: sudo
  become_user: root
  tasks:
    - name: do something
      shell: ...

- name: Play3
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - ... task here
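One practical detail, added as a sketch of my own (not part of the original answer): a freshly launched instance may not accept SSH connections right away, so a wait task at the end of Play1's task list keeps Play2 from failing on an unreachable host.

    # Sketch only -- assumes SSH on port 22 and the ec2 result registered above
    - name: wait for SSH on the new instance
      wait_for:
        host: "{{ ec2.instances[0].private_ip }}"
        port: 22
        delay: 10
        timeout: 300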