I am new to Ansible and AWX. I'm making good progress with my project, but I'm getting stuck on one part and hoping you can help.
I have a Workflow Template as follows:
Deploy VM
Get IP of the VM Deployed & create a temp host configuration
Change the deployed machine hostname
Where I am getting stuck: once I create the host entry, the host group is missing when the next template kicks off. I assume this is because each template runs in some sort of separate runspace. How do I move this information between templates? I need this because later on I want to get into more complicated flows.
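From what I've read, set_stats might be the way to move data between workflow nodes, since AWX is supposed to forward its data to later jobs as artifacts. Something like this sketch (deployed_vm_ip is just a variable name I made up), though I haven't confirmed it yet:
- name: Publish the new VM's IP so the next workflow node can use it
  set_stats:
    data:
      deployed_vm_ip: "{{ vm_ip.stdout }}"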
What I have so far:
1.
- name: Deploy VM from Template
hosts: xenservers
tasks:
- name: Deploy VM
shell: xe vm-install new-name-label="{{ vm_name }}" template="{{ vm_template }}"
- name: Start VM
shell: xe vm-start vm="{{ vm_name }}"
---
- name: Get VM IP
hosts: xenservers
remote_user: root
tasks:
- name: Get IP
shell: xe vm-list name-label="{{ vm_name }}" params=networks | grep -oE '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | head -1
register: vm_ip
until: vm_ip.stdout != ""
retries: 15
delay: 5
Here I was setting the hosts, and this works locally within the template, but when the workflow moves on it fails. I tried this both as a task and as a play.
- name: Add host to group
add_host:
name: "{{ vm_ip.stdout }}"
groups: deployed_vm
- name: Set hosts
hosts: deployed_vm
tasks:
- name: Set a hostname
hostname:
name: "{{ vm_name }}"
when: ansible_distribution == 'Rocky Linux'
Thanks
Related
I've created an Ansible playbook to create an EC2 instance and install Python so I can connect to the server via SSH.
The playbook successfully creates an EC2 instance, but instead of installing Python on the newly created instance it installs Python on my master machine.
Can someone help me resolve this?
My Code:
- hosts: localhost
remote_user: ubuntu
become: yes
tasks:
- name: I'm going to create a Ec2 instance
ec2:
key_name: yahoo
instance_type: t2.micro
region: "ap-south-1"
image: ami-0860c9429baba6ad2
count: 1
vpc_subnet_id: subnet-aa84fbe6
assign_public_ip: yes
tags:
- creation
- name: Going to Install Python
apt:
name: python
state: present
tags:
- Web
- name: Start the service
service:
name: python
state: started
tags:
- start
As is shown in the fine manual, ec2: operations should be combined with add_host: in order to "pivot" off of localhost over to the newly provisioned instance, which one cannot add to an inventory file because it doesn't exist yet.
- hosts: localhost
gather_facts: no
tasks:
- name: I'm going to create a Ec2 instance
ec2:
...etc etc
register: run_instances
- name: Add new instance to host group
add_host:
hostname: "{{ item.public_ip }}"
groupname: just_launched
loop: "{{ run_instances.instances }}"
- name: Wait for SSH to come up
delegate_to: "{{ item.public_ip }}"
wait_for_connection:
delay: 60
timeout: 320
loop: "{{ run_instances.instances }}"
- hosts: just_launched
# and now you can do fun things to those ec2 instances
I suspect your question about "installing python" is unrelated, but just in case: if you genuinely do need to add Python to those EC2 instances, you cannot use most Ansible modules to do that, because they're written in Python. That is what the raw: module is designed to fix:
- hosts: just_launched
# you cannot gather facts without python, either
gather_facts: no
tasks:
- raw: |
echo watch out for idempotency issues with your playbook using raw
yum install -y python
- name: NOW gather the facts
setup:
- command: echo and now you have working ansible modules again
I also find your invocation of service: { name: python, state: started } suspicious (python is not a service), but I would guess it has not run yet, given your question here.
I'm running an ansible-playbook that provisions an EC2 instance and configures the machine. I set the connection to local for the playbook because there is no machine to manage before the script runs. Once the instance is provisioned, I need to create a directory on the remote server. Since the playbook runs with a local connection, I set delegate_to: {{ remote_host }} so the directory creation executes on the remote host, but it still creates the directory on the control machine.
- name: provision instance for Apache
hosts: localhost
connection: local
remote_user: ubuntu
gather_facts: false
vars_files:
- vars/env.yml
vars:
allow_world_readable_tmpfiles: true
key_name: ansible-test
region: us-east-2
image: ami-0e82959d4ed12de3f # Ubuntu 18.04
id: "practice-akash-ajay"
sec_group: "{{ id }}-sec"
remote_host: ansible-test
remaining_days: 20
acme_directory: https://acme-staging-v02.api.letsencrypt.org/directory
# acme_directory: https://acme-v02.api.letsencrypt.org/directory
cert_name: "{{ app_slug }}.{{ app_domain}}"
intermediate_path: /etc/pki/letsencrypt/intermediate.pem
cert:
common_name: "{{ app_slug }}.{{ app_domain}}"
organization_name: PearlThoughts
email_address: "{{ letsencrypt_email }}"
subject_alt_name:
- "DNS:{{ app_slug }}.{{ app_domain}}"
roles:
- aws
- name: Create certificate storage directory
file:
dest: "{{item.path}}"
mode: 0750
state: directory
delegate_to: "{{ remote_host }}"
with_items:
- path: ~/lets-seng-test
When you set connection explicitly on the play, it will be used for all tasks in that play. So don't do that. Ansible will by default use a local connection for localhost unless you have explicitly changed that in your inventory (and again, don't do that).
If you remove the connection setting on your play, delegate_to might work the way you expect...but I don't think you want to do that.
If you have your playbook provisioning a new host for you, the way to target that host with Ansible is to have a new play with that host (or its corresponding group) listed in the target hosts: for the play. Conceptually, you want:
- hosts: localhost
tasks:
- name: provision an AWS instance
ec2: [...]
register: hostinfo
- add_host:
name: myhost
ansible_host: "{{ hostinfo... }}"
- hosts: myhost
tasks:
- name: do something on the new host
command: uptime
You probably need some logic in between provisioning the host and executing tasks against it to ensure that it is up and ready to service requests.
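A minimal sketch of that readiness check, placed at the top of the second play (the delay and timeout values are arbitrary):
- hosts: myhost
  gather_facts: no
  tasks:
    - name: Wait for the new host to become reachable
      wait_for_connection:
        delay: 10
        timeout: 300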
Instead of using the add_host module, a better solution is often to rely on the appropriate inventory plugin.
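For example, a minimal aws_ec2 inventory source might look like the sketch below (the region and tag filter are assumptions, and the file name must end in aws_ec2.yml for the plugin to pick it up):
# inventory/my.aws_ec2.yml
plugin: aws_ec2
regions:
  - us-east-1
filters:
  tag:Name: myhost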
I'm new to all the Ansible stuff, so most of the time I'm in trial-and-error mode.
Now I'm facing a challenge with a playbook, and I do not know where to look next.
The main task of this playbook is to get a "show run" from a Cisco device and save it to a text file on a backup server (which is a remote server).
The only task that is not working is the backup task.
Here is my playbook:
- hosts: IOSGATEWAY
gather_facts: no
connection: local
tasks:
- name: GET CREDENTIALS
include_vars: path/to/all/all.yml
- name: DEFINE CONNECTION TO GW
set_fact:
connection:
host: "{{ inventory_hostname }}"
username: "{{ creds['username'] }}"
password: "{{ creds['password'] }}"
- name: GET SHOW RUN
ios_command:
provider: "{{ connection }}"
commands:
- show run
register: show_run
- name: SAVE TO BACKUP SERVER
copy:
content: "{{ show_run.stdout[0] }}"
dest: "path/to/Directory/{{ inventory_hostname }}.txt"
delegate_to: BACKUPSERVER
Can someone point me in the right direction?
You set connection: local for the playbook, so everything is executed locally (which is correct for the ios_* modules, but not what you actually want for the copy module).
I'd recommend defining the ansible_connection variable in your inventory per group of hosts/devices, so Ansible uses a local connection for your IOS devices and SSH for the backup server.
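A minimal sketch of such an inventory, assuming the host names from the playbook above (the group names are made up):
[ios]
IOSGATEWAY

[ios:vars]
ansible_connection=local

[backupservers]
BACKUPSERVER

[backupservers:vars]
ansible_connection=ssh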
I have a playbook that I run to deploy a guest VM onto my target node.
After the guest VM is fired up, it is not available to the whole network, but to the host machine only.
Also, after booting up the guest VM, I need to run some commands on that guest to configure it and make it available to all the network members.
---
- block:
- name: Verify the deploy VM script
stat: path="{{ deploy_script }}"
register: deploy_exists
failed_when: deploy_exists.stat.exists == False
no_log: True
rescue:
- name: Copy the deploy script from Ansible
copy:
src: "scripts/new-install.pl"
dest: "/home/orch"
owner: "{{ my_user }}"
group: "{{ my_user }}"
mode: 0750
backup: yes
register: copy_script
- name: Deploy VM
shell: run my VM deploy script
<other tasks>
- name: Run something on the guest VM
shell: my_other_script
args:
chdir: /var/scripts/
- name: Other task on guest VM
shell: uname -r
<and so on>
How can I run those subsequent steps on the guest VM via the host?
My only workaround is to populate a new inventory file with the VM's details and use the host machine as a bastion host.
[myvm]
myvm-01 ansible_connection=ssh ansible_ssh_user=my_user ansible_ssh_common_args='-oStrictHostKeyChecking=no -o ProxyCommand="ssh -A -W %h:%p someuser@host_machine"'
However, I want everything to happen in a single playbook, rather than splitting it up.
I have resolved it myself.
I managed to dynamically add the host to the inventory and used group vars for the newly created hosts so that they use the VM manager as a bastion host.
Playbook:
---
hosts: "{{ vm_manager }}"
become_method: sudo
gather_facts: False
vars_files:
- vars/vars.yml
- vars/vault.yml
pre_tasks:
- name: do stuff here on the VM manager
debug: msg="test"
roles:
- { role: vm_deploy, become: yes, become_user: root }
tasks:
- name: Dynamically add newly created VM to the inventory
add_host:
hostname: "{{ vm_name }}"
groups: vms
ansible_ssh_user: "{{ vm_user }}"
ansible_ssh_pass: "{{ vm_pass }}"
- name: Run the rest of tasks on the VM through the host machine
hosts: "{{ vm_name }}"
become: true
become_user: root
become_method: sudo
post_tasks:
- name: My first task on the VM
static: no
include_role:
name: my_role_for_the_VM
Inventory:
[vm_manager]
vm-manager.local
[vms]
my-test-01
my-test-02
[vms:vars]
ansible_connection=ssh
ansible_ssh_common_args='-oStrictHostKeyChecking=no -o ProxyCommand="ssh -A -W %h:%p username@vm-manager.local"'
Run playbook:
ansible-playbook -i hosts -vv playbook.yml -e vm_name=some-test-vm-name
I'm trying to provision new machines on AWS with the ec2 module
and to update my local hosts file so that subsequent tasks already use it.
Provisioning isn't an issue, and neither is creating the local hosts file:
- name: Provision a set of instances
ec2:
key_name: AWS
region: eu-west-1
group: default
instance_type: t2.micro
image: ami-6f587e1c # For Ubuntu 14.04 LTS use ami-b9b394ca # For Ubuntu 16.04 LTS use ami-6f587e1c
wait: yes
volumes:
- device_name: /dev/xvda
volume_type: gp2
volume_size: 50
wait: true
count: 2
vpc_subnet_id: subnet-xxxxxxxx
assign_public_ip: yes
instance_tags:
Name: Ansible
register: ec2
- name: Add all instance private IPs to host group
add_host:
hostname: "{{ item.private_ip }}"
ansible_ssh_user: ubuntu
groups: aws
with_items: "{{ ec2.instances }}"
- local_action: file path=./hosts state=absent
ignore_errors: yes
- local_action: file path=./hosts state=touch
- local_action: lineinfile line="[all]" insertafter=EOF dest=./hosts
- local_action: lineinfile line="{{ item.private_ip }} ansible_python_interpreter=/usr/bin/python3" insertafter=EOF dest=./hosts
with_items: "{{ ec2.instances }}"
- name: Wait for SSH to come up
wait_for:
host: "{{ item.private_ip }}"
port: 22
delay: 60
timeout: 600
state: started
with_items: "{{ ec2.instances }}"
- name: refreshing inventory cache
meta: refresh_inventory
- hosts: all
gather_facts: False
tasks:
- command: hostname -i
However, the next task, which is a simple print of hostname -i (just for the test),
fails because on Ubuntu 16.04 LTS it can't find Python 2.7 (only python3 is installed).
To work around that, I add the following line to my dynamically generated hosts file:
ansible_python_interpreter=/usr/bin/python3
But it seems that Ansible ignores it and goes straight to Python 2.7, which is missing.
I've tried to reload the inventory file with
meta: refresh_inventory
but that didn't help either.
What am I doing wrong?
I'm not sure why the refresh did not work, but I suggest setting the variable in the add_host section; add_host accepts any variable.
- name: Add all instance private IPs to host group
add_host:
hostname: "{{ item.private_ip }}"
ansible_ssh_user: ubuntu
groups: aws
ansible_python_interpreter: "/usr/bin/python3"
with_items: "{{ ec2.instances }}"
Also, I find it useful to debug with this task:
- debug: var=hostvars[inventory_hostname]