Attach Elastic IPs to bastion hosts when provisioned automatically - ansible

---
- hosts: localhost
  gather_facts: False
  roles:
    - provision_ec2

# this uses a variable defined in the first role of this playbook, provision_ec2.
- hosts: "{{ hostvars['localhost'].bastion_server_group }}"
  become: yes
  become_method: sudo
  roles:
    - hosts_file
  # this won't work on bastion servers until we automate a way to connect to the newly
  # provisioned bastion server. This would require some proxy command and attaching an
  # elastic IP, then pushing that to the ssh_config.
Because we currently do what those comments describe manually every time we spin up a new bastion server, I need help automating the process of attaching an Elastic IP to the newly provisioned bastion server. I am new to YAML and Ansible; I have only been learning them for the last few weeks.
- hosts: '{{ HOST_GROUP }}'
  gather_facts: False
  roles:
    - { role: ec2_tags, when: server_type != 'bastion' }
    - { role: ec2_tag_volumes, when: server_type == 'app' or server_type == 'util' }

There is an ec2_eip module. Use it just after creating your instance with the ec2 module.
Example usage:
- ec2_eip:
    region: "{{ region }}"
    state: present
    in_vpc: yes
    device_id: "{{ ec2_result.instances[0].id }}"
    reuse_existing_ip_allowed: yes
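To cover the rest of the question (reaching the new bastion in the same run), here is a minimal sketch, assuming the provision_ec2 role registers its result as ec2_result and that bastion_server_group names the group the second play targets; it attaches the Elastic IP and then registers the bastion in the in-memory inventory under that address. The SSH proxy/ssh_config piece would still need ansible_ssh_common_args or similar:

- name: Attach an Elastic IP to the new bastion
  ec2_eip:
    region: "{{ region }}"
    state: present
    in_vpc: yes
    device_id: "{{ ec2_result.instances[0].id }}"
    reuse_existing_ip_allowed: yes
  register: bastion_eip

- name: Add the bastion to the in-memory inventory under its Elastic IP
  add_host:
    hostname: "{{ bastion_eip.public_ip }}"
    groups: "{{ bastion_server_group }}"

- name: Wait for SSH on the Elastic IP before the next play starts
  wait_for_connection:
    delay: 30
    timeout: 300
  delegate_to: "{{ bastion_eip.public_ip }}"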

Related

Create Ec2 instance and install Python package using Ansible Playbook

I've created an Ansible playbook to create an EC2 instance and install Python so that I can connect to the server via SSH.
The playbook successfully creates an EC2 instance, but it doesn't install Python on the newly created EC2 instance; instead it installs Python on my master machine.
Can someone help me resolve this?
My Code:
- hosts: localhost
  remote_user: ubuntu
  become: yes
  tasks:
    - name: I'm going to create a Ec2 instance
      ec2:
        key_name: yahoo
        instance_type: t2.micro
        region: "ap-south-1"
        image: ami-0860c9429baba6ad2
        count: 1
        vpc_subnet_id: subnet-aa84fbe6
        assign_public_ip: yes
      tags:
        - creation

    - name: Going to Install Python
      apt:
        name: python
        state: present
      tags:
        - Web

    - name: Start the service
      service:
        name: python
        state: started
      tags:
        - start
As is shown in the fine manual, ec2: operations should be combined with add_host: in order to "pivot" off of localhost over to the newly provisioned instance, which one cannot add to an inventory file because it doesn't exist yet:
- hosts: localhost
  gather_facts: no
  tasks:
    - name: I'm going to create a Ec2 instance
      ec2:
        ...etc etc
      register: run_instances

    - name: Add new instance to host group
      add_host:
        hostname: "{{ item.public_ip }}"
        groupname: just_launched
      loop: "{{ run_instances.instances }}"

    - name: Wait for SSH to come up
      delegate_to: "{{ item.public_ip }}"
      wait_for_connection:
        delay: 60
        timeout: 320
      loop: "{{ run_instances.instances }}"

- hosts: just_launched
  # and now you can do fun things to those ec2 instances
I suspect your question about "installing python" is unrelated, but just in case: if you genuinely do need to add Python to those EC2 instances, you cannot use most Ansible modules to do that, because they are written in Python. That is what the raw: module is designed to fix:
- hosts: just_launched
  # you cannot gather facts without python, either
  gather_facts: no
  tasks:
    - raw: |
        echo watch out for idempotency issues with your playbook using raw
        yum install python
    - name: NOW gather the facts
      setup:
    - command: echo and now you have working ansible modules again
I also find your invocation of service: { name: python, state: started } suspicious, but I would guess it has not run yet, given the question you are asking here.

Ansible creates a directory on the control machine even if delegate_to is set to a remote host when running the playbook with a local connection

I'm running an ansible-playbook that provisions an EC2 instance and configures the machine. I set the connection to local for the playbook because there is no machine to manage before the script runs. Once the instance is provisioned, I am supposed to create a directory on the remote server. Since the playbook runs with a local connection, I set delegate_to: {{ remote_host }} so that the directory creation is executed on the remote host, but it still creates the directory on the control machine.
- name: provision instance for Apache
  hosts: localhost
  connection: local
  remote_user: ubuntu
  gather_facts: false
  vars_files:
    - vars/env.yml
  vars:
    allow_world_readable_tmpfiles: true
    key_name: ansible-test
    region: us-east-2
    image: ami-0e82959d4ed12de3f # Ubuntu 18.04
    id: "practice-akash-ajay"
    sec_group: "{{ id }}-sec"
    remote_host: ansible-test
    remaining_days: 20
    acme_directory: https://acme-staging-v02.api.letsencrypt.org/directory
    # acme_directory: https://acme-v02.api.letsencrypt.org/directory
    cert_name: "{{ app_slug }}.{{ app_domain }}"
    intermediate_path: /etc/pki/letsencrypt/intermediate.pem
    cert:
      common_name: "{{ app_slug }}.{{ app_domain }}"
      organization_name: PearlThoughts
      email_address: "{{ letsencrypt_email }}"
      subject_alt_name:
        - "DNS:{{ app_slug }}.{{ app_domain }}"
  roles:
    - aws
  tasks:
    - name: Create certificate storage directory
      file:
        dest: "{{ item.path }}"
        mode: 0750
        state: directory
      delegate_to: "{{ remote_host }}"
      with_items:
        - path: ~/lets-seng-test
When you set connection explicitly on the play, it will be used for all tasks in that play. So don't do that. Ansible will by default use a local connection for localhost unless you have explicitly changed that in your inventory (and again, don't do that).
If you remove the connection setting on your play, delegate_to might work the way you expect...but I don't think you want to do that.
If you have your playbook provisioning a new host for you, the way to target that host with Ansible is to have a new play with that host (or its corresponding group) listed in the target hosts: for the play. Conceptually, you want:
- hosts: localhost
  tasks:
    - name: provision an AWS instance
      aws_ec2: [...]
      register: hostinfo

    - add_host:
        name: myhost
        ansible_host: "{{ hostinfo... }}"

- hosts: myhost
  tasks:
    - name: do something on the new host
      command: uptime
You probably need some logic in between provisioning the host and executing tasks against it to ensure that it is up and ready to service requests.
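A minimal sketch of that gap, reusing the wait_for_connection approach from the earlier answer at the top of the play that targets the new host (the delay and timeout values are illustrative):

- hosts: myhost
  gather_facts: no
  tasks:
    - name: Block until the new instance is reachable by Ansible
      wait_for_connection:
        delay: 30
        timeout: 300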
Instead of using the add_host module, a better solution is often to rely on the appropriate inventory plugin.
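A minimal sketch of that approach, assuming the amazon.aws.aws_ec2 inventory plugin is installed; the tag, region, and filename below are illustrative. Ansible then discovers the instance on the next inventory refresh instead of relying on add_host inside the run:

# inventory_aws_ec2.yml  (the filename must end in aws_ec2.yml for the plugin to match it)
plugin: amazon.aws.aws_ec2
regions:
  - us-east-2
filters:
  tag:role: webserver        # only pull instances tagged with this role
keyed_groups:
  - key: tags.role           # group hosts by their role tag, e.g. tag_role_webserver
    prefix: tag_role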

Running a playbook on host A to get a value and pass it to a task that executes locally

I execute a task to get the hostname on a Windows machine and store it in a variable called Hstname. Then I have to use that variable to create a folder in GCP dynamically. My GCP module works only on localhost.
Ansible version: 2.7
Task 1: target the Windows host to get the hostname.
Task 2: the "gcp_storage" module uses localhost to create an object inside a storage bucket.
- hosts: win
  gather_facts: no
  serial: 1
  connection: winrm
  become_method: runas
  become_user: admin
  vars:
    ansible_become_password: '}C7I]8zH#5cPDE8'
    gcp_object: "gcp-shareddb-bucket/emeraldpos/stage/"
  tasks:
    - name: hostname
      win_command: hostname
      register: Hstname
    - set_fact:
        Htname: "{{ Hstname.stdout | trim }}"
    - debug:
        msg: "{{ Htname }}"

- hosts: localhost
  gather_facts: no
  tasks:
    - name: Create folder under EMERALDPOS
      gc_storage:
        bucket: gcp-shareddb-bucket
        object: "{{ gcp_object }}{{ Htname }}"
        mode: create
        region: us-west2
        project: sharedBucket-228905
        gs_access_key: "NAKJSAIUS231219237LOKM"
        gs_secret_key: "XABSAHSKASAJS:OAKSAJNDADIAJAJD:N<MNMN"
Ansible should create gcp-shareddb-bucket/emeraldpos/stage/HOSTNAME as an object in GCP storage.
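As written, the localhost play cannot see Htname or gcp_object, because both were defined in the context of the win play. A minimal sketch of one way to bridge the two plays, assuming the Windows machine is the first member of the win group: facts set with set_fact on one host remain readable from a later play via hostvars, and gcp_object is redefined here so the localhost play can see it.

- hosts: localhost
  gather_facts: no
  vars:
    gcp_object: "gcp-shareddb-bucket/emeraldpos/stage/"
  tasks:
    - name: Create folder under EMERALDPOS using the fact set on the Windows host
      gc_storage:
        bucket: gcp-shareddb-bucket
        # hostvars carries set_fact results across plays in the same run
        object: "{{ gcp_object }}{{ hostvars[groups['win'][0]].Htname }}"
        mode: create
        region: us-west2
        project: sharedBucket-228905
        gs_access_key: "NAKJSAIUS231219237LOKM"
        gs_secret_key: "XABSAHSKASAJS:OAKSAJNDADIAJAJD:N<MNMN"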

How do I apply an Ansible task to multiple hosts from within a playbook?

I am writing an ansible playbook to rotate IAM access keys. It runs on my localhost to create a new IAM Access Key on AWS. I want to push that key to multiple other hosts' ~/.aws/credentials files.
---
- name: Roll IAM access keys
  hosts: localhost
  connection: local
  gather_facts: false
  strategy: free
  roles:
    - iam-rotation
In the iam-rotation role, I have something like this:
- name: Create new Access Key
  iam:
    iam_type: user
    name: "{{ item }}"
    state: present
    access_key_state: create
    key_count: 2
  with_items:
    - ansible-test-user
  register: aws_user

- set_fact:
    aws_user_name: "{{ aws_user.results.0.user_name }}"
    created_keys_count: "{{ aws_user.results.0.created_keys | length }}"
    aws_user_keys: "{{ aws_user.results[0]['keys'] }}"
I want to push the newly created access keys out to the Jenkins builders. How would I use the list of hosts from with_items in the task? The debug task below is just a placeholder.
# Deploy to all Jenkins builders
- name: Deploy new keys to jenkins builders
  debug:
    msg: "Deploying key to host {{ item }}"
  with_items:
    - "{{ groups.jenkins_builders }}"
Hosts file that includes the list of hosts I want to apply to
[jenkins_builders]
builder1.example.org
builder2.example.org
builder3.example.org
I am executing the playbook on localhost. But within the playbook I want one task to execute on remote hosts which I'm getting from the hosts file. The question was...
How would I use the list of hosts from with_items in the task?
Separate the tasks into two roles. Then execute the first role against localhost and the second one against jenkins_builders:
---
- name: Rotate IAM access keys
  hosts: localhost
  connection: local
  gather_facts: false
  strategy: free
  roles:
    - iam-rotation

- name: Push out IAM access keys
  hosts: jenkins_builders
  roles:
    - iam-propagation
Per AWS best practices recommendations, if you are running an application on an Amazon EC2 instance and the application needs access to AWS resources, you should use IAM roles for EC2 instead of keys:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html
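A minimal sketch of that recommendation, using the old ec2 module's instance_profile_name parameter; every value below (key, AMI, region, profile name) is illustrative:

- name: Launch a builder that gets its AWS credentials from an instance profile
  ec2:
    key_name: mykey
    instance_type: t2.micro
    image: ami-0123456789abcdef0              # placeholder AMI
    region: us-east-1
    instance_profile_name: jenkins-builder    # hypothetical IAM role / instance profile
    wait: yes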
You can use
delegate_to: servername
at the task level; the task will then run only on that particular server.
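A minimal sketch of that, pushing the new credentials from the localhost play to every builder; the aws_credentials.j2 template and the destination path are hypothetical:

- name: Push the new credentials to every Jenkins builder
  template:
    src: aws_credentials.j2                 # hypothetical template rendering aws_user_keys
    dest: /home/jenkins/.aws/credentials    # illustrative path
    mode: "0600"
  delegate_to: "{{ item }}"
  loop: "{{ groups['jenkins_builders'] }}"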

How to deploy a custom VM with ansible and run subsequent steps on the guest VM via the host?

I have a playbook that I run to deploy a guest VM onto my target node.
After the guest VM is fired up, it is not available to the whole network, but to the host machine only.
Also, after booting up the guest VM, I need to run some commands on that guest to configure it and make it available to all the network members.
---
- block:
    - name: Verify the deploy VM script
      stat: path="{{ deploy_script }}"
      register: deploy_exists
      failed_when: deploy_exists.stat.exists == False
      no_log: True
  rescue:
    - name: Copy the deploy script from Ansible
      copy:
        src: "scripts/new-install.pl"
        dest: "/home/orch"
        owner: "{{ my_user }}"
        group: "{{ my_user }}"
        mode: 0750
        backup: yes
      register: copy_script

- name: Deploy VM
  shell: run my VM deploy script

<other tasks>

- name: Run something on the guest VM
  shell: my_other_script
  args:
    chdir: /var/scripts/

- name: Other task on guest VM
  shell: uname -r

<and so on>
How can I run those subsequent steps on the guest VM via the host?
My only workaround is to populate a new inventory file with the VM's details and use the host machine as a bastion host.
[myvm]
myvm-01 ansible_connection=ssh ansible_ssh_user=my_user ansible_ssh_common_args='-oStrictHostKeyChecking=no -o ProxyCommand="ssh -A -W %h:%p someuser@host_machine"'
However, I want everything to happen in a single playbook rather than splitting it up.
I have resolved it myself.
I managed to dynamically add the host to the inventory and used group vars for the newly created hosts so that they use the VM manager as a bastion host.
Playbook:
---
- hosts: "{{ vm_manager }}"
  become_method: sudo
  gather_facts: False
  vars_files:
    - vars/vars.yml
    - vars/vault.yml
  pre_tasks:
    - name: do stuff here on the VM manager
      debug: msg="test"
  roles:
    - { role: vm_deploy, become: yes, become_user: root }
  tasks:
    - name: Dynamically add newly created VM to the inventory
      add_host:
        hostname: "{{ vm_name }}"
        groups: vms
        ansible_ssh_user: "{{ vm_user }}"
        ansible_ssh_pass: "{{ vm_pass }}"

- name: Run the rest of tasks on the VM through the host machine
  hosts: "{{ vm_name }}"
  become: true
  become_user: root
  become_method: sudo
  post_tasks:
    - name: My first task on the VM
      static: no
      include_role:
        name: my_role_for_the_VM
Inventory:
[vm_manager]
vm-manager.local
[vms]
my-test-01
my-test-02
[vms:vars]
ansible_connection=ssh
ansible_ssh_common_args='-oStrictHostKeyChecking=no -o ProxyCommand="ssh -A -W %h:%p username#vm-manager.local"'
Run playbook:
ansible-playbook -i hosts -vv playbook.yml -e vm_name=some-test-vm-name
